
Error Using ODBC Import

    • #8854
      TomBarrand
      Member

      Hi,

      I am using the ODBC Import to take data from an SQL Server database to a MySQL database. It works fine apart from one thing.

      I have a large table in the SQL Server database that stores documents as Image datatypes. SQLyog creates the table and gives the column a longblob datatype. It imports 5389 rows then stops with an error:

      Got a packet bigger than 'max_allowed_packet' bytes.

      How do I get round this? I really need to get this working.

      Thanks for your help

    • #17162
      Ritesh
      Member
    • #17163
      TomBarrand
      Member

      Hi,

      I have looked at that article and tried what it says, but I still get the same problem.

      I am running the following version of MySQL.

      Linux (IBM/Motorola PowerPC, IBM/POWER) Standard 4.1.10a

      This is how the commands looked when I typed them in on the server:

      mysql -> mysql --max_allowed_packet=1GB

      -> mysqld --max_allowed_packet=1GB

      ->

      It doesn't look like they have taken effect. Is there something I am doing wrong?

      I have a table with hundreds of thousands of rows in it, and one of the fields holds a binary document. It is crucial that I can use your tool to migrate this table from SQL Server to MySQL.

      Thanks for your help

      Tom

    • #17164
      peterlaursen
      Participant

      try editing your my.cnf -file. Remember to stop and restart the MySQL-server for the change to take effect.
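      As an illustration, a minimal sketch of the relevant my.cnf entry (the section header and the exact value are assumptions; adjust the size to your needs):

      [mysqld]
      # allow packets up to 1 GB (the server-side maximum)
      max_allowed_packet=1024M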

    • #17165
      TomBarrand
      Member

      Hi,

      I don't seem to have the file. I have looked in all of the folders in the MySQL install directory and also /etc and there is no my.cnf.

      I have noticed that there are some sample files, medium, small, etc. Could I just copy one of these to /etc and rename it to my.cnf? How do I determine which sample to use?

      Thanks for your help

    • #17166
      peterlaursen
      Participant

      exactly …

      the files you mention are “templates” for the my.cnf file.

      But I have no experience with running MySQL under Linux, so I don't know how to make the server read it at start-up.

      However, “MySQL Administrator” is available for Linux from mysql.com.

      With this tool you can point to the my.cnf -file to be used. So I think that will do …
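      If it helps, this is roughly what copying a template could look like from a shell on the server (the template path is an assumption; the samples are often under the MySQL install or support-files directory):

      cp /usr/share/mysql/my-medium.cnf /etc/my.cnf
      # edit /etc/my.cnf as needed, then restart the server, e.g.:
      /etc/init.d/mysql restart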

    • #17167
      TomBarrand
      Member

      I will try one of the templates and see what happens.

      I can't use MySQL Administrator to do it because it won't run on the version of Linux that I am using. I am using an IBM Open Power 64-bit server and the tool won't run on it, so I would have to connect remotely, but you can't touch the startup variables remotely.

      Will let you know how I get on.

      Any ideas how I will know if it has worked? Is there a command to type to tell you what the variable is currently set to?

      Thanks

    • #17168
      peterlaursen
      Participant

      Simply “show variables” will show a lot of parameters for the server, among them the max_allowed_packet setting.

      More options for the “show” command:

      http://dev.mysql.com/doc/mysql/en/show.html
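      For example, to check just this one setting instead of scanning the whole list:

      SHOW VARIABLES LIKE 'max_allowed_packet';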

    • #17169
      TomBarrand
      Member

      Hi,

      I have used a template my.cnf file and this got rid of the error, but has given me another error now.

      When I had the problem before it would always stop on record 5389 and then I would get the max_allowed_packet error. After changing this value to 1GB that error went away and approximately 70,000 records were imported, then I got the error “The table 'Attachments' is full”.

      Any ideas? Is there some setting for the maximum table size?

      The SQL Server table that I have has 1.2 million records in it.

      Thanks

    • #17170
      peterlaursen
      Participant

      You will have to study this yourself:

      http://dev.mysql.com/doc/mysql/en/table-size.html

      It could be either an OS/file-system issue or a MySQL issue.

      How big is the “Attachments” table?

    • #17171
      peterlaursen
      Participant

      I forgot this:

      From the “Alter Table” pane in SQLyog you can choose “Advanced Properties”, where you can set the “max. rows” and “average row length” parameters for the table after creating it.

    • #17172
      TomBarrand
      Member

      In SQL Server the full database that I am transferring to MySQL is 85GB. I have transferred all of the tables from SQL Server to MySQL apart from the Attachments table and they take up 2GB of space so the Attachments table must be over 80GB.

      Is it possible for MySQL to hold that much data in one table?

      Cheers

    • #17173
      peterlaursen
      Participant

      Yes. I quote from the MySQL documentation (some omitted):

      *************************************************************

      MySQL 3.22 had a 4GB (4 gigabyte) limit on table size. With the MyISAM storage engine in MySQL 3.23, the maximum table size was increased to 8 million terabytes (2 ^ 63 bytes). With this larger allowed table size, the maximum effective table size for MySQL databases is usually determined by operating system constraints on file sizes, not by MySQL internal limits.

      The InnoDB storage engine maintains InnoDB tables within a tablespace that can be created from several files. This allows a table to exceed the maximum individual file size. The tablespace can include raw disk partitions, which allows extremely large tables. The maximum tablespace size is 64TB.

      By default, MySQL creates MyISAM tables with an internal structure that allows a maximum size of about 4GB. You can check the maximum table size for a table with the SHOW TABLE STATUS statement or with myisamchk -dv tbl_name. See Section 13.5.4, “SHOW Syntax”.

      If you need a MyISAM table that is larger than 4GB in size (and your operating system supports large files), the CREATE TABLE statement allows AVG_ROW_LENGTH and MAX_ROWS options. See Section 13.2.6, “CREATE TABLE Syntax”. You can also change these options with ALTER TABLE after the table has been created, to increase the table's maximum allowable size. See Section 13.2.2, “ALTER TABLE Syntax”.

      ******************************************

      But before importing data into the table you should run the “alter table” statement with appropriate AVG_ROW_LENGTH and MAX_ROWS options, as I wrote, either from the command line or from SQLyog.
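      A rough sketch of such a statement, using the table name from this thread (the numbers are only placeholders; pick values that reflect your real row count and average document size):

      ALTER TABLE Attachments MAX_ROWS=2000000 AVG_ROW_LENGTH=100000;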

    • #17174
      peterlaursen
      Participant

      Any news here ?

    • #17175
      Shadow
      Member

      Standard MyISAM tables won't be able to hold that much data; they are limited by OS file size restrictions. You may use merge tables if you want to stick with MyISAM. However, I would suggest switching to InnoDB, which may use several files to store a table.
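      For what it's worth, changing the engine of an existing table is a single statement (a sketch only; converting a table this size would take considerable time and disk space, and the InnoDB tablespace must be configured to grow large enough):

      ALTER TABLE Attachments ENGINE=InnoDB;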

    • #17176
      peterlaursen
      Participant

      >> Shadow .. it's a myth and a very common misunderstanding!

      From the mysql-documentation:

      With the MyISAM storage engine in MySQL 3.23, the maximum table size was increased to 8 million terabytes (2 ^ 63 bytes).

      But you will have to set the parameters for the table with the “create table” or “alter table” statement, since the default is only about 4 GB.

    • #17177
      Shadow
      Member

      Yes, I knew that, but OS restrictions still apply. Most OSs will not allow you to create files larger than 4GB. That's why I wrote that MyISAM wouldn't do and that merge tables or the InnoDB engine should be used.

    • #17178
      peterlaursen
      Participant

      Neither Windows with NTFS nor Linux with kernel 2.4 or higher and ext3 has this file size limit.

      But since Tom seems to be gone, there is no use discussing this matter ….

    • #17179
      Shadow
      Member

      Mea maxima culpa, I forgot NTFS 😮 . On Monday I personally deleted an overgrown log file of 6GB…

      How do you know he/she is not following our little conversation? 😀
