Problem About Execute Batch File
-
January 12, 2006 at 4:21 am · #9435 · sub-zero (Member)
Dear Sir,
Our company bought SQLyog V3.71 for database backup and sync. I usually use Tool -> Export Data As Batch Scripts to make a ***.sql file for backup, with no problem.
Yesterday something went wrong with our database server (Redhat 9 + MySQL 4.1.13). I tried to restore the data from the backup file (almost 980 MB) using Tool -> Execute Batch File, but it hung for a very long time with the message "Query(s) Executed(7) -- Total Bytes(0)". There was no further activity after 2 hours, 3 hours...
I tried SQLyog V5.01 today, with the same problem: it has been hanging for 4 hours with the message (194k/983000K) up to now.
I tried the mysql command-line client like this:
mysql> source ***.sql
Query OK, 0 rows affected (0.01 sec)
Database changed
Query OK, 0 rows affected (0.04 sec)
Query OK, 0 rows affected (0.01 sec)
Query OK, 1870 rows affected (0.13 sec)
Records: 1870 Duplicates: 0 Warnings: 0
Query OK, 0 rows affected (0.02 sec)
Query OK, 0 rows affected (0.01 sec)
ERROR 1153 (08S01): Got a packet bigger than 'max_allowed_packet' bytes
Query OK, 0 rows affected (0.27 sec)
Query OK, 0 rows affected (0.09 sec)
I have tried setting max_allowed_packet=700M, but the problem remained. When I set max_allowed_packet=800M (the server only has 768 MB of RAM), it said "out of memory".
Is this a MySQL problem or a SQLyog problem? I ask because I get no error when I export data from the database using SQLyog.
Thanks for your kind help.
-
January 12, 2006 at 5:20 am · #20210 · Ritesh (Member)
Did you take the backup using the option – “Insert As Bulk Insert Statements”?
-
January 12, 2006 at 10:29 am · #20211 · peterlaursen (Participant)
and …
do you have a lot of VERY BIG variable types such as LONGBLOBS in each row of data?
On which platform/OS is the server running? Is there no option there to dynamically swap physical memory to 'virtual memory'?
-
January 12, 2006 at 1:52 pm · #20212 · sub-zero (Member)
First of all, thanks for your help.
I used SQLyog V3.71 to export the data as SQL statements, with the default settings; these items were selected during backup:
1. Include "USE dbname" statement
2. Add "CREATE DATABASE" statement
3. Include "DROP TABLE" statement
4. Lock all tables for read
5. Create Bulk Insert Statements
The database server is Redhat 9.0 + MySQL V4.1.13.
Yes, I got this info; it says the column is of LONGBLOB type:
filedata longblob
-
January 12, 2006 at 3:02 pm · #20213 · peterlaursen (Participant)
http://dev.mysql.com/doc/refman/4.1/en/sto…quirements.html
says the storage requirement for a LONGBLOB is 2^32+4 bytes. The 4 bytes are for addressing; the 2^32 bytes are for data.
2^32 Bytes = (2^32)/(2^10)/(2^10)/(2^10) GB = 4 GB.
That is how much data ONE SINGLE LONGBLOB can hold.
And with 'Bulk Insert' you have more rows in one insert statement.
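To illustrate what that means for packet size, here is a sketch with a made-up table and placeholder values (not your actual data):
Code:
-- Bulk Insert: several rows travel in ONE statement,
-- so a single packet must hold them all
INSERT INTO docs (id, filedata) VALUES (1, '...'), (2, '...'), (3, '...');
-- Single-row inserts: one statement (and one packet) per row
INSERT INTO docs (id, filedata) VALUES (1, '...');
INSERT INTO docs (id, filedata) VALUES (2, '...');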
How is your swap partition mounted? I don't know if it will help to increase it (on Windows I believe it would). Maybe CalEvans knows?
-
January 12, 2006 at 3:42 pm · #20214 · sub-zero (Member)
Thanks.
Here is our server's info:
[root@edms root]# free
             total       used       free     shared    buffers     cached
Mem:        771804      46216     725588          0        880      10780
-/+ buffers/cache:      34556     737248
Swap:      1566328      29668    1536660
[root@edms root]#
I found another topic similar to my problem; here is what it said:
I have two computers, both running SQLYog 4.03. Windows XP Home and W98 SE.
If I run a batch file, it will pause every 64k, and sometimes stop with an error message. This happens with different hosts.
At the moment, I have to split larger files – I just split a 482k file into 9, to stay under this limit.
Any suggestions?
Is there any tool that can split the **.sql file into small pieces?
Thanks again for your help.
-
January 12, 2006 at 4:12 pm · #20215 · peterlaursen (Participant)
I think these are two different situations ...
YOUR problem is that there is one single INSERT statement that needs a bigger buffer than you can allocate. It is the max_allowed_packet issue.
THE OTHER PERSON's problem is with the file size, not the row/record size. I don't know the reason for that one. It would be nice if you had posted a link to that thread, but it could be a PHP configuration issue.
It looks like you have free swap memory, but I don't know if this buffer can be swapped on Linux.
It should not be hard to find the INSERT statement that causes the trouble manually. It is probably the first one in the biggest table (the one with LONGBLOBs).
Try this SQL:
Code:
CREATE TABLE test_table SELECT * FROM that_table_that_troubles LIMIT x,1;
where x is the offset of a row that you know is particularly big. That will create a new table containing only that row. Export that table. How big is it?
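If you do not know which row is the big one, a query like this should list the biggest ones first (a sketch; I am assuming an id column alongside the filedata column name you posted):
Code:
SELECT id, LENGTH(filedata) AS bytes
FROM that_table_that_troubles
ORDER BY LENGTH(filedata) DESC
LIMIT 5;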
I know of no tool to convert 'Bulk Inserts' to 'Single Row Inserts'. It is not a big deal to write a small program that does it if you are familiar with coding. But that was not the idea, I know!
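By the way: max_allowed_packet is a dynamic server variable, so it should also be possible to raise it without a restart (a sketch, not tested on your 4.1.13; editing my.cnf and restarting is the safer route, and the SUPER privilege is needed):
Code:
-- affects NEW connections only, so reconnect before retrying the import
SET GLOBAL max_allowed_packet = 268435456; -- 256M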
-
January 12, 2006 at 4:29 pm · #20216 · peterlaursen (Participant)
BTW:
I have PM'ed CalEvans and asked for his assistance. He probably knows more about MySQL on Linux than anybody else here, so if there is some way to configure away this problem, he will know.
-
January 13, 2006 at 2:07 pm · #20217 · sub-zero (Member)
Thanks for your help.
I am trying to add a swap file on the Redhat system.
Maybe that will work.
-
January 16, 2006 at 1:36 pm · #20218 · sub-zero (Member)
Hi, thanks for all your kind help. I have solved this problem.
Firstly, I selected the right my.cnf template. Please read these files carefully: my-huge.cnf, my-large.cnf, etc. have different settings according to your RAM and MySQL usage. In my case only my-large.cnf worked, because we only have 768 MB of RAM.
Secondly, I had to set max_allowed_packet=700M; I do not know why only this value works.
Thirdly, I increased the swap size from 1.5 GB to 2.5 GB using a swap file on Redhat 9; you can search for instructions on http://www.redhat.com.
Lastly, I executed the backup batch file using the Webmin MySQL manager.
Hope this is useful for others.
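If anyone wants to check which value the server actually picked up after editing my.cnf, it can be queried from the mysql client:
Code:
SHOW VARIABLES LIKE 'max_allowed_packet';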
Another thing: do I need to unselect "Create Bulk Insert Statements" when I back up data next time?
:))
-
January 16, 2006 at 2:02 pm · #20219 · Ritesh (Member)
Quote: Another thing: do I need to unselect "Create Bulk Insert Statements" when I back up data next time?
Yes, if you have a lot of rows in your table.
-
January 16, 2006 at 10:12 pm · #20220 · peterlaursen (Participant)
BTW: did you consider upgrading SQLyog?
The Backup Tool has also been improved quite a lot since 3.71, though that will not change this particular issue.