SQLyog › Bugs / Feature Requests › Sql Dump Fails To Restore
August 14, 2009 at 6:40 pm · #29466 · Brian H. (Member)
I did a little more testing and discovered that if I do a mysqldump to include the views (and everything else in the DB), the dump file is created without errors. If I take that dump file and import it into a newly created database with the mysql command-line tools, it imports correctly, with no errors.
When I then go into SQLyog and try to export the VIEWS from the newly created database, I get the same error as before. Below are the commands I used to test in mysql:
# mysqldump -uuser -p --skip-opt --all --complete-insert mrm > mrmfulltest.sql
(in mysql) CREATE DATABASE exporttest;
# mysql -uuser -p exporttest < mrmfulltest.sql
-
November 8, 2009 at 9:31 pm · #11744 · smine (Participant)
- used Backup Database As SQL Dump (Ctrl+Alt+E) and the file was created successfully.
- used Restore From SQL Dump (Ctrl+Shift+Q) and received “Error Code: 1153 - Got a packet bigger than 'max_allowed_packet' bytes”
Tools > Show > Variables says both the export server and the import server have max_allowed_packet=1047552. the insert statement that failed is 1091575 bytes long. the longest insert statement in the file is 1177199 bytes.
why would SQLyog create a dump that it cannot restore? i spent hours editing the file to reduce the length of insert statements, and the import worked.
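in hindsight, a simpler workaround might have been to raise the limit on the import server instead of editing the file. a sketch, assuming SUPER privilege on that server (the 16M value is only an example):
SHOW VARIABLES LIKE 'max_allowed_packet';
SET GLOBAL max_allowed_packet = 16 * 1024 * 1024; -- 16M; new connections pick up the new value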
i know you always ask us to send you the file. please not this time, it is 160MB. 😉
-
November 9, 2009 at 8:15 pm · #29957 · smine (Participant)
nithin wrote on Nov 8 2009, 10:18 PM: “If you have checked the option 'Create Bulk Insert Statement' in the Export dialog, can you please export that table by unchecking that Bulk Insert option? And please try to import this.”
yes, this worked. (well, i think it worked. i don't want to wait 3 hours for the import to finish. it has been running for about 25 minutes with no errors and has passed the earlier point of failure.)
i don't think i deliberately turned on BULK INSERT, i think it was the default, but maybe i am wrong.
-
November 9, 2009 at 8:53 pm · #29958 · peterlaursen (Participant)
You are right: BULK INSERT is the default, for the same performance reason you mentioned here. Large backups with single INSERTs are hopeless to restore.
My best guess/theory is still that we calculate column LENGTH() client-side and not server-side. The length of a DOUBLE(24,8) with a value of “2.1” would be calculated as 3 bytes client-side, but 10 bytes (“2” + “.” + [8 decimals]) server-side. For each occurrence of such a type in a BULK INSERT statement we would accumulate a calculation error of 7 bytes (if my theory is correct).
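A minimal way to test this theory against a live server (the table and column names below are made up):
CREATE TABLE len_test (d DOUBLE(24,8));
INSERT INTO len_test VALUES (2.1);
SELECT d, LENGTH(d) FROM len_test; -- server shows 2.10000000, so LENGTH() = 10, while the literal '2.1' is only 3 bytes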
But please let developers check this theory of mine tomorrow.
-
November 9, 2009 at 11:13 pm · #29959 · peterlaursen (Participant)
I also want to comment on your extensive use of DOUBLE types. DOUBLEs (and FLOATs) have significant numerical imprecision. Most often a DECIMAL is a better option if stored values are not in the 'microscopic' or 'astronomical' range. For MONEY/CURRENCY and similar data, DECIMAL(19,4) is the usual recommendation.
FLOATING POINT types in MySQL create problems. Refer to: http://dev.mysql.com/doc/refman/5.0/en/pro…with-float.html
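A one-line illustration: MySQL parses 0.1 and 0.2 as exact DECIMAL literals, while the same values written as 0.1e0 and 0.2e0 are approximate DOUBLEs:
SELECT 0.1 + 0.2 = 0.3; -- DECIMAL arithmetic: returns 1 (true)
SELECT 0.1e0 + 0.2e0 = 0.3e0; -- DOUBLE arithmetic: returns 0 (false)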
But no excuse for a possible bug in SQLyog of course!
-
November 9, 2009 at 11:35 pm · #29960 · smine (Participant)
good point about problems with DOUBLE. some of our values are monetary, others are conversion factors and ratio values which may benefit from greater precision.
these databases are rather old and pre-date MySQL 5.0.3, so even using DECIMAL back then still would have produced floating point errors.
i will pass along to our designers/architects the information you provided. it's unlikely we will make changes in existing tables, but it may be helpful for new tables.
thanks!
-
November 10, 2009 at 7:41 am · #29961 · nithin (Member)
Hello,
We are analyzing this issue.
Quote: “it has been running for about 25 minutes with no errors and has passed the earlier point of failure.”
Do you mean the SQL dump with single INSERTs imported fine for 25 minutes and then you stopped it? We can't tell whether the 'failure point' has really been passed; it may fail at any point.
Please clarify this statement from your first post:
Quote: “the insert statement that failed is 1091575 bytes long. the longest insert statement in the file is 1177199 bytes.”
Is the size you mentioned the size of a single INSERT statement (for a single row) or of a BULK INSERT statement? And how did you get it?
And please do one more thing: we have an option to specify the BULK INSERT size in Preferences itself. Set 500 (K) in Preferences, do the EXPORT (with the BULK INSERT option checked), and import the dump.
Please see the screenshot for how to set this option in Preferences.
-
November 10, 2009 at 5:01 pm · #29962 · smine (Participant)
nithin wrote on Nov 9 2009, 11:41 PM: “Do you mean the SQL dump with single INSERTs imported fine for 25 minutes and then you stopped it? We can't tell whether the 'failure point' has really been passed; it may fail at any point.”
in 25 mins it imported more rows with Single Insert than there were sets of values in the Bulk Insert. it always fails on the first Bulk Insert, but with Single Insert it inserted data past all the values in the first Bulk Insert.
nithin wrote on Nov 9 2009, 11:41 PM: “Is the size you mentioned the size of a single INSERT statement (for a single row) or of a BULK INSERT statement? And how did you get it?”
the size is for Bulk Insert (e.g. INSERT INTO table (column_list) VALUES (value_list_1),(value_list_2),(value_list_3),…). i opened the dump file in a text editor and moved the cursor to the end of the insert statement, and the editor told me the line length.
nithin wrote on Nov 9 2009, 11:41 PM: “We have an option to specify the BULK INSERT size in Preferences itself. Set 500 (K), do the EXPORT (with the BULK INSERT option checked), and import the dump.”
this was successful!
the old setting was “Get server default”. i changed to 500 and did Export with Bulk Insert, then Import. it took about 6 mins for about 350,000 rows (and 9 indexes) in 211 insert statements. the longest insert statement line length in the dump file is about 809000 bytes. total size of dump file about 156700KB.
i didn't know about all of the custom settings. very useful!
-
November 11, 2009 at 2:20 pm · #29963 · nithin (Member)
Hello,
There was an issue in calculating the size of an INSERT: the length of the first 'values set' in each INSERT statement was never being counted.
It has been fixed in this special build. You should now be able to EXPORT/IMPORT with the 'Get server default' option checked in Preferences, so the previously failing case should work for you now.
http://www.webyog.com/downloads/betas/not_…/SQLyog82.1.exe
Could you please verify the fix in your environment and let us know the status?
Thanks
Nithin
-
November 11, 2009 at 5:24 pm · #29964 · smine (Participant)
nithin wrote on Nov 11 2009, 06:20 AM: “There was an issue in calculating the size of an INSERT. … It has been fixed in this special build. … Could you please verify the fix in your environment and let us know the status?”
it works! thank you!
i am curious how you determine how many value sets can fit in a bulk insert statement. using the actual data sizes from the export, it looks like the lines could fit many more sets per statement. but using the maximum possible lengths of the data fields, it seems like there are too many sets on each statement.
-
November 12, 2009 at 5:15 am · #29965 · nithin (Member)
Hello,
Thanks for confirming the fix.
Quote: “how you determine how many value sets can fit on a bulk insert statement”
It depends on 'max_allowed_packet' or the custom bulk size. Before adding each row's values to an INSERT we check whether doing so would cross the 'max_allowed_packet' size; if it would, we start a new INSERT statement.
But at least one value set always goes into an INSERT statement regardless of this check.
The size of each row can be found using a query like:
SELECT LENGTH(`col_1`) + LENGTH(`col_2`) + … + LENGTH(`col_n`) FROM table;
Instead of running that query, in code we use the corresponding MySQL API.
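So, for example, the largest row in a table (and hence whether a single row alone approaches the limit) could be checked with something like this; the table and column names are hypothetical:
SELECT MAX(LENGTH(`col_1`) + LENGTH(`col_2`)) AS max_row_bytes FROM `t`;
-- a row near 'max_allowed_packet' bytes will end up alone in its own INSERT statement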
-
November 12, 2009 at 8:48 am · #29966 · peterlaursen (Participant)
Also note that character length and byte length are not necessarily the same. Non-ASCII characters are multi-byte in utf8, so the length as displayed in an editor may fool you.
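A quick demonstration, assuming a utf8 connection:
SELECT CHAR_LENGTH('é'), LENGTH('é'); -- 1 character, but 2 bytes in utf8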