.sql Files Being Truncated To 2.0Gb
November 12, 2010 at 2:23 pm #12153 - timhavens (Member)
I have a table with about 81M rows in it. My resulting .sql file ends up with about 4M INSERT statements, and the file is truncated to 2.0Gb, actually cutting off the last INSERT it wrote partway through.
I've verified that the file size 'ulimit' is not an issue on this CentOS server (it's set to unlimited, and I've created some very large files outside of SQLyog).
What is causing this?
November 12, 2010 at 2:48 pm #31518 - peterlaursen (Participant)
If you are using HTTP tunnel, I think it could be a PHP setting. What is the PHP “MAX_PACKET” setting (of course only relevant with HTTP tunnel)?
Also, you could try:
1) setting BULK (try 512K or 1M, for instance) and CHUNK size in preferences (refer to: http://www.webyog.com/faq/content/24/101/en/about-chunks-and-bulks.html). See image:
[attachment=1450:bulkchunk.jpg]
2) exporting without using the BULK option at all (it may be very slow).
November 12, 2010 at 2:49 pm #31519 - timhavens (Member)
'timhavens' wrote:
I have a table with about 81M rows in it. My resulting .sql file ends up with about 4M INSERT statements, and the file is truncated to 2.0Gb, actually cutting off the last INSERT it wrote partway through.
I've verified that the file size 'ulimit' is not an issue on this CentOS server (it's set to unlimited, and I've created some very large files outside of SQLyog).
What is causing this?
PS: this is running on 64-bit CentOS Linux and the sja version is 7.11; I'm testing 8.63 now, but it's currently running as I write this.
This looks a lot like a limitation of the sja software, either imposed because of some licensing issue or a misconfiguration on my part.
If anyone has info as to why backup files might be truncated to 2.0Gb, please advise.
November 12, 2010 at 2:52 pm #31520 - timhavens (Member)
'peterlaursen' wrote:
If you are using HTTP tunnel, I think it could be a PHP setting. What is the PHP “MAX_PACKET” setting (of course only relevant with HTTP tunnel)?
Also, you could try:
1) setting BULK (try 512K or 1M, for instance) and CHUNK size in preferences (refer to: http://www.webyog.com/faq/content/24/101/en/about-chunks-and-bulks.html). See image:
[attachment=1450:bulkchunk.jpg]
2) exporting without using the BULK option at all (it may be very slow).
Bulk size is set to 0.
This is not using PHP or an HTTP tunnel; sja has direct access to the DB and is run from the command line using an XML file as its config.
November 12, 2010 at 2:58 pm #31521 - peterlaursen (Participant)
Please do not make assumptions about such license restrictions. If there were any, we would have documented them. Why do you think so without having any reason to do so? Why do you use our tool at all if you think we only want to fool you? STOP this, please. Also, adult people would normally give the recipient a little time to reply to an inquiry.
All restrictions of SJA for Linux as compared to the Windows version are listed here: http://www.webyog.com/faq/content/15/47/en/is-there-any-difference-between-sja-for-linux-and-sja-for-windows.html
.. and there are no others!
Please read my last post and try defining a BULK < 2 MB.
November 12, 2010 at 3:04 pm #31522 - timhavens (Member)
'peterlaursen' wrote:
Please do not make assumptions about such license restrictions. If there were any, we would have documented them. Why do you think so without having any reason to do so? Why do you use our tool at all if you think we only want to fool you? STOP this, please. Also, adult people would normally give the recipient a little time to reply to an inquiry.
All restrictions of SJA for Linux as compared to the Windows version are listed here: http://www.webyog.com/faq/content/15/47/en/is-there-any-difference-between-sja-for-linux-and-sja-for-windows.html
.. and there are no others!
Please read my last post and try defining a BULK < 2 MB.
It was not my intention to insult you at all. I had NOT read about any such restriction and had seen no mention of it, but it seemed to me like it might be a logical explanation. Again, this was not intended as an insult. I apologize if it seemed like one to you.
I have defined a bulk of 0, which is less than 2Mb. Are you saying that I need to define something > 0 and < 2Mb?
Tim
November 12, 2010 at 3:09 pm #31523 - peterlaursen (Participant)
I am not sure that '0' will work; I think it could be ignored (such a value has no meaning, really). Please try with 500 or 1000 (the setting is in KB).
We do not truncate in SJA. At least not to our best knowledge or intention.
November 12, 2010 at 3:51 pm #31524 - timhavens (Member)
'peterlaursen' wrote:
I am not sure that '0' will work; I think it could be ignored (such a value has no meaning, really). Please try with 500 or 1000 (the setting is in KB).
We do not truncate in SJA. At least not to our best knowledge or intention.
Okay, so here is the job I'm trying to run. I've changed the database name to 'database' and the table names to 'table#' for this example.
Notice that table4 in this example is 81 million rows.
I've added bulk and chunk values to the export job now and test-ran it.
What happens is that when the file reaches 2Gb, the job continues to run: queries continue to be pulled from the database, incremented by the chunk value, so I can see that sja is still running the job and pulling queries from the database, but it stops writing to the .sql file at a file size of '2147483647' bytes, or 2.0Gb according to the OS. See details in the code snippet below.
If there is anything else I should be doing with this export, please suggest what I might be doing wrong. I've used SQLyog for several years now and find it to be an excellent tool! I just don't know what I could be doing wrong.
I don't understand how sja continues to pull incremental queries as if it's continuing the job, but no longer writes to the .sql file.
I have some files over 20Gb on this same system, so I'm fairly confident that it's not an OS limitation.
I hope this code snippet shows up properly.
Code:XML FOR EXPORT JOB:
192.168.1.1
admin_user admin_passwd 3306 0
0
table1
table2
table3
table4
table5
table6
yes
yes
yes
yes
500
yes
100000
yes
no
no
no
no
yes
no
no
yes
no
utf8
no
yes
SQLYog
[email protected]
[email protected]
[email protected]
localhost 25 SQLYog Backup: DB: database
SHELL (command line):
Exporting table…Export successful…
Exporting table…Export successful…
Exporting table…Export successful…
Exporting table…
MySQL Queries Being run:
select * from ``.` ` order by ` `,` `,` `,` `,` `,` ` limit 13900000,100000
select * from ``.` ` order by ` `,` `,` `,` `,` `,` ` limit 14200000,100000
select * from ``.` ` order by ` `,` `,` `,` `,` `,` ` limit 15900000,100000
FILE SIZE (stops at 2Gb):
ls -lh
-rw-r--r-- 1 root root 2.0G Nov 12 15:27 tbl_.sql
ls -l
-rw-r--r-- 1 root root 2147483647 Nov 12 15:27 tbl_.sql
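For what it's worth, 2147483647 bytes is exactly 2^31 - 1, the largest file offset a signed 32-bit integer can hold. A quick sanity check of that math, and of whether the sja binary itself is 32-bit (the sja path below is only an example; point it at wherever sja is installed):
echo $(( 2**31 - 1 ))      # prints 2147483647, the signed 32-bit maximum
uname -m                   # x86_64 on this box
file /usr/local/bin/sja    # a 32-bit build reports "ELF 32-bit LSB executable"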
November 12, 2010 at 3:59 pm #31525 - peterlaursen (Participant)
Please tell:
1) On what OS do the server and SJA run? Note that a single 32-bit process, even on 64-bit Windows, is restricted to 2 GB of memory (there is a 'tweak' that will raise this to 3 GB, but no more is possible). There will be similar memory restrictions for 32-bit programs on 64-bit Linux too.
2) It is not clear to me if you are using SJA for Linux on the server or the SJA for Windows bundled with SQLyog on a Windows client. Please specify (and my apology if you already told me and I was not able to understand).
November 12, 2010 at 4:08 pm #31526 - timhavens (Member)
'peterlaursen' wrote:
Please tell:
1) On what OS do the server and SJA run? Note that a single 32-bit process, even on 64-bit Windows, is restricted to 2 GB of memory (there is a 'tweak' that will raise this to 3 GB, but no more is possible). There will be similar memory restrictions for 32-bit programs on 64-bit Linux too.
2) It is not clear to me if you are using SJA for Linux on the server or the SJA for Windows bundled with SQLyog on a Windows client. Please specify (and my apology if you already told me and I was not able to understand).
(LINUX) CentOS release 5.2 (Final) x86_64 on the DB backup server (where sja is running and writing its .sql file); this is a 64-bit system.
and
(LINUX) CentOS release 5.3 (Final) x86_64 on the MySQL 5.0.67 server (a separate server); this is also a 64-bit system.
BOTH are LINUX.
November 12, 2010 at 4:15 pm #31527 - peterlaursen (Participant)
I think you have met a limitation of a 32-bit program on a 64-bit platform (but I am surprised that it just 'stores nowhere' – I'd rather expect the program to be terminated, more or less gracefully, by the OS).
The only solution would be to build a 64-bit SJA. It may not be a big deal, but I can only update you about this after the weekend.
November 12, 2010 at 4:35 pm #31528 - timhavens (Member)
'peterlaursen' wrote:
I think you have met a limitation of a 32-bit program on a 64-bit platform (but I am surprised that it just 'stores nowhere' – I'd rather expect the program to be terminated, more or less gracefully, by the OS).
The only solution would be to build a 64-bit SJA. It may not be a big deal, but I can only update you about this after the weekend.
Okay, thank you for your attention to this issue. I would build it myself if I had the source; I've downloaded the SVN code, but I don't see a makefile for it. So I'll wait for you to try next week, then.
Thanks for your help today.
November 15, 2010 at 1:48 pm #31529 - timhavens (Member)
'peterlaursen' wrote:
I think you have met a limitation of a 32-bit program on a 64-bit platform (but I am surprised that it just 'stores nowhere' – I'd rather expect the program to be terminated, more or less gracefully, by the OS).
The only solution would be to build a 64-bit SJA. It may not be a big deal, but I can only update you about this after the weekend.
Please keep me posted on any news regarding a 64-bit Linux version of sja. Thank you.
November 15, 2010 at 3:08 pm #31530 - peterlaursen (Participant)
This is not a 32-bit memory limitation. We do not populate the memory with the full dump, and if it were, I also think we would crash.
I think it is your file system that will only accept files up to 2GB. I seem to remember that Ext3FS had such a limitation (just like the Windows FAT32 file system has a 4 GB limitation). Now:
1) What is your file system?
2) Do you know positively that you can save files > 2 GB (try a large movie, a DVD image, etc.)?
If this is the problem, you will need to mount a partition with another filesystem supporting large files.
(It could be a quota setting problem for the user too, but I doubt that in this case.)
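If it helps, both questions can be answered from a shell; the /backup path below is only a placeholder for the directory the dump is written to:
df -T /backup                                        # the second column shows the filesystem type (ext3, xfs, ...)
quota -u root                                        # any disk quota for the user running sja
dd if=/dev/zero of=/backup/lf_test bs=1M count=3000  # tries to write ~3 GB as that same user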
November 15, 2010 at 3:14 pm #31531 - peterlaursen (Participant)
Add: actually we write to disk for every 8KB .. but you can check this yourself by monitoring memory with some system tool. I did notice that you said you had larger files, but on the same partition?
Please also use some file system monitor and watch the file grow while backing up. Does it simply stop growing at 2 GB?
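Something as simple as this would do for watching it (the file name is taken from your earlier ls output; adjust the path as needed):
watch -n 10 ls -l tbl_.sql
# or, without watch:
while sleep 10; do ls -l tbl_.sql; done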
November 15, 2010 at 3:27 pm #31532 - timhavens (Member)
'peterlaursen' wrote:
This is not a 32-bit memory limitation. We do not populate the memory with the full dump, and if it were, I also think we would crash.
I think it is your file system that will only accept files up to 2GB. I seem to remember that Ext3FS had such a limitation (just like the Windows FAT32 file system has a 4 GB limitation). Now:
1) What is your file system?
2) Do you know positively that you can save files > 2 GB (try a large movie, a DVD image, etc.)?
If this is the problem, you will need to mount a partition with another filesystem supporting large files.
(It could be a quota setting problem for the user too, but I doubt that in this case.)
I have files up to 23Gb on these systems; we are able to handle very large files. I believe that on Linux a 32-bit application is bound by 32-bit limitations even though the system IS a 64-bit Linux system.
So to answer your questions:
1.) ext3
2.) Yes, I have files over 23Gb on both systems. 'sja' is the only app I've seen on these systems having issues like this.
2.a.) The MySQL server has some table files which are in excess of 35Gb!
I am able to cat the 2Gb .sql file 'sja' makes like this and end up with a 4Gb file: cat test.sql >> test4g.sql; cat test.sql >> test4g.sql
test4g.sql is then a 4Gb file. These systems handle very large files just fine.
The problem, as you yourself considered last week, is most likely that this is a 32-bit app running on a 64-bit system and is bound by 32-bit app restrictions.
I need to either resolve this matter in the next few days or, unfortunately, move on to another solution.
Thanks again for your time.
November 15, 2010 at 3:34 pm #31533 - timhavens (Member)
'peterlaursen' wrote:
Add: actually we write to disk for every 8KB .. but you can check this yourself by monitoring memory with some system tool. I did notice that you said you had larger files, but on the same partition?
Please also use some file system monitor and watch the file grow while backing up. Does it simply stop growing at 2 GB?
Yes, very large files on the same file system, same directory.
I have very large files in the exact same directory; this is not a file system limitation issue.
November 15, 2010 at 3:38 pm #31534 - timhavens (Member)
'timhavens' wrote:
Yes, very large files on the same file system, same directory.
I have very large files in the exact same directory; this is not a file system limitation issue.
Also, as I previously mentioned in detail, I can see the backup file grow to 2Gb, and I can also monitor queries on the database and see that 'sja' continues to request more data but stops writing to the backup file at 2Gb. 'sja' never reports an error about the file not being writable; in fact, it simply continues querying until the large database table has been queried through and then moves on to the next table in the backup job. However, it stops writing large files once they reach 2Gb.
November 15, 2010 at 3:40 pm #31535 - timhavens (Member)
'peterlaursen' wrote:
This is not a 32-bit memory limitation. We do not populate the memory with the full dump, and if it were, I also think we would crash.
I think it is your file system that will only accept files up to 2GB. I seem to remember that Ext3FS had such a limitation (just like the Windows FAT32 file system has a 4 GB limitation). Now:
1) What is your file system?
2) Do you know positively that you can save files > 2 GB (try a large movie, a DVD image, etc.)?
If this is the problem, you will need to mount a partition with another filesystem supporting large files.
(It could be a quota setting problem for the user too, but I doubt that in this case.)
MANY years ago ext3 had a 2Gb limitation; however, it no longer does on 64-bit systems.
November 15, 2010 at 3:56 pm #31536 - peterlaursen (Participant)
I can still only think of truncation happening at the file system level. Are you able to save files > 2 GB *as the same user as SJA*?
November 15, 2010 at 3:58 pm #31537 - timhavens (Member)
'peterlaursen' wrote:
I can still only think of truncation happening at the file system level. Are you able to save files > 2 GB *as the same user as SJA*?
'root' is running these jobs. So yes, root is able to pretty much do anything he wants 🙂
November 15, 2010 at 7:19 pm #31538 - peterlaursen (Participant)
I can still only think of the file system as the problem. We will discuss this in the morning.
November 16, 2010 at 11:38 am #31539 - Supratik (Member)
In order to investigate this effectively, I require some more information. Can you please reply with the following:
1. Are you able to create a file larger than 2GB?
Please execute the following commands and update this post with their output.
This will create an empty file of about 2.8G. Please execute the commands as the same user that is running SJA.
$ dd if=/dev/zero of=emptyfile bs=1MB count=3000
$ ls -lh
2. Do you have disk quota enabled on your system?
Please post the output of the following command.
# quota [username] (Please use the username under which the SJA command is running.)
3. What is the output of “ulimit -a”?
4. Are you able to export data using the “mysqldump” command?
mysqldump -h 192.168.1.1 -u admin_user -padmin_passwd -R -B database > database.sql
The information that you provide will be invaluable in allowing us to isolate the cause of the problem.
Warm Regards
Supratik
November 16, 2010 at 1:20 pm #31540 - timhavens (Member)
'Supratik' wrote:
In order to investigate this effectively, I require some more information. Can you please reply with the following:
1. Are you able to create a file larger than 2GB?
Please execute the following commands and update this post with their output.
This will create an empty file of about 2.8G. Please execute the commands as the same user that is running SJA.
$ dd if=/dev/zero of=emptyfile bs=1MB count=3000
$ ls -lh
2. Do you have disk quota enabled on your system?
Please post the output of the following command.
# quota [username] (Please use the username under which the SJA command is running.)
3. What is the output of “ulimit -a”?
4. Are you able to export data using the “mysqldump” command?
mysqldump -h 192.168.1.1 -u admin_user -padmin_passwd -R -B database > database.sql
The information that you provide will be invaluable in allowing us to isolate the cause of the problem.
Warm Regards
Supratik
1.)
[root test]# dd if=/dev/zero of=emptyfile bs=1MB count=3000
3000+0 records in
3000+0 records out
3000000000 bytes (3.0 GB) copied, 51.0355 seconds, 58.8 MB/s
[root test]# ls -lh
total 2.8G
-rw-r--r-- 1 root root 2.8G Nov 16 13:14 emptyfile
2.)
[root test]# quota root
[root test]#
3.)
[root test]# ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 139264
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 139264
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
4.)
Yes, I am able to use mysqldump, and did this yesterday against the entire database.
4 of the database .sql files ended up being between 20Gb and 35Gb.
I then tar/gzip'd them all down to a 3.9Gb file.
November 17, 2010 at 5:51 am #31541 - nithin (Member)
Thanks for updating.
We are looking into this issue. I tried the same thing, and once the file size reached 2GB, the sja console displayed a message saying the file size had been exceeded and stopped the process.
We will update you once we have checked this in detail.
Thank you
November 17, 2010 at 1:41 pm #31542 - nithin (Member)
Hello,
We are providing you with a special binary in which we expect the file-size limit issue is resolved.
To download, click the link below.
Please try this in your environment and let us know the status.
Thank you
Nithin
November 17, 2010 at 2:09 pm #31543 - timhavens (Member)
'nithin' wrote:
Hello,
We are providing you with a special binary in which we expect the file-size limit issue is resolved.
To download, click the link below.
Please try this in your environment and let us know the status.
Thank you
Nithin
Thank you, I will test it out this morning and report the results here.
November 17, 2010 at 6:03 pm #31544 - timhavens (Member)
'timhavens' wrote:
Thank you, I will test it out this morning and report the results here.
I have been running an export job on a table with about 78M rows in it…and it is now at 2.1Gb and continuing to write. It's taking a very long time (compared to mysqldump), but it's still running and writing, so this appears to be on the right track now.
I'll keep you posted on whether it continues past the next milestone, 4Gb 🙂 If it gets past that point, I expect it will eventually complete without any more issues.
Your fix appears to be working thus far, since the job is further along than it has been on every other attempt.
Tim
November 17, 2010 at 8:02 pm #31545 - peterlaursen (Participant)
I will explain:
We did not set a compiler flag/define a macro allowing for 'large files' in the build script. Even though the 2GB file size limit only applies to the (now outdated) Ext3FS file system, the gcc compiler (at least on the CentOS system where we build SJA) still 'assumes' Ext3FS's limitations unless this flag/macro is defined.
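Roughly, that macro is glibc's large-file-support switch, which makes off_t 64-bit even in a 32-bit build so writes can pass 2 GB. A minimal sketch of the kind of change involved (the source and output names here are only placeholders, not our actual build script):
getconf LFS_CFLAGS    # on a 32-bit glibc this typically prints -D_FILE_OFFSET_BITS=64
gcc -m32 -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE sja_export.c -o sja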
November 18, 2010 at 5:16 am #31546 - nithin (Member)
Quote: It's taking a very long time (compared to mysqldump)
It should not be like this; we did all kinds of benchmarking and never saw that. Have you selected the “bulk insert” option and other options correctly?
I just modified the job file you attached to this thread. Please look at the difference and try running this job file.
Let us know the status.
[attachment=1457:sample_job.xml]
November 18, 2010 at 1:39 pm #31547 - timhavens (Member)
'nithin' wrote:
It should not be like this; we did all kinds of benchmarking and never saw that. Have you selected the “bulk insert” option and other options correctly?
I just modified the job file you attached to this thread. Please look at the difference and try running this job file.
Let us know the status.
[attachment=1457:sample_job.xml]
I tried to download your attachment, but I'm getting this error:
Code: [#10173] We could not find the attachment you were attempting to view.
November 18, 2010 at 3:45 pm #31548 - peterlaursen (Participant)
OK .. it seems we have an issue with permissions/quotas on the server here. Nithin's file was truncated, and new files will not upload. We will check it tomorrow.
If you want the file now, please create a support ticket here: http://webyog.com/support
November 19, 2010 at 11:08 am #31549 - Supratik (Member)
The issue with our forums is now fixed.
Please download the attached job file and let us know the status.
Warm Regards
Supratik