SQLyog › Sync tools, Migration, Scheduled Backup and Notifications › Chunk Size In Data Sync
May 22, 2014 at 4:22 pm #13222
osolo (Member)
Hi,
Does anybody know if there is a way to increase the chunk size in the data sync?
I believe the data sync tool compares checksums of chunks of about 1000-2000 rows at a time. If you have millions of rows and the databases are mostly the same, this takes a very long time.
I would like to increase the chunk size to about 200000 rows, or better yet, have the tool be adaptive (if a lot of chunks are the same, automatically keep increasing the chunk size), roughly as in the sketch below.
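To make the adaptive part concrete, here is a minimal sketch in plain Python (this is not SQLyog code; checksums_match and copy_rows are hypothetical placeholders for whatever per-chunk checksum query and row transfer the tool actually performs):

def adaptive_sync(total_rows, checksums_match, copy_rows,
                  start_size=2000, max_size=200000):
    # Grow the chunk size while consecutive chunks match; fall back to a
    # small chunk on a mismatch so only the differing region is re-scanned.
    offset, size = 0, start_size
    while offset < total_rows:
        size = min(size, total_rows - offset)  # clamp the final chunk
        if checksums_match(offset, size):
            offset += size
            size = min(size * 2, max_size)     # matching run: double the chunk
        elif size > start_size:
            size = start_size                  # big chunk differed: retry finely
        else:
            copy_rows(offset, size)            # smallest chunk differs: resync it
            offset += size

The point is just that long runs of identical data would be verified in ever-larger chunks, so the mostly-identical case would cost far fewer round trips.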
Thanks!
May 23, 2014 at 6:38 am #34958
sathish (Member)
Please refer to this FAQ: http://faq.webyog.com/content/27/114/en/introduction-to-the-_sqlyog-job-agent_-sja.html
Data sync jobs additionally support a '-r' parameter (it is ignored for other job types). It only has an effect when a non-empty source table is synced to an empty target table, and it defines how large the chunks fetched from the source server should be. For a (source) HTTP connection the default is 1000 rows if the -r option is not specified explicitly. -r2000 (note that the number should not be quoted) will copy chunks of 2000 rows from the source.
If no -r parameter is specified, SQLyog fetches all rows from the source server in a single SELECT query, which may cause memory exhaustion on the client machine. Also, if the -r setting (or the absence of one) would result in chunks larger than the 'max_allowed_packet' setting on the target, that 'max_allowed_packet' is still respected.
In short: if your sync job syncs a large non-empty source table to an empty target table, you can use the -r parameter to control memory consumption on the client. Note that -r is a command line option only; it is not supported from the GUI wizard.
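For example, if the wizard saved the sync job as datasync.xml (the file name here is just an assumption for illustration), running it with a 200000-row chunk size would look like:

sja "datasync.xml" -r200000

(On Windows the agent binary is sja.exe, installed alongside SQLyog.) The resulting chunks are still capped by the target's max_allowed_packet, as described above.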
May 23, 2014 at 12:30 pm #34959
osolo (Member)
Thank you. This is exactly what I was looking for.