January 31, 2013 at 3:05 pm #12914
For a new project I'm dealing with a database that contains a table with a large number of records (almost 5 million rows, 3 ints and 2 decimals per row), and when creating an SQL dump with bulk inserts the export gets progressively slower. Unfortunately I cannot transfer the database any other way.
The dump starts at a steady 10,000 records per second or so, but near the end it's more like 1,000 records per 10 seconds. It's been running for 4 hours now, and there are still 500K rows to go.
It would be nice if there were a way to speed things up, or to fix whatever's causing this.
January 31, 2013 at 3:30 pm #34130
1) Are you sure it is not server slowness you are encountering? There can be several server configuration settings affecting this. Is another client installed on the same machine as SQLyog faster?
2) What are your BULK and CHUNK settings? Refer to http://faq.webyog.com/content/24/101/en/about-chunks-and-bulks.html
3) What happens to the client machine's memory consumption? Does it increase for the SQLyog process, and has the system started swapping? Please check in Task Manager.
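[Editor's note: the checklist above hinges on whether the client buffers the whole result set or fetches it chunk by chunk; when memory stays flat, chunked fetching is in play. A minimal sketch of that pattern, using Python's built-in sqlite3 as a self-contained stand-in for a MySQL connection (the table and values are made up for illustration):]

```python
import sqlite3

# Throwaway in-memory table standing in for the real multi-million-row one.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE vertices (id INTEGER PRIMARY KEY, areaid INT, x REAL, y REAL)"
)
conn.executemany(
    "INSERT INTO vertices (areaid, x, y) VALUES (?, ?, ?)",
    [(i % 10, i * 0.1, i * 0.2) for i in range(10_000)],
)

def dump_in_chunks(conn, chunk_size=1000):
    """Yield rows in fixed-size chunks so client memory stays flat."""
    cur = conn.execute("SELECT id, areaid, x, y FROM vertices ORDER BY id")
    while True:
        rows = cur.fetchmany(chunk_size)  # never hold more than one chunk
        if not rows:
            break
        yield rows

total = sum(len(chunk) for chunk in dump_in_chunks(conn))
print(total)  # 10000
```

If a dump slows down while the process's memory keeps growing, the client is likely accumulating rows instead of releasing each chunk after writing it out.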
January 31, 2013 at 3:48 pm #34131
1) Yes, otherwise the first rows would be just as slow. It's running on my hex-core desktop PC with 16GB memory, so that should be fine.
2) BULK is 'server default', and CHUNK is 1000 rows
3) I didn't check it at the start, but right now it's at 16MB, so nothing spectacular there
February 1, 2013 at 4:10 am #34132
“Otherwise the first rows would be just as slow” – why do you think so? There can be buffer/cache issues in lots of places (in MySQL, on the remote network, or elsewhere). Anyway, we will check this.
February 1, 2013 at 8:48 am #34133 (sathish)
We are not able to reproduce this on our local server. It takes less than 30 seconds to export 5 million rows. Could you please let us know what MySQL server version you are using? Also, if possible, you could provide a sample dump of the table where this is reproducible. You can create a private support ticket by sending a mail to our support address at webyog.com.
February 1, 2013 at 9:29 am #34134
Also, if you send your MySQL configuration (my.cnf/my.ini) it might help. There could be a specific configuration setting at play. Is this a server you manage yourself or a shared server?
February 1, 2013 at 10:05 am #34135
BTW: also tell us if you are using tunneling. In particular with HTTP tunneling to a remote and shared server, there could be restrictions on large transfers in the networking settings of the remote systems.
February 1, 2013 at 1:08 pm #34136
MySQL is version 5.5, and the relevant entries in the INI are these (probably all default):
#*** INNODB Specific options ***
As I said, it's not a server; it's my workstation this is happening on. So no tunneling required (I never use that anyway).
Sending an example dump would require another 4 hours, so I'm not that keen on doing that. The table looks like the below, and contains 4.8 million rows. Nothing fancy.
CREATE TABLE `vertices` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`areaid` int(11) DEFAULT NULL,
`sortorder` int(11) DEFAULT NULL,
`y` decimal(12,9) DEFAULT NULL,
`x` decimal(12,9) DEFAULT NULL,
PRIMARY KEY (`id`),
KEY `areaid` (`areaid`)
);
February 6, 2013 at 4:33 am #34137
Still, it is not reproducible here with either SQLyog 10.51 or 11.0 beta 2. We inserted 5 million rows into a table with your table definition. There is no slowdown after a while like the one you experience, and the result arrives in 30 seconds.
We are using MySQL 5.5.18 64 bit. What is the *exact* server version you use (“SELECT VERSION();” will tell)? Is yours older or newer? 64 bit or 32 bit?
February 6, 2013 at 10:18 am #34138
I'm using 64 bit MySQL version 5.5.15. If there's anything you'd like me to test, I'd be more than happy to, of course.
February 6, 2013 at 11:48 am #34139
Well… we are stuck and don't know how to make progress here. But MySQL 5.5.15 is now pretty old, and you could consider upgrading to 5.5.29 (the latest 5.5 available). Only the exercise itself can tell whether the version has anything to do with what we are discussing here.
February 6, 2013 at 12:11 pm #34140
Okay, I'll see if I can upgrade without losing all my databases… 😉
I thought it was a more or less known issue, but now it looks like it's not. Luckily I don't have to export these tables very often, so in a worst-case scenario I might just have to learn to live with it. At least the dump isn't slowing my system down, so I can just let it run in the background.
Thanks for now, and if I find anything, I'll be sure to post it here!
February 6, 2013 at 1:48 pm #34141
Hmm… I just backed up all my databases, resulting in .sql files completely comparable to what the SQL dump option generates, and that took a little over 2 minutes for 36 databases and 330MB worth of SQL. I used the default settings, including “Get server default” for BULK and “Don't break up in chunks”. This backup includes the aforementioned table, although a modified version with only 650K rows.
Is that a completely different method of generating the SQL?
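[Editor's note: per the FAQ linked earlier in the thread, “Get server default” sizes each bulk INSERT from the server's max_allowed_packet rather than using a fixed row count. The packing logic can be sketched as follows; the function name and the byte budget are hypothetical, chosen only to illustrate the idea:]

```python
def pack_bulks(row_literals, max_bytes):
    """Pack row value literals into bulk INSERTs of at most max_bytes.

    Illustrative only: in SQLyog the budget would come from the
    server's max_allowed_packet; here it is just a parameter.
    """
    prefix = "INSERT INTO `vertices` VALUES "
    statements, current = [], []
    size = len(prefix)
    for lit in row_literals:
        extra = len(lit) + 1  # +1 for the separating comma
        if current and size + extra > max_bytes:
            statements.append(prefix + ",".join(current) + ";")
            current, size = [], len(prefix)
        current.append(lit)
        size += extra
    if current:  # flush the final, possibly short, statement
        statements.append(prefix + ",".join(current) + ";")
    return statements

rows = [f"({i},{i % 7},{i},0.1,0.2)" for i in range(100)]
bulks = pack_bulks(rows, max_bytes=512)
print(len(bulks), all(len(s) <= 512 for s in bulks))
```

A larger byte budget means fewer, longer statements, which is usually faster to import; it does not by itself explain a dump that degrades over time.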
February 7, 2013 at 6:20 am #34142
“This backup includes the aforementioned table, however a modified version with only 650K rows.” Are you saying that the dump of this particular table is incomplete, or did you delete rows in between?
There are no different methods for generating the SQL.
The problem is that we are not certain we understand everything you are telling us. We would like to see it happen before our own eyes and would ask for a shared session with you. It will be difficult to make progress without one. Could you please send a mail to our support address at webyog.com so that we can arrange this in private?