FR: Increase Speed on SQL Dump

    • #12914
      Zaph
      Participant

      Hi,

      For a new project I'm dealing with a database that contains a table with a large number of records (almost 5 million rows; 3 ints and 2 decimals per row), and when creating an SQL dump with bulk inserts, the export gets progressively slower. Unfortunately I cannot transfer the database any other way.

      The dump starts at a steady 10,000 records per second or so, but near the end it's more like 1,000 records per 10 seconds; it's been running for 4 hours now, and there are still 500K rows to go.

      It would be nice if there were a way to speed things up, or to fix whatever's causing this.

      Thanks!

    • #34130
      peterlaursen
      Participant

      1) Are you sure it is not server slowness you are encountering? There can be several server configuration settings affecting this. Is another client installed on the same machine as SQLyog faster?

      2) What are your BULK and CHUNK settings? Refer to http://faq.webyog.com/content/24/101/en/about-chunks-and-bulks.html (a rough sketch of the difference follows below this list).

      3) What happens to the client machine's memory consumption? Does it increase for the SQLyog process, and has the system started swapping? Please check in Task Manager.
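
      For context, a minimal sketch of what the BULK setting changes in the generated SQL. This is illustrative only; it borrows the `vertices` table shown later in this thread, and the exact statements SQLyog emits may differ:

      -- Without bulk inserts: one INSERT statement per row.
      INSERT INTO `vertices` (`areaid`, `sortorder`, `y`, `x`) VALUES (10, 1, 52.123456789, 4.123456789);
      INSERT INTO `vertices` (`areaid`, `sortorder`, `y`, `x`) VALUES (10, 2, 52.123456790, 4.123456790);

      -- With bulk inserts: many rows combined into one INSERT statement, up to
      -- the BULK size in bytes. The 'server default' size presumably relates to
      -- max_allowed_packet, which can be checked with:
      SHOW VARIABLES LIKE 'max_allowed_packet';

      INSERT INTO `vertices` (`areaid`, `sortorder`, `y`, `x`) VALUES
        (10, 1, 52.123456789, 4.123456789),
        (10, 2, 52.123456790, 4.123456790);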

    • #34131
      Zaph
      Participant

      1) Yes, otherwise the first rows would be just as slow. It's running on my hex-core desktop PC with 16GB memory, so that should be fine.

      2) BULK is 'server default', and CHUNK is 1000 rows

      3) I didn't check it at the start, but right now it's at 16MB, so nothing spectacular there

    • #34132
      peterlaursen
      Participant

      “otherwise the first rows would be just as slow”. Why do you think so? There can be buffer/cache issues in lots of places (in MySQL, on the remote network, or elsewhere). Anyway, we will check this.

    • #34133
      sathish
      Member

      Hello,

      We are not able to reproduce this on our local server. It takes less than 30 seconds to export 5 million rows. Could you please let us know which MySQL server version you are using? Also, if possible, you could provide a sample dump of the table where this is reproducible. You can create a private support ticket by sending a mail to [email protected]

      Regards,

      Sathish

    • #34134
      peterlaursen
      Participant

      Also, if you send your MySQL configuration (my.cnf/my.ini) it might help; there could be a specific configuration setting at play. Is this a server you manage yourself or a shared server?

    • #34135
      peterlaursen
      Participant

      BTW: also tell us if you are using tunnelling. In particular with HTTP tunnelling to a remote, shared server, there could be restrictions on large transfers in the networking settings of the remote systems.

    • #34136
      Zaph
      Participant

      MySQL is version 5.5, and the relevant entries in the INI are these (probably all default):

      default-storage-engine=INNODB

      sql-mode="STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION"

      max_connections=100

      query_cache_size=0

      table_cache=256

      tmp_table_size=35M

      thread_cache_size=8

      #*** INNODB Specific options ***

      innodb_additional_mem_pool_size=3M

      innodb_flush_log_at_trx_commit=1

      innodb_log_buffer_size=2M

      innodb_buffer_pool_size=107M

      innodb_log_file_size=54M

      innodb_thread_concurrency=14
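
      For reference, the values actually in effect can differ from the INI file; they can be confirmed from any client with standard MySQL statements, e.g.:

      SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
      SHOW VARIABLES LIKE 'innodb_flush_log_at_trx_commit';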

      As I said, it's not a server; this is happening on my workstation. So no tunneling required (I don't ever use that anyway).

      Sending an example dump would require another 4 hours, so I'm not that keen on doing that. The table looks like the definition below, and contains 4.8 million rows. Nothing fancy.

      CREATE TABLE `vertices` (
        `id` int(11) NOT NULL AUTO_INCREMENT,
        `areaid` int(11) DEFAULT NULL,
        `sortorder` int(11) DEFAULT NULL,
        `y` decimal(12,9) DEFAULT NULL,
        `x` decimal(12,9) DEFAULT NULL,
        PRIMARY KEY (`id`),
        KEY `areaid` (`areaid`)
      )

    • #34137
      peterlaursen
      Participant

      Still, it is not reproducible here with either SQLyog 10.51 or 11.0 beta 2. We inserted 5 million rows into a table with your table definition. There is no slowdown after a while like you experience, and the result arrives in 30 seconds.

      We are using MySQL 5.5.18 64 bit. What is the *exact* server version you use (“SELECT VERSION();” will tell)? Is yours older or newer? 64 bit or 32 bit?
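
      For anyone wanting to reproduce this, one rough way to generate comparable test data (the seed values below are arbitrary placeholders) is to insert a single row into a table with the definition above and then double it repeatedly:

      -- Seed one row (arbitrary coordinate values).
      INSERT INTO `vertices` (`areaid`, `sortorder`, `y`, `x`)
      VALUES (1, 1, 52.123456789, 4.123456789);

      -- Each run doubles the row count; 22-23 runs reach the 4-5 million range.
      INSERT INTO `vertices` (`areaid`, `sortorder`, `y`, `x`)
      SELECT `areaid`, `sortorder`, `y`, `x` FROM `vertices`;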

    • #34138
      Zaph
      Participant

      I'm using 64 bit MySQL version 5.5.15. If there's anything you'd like me to test, I'd be more than happy to, of course.

    • #34139
      peterlaursen
      Participant

      Well .. we are stuck and don't know how to make progress here. But MySQL 5.5.15 is pretty old by now, and you could consider upgrading to 5.5.29 (the latest 5.5 available). Only the exercise itself can tell whether it has anything to do with what we are discussing here.

    • #34140
      Zaph
      Participant

      Okay, I'll see if I can upgrade without losing all my databases… 😉

      I thought it was a more or less known issue, but now it looks like it's not. Luckily I don't have to export these tables very often, so in a worst-case scenario I might just have to learn to live with it. At least the dump isn't slowing my system down, so I can just let it run in the background.

      Thanks for now, and if I find anything, I'll be sure to post it here!

    • #34141
      Zaph
      Participant

      Hmm… I just backed up all my databases, resulting in .sql files completely comparable to what the SQL dump option generates, and that took a little over 2 minutes for 36 databases and 330MB worth of SQL. I used the default settings, including “Get server default” for BULK and “Don't break up in chunks”. This backup includes the aforementioned table, albeit a modified version with only 650K rows.

      Is that a completely different method of generating the SQL?

    • #34142
      peterlaursen
      Participant

      “This backup includes the aforementioned table, however a modified version with only 650K rows.” Are you saying that the dump of this particular table is incomplete, or did you delete rows in between?

      There should not be different methods for generating the SQL.

      The problem is that we are not sure we understand everything you are telling us. We would like to see it happen before our own eyes, and would ask for a shared session with you; it will be difficult to make progress without one. Could you please send a mail to [email protected] so that we can arrange this in private?
