
Forum Replies Created

Viewing 15 posts - 1 through 15 (of 18 total)
  • in reply to: Move To Another Pc #35636
    Zaph
    Participant

    Solved – thanks again, Abhishek!

    in reply to: Restore Session And Window Focus #35124
    Zaph
    Participant

    Just a reminder that both still exist – #2 still occurs on Windows 10. If I have an Explorer window open and focused, for example, and click my SQLYog shortcut on the taskbar, SQLYog is displayed directly behind that window.

    in reply to: Move To Another Pc #35635
    Zaph
    Participant

    Thanks – I just sent you an email!

    in reply to: Move To Another Pc #35633
    Zaph
    Participant

    I sent one (not two, my bad memory) to sales@webyog.com on January 12th at 14:39 CET, in response to an expiration email.

    in reply to: Sql Dump Slow On Tables With Many Rows #35394
    Zaph
    Participant

    I wonder if it could be some memory exhaustion on the server causing swapping, or some user quota being exceeded. Please try to reduce the CHUNK setting – refer to http://faq.webyog.com/content/24/101/en/about-chunks-and-bulks.html

     

    Actually it turned out that raising the CHUNK setting is the solution. Queries using LIMIT [offset], [count] get progressively slower as [offset] increases, because the server has to scan past all [offset] rows before it can return the next [count]. Larger CHUNKs mean less scanning, which means faster execution.

     

    And when I say faster, I mean faster – where the original attempt with CHUNK=25 took almost 14 hours, a test today with CHUNK=10000 dumped the same database (resulting in a 1.25GB .sql file) in a mere 14 minutes (!).

     

    So, as far as I’m concerned: problem solved. Still a little puzzled why no one said “Oh, that’s easy – just increase CHUNK!”
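    The numbers above line up with how LIMIT [offset], [count] behaves: the server walks past every one of the [offset] rows on each query, so the total work shrinks dramatically with bigger chunks. A rough back-of-the-envelope sketch (my own illustration, not SQLyog internals):

    ```python
    # Illustration only: total rows the server must touch when a dump
    # pages through a table with "LIMIT offset, chunk" queries.  Each
    # query walks past `offset` rows before returning `chunk` of them,
    # so the total work grows roughly quadratically in rows/chunk.

    def rows_walked(total_rows: int, chunk: int) -> int:
        """Rows touched to dump `total_rows` in `chunk`-sized pages."""
        return sum(min(offset + chunk, total_rows)
                   for offset in range(0, total_rows, chunk))

    N = 4_800_000  # the table size mentioned later in the thread
    small = rows_walked(N, 25)
    large = rows_walked(N, 10_000)
    print(f"CHUNK=25     touches {small:,} rows")
    print(f"CHUNK=10000  touches {large:,} rows")
    print(f"work ratio: {small / large:.0f}x")
    ```

    By this rough model the work ratio for a 4.8M-row table is around 400x – larger than the measured 60x wall-clock speedup, which seems plausible since per-row output costs don't shrink with chunk size.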

    in reply to: Sql Dump Slow On Tables With Many Rows #35393
    Zaph
    Participant

    Tried both repair and optimize on myisam and innodb tables, to no avail. Email sent!

    in reply to: Sql Dump Slow On Tables With Many Rows #35391
    Zaph
    Participant

    Yes, I’d be happy to. Next Tuesday, Wednesday and Thursday afternoons (CET) would be fine, so just let me know.

    in reply to: Sql Dump Slow On Tables With Many Rows #35389
    Zaph
    Participant

    Hmm…another table, containing 850K rows and a spatial column, slows down even faster, making it practically impossible to download – I guesstimate that one would take more than 24 hours.

    in reply to: Sql Dump Slow On Tables With Many Rows #35388
    Zaph
    Participant

    Yes, I experience this on all servers I work with – therefore I suspect it’s not a server issue. Edit: it even happens when I dump the same table from my local MySQL database, so it’s also not network-related.

     

    Additional info – as I was curious, I let it run: SQLYog memory usage remained a very decent 50MB (out of 16GB) and CPU usage was next to 0%. The entire database was dumped in 13.5 hours, with this one table accounting for 13.25 hours. After that table was finally done, the next ones were dumped at the speed one would expect, so it’s a per-table and not a per-connection performance issue.

    in reply to: Restore Session And Window Focus #35123
    Zaph
    Participant

    1. Thanks!

     

    2. If that were the case I’d have known, but it happens consistently – even when I start SQLYog and do nothing while I wait for it to appear. It looks like the restore session window is being treated as the main window and the actual application gets pushed to the back or something. Running on Windows 7, by the way.

    in reply to: Fr: Increase Speed On Sqldump #34141
    Zaph
    Participant

    Hmm…I just backed up all my databases, resulting in .sql files completely comparable to the ones the SQL dump option generates, and that took a little over 2 minutes for 36 databases totalling 330MB worth of SQL. I used the default settings, including “Get server default” for BULK and “Don't break up in chunks”. This backup includes the aforementioned table, albeit a modified version with only 650K rows.

    Is that a completely different method of generating the SQL?

    in reply to: Fr: Increase Speed On Sqldump #34140
    Zaph
    Participant

    Okay, I'll see if I can upgrade without losing all my databases… 😉

    I thought it was a more or less known issue, but now it looks like it's not. Luckily I don't have to export these tables very often, so in a worst-case scenario I might just have to learn to live with it. At least the dump isn't slowing my system down, so I can just let it run in the background.

    Thanks for now, and if I find anything, I'll be sure to post it here!

    in reply to: Fr: Increase Speed On Sqldump #34138
    Zaph
    Participant

    I'm using 64 bit MySQL version 5.5.15. If there's anything you'd like me to test, I'd be more than happy to, of course.

    in reply to: Fr: Increase Speed On Sqldump #34136
    Zaph
    Participant

    MySQL is version 5.5, and the relevant entries in the INI are these (probably all default):

    default-storage-engine=INNODB
    sql-mode="STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION"
    max_connections=100
    query_cache_size=0
    table_cache=256
    tmp_table_size=35M
    thread_cache_size=8
    #*** INNODB Specific options ***
    innodb_additional_mem_pool_size=3M
    innodb_flush_log_at_trx_commit=1
    innodb_log_buffer_size=2M
    innodb_buffer_pool_size=107M
    innodb_log_file_size=54M
    innodb_thread_concurrency=14

    As I said, it's not a server – this is happening on my workstation. So no tunneling required (I don't ever use that anyway).

    Sending an example dump would require another 4 hours, so I'm not that keen on doing that. The table looks like below and contains 4.8 million rows. Nothing fancy.

    CREATE TABLE `vertices` (
      `id` int(11) NOT NULL AUTO_INCREMENT,
      `areaid` int(11) DEFAULT NULL,
      `sortorder` int(11) DEFAULT NULL,
      `y` decimal(12,9) DEFAULT NULL,
      `x` decimal(12,9) DEFAULT NULL,
      PRIMARY KEY (`id`),
      KEY `areaid` (`areaid`)
    )
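    Since the table has an auto-increment primary key, a dump tool could in principle sidestep the LIMIT-offset penalty entirely with keyset pagination – seeking to the last id seen instead of rescanning the offset each time. This is my own sketch, not what SQLyog actually does; SQLite stands in for MySQL, but the SQL idiom is the same:

    ```python
    import sqlite3

    # Hypothetical illustration: page through a `vertices`-like table by
    # primary key ("keyset pagination") instead of LIMIT offset, count.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE vertices (id INTEGER PRIMARY KEY,"
                 " areaid INT, x REAL, y REAL)")
    conn.executemany("INSERT INTO vertices VALUES (?, ?, ?, ?)",
                     [(i, i % 10, i * 0.1, i * 0.2) for i in range(1, 101)])

    def dump_keyset(conn, chunk=30):
        """Yield all rows chunk by chunk, seeking by the last id seen."""
        last_id = 0
        while True:
            rows = conn.execute(
                "SELECT id, areaid, x, y FROM vertices"
                " WHERE id > ? ORDER BY id LIMIT ?",
                (last_id, chunk)).fetchall()
            if not rows:
                return
            yield from rows
            last_id = rows[-1][0]  # resume right after this chunk

    rows = list(dump_keyset(conn))
    print(len(rows))  # every row visited exactly once
    ```

    Each query here is an index seek followed by a short scan, so the cost per chunk stays constant no matter how deep into the table the dump has progressed.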

    in reply to: Fr: Increase Speed On Sqldump #34131
    Zaph
    Participant

    1) Yes, otherwise the first rows would be just as slow. It's running on my hex-core desktop PC with 16GB memory, so that should be fine.

    2) BULK is 'server default', and CHUNK is 1000 rows

    3) I didn't check it at the start, but right now it's at 16MB, so nothing spectacular there.
