SJA job running for 11 hours
May 23, 2005 at 8:03 am #9000 – peterlaursen (Participant)
I just finished an SJA job resulting in this:
Table          SrcRows  TgtRows  Inserted  Updated  Deleted
=============  =======  =======  ========  =======  =======
`mp3_filer`      37118    37101        17    29820        0
It ran for about 11 hours before finishing. The database (MYD file) is about 8 MB (it's not a big one – and still there are corporate databases in the GB and TB range … ). I use HTTP tunnelling at the target. But it finally came through perfectly
:wub: (just had to try this icon!)
Average transmission speed was about 25 kbit/sec (when sending a dump file to the server I can transmit at about 200 kbit/sec – also with HTTP tunnelling), so it's not a bandwidth limitation. And the total amount of data transferred by SJA during the job is no less than 2 GB (more than 250 times the data involved). I also notice that SJA transmits the same amount of data at the same speed in both directions (though I have an asymmetric DSL line).
I know perfectly well that this situation with so many UPDATEs is not what SJA is primarily intended for. If you use a sync tool, you use it regularly. But still it seems that the “plan” chosen by SJA is not optimal with many updates, and maybe different “plans” could be implemented in the code?
This is not meant to be either a bug report or a request, but mostly for your information (in case it should surprise you). There are some practical limitations with the SJA (as with any tool), but can anything be said about where the limitation is and what it depends on?
Would it improve the situation to define some indexes on the table? And which ones then?
PS:
Actually, it is to avoid this situation that I need EITHER
1) the ability to use a PK based on a varchar field containing a full Windows filename
OR
2) the ability to use a unique index that is not a PK
(the data is imported with ODBC from another application that does not use an autoincrement PK, so I have to add it after the import. And this way the same data will get a new index value if something changes near the beginning of the Access database from which the data originates. And yes, I know that I could create an ODBC import script that does not overwrite existing data … but using the filename for synchronisation would be much easier in this case) 😀
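Just to make option 2) concrete, this is the kind of index I have in mind – only a sketch, and the column name `filename` is just a placeholder for whatever the filename column is actually called in `mp3_filer`:

ALTER TABLE `mp3_filer` ADD UNIQUE KEY `uq_filename` (`filename`);

(assuming the column is short enough to fit within MySQL's key-length limit – otherwise a prefix length like `filename`(255) would be needed)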
May 23, 2005 at 9:33 am #17902 – Ritesh (Member)
It also depends upon your table structure and data. If there are many BLOB columns, then it takes time to generate the checksums for each row.
May 23, 2005 at 10:25 am #17903 – peterlaursen (Participant)
@Ritesh
There are a lot of big varchars and some mediumtexts – most of which are empty or only contain relatively few characters.
So your theory about BLOBs and the like could very well be true. But will SJA have to “unfold” those columns? A varchar(255) containing the string 'abc' takes 4 bytes on the server – one “control byte” and three data bytes. Why calculate checksums of 255 bytes when there are only 3 bytes of data? The same argument goes for BLOBs. At least skip calculating checksums where data fields are empty! If any optimization like this is possible, it should be implemented …
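Just to illustrate what I mean (this is only a sketch of how a row checksum could be calculated on the server side – I am not claiming this is how SJA actually does it, and the column names are invented):

SELECT id, MD5(CONCAT_WS('|', artist, title, filename)) AS row_checksum
FROM `mp3_filer`;

MD5() only hashes the bytes actually stored in each column, so an empty varchar(255) adds practically nothing to the work.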
It's more than 2 times as fast to do INSERT as UPDATE. Why? Could it have something to do with the PHP configuration at the server?
May 23, 2005 at 12:56 pm #17904 – Ritesh (Member)
Quote: “It's more than 2 times as fast to do INSERT as UPDATE”
There is a valid reason for that. It's how MySQL (or any database) is designed. Reading the MySQL docs will throw more light on the subject.
May 23, 2005 at 12:58 pm #17905 – Shadow (Member)
INSERTs are usually a lot faster than UPDATEs when BLOB fields are affected. Why? BLOB values are not stored in the table structure; only a pointer is placed there. Therefore MySQL has to (1) locate the record to be updated, (2) get the BLOB pointer(s), (3) locate the BLOB value(s), and (4) update the BLOB value(s). Inserting data is a lot simpler…
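As a rough illustration (table and column names are made up here, and how big the difference is will of course depend on the engine and the data):

-- only the inline part of the row changes; the stored BLOB value is left alone
UPDATE songs SET title = 'New title' WHERE id = 42;

-- the BLOB value itself must be located and rewritten
UPDATE songs SET cover_image = REPEAT('x', 100000) WHERE id = 42;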