Forums › SQLyog › SQLyog: Bugs / Feature Requests › Utility for splitting an SQL file
-
April 21, 2005 at 12:39 pm — #8931 — just4fun (Member)
Could you make a small utility for splitting an *.sql dump into parts of, e.g., 800 KB?
HTTP tunneling on my machine can't execute a dump > 1 MB (either the connection is lost, or it uploads 700+ KB and then does nothing).
In this situation I must manually split my 9 MB dump into 500 KB parts =( (or use PMA and split into 1.5 MB parts).
Sorry, and thanks.
-
April 21, 2005 at 12:55 pm — #17495 — peterlaursen (Participant)
I have exactly the same problem right now!
Didn't have it before.
I'm struggling with a 12 MB sqldump/upload right now!
What are they thinking at the ISPs …
I guess I'd better find a professional one!
-
April 21, 2005 at 2:14 pm — #17496 — peterlaursen (Participant)
I guess I found something interesting!!!
The problem is that the individual SQL statements (the blocks of INSERT INTOs) are too long!
It's not the file as such, but the SQL statements. That's for HTTP tunneling with my ISP.
In my case each statement contains about 3000-4000 records and takes up about 1 MB within the file.
If I divide each statement into three or four, the whole dump runs …
RITESH .. why must the statements be so long??? It's probably fine with a direct connection on port 3306 on a LAN or fast DSL,
but it seems to be a problem with tunnelling.
Think about a setting to let the user decide, or a popup “Use this SQL with tunnelling?”
I'll verify and report back!
-
April 21, 2005 at 2:41 pm — #17497 — peterlaursen (Participant)
Confirmed!
The individual SQL statements in the dump are MUCH too big for tunnelling (to the server) with a typical 128 kbit/s DSL line.
Dividing each statement into 3-4 pieces works from here!
12.5 MB uploaded then!
Probably the idea was that the client should not negotiate the connection too often, but with tunnelling smaller statements would be better!
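What peterlaursen did by hand — splitting each oversized bulk INSERT into three or four smaller statements — could be sketched roughly like this (a hypothetical helper, not a SQLyog feature; the parsing is naive and assumes the `),(` sequence never occurs inside quoted string data):

```python
# Hypothetical helper (not a SQLyog feature): split one oversized bulk
# INSERT into several smaller statements that an HTTP tunnel can swallow.

def split_bulk_insert(statement, pieces=4):
    """Split "INSERT INTO t (...) VALUES (r1),(r2),...;" into `pieces`
    statements sharing the same INSERT prefix.

    Naive parsing: assumes "),(" never occurs inside quoted string data.
    """
    head, _, values = statement.partition(" VALUES ")
    rows = values.rstrip(";").strip("()").split("),(")
    per = -(-len(rows) // pieces)  # ceiling division: rows per statement
    return [head + " VALUES (" + "),(".join(rows[i:i + per]) + ");"
            for i in range(0, len(rows), per)]
```

Splitting a ~1 MB statement with pieces=4 gives four ~250 KB statements, which is roughly the size that gets through the tunnel here.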
-
April 28, 2005 at 2:46 am — #17498 — Ritesh (Member)
While generating the dump, uncheck “Generate bulk insert stmts.” in the options dialog. This will result in an individual INSERT INTO … query being generated for each row of data.
Hope that helps.
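The two dump styles being contrasted here can be sketched as follows (illustrative names only, not SQLyog's actual generator; a real dump writer must also escape string values, which this sketch skips):

```python
# Illustrative contrast between bulk and row-by-row dump output.

def dump_rows(table, rows, bulk=True):
    """Render `rows` (tuples of numeric values) as INSERT statements."""
    values = ["(" + ",".join(str(v) for v in row) + ")" for row in rows]
    if bulk:
        # One long statement: fewer round trips, but it can exceed what
        # an HTTP tunnel (or the server's max_allowed_packet) will accept.
        return ["INSERT INTO %s VALUES %s;" % (table, ",".join(values))]
    # One short statement per row: slower to execute, but tunnel-friendly.
    return ["INSERT INTO %s VALUES %s;" % (table, v) for v in values]
```

Unchecking the option corresponds to `bulk=False`: many small statements instead of one huge one.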
-
April 28, 2005 at 2:51 am — #17499 — peterlaursen (Participant)
oh … it was there 🙂
-
November 18, 2005 at 10:49 pm — #17500 — vygi (Member)
It's an old topic… but nevertheless:
maybe it would be possible to get a new config parameter “max bulk statement size” (set it e.g. to 128 KB by default), and then divide bulk insert statements into pieces of that size?
Right now I've got a timeout error because it took too long to upload a 500 KB bulk insert statement. It worked when I manually divided it into two ca. 250 KB parts.
Of course it is possible to export single statements, but then it takes much longer to execute them.
Regards,
Vygi
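The proposed option could work roughly like this (a hypothetical sketch; `max_bytes` stands in for the suggested “max bulk statement size” setting, which was not an actual SQLyog option at the time):

```python
# Sketch of a size-capped bulk INSERT generator: keep appending row
# tuples to one statement until the next row would push it past
# max_bytes, then start a new statement.

def bulk_inserts(table, value_tuples, max_bytes=128 * 1024):
    """`value_tuples` are already-formatted "(...)" strings."""
    prefix = "INSERT INTO %s VALUES " % table
    stmts, current = [], []
    size = len(prefix)
    for v in value_tuples:
        extra = len(v) + 1  # +1 for the "," separator or trailing ";"
        if current and size + extra > max_bytes:
            stmts.append(prefix + ",".join(current) + ";")
            current, size = [], len(prefix)
        current.append(v)
        size += extra
    if current:
        stmts.append(prefix + ",".join(current) + ";")
    return stmts
```

A single row larger than `max_bytes` still gets emitted on its own; with the 128 KB default, a 500 KB worth of rows would come out as roughly four statements.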
-
November 19, 2005 at 11:33 am — #17501 — peterlaursen (Participant)
Actually I have requested too that 'Bulk Size' could be user settable.
-
November 19, 2005 at 7:35 pm — #17502 — vygi (Member)
peterlaursen wrote on Nov 19 2005, 12:33 PM: “Actually I have requested too that 'Bulk Size' could be user settable.”
Yes, it should be configurable.
BTW, the max query size depends not only on the MySQL server settings.
In my case, the remote server was able to process up to 1 MB at once but reported a timeout error because of the low upload speed.
-
November 19, 2005 at 7:37 pm — #17503 — peterlaursen (Participant)
And if you use HTTP tunnelling it also involves the PHP configuration.
-