Forum Replies Created
nithin (Member)
This issue has been fixed internally and will be included in the next public release, which we expect in the coming week.
As a workaround for the time being, you can delete the line by selecting it with the left mouse button and pressing the 'Delete' key.
nithin (Member)
Thanks for the quick response.
We have reproduced this at our end and are looking into it.
This happened because the 'current directory' (normally the installation path) changed to the path of the backup file you selected, and there was no SQLyog configuration file (SQLyog.ini) there for the application to read in order to continue the backup process. Since it cannot locate the configuration file, the application terminates without completing the backup.
As a workaround for the time being you can give the absolute path, like
D:\APP\Applications\SQLyog\SQLyogCommunity.exe -dir"D:\APP\Applications\SQLyog\data"
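The failure mode described above, a config lookup that silently depends on the current working directory, can be sketched in a few lines of Python. The file name 'app.ini' and the directory layout are hypothetical stand-ins for SQLyog.ini and the install path:

```python
import os
import tempfile

# Toy model: an app whose config file lives next to its executable.
app_dir = tempfile.mkdtemp()
config_path = os.path.join(app_dir, "app.ini")
with open(config_path, "w") as f:
    f.write("[settings]\n")

def find_config_cwd_relative():
    # Fragile: depends on whatever the current directory happens to be.
    return os.path.exists("app.ini")

def find_config_exe_relative():
    # Robust: anchored to the (simulated) install directory.
    return os.path.exists(os.path.join(app_dir, "app.ini"))

# Simulate the current directory moving to the backup file's location:
os.chdir(tempfile.mkdtemp())
assert not find_config_cwd_relative()   # config "not found" -> app bails out
assert find_config_exe_relative()       # an absolute path still resolves
```

This is why passing the absolute path via -dir works around the problem: the lookup no longer depends on which directory the process happens to be in.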
nithin (Member)
We are not able to reproduce this case at our end. In order to nail down the issue, please elaborate on the following:
- Did you get the "SQLyog getting terminated" message while starting the backup dialog, during the export, or at the end?
- Please elaborate on how you are using SQLyog as a "portable version". (Are you using an external drive such as a pen drive?)
By the way, we can also set up a screen-sharing session. Please send a mail with your timezone to [email protected].
nithin (Member)
Thanks for reporting the issue.
This issue has been fixed internally and will be included in the upcoming release.
Let us know whether the fix is urgent for you so that we can provide you a special binary; you can create a ticket by sending mail to [email protected].
nithin (Member)
Quote: "There are still only 1221 rows, although the job shows there are 30149 new rows inserted… `wp_posts` 31370 1221 30149 0 0"
It happened because you selected the option 'Generate script'. With that option the data will not be synced to the target database until you import the generated SQL script on the target server. If you want to sync immediately, you have to select the option 'Direct sync'.
The job file you attached contains the 'generate script' setting; it would contain 'directsync' if you had selected the 'Direct sync' option. You can edit the job file and change the setting to 'directsync'. See the screenshot attached.
- 'Direct sync': syncs immediately as the sync tool finds differences in the tables.
- 'Generate script only': generates an SQL sync script for later execution.
- 'Sync & Generate script': select this if you want a log of what the direct sync did on the database(s).
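The difference between the three options above can be modeled in a few lines. This is a toy illustration only, not SQLyog's actual implementation; the mode names are stand-ins for the job-file settings:

```python
# Toy model of the three data-sync modes: what gets executed against the
# target immediately vs. what is only written to a SQL script.

def run_sync(diff_statements, mode):
    """Apply and/or record sync statements depending on the chosen mode."""
    executed = []   # statements actually run against the target
    script = []     # statements written to the generated SQL script
    if mode in ("directsync", "sync_and_script"):
        executed.extend(diff_statements)    # changes applied immediately
    if mode in ("generatescript", "sync_and_script"):
        script.extend(diff_statements)      # saved for later / kept as a log
    return executed, script

diff = ["INSERT INTO `wp_posts` VALUES (...)"]
# 'Generate script only': nothing touches the target until you run the script.
assert run_sync(diff, "generatescript") == ([], diff)
# 'Direct sync': applied immediately, no script produced.
assert run_sync(diff, "directsync") == (diff, [])
# 'Sync & Generate script': applied immediately AND logged.
assert run_sync(diff, "sync_and_script") == (diff, diff)
```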
nithin (Member)
Hello,
I remember we have already discussed the "Error no. 1" case that you reported last month:
http://www.webyog.com/forums/index.php?showtopic=5430
Instead of showing "Error No. 1" for all errors returned by the tunnel, we plan to log the exact error in one of the upcoming versions. In your case it happened because the target table is empty, so SQLyog frames a BULK INSERT query for the 1000 rows returned from the source, and the memory allocated to PHP is too small to handle this bulk query. So please use the "-r" option as explained in the previous post.
Code:
- Save the data sync job file.
- Run the job from the command prompt: sja dsync_job.xml -r20
(The -r option retrieves 20 rows at a time from the source server and frames the BULK INSERT query to the tunneler; the default is 1000 rows. You can also try different values.)
Your first post in this thread says: `wp_posts` 31370 1221 30149 0 0
No. rows in source:31370
No. rows in target:1221
But in your latest reply it became: `wp_posts` 30062 0 Error No. 1
It means no. of rows in target table is 0.
Can you tell us whether you have emptied the target table since the first operation? Or is the target a live server that undergoes continuous insertions and deletions?
Regards
Nithin
nithin (Member)
Quote: "But after sitting for a while, when you execute a query it will not complete, the program freezes, cannot kill the query, and have to open the task manager and kill the application/process."
As I understand it, by 'crash' you mean that SQLyog 'hangs' while executing the query. Please confirm this. If so, it looks like the client (SQLyog) is losing the connection to the server and is not able to detect this. If that is the case, we know about it already; it affects Vista and later Windows versions.
We have already reported this issue to MySQL.
http://bugs.mysql.com/bug.php?id=31109
Can you provide us temporary access to your server so that we can check this from our end?
You can send the details to [email protected].
nithin (Member)
FYI: if you do not want to delete the extra rows in the target table, you have to select the option 'Don't delete extra rows in target database'.
Your sync output suggests you did not select this option, and as a result the extra rows in the target were deleted.
See the screenshot attached.
Also, the "-r" option is required only if the target table is empty. I suggested it because your first post indicates the target table is empty:
`node_revisions` 33024 0 Error No. 1
The version 8.62 change log says:
- SJA (Data Sync) now supports an additional -r parameter that tells how big CHUNKS should be when copying to an empty table.
nithin (Member)
Quote: "'The data sync script has been generated at etc…' ran it twice with same results. perhaps i am doing something wrong here, dunno."
You are getting the same result because you selected the option 'Generate script only', which generates an SQL sync script for later execution. Select the 'Direct sync' option to sync the changes immediately.
See the screenshot attached.
- 'Direct sync': syncs immediately as the sync tool finds differences in the tables.
- 'Generate script only': generates an SQL sync script for later execution.
- 'Sync & Generate script': select this if you want a log of what the direct sync did on the database(s).
nithin (Member)
We are looking into this issue and have almost replicated it at our end.
The error occurs because the tunneler is not able to handle the large query (the bulk INSERT statement) sent to it; one guess is that too little memory is allocated to PHP on the server side.
As a workaround you can make the bulk INSERT query smaller as follows:
- Save the data sync job file.
- Run the job from the command prompt: sja dsync_job.xml -r20
(The -r option retrieves 20 rows at a time from the source server and frames the BULK INSERT query to the tunneler; the default is 1000 rows. You can also try different values.)
This works at our end. Please try it and let us know the status.
We will check this issue in detail tomorrow and update you.
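What the -r switch controls can be sketched roughly in Python. This is an assumption based on the description above (rows grouped into chunks, one multi-row INSERT framed per chunk), not SQLyog's actual code, and the table name is illustrative:

```python
# Sketch: grouping source rows into chunks and framing one multi-row
# INSERT per chunk, as the -r parameter is described to do.

def build_bulk_inserts(rows, chunk_size):
    """Return one INSERT statement per chunk of `chunk_size` rows."""
    statements = []
    for i in range(0, len(rows), chunk_size):
        chunk = rows[i:i + chunk_size]
        values = ", ".join(
            "(%s)" % ", ".join(repr(v) for v in row) for row in chunk
        )
        statements.append("INSERT INTO `wp_posts` VALUES %s;" % values)
    return statements

rows = [(i, "post %d" % i) for i in range(100)]
# Default behaviour (~1000 rows per chunk) -> one huge statement:
assert len(build_bulk_inserts(rows, 1000)) == 1
# With -r20 each statement carries only 20 rows, so the tunneling
# PHP script receives much smaller queries:
assert len(build_bulk_inserts(rows, 20)) == 5
```

Smaller chunks mean more round trips but a far smaller peak memory footprint on the PHP side, which is the trade-off the workaround exploits.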
nithin (Member)
Hello,
Please tell us the SQLyog version.
Quote: "`node_revisions` 33024 0 Error No. 1 — HTTP Error. Could not connect to the tunneling URL."
With HTTP tunneling the error details are not reported properly; we will take this up in an upcoming version.
This problem can also be caused by network problems. Does it always happen when you try to sync this particular table? Please confirm.
I can see that your target table is empty; in that situation we frame a bulk INSERT query and execute it against the target. Can you check "max_allowed_packet" on both the target and the source? If the "max_allowed_packet" size is smaller on the target server, then even the INSERT for a single row can fail on the target.
So please give us the following:
- Execute the query SHOW VARIABLES LIKE 'max_allowed_packet'; on both source and target and paste the output here.
- Do you have any BLOB/TEXT column in the table? Please execute the following query on the source to find the longest value stored:
SELECT MAX(LENGTH(long_column_name)) FROM the_table;
- Can you provide us the table structure only?
You can create a support ticket and we will continue from there.
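As a rough sketch of why "max_allowed_packet" matters here: a multi-row INSERT must fit inside the server's packet limit. The numbers below are assumed examples, not measured values from any real server:

```python
# Back-of-the-envelope check: how many rows fit in one INSERT packet.

def max_rows_per_packet(max_allowed_packet, avg_row_bytes, overhead=1024):
    """Largest chunk size whose INSERT should still fit in one packet.

    `overhead` is a rough allowance for the statement text around the
    VALUES list; always allow at least one row per statement.
    """
    return max(1, (max_allowed_packet - overhead) // avg_row_bytes)

# Example: target server left at an old 1 MiB default, rows averaging
# 4 KiB because of a TEXT column:
assert max_rows_per_packet(1024 * 1024, 4096) == 255
# A 1000-row chunk (~4 MiB here) would exceed the packet limit, which is
# why lowering -r, or raising max_allowed_packet on the target, helps.
```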
nithin (Member)
We have improved both horizontal and vertical scrolling in the Data tab. The improvement will be included in the next public release (probably version 9.0).
Also, in the latest version (v8.71) a new Form View feature has been implemented. This is especially helpful when working with tables that have many columns.
December 10, 2010 at 9:45 am, in reply to: Bug: Schema Designer Doesn't Keep Track Of Changes To Table Renames (#31662)
nithin (Member)
This issue has been fixed internally as described in the last post. It will be included in the next public release.
nithin (Member)
The issue is confirmed. We will fix it and update the status shortly.
nithin (Member)
The dumps point to the place where the MySQL server version check is performed.
- What is the version of the MySQL server you connect to? (To find the exact version, execute this query from the MySQL command prompt: SELECT VERSION();)
- Have you ever connected any version of SQLyog to this particular server before?
- What is the OS on which SQLyog is installed?