
Error In Sync Job

    • #12196

      I have a job that fails to sync one table. The local database is connected, and the remote database is also connectable/retrievable; I am using the tunnel.

      If I simply connect to the remote database, the connection is successful using sqlyogtunnel.php – which appears at the top of the connection window – and I can see the tables and view the data. Of course the local database is connected as well.

      I've set the timeout to a large number of seconds, but as soon as the timeout occurs – 162 seconds in this case – I get a log file.

      I don't see how this 'could not connect' error can be the real error, since I can view the remote database when I connect without the sync.

      What does this error mean, and what do you suggest?

      Sync started at Sun Dec 12 13:34:00 2010

      `node_revisions` 33024 0 Error No. 1

      HTTP Error. Could not connect to the tunneling URL.

      By the way, I am able to connect to the tunnel file via browser.

    • #31683
      nithin
      Member

      Hello,

      Please tell us the SQLyog version you are using.

      Quote:
      `node_revisions` 33024 0 Error No. 1

      HTTP Error. Could not connect to the tunneling URL.

      With HTTP tunneling, the error details are not explained properly. We will take that up in an upcoming version.

      This problem can also happen due to network problems. Does this always happen when you try to sync this particular table? Please confirm.

      I can see that your target table is empty, and in that situation we frame a bulk INSERT query and execute it against the target. Can you check “max_allowed_packet” for both the target and the source? If the “max_allowed_packet” size is smaller on the target server, then even the INSERT for a single row can fail on the target.

      So please give us the following:

      – Execute the query SHOW VARIABLES LIKE 'max_allowed_packet'; on both source and target and paste the output here (see the sketch just below this post).

      – Do you have any BLOB/TEXT columns in the table? Please execute the following query on the source to find the longest value stored:

      SELECT max(length(long_column_name)) FROM the_table;

      – Can you provide us with just the table structure?

      You can create a support ticket and we will continue from there:

      http://webyog.com/support/ttx.cgi
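
      For reference, a consolidated sketch of the checks requested above, plus one common way to raise the packet limit if the target's value turns out to be the smaller one. The table and column names are the generic placeholders from the post, and 64M is only an illustrative value:

      -- Check the packet limit on both the source and the target connection:
      SHOW VARIABLES LIKE 'max_allowed_packet';

      -- Find the longest value stored in a BLOB/TEXT column on the source
      -- (long_column_name and the_table are placeholders):
      SELECT MAX(LENGTH(long_column_name)) FROM the_table;

      -- If the target's limit is too small, it can be raised on the running server
      -- (requires sufficient privileges and only affects new connections; make it
      -- persistent in my.cnf / my.ini as well):
      SET GLOBAL max_allowed_packet = 64 * 1024 * 1024;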

    • #31684

      Hi – thanks for your quick response.

      It does always happen on this table, but this is the only table in this project where it happens. I have used the sync fine on other DBs and other tables…

      I am trying to migrate a table that originated on another remote server and has now been replicated on my localhost, so I am pushing the rows to the target now…

      I am using v8.14.

      SHOW VARIABLES LIKE 'max_allowed_packet' on my localhost gives 1048576.

      SHOW VARIABLES LIKE 'max_allowed_packet' on my remote host gives 16776192.

      — So it is larger in the target database.

      There are longtext columns – here is the structure – it is a Drupal table.

      CREATE TABLE node_revisions (
        nid int(10) unsigned NOT NULL default '0',
        vid int(10) unsigned NOT NULL auto_increment,
        uid int(11) NOT NULL default '0',
        title varchar(255) NOT NULL default '',
        body longtext NOT NULL,
        teaser longtext NOT NULL,
        log longtext NOT NULL,
        `timestamp` int(11) NOT NULL default '0',
        format int(11) NOT NULL default '0',
        PRIMARY KEY (vid),
        KEY nid (nid),
        KEY uid (uid)
      ) ENGINE=MyISAM DEFAULT CHARSET=utf8;

      SELECT max(length(body)) FROM node_revisions gives 15806 on the remote table; in the local DB it is 16130 for the body column.
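
      Since the table has three longtext columns, the same length check could be run over all of them in one statement – a sketch using the column names from the structure above:

      SELECT MAX(LENGTH(body))   AS max_body,
             MAX(LENGTH(teaser)) AS max_teaser,
             MAX(LENGTH(`log`))  AS max_log
      FROM node_revisions;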

    • #31685
      nithin
      Member

      We are looking into this issue. We have almost replicated it at our end.

      The error occurs because the tunneler is not able to handle the large query (the bulk INSERT statement) being sent; one guess is a lack of memory allocated to PHP on the server side.

      As a workaround you can make the bulk INSERT query smaller as follows:

      – Save the data sync job file.

      – Run the job from the command prompt: >sja dsync_job.xml -r20

      (The -r option retrieves 20 rows at a time from the source server and frames the BULK INSERT query sent to the tunneler; by default it is 1000 rows. You can also try other values – see the rough size estimate below this post.)

      – It worked at our end. Please try this and let us know the status.

      We will check this issue in detail tomorrow and update you.
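
      To put rough numbers on this (an estimate only, based on the column lengths reported earlier in the thread): if many rows carry body values anywhere near the ~16 KB maximum reported above – before counting teaser and log – a single 1000-row bulk INSERT can run to many megabytes of SQL text, all of which the PHP tunneler has to receive and buffer before it reaches the target server. With -r20 the same statement stays in the hundreds-of-kilobytes range at worst, which is why reducing the chunk size works around the HTTP error.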

    • #31686

      I saved the job file as sync-node-revisions-table.xml and ran it from the command prompt. See the attached file for an image of the output:

      'The data sync script has been generated at etc…'

      I ran it twice with the same results. Perhaps I am doing something wrong here, I don't know.

    • #31687

      Perhaps I was fooled again – it appears the job successfully inserted rows.

    • #31688
      nithin
      Member
      Quote:
      'The data sync script has been generated at etc…'

      I ran it twice with the same results. Perhaps I am doing something wrong here, I don't know.

      You are getting the 'same result' because you selected the option 'Generate script only', which generates a SQL sync script for later execution. You can select the 'Direct sync' option to sync the changes immediately.

      See the screenshot attached.

      'Direct sync': This option syncs immediately as the sync tool finds differences in the table.

      'Generate script only': It generates a SQL sync script for later execution.

      'Sync & Generate script': This option can be selected if you want a log of what the 'Direct sync' did on the database(s).

    • #31689
      nithin
      Member

      FYI: If you do not want to delete the extra rows in the target table, you have to select the option 'Don't delete extra rows in target database'.

      Your sync output looks like you have not selected this option, and as a result the extra rows in the target were deleted.

      See the screenshot attached.

      Also, the “-r” option is required only if the target table is empty. I suggested it because your first post indicates the target table is empty:

      `node_revisions` 33024 0 Error No. 1

      The version 8.62 change log says:

      — SJA (Data sync) now supports an additional -r parameter that tells how big CHUNKS should be when copying to an empty table.

    • #31690

      By 'extra rows' I take you to mean duplicate rows, even when the target table does not have a primary index – i.e. that there is some parsing of the row to determine whether it is a duplicate row…?

      Also, is there an interface feature for controlling the row count, similar to the -r option on the command line?

      Thanks again for your help.

    • #31691
      peterlaursen
      Participant

      Q: Is there some parsing of the row to determine whether it is a duplicate row…?

      A:

      1) No.

      2) You should also be very careful about using a data sync tool on tables without a primary key, because existing rows are never UPDATEd – new rows are only INSERTed. All data sync tools work like this, because the primary key is the only way to identify rows uniquely. (A sketch of adding one is given below.)

      Q: Is there an interface feature for controlling the row count, similar to the -r option on the command line?

      A: Not currently.
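
      A minimal sketch of the primary-key point above, assuming the table already has a column with unique values that can serve as the key; the names here are hypothetical (node_revisions itself already has PRIMARY KEY (vid)):

      -- some_table / some_unique_col are placeholder names: if an existing column
      -- is unique, it can be promoted to a primary key so the sync tool can pair
      -- source and target rows by it instead of re-inserting them.
      ALTER TABLE some_table ADD PRIMARY KEY (some_unique_col);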
