Forum Replies Created
joejk2 (Member)
Many thanks for all your help.
I will look at changing the keys next time I get a moment.
Thanks again.
joe
joejk2 (Member)
Hmmmm…
As far as I am aware my databases are not corrupted. MediaWiki seems to be content with them (the install is fully functional). Is there any reason why SJA would attempt an INSERT rather than an UPDATE?
joe
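A guess at what may be going on (hypothetical statements, not taken from SJA's source): a sync tool that decides a row is missing on the target will emit a plain INSERT, and if the primary-key value actually does exist there, MySQL raises error 1062. MySQL's upsert form is the statement shape that avoids the error:

```sql
-- If the target already holds a row with this primary key, a plain
-- INSERT fails with error 1062 (values here are illustrative):
INSERT INTO objectcache (keyname, value, exptime)
VALUES ('fergal:messages', '...', NOW());

-- The upsert form updates the existing row instead of failing:
INSERT INTO objectcache (keyname, value, exptime)
VALUES ('fergal:messages', '...', NOW())
ON DUPLICATE KEY UPDATE value = VALUES(value), exptime = VALUES(exptime);
```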
joejk2 (Member)
Peter,
Many thanks again for your reply. Thank you for the link to the PDF, which I did my best to understand.
1) I can't find any duplicated keys throughout any of the tables. Indeed two of the tables (site_stats and searchindex) only have one row of data!
2) I am using sja501 (linux version)
3) I am trying to use sja to sync two live installations of MediaWiki – hence it is not trivial to provide the create statements for the tables.
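As a cross-check on point 1 (a sketch; the database names are placeholders): within a single table a PRIMARY KEY cannot hold duplicates, so a 1062 during a sync is normally a clash between a SOURCE row and a TARGET row that share the same key value. Something like the following would list the colliding keys:

```sql
-- Keys present in both databases; any row returned here would collide
-- on a plain INSERT. source_db and target_db are placeholders for the
-- two MediaWiki databases:
SELECT s.keyname
FROM source_db.objectcache AS s
JOIN target_db.objectcache AS t ON s.keyname = t.keyname;
```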
I have attached dumps for the SOURCE and TARGET. Error No. 1062 is returned on attempting to sync these tables. This forum is not allowing me to upload, so the files are at AngryFruit.co.uk:
- SOURCE_OBJECTCACHE
- SOURCE_SEARCHINDEX
- SOURCE_SITE_STATS
The sja.log is:
Error No. 1062
Duplicate entry 'SOURCE:pcache:idhash:1-0!1!0!0!!en!2' for key 1
Error No. 1062
Duplicate entry '1' for key 1
Error No. 1062
Duplicate entry '1' for key 1
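One workaround worth noting (this is an assumption about MediaWiki, not advice from the SJA authors): the objectcache table holds rebuildable parser/message cache data, so if the clash were only in that table, emptying it on the TARGET before syncing would remove the rows the INSERTs collide with:

```sql
-- Run on the TARGET database; MediaWiki repopulates this cache on
-- demand. The table name is assumed and may carry a prefix.
TRUNCATE TABLE objectcache;
```

This does not help with site_stats or searchindex, whose rows are real data rather than cache.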
I'm sure I'm missing something simple and I apologise in advance for this! Many thanks for any help you can offer.
Joe
joejk2 (Member)
Peter,
Many thanks for your reply. Please excuse my confusion in this matter – I am very new to all this.
1) I am not using HTTP tunnelling; I am connecting on MySQL's standard port, 3306. I have included my jobfile below.
2) The tables that do not synchronize correctly contain no duplicate rows. I have included the description of these tables below and the data from the first
Hope my answer is vaguely useful! Many thanks again.
joe
sja.log is:
Error No. 1062
Duplicate entry 'fergal:messages' for key 1
Error No. 1062
Duplicate entry '1' for key 1
Error No. 1062
Duplicate entry '1' for key 1
JOBFILE
127.0.0.1 root ***** 3306 fergal
ip_address root ***** 3306 fergal

TABLES THAT RETURN ERROR NO. 1062
mysql> describe f_objectcache;
+---------+--------------+------+-----+---------+-------+
| Field   | Type         | Null | Key | Default | Extra |
+---------+--------------+------+-----+---------+-------+
| keyname | varchar(255) | NO   | PRI |         |       |
| value   | mediumblob   | YES  |     | NULL    |       |
| exptime | datetime     | YES  | MUL | NULL    |       |
+---------+--------------+------+-----+---------+-------+
3 rows in set (0.00 sec)
mysql> select keyname from f_objectcache;
+------------------------------------------+
| keyname                                  |
+------------------------------------------+
| fergal:messages                          |
| fergal:pcache:idhash:1-0!1!0!0!!en!2     |
| fergal:pcache:idhash:1-0!3!0!1!0!en!2    |
| fergal:pcache:idhash:1306-0!1!0!0!!en!2  |
| fergal:pcache:idhash:1306-0!3!0!1!0!en!2 |
| fergal:pcache:idhash:1313-0!1!0!0!!en!2  |
| fergal:pcache:idhash:1327-0!1!0!0!!en!2  |
| fergal:pcache:idhash:1360-0!1!0!0!!en!2  |
+------------------------------------------+
mysql> describe f_searchindex;
+----------+-----------------+------+-----+---------+-------+
| Field    | Type            | Null | Key | Default | Extra |
+----------+-----------------+------+-----+---------+-------+
| si_page  | int(8) unsigned | NO   | PRI | 0       |       |
| si_title | varchar(255)    | NO   | MUL |         |       |
| si_text  | mediumtext      | NO   | MUL |         |       |
+----------+-----------------+------+-----+---------+-------+
3 rows in set (0.01 sec)
mysql> select si_page from f_searchindex;
+---------+
| si_page |
+---------+
|       1 |
|       4 |
|     989 |
|    1280 |
|    1281 |
|    1282 |
|    1283 |
|    1284 |
|    1285 |
|    1286 |
|    1287 | ……
mysql> describe f_site_stats;
+------------------+---------------------+------+-----+---------+-------+
| Field            | Type                | Null | Key | Default | Extra |
+------------------+---------------------+------+-----+---------+-------+
| ss_row_id        | int(8) unsigned     | NO   | PRI | 0       |       |
| ss_total_views   | bigint(20) unsigned | YES  |     | 0       |       |
| ss_total_edits   | bigint(20) unsigned | YES  |     | 0       |       |
| ss_good_articles | bigint(20) unsigned | YES  |     | 0       |       |
| ss_total_pages   | bigint(20)          | YES  |     | -1      |       |
| ss_users         | bigint(20)          | YES  |     | -1      |       |
| ss_admins        | int(10)             | YES  |     | -1      |       |
+------------------+---------------------+------+-----+---------+-------+
7 rows in set (0.00 sec)
mysql> select ss_row_id from f_site_stats;
+-----------+
| ss_row_id |
+-----------+
|         1 |
+-----------+
joejk2 (Member)
Dear All,
This is an amazing piece of software – thank you very much!
I am encountering the same 'Error No. 1062' – complaining about duplicate entries in my primary key. I have tried globally setting the sql_mode = 'NO_AUTO_VALUE_ON_ZERO' but the problem remains.
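For anyone comparing notes (a sketch, nothing here is specific to SJA): NO_AUTO_VALUE_ON_ZERO only changes how an explicit 0 inserted into an AUTO_INCREMENT column is treated, so it cannot help when the duplicate key values are genuine. It is also worth confirming the mode actually reached the connection doing the sync, since a global change only applies to sessions opened afterwards:

```sql
-- Compare what the server and the current session are running with:
SELECT @@GLOBAL.sql_mode, @@SESSION.sql_mode;

-- A global change does not alter already-open sessions; set it for the
-- current session too if needed:
SET SESSION sql_mode = 'NO_AUTO_VALUE_ON_ZERO';
```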
Any further clues would be greatly appreciated. Many thanks,
joe