@@ -160,11 +160,13 @@ Checking a replica set status
           • Connecting to the instance...
           • Connected to master_master:instance001

+          master_master:instance001>
+
 2. Check that both instances are writable using ``box.info.ro``:

    -  ``instance001``:

-      .. code-block:: console
+      .. code-block:: tarantoolsession

          master_master:instance001> box.info.ro
          ---
@@ -173,7 +175,7 @@ Checking a replica set status

    -  ``instance002``:

-      .. code-block:: console
+      .. code-block:: tarantoolsession

          master_master:instance002> box.info.ro
          ---
@@ -183,30 +185,30 @@ Checking a replica set status
 3. Execute ``box.info.replication`` to check a replica set status.
    For ``instance002``, ``upstream.status`` and ``downstream.status`` should be ``follow``.

-   .. code-block:: console
+   .. code-block:: tarantoolsession

       master_master:instance001> box.info.replication
       ---
       - 1:
           id: 1
-          uuid: 4cfa6e3c-625e-b027-00a7-29b2f2182f23
+          uuid: c3bfd89f-5a1c-4556-aa9f-461377713a2a
           lsn: 7
+          name: instance001
+        2:
+          id: 2
+          uuid: dccf7485-8bff-47f6-bfc4-b311701e36ef
+          lsn: 0
           upstream:
             status: follow
-            idle: 0.21281599999929
+            idle: 0.93246499999987
             peer: replicator@127.0.0.1:3302
-            lag: 0.00031614303588867
+            lag: 0.00016188621520996
           name: instance002
           downstream:
             status: follow
-            idle: 0.21800899999653
+            idle: 0.8988360000003
             vclock: {1: 7}
             lag: 0
-        2:
-          id: 2
-          uuid: 9bb111c2-3ff5-36a7-00f4-2b9a573ea660
-          lsn: 0
-          name: instance001
       ...

 To see the diagrams that illustrate how the ``upstream`` and ``downstream`` connections look,
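The status check in step 3 boils down to a predicate over the table that ``box.info.replication`` returns: every remote peer should report ``follow`` for both ``upstream.status`` and ``downstream.status``. The helper below is an illustrative sketch of that rule (not a Tarantool API), written in Python with the structure mimicking the session output above:

```python
# Sketch only: models box.info.replication as a dict of replica id -> info map.
# A replica set is healthy when every peer other than ourselves reports
# upstream.status == downstream.status == 'follow'.
def replica_set_healthy(replication: dict, self_id: int) -> bool:
    for rid, peer in replication.items():
        if rid == self_id:
            continue  # the local instance has no upstream/downstream entries
        if peer.get("upstream", {}).get("status") != "follow":
            return False
        if peer.get("downstream", {}).get("status") != "follow":
            return False
    return True

# Shape mirrors the tarantoolsession output shown in the diff above.
info = {
    1: {"id": 1, "name": "instance001"},
    2: {"id": 2,
        "upstream": {"status": "follow"},
        "downstream": {"status": "follow"}},
}
print(replica_set_healthy(info, self_id=1))  # True
```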
@@ -244,7 +246,7 @@ To check that both instances get updates from each other, follow the steps below

 2. On ``instance002``, use the ``select`` operation to make sure data is replicated:

-   .. code-block:: console
+   .. code-block:: tarantoolsession

       master_master:instance002> box.space.bands:select()
       ---
@@ -260,26 +262,36 @@ To check that both instances get updates from each other, follow the steps below
       :language: lua
       :dedent:

-4. Get back to ``instance001`` and use ``select`` to make sure new records are replicated.
+4. Get back to ``instance001`` and use ``select`` to make sure new records are replicated:
+
+   .. code-block:: tarantoolsession
+
+      master_master:instance001> box.space.bands:select()
+      ---
+      - - [1, 'Roxette', 1986]
+        - [2, 'Scorpions', 1965]
+        - [3, 'Ace of Base', 1987]
+        - [4, 'The Beatles', 1960]
+      ...

 5. Check that :ref:`box.info.vclock <box_introspection-box_info>` values are the same on both instances:

    -  ``instance001``:

-      .. code-block:: console
+      .. code-block:: tarantoolsession

          master_master:instance001> box.info.vclock
          ---
-         - {2: 5, 1: 9}
+         - {2: 2, 1: 12}
          ...

    -  ``instance002``:

-      .. code-block:: console
+      .. code-block:: tarantoolsession

          master_master:instance002> box.info.vclock
          ---
-         - {2: 5, 1: 9}
+         - {2: 2, 1: 12}
          ...

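The comparison in step 5 relies on what a vclock is: a map from replica id to that replica's LSN. Two instances have applied all of each other's writes exactly when their vclocks agree on every id. A minimal sketch of that check (illustrative only, not a Tarantool function):

```python
# A vclock maps replica id -> LSN. Replicas have converged when the maps
# agree for every id; a missing id counts as LSN 0.
def vclocks_in_sync(vclock_a: dict, vclock_b: dict) -> bool:
    ids = set(vclock_a) | set(vclock_b)
    return all(vclock_a.get(i, 0) == vclock_b.get(i, 0) for i in ids)

# The values from the session output above: {2: 2, 1: 12} on both instances.
print(vclocks_in_sync({2: 2, 1: 12}, {2: 2, 1: 12}))  # True
```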
@@ -341,32 +353,33 @@ To insert conflicting records to ``instance001`` and ``instance002``, follow the
    Then, check ``box.info.replication`` on ``instance001``.
    ``upstream.status`` should be ``stopped`` because of the ``Duplicate key exists`` error:

-   .. code-block:: console
+   .. code-block:: tarantoolsession

       master_master:instance001> box.info.replication
       ---
       - 1:
           id: 1
-          uuid: 4cfa6e3c-625e-b027-00a7-29b2f2182f23
-          lsn: 9
+          uuid: c3bfd89f-5a1c-4556-aa9f-461377713a2a
+          lsn: 13
+          name: instance001
+        2:
+          id: 2
+          uuid: dccf7485-8bff-47f6-bfc4-b311701e36ef
+          lsn: 2
           upstream:
             peer: replicator@127.0.0.1:3302
-            lag: 143.52251672745
+            lag: 115.99977827072
             status: stopped
-            idle: 3.9462469999999
+            idle: 2.0342070000006
             message: Duplicate key exists in unique index "primary" in space "bands" with
-              old tuple - [5, "Pink Floyd", 1965] and new tuple - [5, "incorrect data", 0]
+              old tuple - [5, "Pink Floyd", 1965] and new tuple - [5, "incorrect data",
+              0]
           name: instance002
           downstream:
             status: stopped
-            message: 'unexpected EOF when reading from socket, called on fd 12, aka 127.0.0.1:3301,
-              peer of 127.0.0.1:59258: Broken pipe'
+            message: 'unexpected EOF when reading from socket, called on fd 24, aka 127.0.0.1:3301,
+              peer of 127.0.0.1:58478: Broken pipe'
             system_message: Broken pipe
-        2:
-          id: 2
-          uuid: 9bb111c2-3ff5-36a7-00f4-2b9a573ea660
-          lsn: 6
-          name: instance001
       ...

 The diagram below illustrates how the ``upstream`` and ``downstream`` connections look:
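The error shown above can be modeled in a few lines: when a replicated tuple arrives whose primary key already holds a different tuple, the applier refuses it and replication stops. This toy model (plain Python, no Tarantool involved) only illustrates the mechanism; names like ``apply_replicated`` are made up for the sketch:

```python
# Toy model of the replication conflict: a unique primary index rejects a
# replicated tuple whose key already exists with different data.
class DuplicateKeyError(Exception):
    pass

def apply_replicated(space: dict, tuple_: tuple) -> None:
    key = tuple_[0]
    old = space.get(key)
    if old is not None and old != tuple_:
        # Mirrors the upstream.message text in the session output above.
        raise DuplicateKeyError(
            f"Duplicate key exists: old tuple - {list(old)} "
            f"and new tuple - {list(tuple_)}")
    space[key] = tuple_

bands = {5: (5, "Pink Floyd", 1965)}  # written locally on instance001
try:
    # The conflicting row arrives from instance002 over replication.
    apply_replicated(bands, (5, "incorrect data", 0))
except DuplicateKeyError as e:
    print(e)  # in Tarantool, the applier stops: upstream.status = 'stopped'
```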
@@ -385,48 +398,62 @@ Reseeding a replica
 To resolve a replication conflict, ``instance002`` should get the correct data from ``instance001`` first.
 To achieve this, ``instance002`` should be rebootstrapped:

-1. In the ``config.yaml`` file, change ``database.mode`` of ``instance002`` to ``ro``:
+1. Select all the tuples in the :ref:`box.space._cluster <box_space-cluster>` system space to get a UUID of ``instance002``:
+
+   .. code-block:: tarantoolsession
+
+      master_master:instance001> box.space._cluster:select()
+      ---
+      - - [1, 'c3bfd89f-5a1c-4556-aa9f-461377713a2a', 'instance001']
+        - [2, 'dccf7485-8bff-47f6-bfc4-b311701e36ef', 'instance002']
+      ...
+
+2. In the ``config.yaml`` file, change the following ``instance002`` settings:
+
+   *  Set ``database.mode`` to ``ro``.
+   *  Set ``database.instance_uuid`` to a UUID value obtained in the previous step.

    .. code-block:: yaml

       instance002:
         database:
           mode: ro
+          instance_uuid: 'dccf7485-8bff-47f6-bfc4-b311701e36ef'

-2. Reload configurations on both instances using the ``reload()`` function provided by the :ref:`config <config-module>` module:
+3. Reload configurations on both instances using the :ref:`config:reload() <config_api_reference_reload>` function:

    -  ``instance001``:

-      .. code-block:: console
+      .. code-block:: tarantoolsession

          master_master:instance001> require('config'):reload()
          ---
          ...

    -  ``instance002``:

-      .. code-block:: console
+      .. code-block:: tarantoolsession

          master_master:instance002> require('config'):reload()
          ---
          ...

-3. Delete write-ahead logs and snapshots stored in the ``var/lib/instance002`` directory.
+4. Delete write-ahead logs and snapshots stored in the ``var/lib/instance002`` directory.

    .. NOTE::

       ``var/lib`` is the default directory used by tt to store write-ahead logs and snapshots.
       Learn more from :ref:`Configuration <tt-config>`.

-4. Restart ``instance002`` using the :ref:`tt restart <tt-restart>` command:
+5. Restart ``instance002`` using the :ref:`tt restart <tt-restart>` command:

    .. code-block:: console

       $ tt restart master_master:instance002

-5. Connect to ``instance002`` and make sure it received the correct data from ``instance001``:
+6. Connect to ``instance002`` and make sure it received the correct data from ``instance001``:

-   .. code-block:: console
+   .. code-block:: tarantoolsession

       master_master:instance002> box.space.bands:select()
       ---
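The "delete WALs and snapshots, then restart" part of the reseeding steps above amounts to removing the ``*.snap`` and ``*.xlog`` files from the instance's data directory. A small sketch of that cleanup, assuming tt's default ``var/lib/<instance>`` layout (the helper name is invented for illustration):

```python
# Illustrative sketch, not part of tt: delete an instance's snapshots (.snap)
# and write-ahead logs (.xlog) so it rebootstraps from the master on restart.
from pathlib import Path

def clear_instance_files(data_dir: str) -> list[str]:
    removed = []
    for pattern in ("*.snap", "*.xlog"):
        for f in sorted(Path(data_dir).glob(pattern)):
            f.unlink()
            removed.append(f.name)
    return removed

# Assumed default layout: var/lib/instance002 under the application directory.
# clear_instance_files("var/lib/instance002")
```

After the cleanup, restarting the instance (``tt restart master_master:instance002`` in the step above) triggers the rebootstrap.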
@@ -448,33 +475,33 @@ After :ref:`reseeding a replica <replication-master-master-reseed-replica>`, you
 1. Execute ``box.info.replication`` on ``instance001``.
    ``upstream.status`` is still stopped:

-   .. code-block:: console
+   .. code-block:: tarantoolsession

       master_master:instance001> box.info.replication
       ---
       - 1:
           id: 1
-          uuid: 4cfa6e3c-625e-b027-00a7-29b2f2182f23
-          lsn: 9
+          uuid: c3bfd89f-5a1c-4556-aa9f-461377713a2a
+          lsn: 13
+          name: instance001
+        2:
+          id: 2
+          uuid: dccf7485-8bff-47f6-bfc4-b311701e36ef
+          lsn: 2
           upstream:
             peer: replicator@127.0.0.1:3302
-            lag: 143.52251672745
+            lag: 115.99977827072
             status: stopped
-            idle: 1309.943383
+            idle: 1013.688243
             message: Duplicate key exists in unique index "primary" in space "bands" with
               old tuple - [5, "Pink Floyd", 1965] and new tuple - [5, "incorrect data",
               0]
           name: instance002
           downstream:
             status: follow
-            idle: 0.47881799999959
-            vclock: {2: 6, 1: 9}
+            idle: 0.69694700000036
+            vclock: {2: 2, 1: 13}
             lag: 0
-        2:
-          id: 2
-          uuid: 9bb111c2-3ff5-36a7-00f4-2b9a573ea660
-          lsn: 6
-          name: instance001
       ...

 The diagram below illustrates how the ``upstream`` and ``downstream`` connections look:
@@ -497,13 +524,14 @@ After :ref:`reseeding a replica <replication-master-master-reseed-replica>`, you

 3. Reload configuration on ``instance001`` only:

-   .. code-block:: console
+   .. code-block:: tarantoolsession

       master_master:instance001> require('config'):reload()
       ---
       ...

-4. Change ``database.mode`` values back to ``rw`` for both instances and restore ``iproto.listen`` for ``instance001``:
+4. Change ``database.mode`` values back to ``rw`` for both instances and restore ``iproto.listen`` for ``instance001``.
+   The ``database.instance_uuid`` option can be removed for ``instance002``:

    .. literalinclude:: /code_snippets/snippets/replication/instances.enabled/master_master/config.yaml
       :language: yaml
@@ -515,15 +543,15 @@ After :ref:`reseeding a replica <replication-master-master-reseed-replica>`, you

    -  ``instance001``:

-      .. code-block:: console
+      .. code-block:: tarantoolsession

          master_master:instance001> require('config'):reload()
          ---
          ...

    -  ``instance002``:

-      .. code-block:: console
+      .. code-block:: tarantoolsession

          master_master:instance002> require('config'):reload()
          ---
@@ -532,30 +560,30 @@ After :ref:`reseeding a replica <replication-master-master-reseed-replica>`, you
 6. Check ``box.info.replication``.
    ``upstream.status`` should be ``follow`` now.

-   .. code-block:: console
+   .. code-block:: tarantoolsession

       master_master:instance001> box.info.replication
       ---
       - 1:
           id: 1
-          uuid: 4cfa6e3c-625e-b027-00a7-29b2f2182f23
-          lsn: 9
+          uuid: c3bfd89f-5a1c-4556-aa9f-461377713a2a
+          lsn: 13
+          name: instance001
+        2:
+          id: 2
+          uuid: dccf7485-8bff-47f6-bfc4-b311701e36ef
+          lsn: 2
           upstream:
             status: follow
-            idle: 0.21281300000192
+            idle: 0.86873800000012
             peer: replicator@127.0.0.1:3302
-            lag: 0.00031113624572754
+            lag: 0.0001060962677002
           name: instance002
           downstream:
             status: follow
-            idle: 0.035179000002245
-            vclock: {2: 6, 1: 9}
+            idle: 0.058662999999797
+            vclock: {2: 2, 1: 13}
             lag: 0
-        2:
-          id: 2
-          uuid: 9bb111c2-3ff5-36a7-00f4-2b9a573ea660
-          lsn: 6
-          name: instance001
       ...
