Mar 13, 2019

Live MySQL Slave Rebuild with Percona Toolkit

Recently, we had an edge case where a MySQL slave went out-of-sync but couldn't be rebuilt from scratch. The slave was acting as a master server to some applications, and data was being written to it. This was a design error, and it is not recommended, but it happened. So how do you synchronize the data in this circumstance? This blog post describes the steps taken to recover from this situation. The tools used to recover the slave were pt-slave-restart, pt-table-checksum, pt-table-sync and mysqldiff.

Scenario

To illustrate this situation, a master/slave configuration was built, with sysbench running on the master server to simulate a general application workload. The environment was set up with Percona Server 5.7.24-26 and sysbench 1.0.16.

Below are the sysbench commands to prepare and simulate the workload:

# Create Data
sysbench --db-driver=mysql --mysql-user=root --mysql-password=msandbox \
  --mysql-socket=/tmp/mysql_sandbox45008.sock --mysql-db=test --range_size=100 \
  --table_size=5000 --tables=100 --threads=1 --events=0 --time=60 \
  --rand-type=uniform /usr/share/sysbench/oltp_read_only.lua prepare
# Simulate Workload
sysbench --db-driver=mysql --mysql-user=root --mysql-password=msandbox \
  --mysql-socket=/tmp/mysql_sandbox45008.sock --mysql-db=test --range_size=100 \
  --table_size=5000 --tables=100 --threads=10 --events=0 --time=6000 \
  --rand-type=uniform /usr/share/sysbench/oltp_read_write.lua --report-interval=1 run

With the environment set up, the slave server was stopped and some operations were performed to desynchronize the slave and reproduce the problem; a sketch of such operations follows.
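The post does not list the exact statements used, but operations along these lines reproduce the symptoms described below (rows missing on the slave, an extra column and index on sbtest98, a table that exists only on the master, and purged binary logs); all values are illustrative:

-- On the slave, with replication stopped: make the data and schema diverge
DELETE FROM test.sbtest8 WHERE id <= 100;
ALTER TABLE test.sbtest98 DROP INDEX k_98, ADD COLUMN x INT, ADD INDEX k_98 (k, x);

-- On the master: create a table the slave will never receive, then purge the
-- binary logs the slave still needs
CREATE TABLE test.joinit (i INT NOT NULL AUTO_INCREMENT PRIMARY KEY, s VARCHAR(64));
FLUSH BINARY LOGS;
PURGE BINARY LOGS TO 'mysql-bin.000007';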

Fixing the issue

With the slave desynchronized, replication was restarted. Immediately, the error below appeared:

Last_IO_Errno: 1236
Last_IO_Error: Got fatal error 1236 from master when reading data from binary log: 'Could not find first log file name in binary log index file'

To recover the slave from this error, we had to point the slave to an existing binary log with a valid binary log position. To get a valid binary log position, the command shown below had to be executed on the master:

master [localhost] {msandbox} ((none)) > show master status\G
*************************** 1. row ***************************
File: mysql-bin.000007
Position: 218443612
Binlog_Do_DB:
Binlog_Ignore_DB:
Executed_Gtid_Set:
1 row in set (0.01 sec)

Then, a CHANGE MASTER command was run on the slave:

slave1 [localhost] {msandbox} (test) > change master to master_log_file='mysql-bin.000007', MASTER_LOG_POS=218443612;
Query OK, 0 rows affected (0.00 sec)
slave1 [localhost] {msandbox} (test) > start slave;
Query OK, 0 rows affected (0.00 sec)
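Before moving on, it is worth glancing at SHOW SLAVE STATUS on the slave: the IO thread should now be reading from the new coordinates, and any remaining problem will show up in the Last_SQL_Errno/Last_SQL_Error fields. Output abbreviated, values illustrative:

slave1 [localhost] {msandbox} (test) > show slave status\G
...
              Master_Log_File: mysql-bin.000007
          Read_Master_Log_Pos: 218443612
             Slave_IO_Running: Yes
...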

Now the slave had a valid binary log file to read from, but since its data was inconsistent, it hit another error:

Last_SQL_Errno: 1032
Last_SQL_Error: Could not execute Delete_rows event on table test.sbtest8; Can't find record in 'sbtest8', Error_code: 1032; handler error HA_ERR_KEY_NOT_FOUND; the event's master log mysql-bin.000005, end_log_pos 326822861

Working past the errors

Before fixing the inconsistencies, it was necessary to keep replication running and skip the errors. For this, the pt-slave-restart tool was used. The tool needs to be run on the slave server:

pt-slave-restart --user root --socket=/tmp/mysql_sandbox45008.sock --ask-pass

The tool skips errors and restarts the replication threads. Below is an example of the output of pt-slave-restart:

$ pt-slave-restart --user root --socket=/tmp/mysql_sandbox45009.sock --ask-pass
Enter password:
2019-02-22T14:18:01 S=/tmp/mysql_sandbox45009.sock,p=...,u=root mysql-relay.000007        1996 1146
2019-02-22T14:18:02 S=/tmp/mysql_sandbox45009.sock,p=...,u=root mysql-relay.000007        8698 1146
2019-02-22T14:18:02 S=/tmp/mysql_sandbox45009.sock,p=...,u=root mysql-relay.000007       38861 1146
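By default, pt-slave-restart skips whatever error it encounters. To be more conservative, the tool can be restricted to a list of error codes so that anything unexpected still stops replication. A sketch, assuming the same sandbox socket (1032 is the error seen here, 1062 is another common one in this kind of scenario; check the tool's documentation for your version):

pt-slave-restart --user root --socket=/tmp/mysql_sandbox45009.sock --ask-pass --error-numbers=1032,1062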

Finding the inconsistencies

With the tool running in one terminal, the phase of checking for inconsistencies began. First things first, an object definition check was performed using the mysqldiff utility, which is part of MySQL Utilities. To execute the tool:

$ mysqldiff --server1=root:msandbox@localhost:48008 --server2=root:msandbox@localhost:48009 test:test --difftype=sql --changes-for=server2

And below are the differences found between the master and the slave:

1-) A table that doesn't exist on the slave

# WARNING: Objects in server1.test but not in server2.test:
# TABLE: joinit

2-) A wrong table structure

# Comparing `test`.`sbtest98` to `test`.`sbtest98` [FAIL]
# Transformation for --changes-for=server2:
#
ALTER TABLE `test`.`sbtest98`
DROP INDEX k_98,
DROP COLUMN x,
ADD INDEX k_98 (k);
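To bring the slave's object definitions in line with the master, the missing table was created and the suggested transformation applied on the slave. A minimal sketch; the joinit definition is not shown here and should be taken from the master:

-- On the master: get the exact definition of the missing table
SHOW CREATE TABLE test.joinit\G

-- On the slave: replay the CREATE TABLE statement returned above, then apply
-- the transformation suggested by mysqldiff
ALTER TABLE `test`.`sbtest98`
  DROP INDEX k_98,
  DROP COLUMN x,
  ADD INDEX k_98 (k);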

After performing the recommendations on the slave (creating the missing table and applying the table modification), the object definitions were equal. The next step was to check data consistency. For this, pt-table-checksum was used to identify which tables were out of sync. The command was run on the master server:

$ pt-table-checksum -uroot -pmsandbox --socket=/tmp/mysql_sandbox48008.sock --replicate=percona.checksums --create-replicate-table --empty-replicate-table --no-check-binlog-format --recursion-method=hosts

And an output example:

01 master]$ pt-table-checksum --recursion-method dsn=D=percona,t=dsns --no-check-binlog-format --nocheck-replication-filter --host 127.0.0.1 --user root --port 48008 --password=msandbox
Checking if all tables can be checksummed ...
Starting checksum ...
  at /usr/bin/pt-table-checksum line 332.
Replica lag is 66 seconds on bm-support01.bm.int.percona.com.  Waiting.
Replica lag is 46 seconds on bm-support01.bm.int.percona.com.  Waiting.
Replica lag is 33 seconds on bm-support01.bm.int.percona.com.  Waiting.
           TS ERRORS  DIFFS     ROWS  DIFF_ROWS  CHUNKS SKIPPED    TIME TABLE
02-26T16:27:59      0      0     5000          0       1       0   0.037 test.sbtest1
02-26T16:27:59      0      0     5000          0       1       0   0.039 test.sbtest10
02-26T16:27:59      0      1     5000          0       1       0   0.033 test.sbtest100
02-26T16:27:59      0      1     5000          0       1       0   0.034 test.sbtest11
02-26T16:27:59      0      1     5000          0       1       0   0.040 test.sbtest12
02-26T16:27:59      0      1     5000          0       1       0   0.034 test.sbtest13
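Since the checksums are stored in the --replicate table (percona.checksums here), the out-of-sync tables can also be listed directly on the slave with a query along these lines:

mysql> SELECT db, tbl,
       SUM(this_cnt) AS total_rows,
       COUNT(*) AS chunks
FROM percona.checksums
WHERE (master_cnt <> this_cnt
       OR master_crc <> this_crc
       OR ISNULL(master_crc) <> ISNULL(this_crc))
GROUP BY db, tbl;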

Fixing the data inconsistencies

By analyzing the DIFFS column it is possible to identify which tables were compromised. With this information, the pt-table-sync tool was used to fix these inconsistencies. The tool synchronizes MySQL table data efficiently, executing changes on the master that are no-ops there but, once replicated, fix the data on the slave. The tool needs to be run on the slave server. Below is an example of the tool running:

$ pt-table-sync --execute --sync-to-master h=localhost,u=root,p=msandbox,D=test,t=sbtest100,S=/tmp/mysql_sandbox48009.sock

It is possible to perform a dry run before executing the changes, to check what the tool will apply:

$ pt-table-sync --print --sync-to-master h=localhost,u=root,p=msandbox,D=test,t=sbtest100,S=/tmp/mysql_sandbox48009.sock
REPLACE INTO `test`.`sbtest100`(`id`, `k`, `c`, `pad`) VALUES ('1', '1654', '97484653464-60074971666-42998564849-40530823048-27591234964-93988623123-02188386693-94155746040-59705759910-14095637891', '15000678573-85832916990-95201670192-53956490549-57402857633') /*percona-toolkit src_db:test src_tbl:sbtest100 src_dsn:D=test,P=48008,S=/tmp/mysql_sandbox48009.sock,h=127.0.0.1,p=...,t=sbtest100,u=root dst_db:test dst_tbl:sbtest100 dst_dsn:D=test,S=/tmp/mysql_sandbox48009.sock,h=localhost,p=...,t=sbtest100,u=root lock:1 transaction:1 changing_src:1 replicate:0 bidirectional:0 pid:17806 user:vinicius.grippa host:bm-support01.bm.int.percona.com*/;
REPLACE INTO `test`.`sbtest100`(`id`, `k`, `c`, `pad`) VALUES ('2', '3007', '31679133794-00154186785-50053859647-19493043469-34585653717-64321870163-33743380797-12939513287-31354198555-82828841987', '30122503210-11153873086-87146161761-60299188705-59630949292') /*percona-toolkit src_db:test src_tbl:sbtest100 src_dsn:D=test,P=48008,S=/tmp/mysql_sandbox48009.sock,h=127.0.0.1,p=...,t=sbtest100,u=root dst_db:test dst_tbl:sbtest100 dst_dsn:D=test,S=/tmp/mysql_sandbox48009.sock,h=localhost,p=...,t=sbtest100,u=root lock:1 transaction:1 changing_src:1 replicate:0 bidirectional:0 pid:17806 user:vinicius.grippa host:bm-support01.bm.int.percona.com*/;

After executing pt-table-sync, we recommend running pt-table-checksum again and checking that the DIFFS column shows a value of 0.
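If the checksums from the previous run are still in the percona.checksums table, pt-table-checksum can also report the remaining differences without checksumming everything again, using its --replicate-check-only mode. A sketch with the same connection options used earlier:

$ pt-table-checksum -uroot -pmsandbox --socket=/tmp/mysql_sandbox48008.sock --replicate=percona.checksums --replicate-check-only --recursion-method=hosts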

Conclusion

This blog post was intended to cover the kinds of issues that can leave a slave out-of-sync, such as DDL operations, binary log purges and DML operations. The process involves many steps and can take a long time to finish, especially on large databases. Note that it might take longer than a backup/restore of the slave would. However, in situations like the one described above, it might be the only way to recover the slave.



Nov 04, 2014

Data inconsistencies on MySQL replicas: Beyond pt-table-checksum

Percona Toolkit's pt-table-checksum is a great tool to find data inconsistencies between a MySQL master and its replicas. However, it is sometimes not enough to know that there are inconsistencies and to let pt-table-sync fix them: you may want to know exactly which rows are different, to identify the statements that created the inconsistency. This post shows one way to achieve that goal.

The issue

Let's assume you have two servers running MySQL 5.5: db1, the master, and db2, the replica. You want to upgrade to MySQL 5.6 using an in-place upgrade and, to play it safe, you will upgrade db2 (the slave) first. If all goes well, you will promote it and then upgrade db1.

A good thing to do after upgrading db2 is to check for potential data inconsistencies with pt-table-checksum.
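A typical invocation, run from db1, might look like the sketch below; the host, user and other options are placeholders and will differ per environment:

# Run from db1 (the master); connection details are placeholders
pt-table-checksum --replicate=percona.checksums --host=db1 --user=checksum_user --ask-pass

Once checksumming is done, you can run the following query on db2 to see if there is any data drift: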

mysql> SELECT db, tbl,
       SUM(this_cnt) AS total_rows,
       COUNT(*) AS chunks
FROM percona.checksums
WHERE (master_cnt <> this_cnt
       OR master_crc <> this_crc
       OR ISNULL(master_crc) <> ISNULL(this_crc))
GROUP BY db, tbl;
+------+-------+------------+--------+
| db   | tbl   | total_rows | chunks |
+------+-------+------------+--------+
| mydb | items |    3745563 |     17 |
+------+-------+------------+--------+

This indicates that inconsistencies can be found in mydb.items in 17 chunks. Now the question is: which rows are different on db1 and db2?

The solution

The previous query shows that we will find inconsistencies in 17 of the chunks pt-table-checksum used. But what is a chunk?

mysql> SELECT *
       FROM percona.checksums
       WHERE this_crc != master_crc
       AND tbl='items'\G
***************** 1. row *****************
            db: mydb
           tbl: items
         chunk: 28
    chunk_time: 0.123122
   chunk_index: PRIMARY
lower_boundary: 7487511
upper_boundary: 7563474
      this_crc: 2c11da8d
      this_cnt: 75964
    master_crc: 66a1c22c
    master_cnt: 75964
            ts: 2014-10-22 01:21:26
[...]

So the first chunk with inconsistencies is chunk #28, which is the set of rows where the primary key is >= 7487511 and <= 7563474.

Let's export these rows on both the db1 and db2 instances:

# db1
mysql> SELECT * INTO outfile '/tmp/items_db1.txt'
       FROM mydb.items
       WHERE id BETWEEN 7487511 AND 7563474;
# db2
mysql> SELECT * INTO outfile '/tmp/items_db2.txt'
       FROM mydb.items
       WHERE id BETWEEN 7487511 AND 7563474;

Then let's use diff to isolate the non-matching rows:

# Using diff to compare rows
# diff items_db1.txt items_db2.txt
75872,75874c75872,75874
< 7563382 2127002 3 0 2014-10-22 02:51:33
< 7563383 2127002 4 0 2014-10-22 02:51:33
< 7563384 2127002 5 0 2014-10-22 02:51:33
---
> 7563382 2127002 3 0 2014-10-22 02:51:34
> 7563383 2127002 4 0 2014-10-22 02:51:34
> 7563384 2127002 5 0 2014-10-22 02:51:34
[...]

We can see that some datetime fields are off by 1 second on the 5.6 instance.
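To track down the statements that wrote those rows, the master's binary logs for the relevant time window can be decoded with mysqlbinlog. A sketch; the binary log file name and the time window are placeholders:

# Binary log file name and time window are placeholders
mysqlbinlog --start-datetime='2014-10-22 02:51:00' --stop-datetime='2014-10-22 02:52:00' mysql-bin.000123 | grep 'INSERT INTO items'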

In this case, the binlogs showed queries like:

INSERT INTO items ([...],posted_at) VALUES ([...],'2014-10-22 02:51:33.835249');

MySQL 5.5 truncates '2014-10-22 02:51:33.835249' to '2014-10-22 02:51:33' (the fractional part is simply dropped), while MySQL 5.6 rounds it to '2014-10-22 02:51:34'.
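This difference is easy to reproduce on a 5.5 and a 5.6 instance with a throwaway table; the example below only illustrates the rounding behavior described above:

-- Run the same statements on a 5.5 and a 5.6 server and compare the stored value
CREATE TABLE fractional_test (d DATETIME);
INSERT INTO fractional_test VALUES ('2014-10-22 02:51:33.835249');
SELECT d FROM fractional_test;
-- 5.5 stores 2014-10-22 02:51:33 (fraction dropped)
-- 5.6 stores 2014-10-22 02:51:34 (rounded)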

Now it's easy to fix the application so that it works with both MySQL 5.5 and 5.6, and then continue testing MySQL 5.6.

Conclusion

The method shown above is an easy way to find the exact records that are inconsistent between the MySQL master and a replica. It is not useful if you only want to resync the slave (in that case, just run pt-table-sync), but it can be a first step in understanding how inconsistencies are created.

