Mar 13, 2019

Live MySQL Slave Rebuild with Percona Toolkit

MySQL slave data out of sync

Recently, we had an edge case where a MySQL slave went out-of-sync but it couldn't be rebuilt from scratch. The slave was acting as a master server to some applications, and data was being written to it. It was a design error, and this is not recommended, but it happened. So how do you synchronize the data in this circumstance? This blog post describes the steps taken to recover from this situation. The tools used to recover the slave were pt-slave-restart, pt-table-checksum, pt-table-sync, and mysqldiff.

Scenario

To illustrate this situation, a master/slave configuration was built, with sysbench running on the master server to simulate a general application workload. The environment was set up with Percona Server 5.7.24-26 and sysbench 1.0.16.
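The post does not show how the sandbox was built; as an assumption (not from the original post), a comparable master/slave environment can be spun up with dbdeployer, which creates sandboxes with socket paths like the ones used below:

# Assumes the Percona Server 5.7.24 binaries have already been unpacked for dbdeployer
dbdeployer deploy replication 5.7.24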

Below are the sysbench commands to prepare and simulate the workload:

# Create Data
sysbench --db-driver=mysql --mysql-user=root --mysql-password=msandbox \
  --mysql-socket=/tmp/mysql_sandbox45008.sock --mysql-db=test --range_size=100 \
  --table_size=5000 --tables=100 --threads=1 --events=0 --time=60 \
  --rand-type=uniform /usr/share/sysbench/oltp_read_only.lua prepare
# Simulate Workload
sysbench --db-driver=mysql --mysql-user=root --mysql-password=msandbox \
  --mysql-socket=/tmp/mysql_sandbox45008.sock --mysql-db=test --range_size=100 \
  --table_size=5000 --tables=100 --threads=10 --events=0 --time=6000 \
  --rand-type=uniform /usr/share/sysbench/oltp_read_write.lua --report-interval=1 run

With the environment set, the slave server was stopped, and some operations to desynchronize the slave were performed to reproduce the problem.
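For reference, the kind of operations that can leave a slave in this state look like the following (a hypothetical sketch, not the exact statements used for this test): direct writes and DDL on the slave while replication is stopped, plus a binary log purge on the master.

-- On the slave, with replication stopped (STOP SLAVE), local changes the master never sees:
DELETE FROM test.sbtest8 WHERE id < 100;     -- rows now missing on the slave
ALTER TABLE test.sbtest98 ADD COLUMN x INT;  -- diverging table definition

-- On the master, rotate and purge binary logs so the slave can no longer find its position;
-- any DDL inside the purged range (for example, creating test.joinit) never reaches the slave:
FLUSH LOGS;
PURGE BINARY LOGS TO 'mysql-bin.000007';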

Fixing the issue

With the slave desynchronized, replication was restarted. Immediately, the error below appeared:

Last_IO_Errno: 1236
Last_IO_Error: Got fatal error 1236 from master when reading data from binary log: 'Could not find first log file name in binary log index file'
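The Last_IO_Errno and Last_IO_Error fields above come from the slave's replication status, which can be inspected at any time with:

slave1 [localhost] {msandbox} (test) > show slave status\G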

To recover the slave from this error, we had to point the slave to an existing binary log with a valid binary log position. To get a valid binary log position, the command shown below had to be executed on the master:

master [localhost] {msandbox} ((none)) > show master status\G
*************************** 1. row ***************************
File: mysql-bin.000007
Position: 218443612
Binlog_Do_DB:
Binlog_Ignore_DB:
Executed_Gtid_Set:
1 row in set (0.01 sec)

Then, a CHANGE MASTER command was run on the slave:

slave1 [localhost] {msandbox} (test) > change master to master_log_file='mysql-bin.000007', MASTER_LOG_POS=218443612;
Query OK, 0 rows affected (0.00 sec)
slave1 [localhost] {msandbox} (test) > start slave;
Query OK, 0 rows affected (0.00 sec)

Now the slave had a valid binary log file to read, but since its data was inconsistent with the master's, it hit another error:

Last_SQL_Errno: 1032
Last_SQL_Error: Could not execute Delete_rows event on table test.sbtest8; Can't find record in 'sbtest8', Error_code: 1032; handler error HA_ERR_KEY_NOT_FOUND; the event's master log mysql-bin.000005, end_log_pos 326822861

Working past the errors

Before fixing the inconsistencies, it was necessary to keep replication running and to skip the errors. For this, the pt-slave-restart tool was used. The tool needs to be run on the slave server:

pt-slave-restart --user root --socket=/tmp/mysql_sandbox45008.sock --ask-pass

The tool skips the errors and restarts the replication threads. Below is an example of the output of pt-slave-restart:

$ pt-slave-restart --user root --socket=/tmp/mysql_sandbox45009.sock --ask-pass
Enter password:
2019-02-22T14:18:01 S=/tmp/mysql_sandbox45009.sock,p=...,u=root mysql-relay.000007        1996 1146
2019-02-22T14:18:02 S=/tmp/mysql_sandbox45009.sock,p=...,u=root mysql-relay.000007        8698 1146
2019-02-22T14:18:02 S=/tmp/mysql_sandbox45009.sock,p=...,u=root mysql-relay.000007       38861 1146
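pt-slave-restart automates what would otherwise be done by hand for each error; for a single offending event on this non-GTID setup, the manual equivalent is (a minimal sketch):

slave1 [localhost] {msandbox} (test) > stop slave;
slave1 [localhost] {msandbox} (test) > set global sql_slave_skip_counter = 1;
slave1 [localhost] {msandbox} (test) > start slave;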

Finding the inconsistencies

With the tool running on one terminal, the phase to check the inconsistencies began. First things first, an object definition check was performed using the mysqldiff utility, which is part of MySQL Utilities. To execute the tool:

$ mysqldiff --server1=root:msandbox@localhost:48008 --server2=root:msandbox@localhost:48009 test:test --difftype=sql --changes-for=server2

And below are the differences found between the master and the slave:

1) A table that doesn't exist on the slave

# WARNING: Objects in server1.test but not in server2.test:
# TABLE: joinit

2) A wrong table structure

# Comparing `test`.`sbtest98` to `test`.`sbtest98` [FAIL]
# Transformation for --changes-for=server2:
#
ALTER TABLE `test`.`sbtest98`
DROP INDEX k_98,
DROP COLUMN x,
ADD INDEX k_98 (k);
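To apply these recommendations on the slave, something along these lines can be used (a sketch; the socket paths are the sandbox ones from this post, and the missing table definition has to come from the master):

# Copy the definition of the missing table from the master to the slave
mysqldump --no-data -uroot -pmsandbox --socket=/tmp/mysql_sandbox48008.sock test joinit | \
  mysql -uroot -pmsandbox --socket=/tmp/mysql_sandbox48009.sock test

# Apply the ALTER suggested by mysqldiff on the slave
mysql -uroot -pmsandbox --socket=/tmp/mysql_sandbox48009.sock test \
  -e "ALTER TABLE sbtest98 DROP INDEX k_98, DROP COLUMN x, ADD INDEX k_98 (k);"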

After performing these recommendations on the slave (creating the missing table and applying the table modification), the object definitions matched. The next step was to check data consistency. For this, pt-table-checksum was used to identify which tables were out of sync. This command was run on the master server:

$ pt-table-checksum -uroot -pmsandbox --socket=/tmp/mysql_sandbox48008.sock --replicate=percona.checksums --create-replicate-table --empty-replicate-table --no-check-binlog-format --recursion-method=hosts

And an output example:

01 master]$ pt-table-checksum --recursion-method dsn=D=percona,t=dsns --no-check-binlog-format --nocheck-replication-filter --host 127.0.0.1 --user root --port 48008 --password=msandbox
Checking if all tables can be checksummed ...
Starting checksum ...
  at /usr/bin/pt-table-checksum line 332.
Replica lag is 66 seconds on bm-support01.bm.int.percona.com.  Waiting.
Replica lag is 46 seconds on bm-support01.bm.int.percona.com.  Waiting.
Replica lag is 33 seconds on bm-support01.bm.int.percona.com.  Waiting.
           TS ERRORS  DIFFS     ROWS  DIFF_ROWS  CHUNKS SKIPPED    TIME TABLE
02-26T16:27:59      0      0     5000          0       1       0   0.037 test.sbtest1
02-26T16:27:59      0      0     5000          0       1       0   0.039 test.sbtest10
02-26T16:27:59      0      1     5000          0       1       0   0.033 test.sbtest100
02-26T16:27:59      0      1     5000          0       1       0   0.034 test.sbtest11
02-26T16:27:59      0      1     5000          0       1       0   0.040 test.sbtest12
02-26T16:27:59      0      1     5000          0       1       0   0.034 test.sbtest13
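The checksum results are stored in the percona.checksums table on the slave, so besides reading the DIFFS column above, the out-of-sync tables can also be listed with the query suggested in the pt-table-checksum documentation (run on the slave):

SELECT db, tbl, SUM(this_cnt) AS total_rows, COUNT(*) AS chunks
FROM percona.checksums
WHERE (
 master_cnt <> this_cnt
 OR master_crc <> this_crc
 OR ISNULL(master_crc) <> ISNULL(this_crc))
GROUP BY db, tbl;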

Fixing the data inconsistencies

Analyzing the DIFFS column, it is possible to identify which tables were compromised. With this information, the pt-table-sync tool was used to fix these inconsistencies. The tool synchronizes MySQL table data efficiently, executing changes on the master that are no-ops there but, once replicated, fix the data on the slave. The tool needs to be run on the slave server. Below is an example of the tool running:

$ pt-table-sync --execute --sync-to-master h=localhost,u=root,p=msandbox,D=test,t=sbtest100,S=/tmp/mysql_sandbox48009.sock

It is possible to perform a dry-run of the tool before executing the changes to check what changes the tool will apply:

$ pt-table-sync --print --sync-to-master h=localhost,u=root,p=msandbox,D=test,t=sbtest100,S=/tmp/mysql_sandbox48009.sock
REPLACE INTO `test`.`sbtest100`(`id`, `k`, `c`, `pad`) VALUES ('1', '1654', '97484653464-60074971666-42998564849-40530823048-27591234964-93988623123-02188386693-94155746040-59705759910-14095637891', '15000678573-85832916990-95201670192-53956490549-57402857633') /*percona-toolkit src_db:test src_tbl:sbtest100 src_dsn:D=test,P=48008,S=/tmp/mysql_sandbox48009.sock,h=127.0.0.1,p=...,t=sbtest100,u=root dst_db:test dst_tbl:sbtest100 dst_dsn:D=test,S=/tmp/mysql_sandbox48009.sock,h=localhost,p=...,t=sbtest100,u=root lock:1 transaction:1 changing_src:1 replicate:0 bidirectional:0 pid:17806 user:vinicius.grippa host:bm-support01.bm.int.percona.com*/;
REPLACE INTO `test`.`sbtest100`(`id`, `k`, `c`, `pad`) VALUES ('2', '3007', '31679133794-00154186785-50053859647-19493043469-34585653717-64321870163-33743380797-12939513287-31354198555-82828841987', '30122503210-11153873086-87146161761-60299188705-59630949292') /*percona-toolkit src_db:test src_tbl:sbtest100 src_dsn:D=test,P=48008,S=/tmp/mysql_sandbox48009.sock,h=127.0.0.1,p=...,t=sbtest100,u=root dst_db:test dst_tbl:sbtest100 dst_dsn:D=test,S=/tmp/mysql_sandbox48009.sock,h=localhost,p=...,t=sbtest100,u=root lock:1 transaction:1 changing_src:1 replicate:0 bidirectional:0 pid:17806 user:vinicius.grippa host:bm-support01.bm.int.percona.com*/;
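When many tables differ, pt-table-sync can also be pointed at the checksum results instead of at each table individually (a sketch, not from the original post; review the --print output before switching to --execute):

$ pt-table-sync --print --replicate=percona.checksums --sync-to-master \
    h=localhost,u=root,p=msandbox,S=/tmp/mysql_sandbox48009.sock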

After executing pt-table-sync, we recommend running pt-table-checksum again and checking that the DIFFS column shows 0 for all tables.
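If only a handful of tables were fixed, the re-check can be limited to them (the table list below is illustrative):

$ pt-table-checksum -uroot -pmsandbox --socket=/tmp/mysql_sandbox48008.sock \
    --replicate=percona.checksums --no-check-binlog-format --recursion-method=hosts \
    --tables=test.sbtest100,test.sbtest11,test.sbtest12,test.sbtest13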

Conclusion

This blog post was intended to cover the issues that can happen on a slave when it goes out of sync, such as DDL operations, binary log purges, and DML operations. This process involves many steps and it can take a long time to finish, especially on large databases. Note that it might take longer than a backup/restore of the slave. However, in situations like the one described above, it might be the only way to recover the slave.


Image based on Photo by Randy Fath on Unsplash

 

Oct 14, 2014

Recover orphaned InnoDB partition tablespaces in MySQL

A few months back, Michael wrote about reconnecting orphaned *.ibd files using MySQL 5.6. I will show you the same procedure, this time for partitioned tables. An InnoDB partition is itself a self-contained tablespace, so you can use the same method described in the previous post.

To begin with, I have an example table with a few orphaned partitions and we will reconnect each partition one by one to the original table.

mysql [localhost] {msandbox} (recovery) > SHOW CREATE TABLE t1\G
*************************** 1. row ***************************
Table: t1
Create Table: CREATE TABLE `t1` (
[...]
KEY `h_date` (`h_date`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1
/*!50100 PARTITION BY RANGE (year(h_date))
(PARTITION p0 VALUES LESS THAN (2006) ENGINE = InnoDB,
PARTITION p1 VALUES LESS THAN (2010) ENGINE = InnoDB,
PARTITION px VALUES LESS THAN MAXVALUE ENGINE = InnoDB) */
1 row in set (0.00 sec)
mysql [localhost] {msandbox} (recovery) > SELECT COUNT(*) FROM t1;
+----------+
| COUNT(*) |
+----------+
|        0 |
+----------+
1 row in set (0.00 sec)
-rw-rw----.  1 revin revin 176M Oct  8 08:41 t1#P#p0.ibd
-rw-rw----.  1 revin revin 612M Oct  8 08:41 t1#P#p1.ibd
-rw-rw----.  1 revin revin 932M Oct  8 08:42 t1#P#px.ibd

The first step is to create a dummy table and remove partitioning so that we can reattach individual partitions to this table.

mysql [localhost] {msandbox} (recovery) > CREATE TABLE t1_t LIKE t1;
Query OK, 0 rows affected (0.02 sec)
mysql [localhost] {msandbox} (recovery) > ALTER TABLE t1_t REMOVE PARTITIONING;
Query OK, 0 rows affected (0.01 sec)
Records: 0  Duplicates: 0  Warnings: 0
mysql [localhost] {msandbox} (recovery) > ALTER TABLE t1_t DISCARD TABLESPACE;
Query OK, 0 rows affected (0.05 sec)


Once the tablespace for our dummy table has been discarded, we copy one of the partitions to take the place of the dummy table's tablespace. For example, we copy t1#P#p0.ibd as t1_t.ibd into the MySQL data directory, of course taking into account the permissions afterward.
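A minimal sketch of that copy step (the data directory path and the source of the orphaned file are assumptions; adjust both, and note that the database in this example is named recovery):

cp '/path/to/orphaned/t1#P#p0.ibd' /var/lib/mysql/recovery/t1_t.ibd
chown mysql:mysql /var/lib/mysql/recovery/t1_t.ibd
chmod 660 /var/lib/mysql/recovery/t1_t.ibd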

The next step is to import the tablespace into the dummy table.

mysql [localhost] {msandbox} (recovery) > ALTER TABLE t1_t IMPORT TABLESPACE;
Query OK, 0 rows affected, 1 warning (7.34 sec)


And for the secret sauce, we exchange our dummy table's freshly imported tablespace with the target partition in our original table.

mysql [localhost] {msandbox} (recovery) > ALTER TABLE t1 EXCHANGE PARTITION px WITH TABLE t1_t;
Query OK, 0 rows affected (6.42 sec)
mysql [localhost] {msandbox} (recovery) > SELECT COUNT(*) FROM t1;
+----------+
| COUNT(*) |
+----------+
|  8523686 |
+----------+
1 row in set (2.50 sec)
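The remaining partitions are reattached by repeating the same cycle for each one; a sketch for p0:

ALTER TABLE t1_t DISCARD TABLESPACE;
-- copy t1#P#p0.ibd into the data directory as t1_t.ibd, fix ownership, then:
ALTER TABLE t1_t IMPORT TABLESPACE;
ALTER TABLE t1 EXCHANGE PARTITION p0 WITH TABLE t1_t;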


You can do the same with subpartitions, too! Here's my slightly different table with subpartitions, where I reconnect one of the orphaned tablespaces, t2#P#px#SP#pxsp1.ibd.

mysql [localhost] {msandbox} (recovery) > SHOW CREATE TABLE t2\G
*************************** 1. row ***************************
       Table: t2
Create Table: CREATE TABLE `t2` (
[...]
  KEY `h_date` (`h_date`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1
/*!50100 PARTITION BY RANGE (year(h_date))
SUBPARTITION BY HASH (u_id)
SUBPARTITIONS 2
(PARTITION p0 VALUES LESS THAN (2006) ENGINE = InnoDB,
 PARTITION p1 VALUES LESS THAN (2010) ENGINE = InnoDB,
 PARTITION px VALUES LESS THAN MAXVALUE ENGINE = InnoDB) */
1 row in set (0.00 sec)
mysql [localhost] {msandbox} (recovery) > SELECT COUNT(*) FROM t2;
+----------+
| COUNT(*) |
+----------+
|        0 |
+----------+
1 row in set (0.94 sec)
-rw-rw----.  1 revin revin  92M Oct  8 08:44 t2#P#p0#SP#p0sp0.ibd
-rw-rw----.  1 revin revin  92M Oct  8 08:44 t2#P#p0#SP#p0sp1.ibd
-rw-rw----.  1 revin revin 304M Oct  8 08:44 t2#P#p1#SP#p1sp0.ibd
-rw-rw----.  1 revin revin 316M Oct  8 08:44 t2#P#p1#SP#p1sp1.ibd
-rw-rw----.  1 revin revin 480M Oct  8 08:45 t2#P#px#SP#pxsp0.ibd
-rw-rw----.  1 revin revin 460M Oct  8 08:45 t2#P#px#SP#pxsp1.ibd
mysql [localhost] {msandbox} (recovery) > CREATE TABLE t2_t LIKE t2;
Query OK, 0 rows affected (0.02 sec)
mysql [localhost] {msandbox} (recovery) > ALTER TABLE t2_t REMOVE PARTITIONING;
Query OK, 0 rows affected (0.02 sec)
Records: 0  Duplicates: 0  Warnings: 0
mysql [localhost] {msandbox} (recovery) > ALTER TABLE t2_t DISCARD TABLESPACE;
Query OK, 0 rows affected (0.04 sec)


Once again, after copying t2#P#px#SP#pxsp1.ibd to replace t2_t.ibd in the MySQL data directory and setting the correct permissions, we can import it into our dummy table and then exchange it into the original table. In this case, on a table with subpartitions, a tablespace is a combined unit of partition and subpartition, hence in the file name above P#px is the main partition and SP#pxsp1 the subpartition name. For the exchange command below, we only need the latter portion.

mysql [localhost] {msandbox} (recovery) > ALTER TABLE t2_t IMPORT TABLESPACE;
Query OK, 0 rows affected, 1 warning (2.49 sec)
mysql [localhost] {msandbox} (recovery) > ALTER TABLE t2 EXCHANGE PARTITION pxsp1 WITH TABLE t2_t;
Query OK, 0 rows affected (3.11 sec)
mysql [localhost] {msandbox} (recovery) > SELECT COUNT(*) FROM t2;
+----------+
| COUNT(*) |
+----------+
|  4546036 |
+----------+
1 row in set (0.94 sec)


But wait, there's more: did you know that in MySQL 5.7 you can take the full shortcut and import all partitions back to the original table directly? Sweet! Here is a quick example on MySQL 5.7.5: I created the same t2 table as above, ran DISCARD TABLESPACE, copied the partition tablespaces over from my test 5.6 instance, and ran IMPORT TABLESPACE. And done! :-)

mysql [localhost] {msandbox} (test) > SELECT COUNT(*) FROM t2;
+----------+
| COUNT(*) |
+----------+
|        0 |
+----------+
1 row in set (0.00 sec)
mysql [localhost] {msandbox} (test) > ALTER TABLE t2 DISCARD TABLESPACE;
Query OK, 0 rows affected (0.02 sec)
mysql [localhost] {msandbox} (test) > \! cp -v /sbx/msb/msb_5_6_210/data/test/t2#P#* /sbx/msb/msb_5_7_5/data/test/
`/sbx/msb/msb_5_6_210/data/test/t2#P#p0#SP#p0sp0.ibd' -> `/sbx/msb/msb_5_7_5/data/test/t2#P#p0#SP#p0sp0.ibd'
`/sbx/msb/msb_5_6_210/data/test/t2#P#p0#SP#p0sp1.ibd' -> `/sbx/msb/msb_5_7_5/data/test/t2#P#p0#SP#p0sp1.ibd'
`/sbx/msb/msb_5_6_210/data/test/t2#P#p1#SP#p1sp0.ibd' -> `/sbx/msb/msb_5_7_5/data/test/t2#P#p1#SP#p1sp0.ibd'
`/sbx/msb/msb_5_6_210/data/test/t2#P#p1#SP#p1sp1.ibd' -> `/sbx/msb/msb_5_7_5/data/test/t2#P#p1#SP#p1sp1.ibd'
`/sbx/msb/msb_5_6_210/data/test/t2#P#px#SP#pxsp0.ibd' -> `/sbx/msb/msb_5_7_5/data/test/t2#P#px#SP#pxsp0.ibd'
`/sbx/msb/msb_5_6_210/data/test/t2#P#px#SP#pxsp1.ibd' -> `/sbx/msb/msb_5_7_5/data/test/t2#P#px#SP#pxsp1.ibd'
mysql [localhost] {msandbox} (test) > ALTER TABLE t2 IMPORT TABLESPACE;
Query OK, 0 rows affected, 6 warnings (11.36 sec)
mysql [localhost] {msandbox} (test) > SHOW WARNINGS\G
*************************** 1. row ***************************
  Level: Warning
   Code: 1810
Message: InnoDB: IO Read error: (2, No such file or directory) Error opening './test/t2#P#p0#SP#p0sp0.cfg', will attempt to import without schema verification
[...]


 


Jun 17, 2014

MySQL Backup Service from Percona

The Percona Backup Service managed service launched today. It ensures that properly configured backups run successfully as scheduled on customer-provided backup storage, whether on premises, in the cloud, or a hybrid of the two. Backup issues and production data recovery are handled efficiently by Percona Managed Services technicians with deep knowledge of MySQL.

As we state in our white papers, “MySQL backup and recovery are a foundational piece of any infrastructure. A well-tested backup and recovery system can be the difference between a minor outage and the end of a business.” While MySQL backups are “foundational,” they still require constant management, and the most important use of a backup, namely recovery, is often complex.

The Percona MySQL Backup Service is available for companies using any variant of single-node or clustered MySQL — on premise or in the cloud. This includes MySQL, Percona Server, MariaDB, and Percona XtraDB Cluster.

Reliable MySQL Backup Services

Developed by MySQL experts with decades of operational experience, the Percona MySQL Backup Service is based on widely adopted open source database backup software solutions such as Percona XtraBackup. Monitoring software is installed to ensure MySQL backups run as expected and alert the Percona Managed Services team to any issues. Percona experts on duty 24x7x365 resolve problems before the availability of backups is impacted and can implement a full, partial, or point in time recovery to minimize downtime and data loss on production servers.

MySQL backup data sets are secure and compliant with regulatory requirements. 256 bit encryption meets or exceeds common security and compliance requirements. Internal procedures ensure that backups are managed with extreme care and are only stored safely on secure servers. Backups are available for on-demand recovery to comply with HIPAA, SOX, or PCI requirements.

The status of current and past backups is easily accessible through the Percona MySQL Backup Service customer portal. The portal includes instructions on how to use the backups to restore data for routine purposes such as restoration of development databases.

Efficient MySQL Data Recovery

Percona Backup Service technical experts respond within 30 minutes to ensure that production data is recovered quickly and as completely as possible. We can recover:

  • Data back to the last full capture image
  • Specific tables, which saves significant time when only a single table or set of tables needs to be recovered
  • Full data to a specific point in time, which ensures an application can be recovered to the same state as when the data was lost

Unlike database-as-a-service solutions, the Percona Backup Service can recover specific tables and full data to a specific point in time because we create additional backup points.

Cost Effective and Highly Flexible

Our MySQL backup-as-a-service solution costs less than managing backups in-house. Our 24x7x365 team serves multiple customers so the cost of the Percona MySQL Backup Service is lower than having someone on staff to manage backups. We also use a proprietary backup approach which significantly reduces the size of backup data sets, requiring much less storage space than conventional methods.

The Percona MySQL Backup Service accommodates any combination of MySQL server and backup server locations. We can work with MySQL server and backup servers on premise or in the cloud and can even configure the process to store one backup set on premise and another in the cloud for additional protection. This configuration flexibility means we can meet a wide range of data availability and budget needs.

MySQL Backup Service Features

The Percona Backup Service is designed so backups will run smoothly and reliably. The following features are included:

  • Customer portal for anytime access to current and past backup status as well as instructions on how to restore the MySQL backups for non-production purposes
  • Efficient data recovery for production issues with full, partial, and point in time recovery options
  • A high level of security with 256 bit encryption and backups only stored on the customer’s secure servers
  • Regulatory compliance with backups available for on-demand recovery to comply with HIPAA, SOX, and PCI requirements
  • Lower cost than managing MySQL backups in-house with 24x7x365 monitoring and issue resolution combined with a proprietary backup approach which significantly reduces the size of the backup data set versus conventional methods
  • Ability to accommodate any combination of MySQL server and backup server locations – on premise, in the cloud, or a hybrid
  • Flexible configuration options which enable the service to meet a wide range of data availability and budget requirements

Learn More

Learn more about our MySQL Backup Service solution as well as register for our upcoming “MySQL Backup and Recovery Best Practices” webinar on June 18, 2014. Contact us now to learn more about the Percona Backup Service and how we can ensure your backups are ready when you need them!

