Recently, I was working with a client who asked about using Percona Xtrabackup to take a snapshot of a particular MySQL schema and then reload it with a different schema name on the same server. It caught me off guard because I’d never really thought about it – typically, I’ve used Xtrabackup simply to clone a server (for replication or migration) or to migrate an existing schema to a new server. However, given the import/export functionality of Xtrabackup combined with Percona Server (Exporting and Importing Tables), it did seem possible.
Further discussion with the client clarified the use case: clients make a mistake and need to compare their old data to their current data on the current live server. Mysqldump works well for this, but it can be quite slow on larger schemas.
One of the downsides to mysqldump is the need to scan the full tables, which in turn loads that data into (and pollutes) the buffer pool. Note that this can be somewhat mitigated using innodb_old_blocks_time, but that is outside the scope of this post. Similarly, reloading the data is a very I/O-intensive operation (redo logs, binlogs, etc.) that further pollutes the buffer pool. Enter Xtrabackup…
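As an aside, that innodb_old_blocks_time mitigation is a one-liner (the 1000ms value here is just an illustration, not a recommendation):

# Hold newly read pages in the "old" sublist for 1000ms before they can be
# promoted, so a one-off full scan (like mysqldump) evicts less of the hot set.
mysql -e "SET GLOBAL innodb_old_blocks_time = 1000;"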
As a quick refresher, Xtrabackup works by copying the dirty tablespace files while streaming the redo logs to ensure that all transactions are also captured. The --apply-log phase simply utilizes the built-in crash recovery to apply the redo logs to the dirty tablespaces and, voila, you have a consistent binary backup at a point in time.
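If you do not already have a snapshot on hand, taking one is a single command (a sketch, assuming credentials are available via ~/.my.cnf and using the paths from the procedure below; innobackupex creates the timestamped subdirectory itself):

# Full binary backup; produces e.g. /tmp/snapshot/2013-06-03_11-30
innobackupex /tmp/snapshot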
When running Percona Server, you can utilize the --export flag during the --apply-log phase and then re-import those files into a running server. However, you need to have the table structures already in place. Fortunately, this is easy to do using mysqldump --no-data.
Now, enough with the theory; here is the procedure I used (a consolidated shell sketch follows the list):
- Locate an existing snapshot that contains the schema you are interested in (/tmp/snapshot/2013-06-03_11-30/orig)
- Get the table structures: mysqldump --no-data orig > /tmp/orig.schema.sql
- Create the new target database: mysqladmin create orig_old
- Load the schema into the target database: mysql orig_old < /tmp/orig.schema.sql
- Ensure innodb_import_table_from_xtrabackup = 1 (dynamic variable)
- Prepare the backup using the --export flag: innobackupex --apply-log --export /tmp/snapshot/2013-06-03_11-30
- For each table, run: ALTER TABLE tblname DISCARD TABLESPACE
- Copy the .exp and .ibd files from the snapshot to the new instance: cp /tmp/snapshot/2013-06-03_11-30/orig/*.{exp,ibd} /var/lib/mysql/orig_old
- Make sure that the files are owned by mysql: chown mysql:mysql /var/lib/mysql/orig_old/*
- For each table, run: ALTER TABLE tblname IMPORT TABLESPACE
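Putting it all together, here is a rough shell sketch of the whole procedure (untested as a unit; it assumes local access via ~/.my.cnf, the paths above, and inlines the same DDL-generation trick shown in the script further below):

#!/bin/bash
# Rough sketch of the full procedure; adjust paths and credentials to taste.
SNAP=/tmp/snapshot/2013-06-03_11-30
SRC=orig
DST=orig_old

mysqldump --no-data $SRC > /tmp/$SRC.schema.sql            # table structures only
mysqladmin create $DST                                     # new target database
mysql $DST < /tmp/$SRC.schema.sql                          # load empty structures
mysql -e "SET GLOBAL innodb_import_table_from_xtrabackup = 1;"

innobackupex --apply-log --export $SNAP                    # prepare, write .exp files

# Discard the empty tablespaces, copy in the prepared files, then import
mysql -N -B -e "SELECT CONCAT('ALTER TABLE \`', table_name, '\` DISCARD TABLESPACE;') FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA='$DST' AND ENGINE='InnoDB';" | mysql $DST
cp $SNAP/$SRC/*.{exp,ibd} /var/lib/mysql/$DST/
chown mysql:mysql /var/lib/mysql/$DST/*
mysql -N -B -e "SELECT CONCAT('ALTER TABLE \`', table_name, '\` IMPORT TABLESPACE;') FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA='$DST' AND ENGINE='InnoDB';" | mysql $DST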
Now you have an old version of the schema running side by side with the current version on the same server. This allows you to compare and restore values (potentially corrupted via user error or other issues) with SQL rather than needing to import from a remote server. It makes for more targeted restores and easier comparison, and lets remote users compare on a live system without needing to be granted access to another “backup” server.
As noted by one of my colleagues (thanks Bill Karwin!), the cumbersome part of this process is the DISCARD/IMPORT TABLESPACE step, as it is done manually for each table (there is currently a blueprint for this in innobackupex). He also contributed this helpful snippet, which generates all of those statements for steps 7 and 10 into two scripts:
mysql -N -B <<'EOF' > discard-ddl.sql
SELECT CONCAT('ALTER TABLE `', table_name, '` DISCARD TABLESPACE;') AS _ddl
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_SCHEMA='orig' AND ENGINE='InnoDB';
EOF

mysql -N -B <<'EOF' > import-ddl.sql
SELECT CONCAT('ALTER TABLE `', table_name, '` IMPORT TABLESPACE;') AS _ddl
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_SCHEMA='orig' AND ENGINE='InnoDB';
EOF
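To use them, feed each generated file to the target schema at the appropriate step (the file names come from the snippet above):

mysql orig_old < discard-ddl.sql
# ...copy the .exp/.ibd files into place and fix ownership (steps 8 and 9)...
mysql orig_old < import-ddl.sql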
For full disclosure, here are the versions I was using on my test VM:
- Percona Server 5.5.30
- Xtrabackup 2.0.6
Some other use cases that come to mind for this technique would be:
- Targeted schema migration (to a new schema name) using --include="^orig[.]" (i.e. functional partitioning; see the example after this list)
- Combine with --replicate-rewrite-db
- Daily backups from production to staging with different db names (i.e. orig_prod restored to orig_staging)
- I’m sure there are others as well…
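For the first use case, a targeted backup of only the orig schema might look like this (a sketch; --include matches a regex against the fully qualified databasename.tablename):

# Back up only tables whose qualified name matches ^orig[.]
innobackupex --include="^orig[.]" /tmp/snapshot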