Announcement: Experimental Build of Percona XtraBackup 8.0


An experimental alpha version of Percona XtraBackup 8.0.1 is now available in the Percona experimental software repositories.

A few things to note about this release:

  • We removed the deprecated innobackupex tool in this release.
  • Due to the new MySQL redo log and data dictionary formats, the Percona XtraBackup 8.0.x versions will only be compatible with MySQL 8.0.x and the upcoming Percona Server for MySQL 8.0.x.
  • For experimental migrations from earlier database server versions, you will need to back up and restore using XtraBackup 2.4, and then use mysql_upgrade from MySQL 8.0.x.
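The migration flow in the last bullet can be sketched as a dry run that just prints the planned steps; the exact options and paths here are illustrative assumptions, not verified syntax:

```shell
# Dry-run sketch of the migration flow: back up and restore with
# XtraBackup 2.4, then upgrade in place with MySQL 8.0.
# Paths and flags are placeholders for illustration only.
run() { echo "would run: $*"; }
run xtrabackup --backup --target-dir=/backups/full     # with XtraBackup 2.4
run xtrabackup --prepare --target-dir=/backups/full
run xtrabackup --copy-back --target-dir=/backups/full  # restore the datadir
run mysql_upgrade -u root -p                           # after starting MySQL 8.0.x
```

Replace `run` with direct execution once you have verified each step against your own environment.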

PXB 8.0.1 alpha is available for the following platforms:

  • RHEL/CentOS 6.x
  • RHEL/CentOS 7.x
  • Ubuntu 14.04 Trusty*
  • Ubuntu 16.04 Xenial
  • Ubuntu 18.04 Bionic
  • Debian 8 Jessie*
  • Debian 9 Stretch

Information on how to configure the Percona repositories for apt and yum systems and access the Percona experimental software is here.

* We might drop these platforms before GA release.

The post Announcement: Experimental Build of Percona XtraBackup 8.0 appeared first on Percona Database Performance Blog.


Percona XtraBackup 2.4.12 Is Now Available


Percona announces the GA release of Percona XtraBackup 2.4.12 on June 22, 2018. You can download it from our download site and from the apt and yum repositories.

Percona XtraBackup enables MySQL backups without blocking user queries, making it ideal for companies with large data sets and mission-critical applications that cannot tolerate long periods of downtime. Offered free as an open source solution, it drives down backup costs while providing unique features for MySQL backups.

New features and improvements:

  • Percona XtraBackup now prints used arguments to standard output. Bug fixed PXB-1494.

Bugs fixed

  • xtrabackup --copy-back didn’t read which encryption plugin to use from the plugin-load setting of the my.cnf configuration file. Bug fixed PXB-1544.
  • xbstream was exiting with a zero return code when it failed to create one or more target files, instead of returning error code 1. Bug fixed PXB-1542.
  • On encountering a zero-sized keyring file, Percona XtraBackup removed and immediately recreated it, which could affect external software that noticed the file had undergone manipulation. Bug fixed PXB-1540.
  • xtrabackup_checkpoints files were encrypted during a backup, which made taking incremental backups more difficult. Bug fixed PXB-202.
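The xbstream fix (PXB-1542) matters because pipeline guards can only catch a failure if the tool returns a non-zero code. A minimal illustration, using `false | cat` as a stand-in for a failing xbstream inside a pipeline:

```shell
# With pipefail, a failure anywhere in the pipeline surfaces in $? -
# but only if the failing tool actually exits non-zero, which is
# exactly what the PXB-1542 fix guarantees for xbstream.
set -o pipefail
false | cat          # stand-in for: failing_xbstream | some_consumer
status=$?
echo "pipeline status: ${status}"
set +o pipefail
```

Without the fix, the stand-in would exit 0 and this guard would silently treat a broken extraction as success.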

Other bugs fixed: PXB-1526 “Test failing with MySQL 5.7.21”.

Release notes with all the improvements for version 2.4.12 are available in our online documentation. Please report any bugs to the issue tracker.



This Week in Data with Colin Charles 35: Percona Live 18 final countdown and a roundup of recent news

Join Percona Chief Evangelist Colin Charles as he covers happenings, gives pointers and provides musings on the open source database community.

Percona Live is just over a week away — there’s an awesome keynote lineup, and you really should register. Also don’t forget to save the date as Percona Live goes to Frankfurt, Germany November 5-7 2018! Prost!

In acquisitions, we have seen MariaDB acquire MammothDB and Idera acquire Webyog.

Some interesting Amazon notes: Amazon Aurora Continues its Torrid Growth, More than Doubling the Number of Active Customers in the Last Year (not sure I’d describe it as torrid but this is great for MySQL and PostgreSQL), and it comes with a handful of customer mentions. In addition, there have already been 65,000 database migrations on AWS. For context, in late November 2017, it was 40,000 database migrations.




I look forward to feedback/tips via e-mail at or on Twitter @bytebot.




Percona XtraBackup 2.4.10 Is Now Available


Percona announces the GA release of Percona XtraBackup 2.4.10 on March 30, 2018. This release is based on MySQL 5.7.19. You can download it from our download site and from the apt and yum repositories.

Percona XtraBackup enables MySQL backups without blocking user queries, making it ideal for companies with large data sets and mission-critical applications that cannot tolerate long periods of downtime. Offered free as an open source solution, it drives down backup costs while providing unique features for MySQL backups.

The Percona XtraBackup issue tracking system has now moved from Launchpad to JIRA.

Bugs Fixed:

  • xbcrypt with the --encrypt-key-file option was failing due to a regression in Percona XtraBackup 2.4.9. Bug fixed PXB-518.
  • Using the --lock-ddl and --lock-ddl-per-table options simultaneously caused Percona XtraBackup to lock up, and the backup process never completed. Bug fixed PXB-792.
  • Compilation under Mac OS X was broken. Bug fixed PXB-796.
  • A regression in the maximum number of pending reads, and a previously unnoticed possibility of a deadlock related to pending reads, caused Percona XtraBackup to get stuck in the prepare stage. Bug fixed PXB-1467.
  • Percona XtraBackup skipped tablespaces with a corrupted first page instead of aborting the backup. Bug fixed PXB-1497.

Other bugs fixed: PXB-513.

Release notes with all the bugfixes for version 2.4.10 are available in our online documentation. Please report any bugs to the issue tracker.



Percona Blog Poll: What Percona Software Are You Using?

This blog post contains a poll that helps us find out what Percona software the open source database community is using.

Nearly 20 years ago, Netscape released the source code for its Netscape Communicator web browser. This marked one of the biggest moments in “open source” history. The formation of The Open Source Initiative happened shortly after that. Bruce Perens, one of the working group’s founders, adapted his Free Software Guidelines as the official Open Source Definition.

Since then, open source software has gone from being the exception in large projects and enterprises, to being a normal part of huge deployments and daily business activities. Open source software is used by some of the biggest online companies: Facebook, YouTube, Twitter, etc. Many of these companies depend on open source software as part of their business model.

Percona’s mission is to champion unbiased open source database solutions. As part of this mission, we provide open source software, completely free of charge and for reuse. We developed our Percona Server for MySQL and Percona Server for MongoDB solutions to not only be drop-in replacements for existing open source software, but often incorporate “enterprise” features from upstream.

We’ve also recognized a need for database clustering and backup solutions, and created Percona XtraDB Cluster and Percona XtraBackup to address those concerns.

Beyond database software, Percona has created management and monitoring tools like Percona Monitoring and Management that not only help DBAs with day-to-day tasks, but also use metrics to find out how best to configure, optimize and architect a database environment to best meet the needs of applications and websites.

What we’d like to know is which of our software products are you currently using in your database environment? Are you using just database software, just management and monitoring tools, or a combination of both? As Percona makes plans for the year, we’d like to know what the community is using, what they find helpful, and how we can best allocate our resources to address those needs. We are always looking for the best ways to invest in and grow the Percona software and tools people use.

Complete the survey below by selecting all the options that apply.


Thanks in advance for your responses – this helps us see which of our software is being deployed in the community.



This Week in Data with Colin Charles 17: AWS Re:Invent, a New Book on MySQL Cluster and Another Call Out for Percona Live 2018

Join Percona Chief Evangelist Colin Charles as he covers happenings, gives pointers and provides musings on the open source database community.

The CFP for Percona Live Santa Clara 2018 closes December 22, 2017: please consider submitting as soon as possible. We want to make an early announcement of talks, so we’ll definitely do a first pass even before the CFP date closes. Keep in mind the expanded view of what we are after: it’s more than just MySQL and MongoDB. And don’t forget that with one day less, there will be intense competition to fit all the content in.

A new book on MySQL Cluster is out: Pro MySQL NDB Cluster by Jesper Wisborg Krogh and Mikiya Okuno. At 690 pages, it is a weighty tome, and something I fully plan on reading, considering I haven’t played with NDBCLUSTER for quite some time.

Did you know that since MySQL 5.7.17, connection control plugins are included? They help DBAs introduce an increasing delay in server response to clients after a certain number of consecutive failed connection attempts. Read more in the connection control plugins documentation.

While there are a tonne of announcements coming out from the Amazon re:Invent 2017 event, I highly recommend also reading Some data of interest as AWS reinvent 2017 ramps up by James Governor. Telemetry data from Sumo Logic’s 1,500 largest customers suggests that NoSQL database usage has overtaken relational database workloads! Read The State of Modern Applications in the Cloud. Page 8 tells us that MySQL is the #1 database on AWS (I don’t see MariaDB Server being mentioned which is odd; did they lump it in together?), and MySQL, Redis & MongoDB account for 40% of database adoption on AWS. In other news, Andy Jassy also mentions that less than 1.5 months after hitting 40,000 database migrations, they’ve gone past 45,000 over the Thanksgiving holiday last week. Have you started using AWS Database Migration Service?



Upcoming appearances

  • ACMUG 2017 gathering – Beijing, China, December 9-10 2017 – it was very exciting being there in 2016, I can only imagine it’s going to be bigger and better in 2017, since it is now two days long!


I look forward to feedback/tips via e-mail at or on Twitter @bytebot.


autoxtrabackup v1.5.0: A Tool for Automatic Backups


There is a new version of the autoxtrabackup tool. In this post, I’ll provide some of the highlights available this time around.

autoxtrabackup is a tool created by PerconaLabs. We’ve now put out version 1.5.0, and you can test it further.

Note: PerconaLabs is a repository for various open source scripts and tools created by Percona staff. Items added to the PerconaLabs repository are not official Percona products, and aren’t covered by support or services agreements. 

autoxtrabackup is written in Python3 and hosted in PerconaLab (forked from Shako’s repo). Basically, this tool automates backup/prepare/copy-back actions. I want to talk about recent changes and additions.

First of all, autoxtrabackup now has a --test_mode option, intended to test the XtraBackup automation process.

Here is the brief flow for this:

  • Clone the percona-qa repo
  • Clone Percona Server for MySQL 5.6 and 5.7 from GitHub
  • Build the PS servers in debug mode
  • Get the 2.3 and 2.4 versions of XtraBackup
  • Generate autoxtrabackup .conf files for each version of PS and XtraBackup
  • Pass different combinations of options to the PS start command and initialize the PS servers each time with different options
  • Run sysbench against each started PS server
  • Take backups in cycles for each started PS + prepare
  • If make_slaves is defined, create a slave1 server from this backup (i.e., copy-back to another directory and start the slave from it)
  • Then take a backup, prepare and copy-back from this new slave1 to create slave2
  • Run pt-table-checksum on the master to check backup consistency

I have prepared my environment, and now want to start --test_mode. Basically, it creates option combinations and passes them to the start script:

2017-11-15 22:28:21 DEBUG    Starting cycle1
2017-11-15 22:28:21 DEBUG    Will start MySQL with --innodb_buffer_pool_size=1G --innodb_log_file_size=1G
--log-bin=mysql-bin --log-slave-updates --server-id=1 --gtid-mode=ON --enforce-gtid-consistency --binlog-format=row

So as you see, it is starting MySQL with --innodb_buffer_pool_size=1G --innodb_log_file_size=1G --innodb_page_size=64K. In cycle2, it will likely pick --innodb_buffer_pool_size=1G --innodb_log_file_size=1G --innodb_page_size=32K, and so on. It depends on what you have passed in the config:

# Do not touch; this is for --test_mode, which is testing for XtraBackup itself.
ps_branches=5.6 5.7
gitcmd=--recursive --depth=1
xb_configs=xb_2_4_ps_5_6.conf xb_2_4_ps_5_7.conf xb_2_3_ps_5_6.conf
mysql_options=--innodb_buffer_pool_size=1G 2G 3G,--innodb_log_file_size=1G 2G 3G,--innodb_page_size=4K 8K 16K 32K 64K

You can pass more options by changing mysql_options in the config file. You can also specify how many incremental backups you want by setting the incremental_count option, and you can enable creating slaves from the backup by enabling the make_slaves option (not recommended for daily usage). You can read more about it here: --test_mode.
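To make the cycle behavior concrete, here is an illustrative sketch (not autoxtrabackup's actual code) of how two of the mysql_options groups above expand into per-cycle server start options:

```shell
# Expand two option groups into the cross product of start-option
# combinations, one line per test cycle. Illustrative only - the real
# tool parses the comma-separated mysql_options value from its config.
combos=$(
  for bp in 1G 2G 3G; do
    for page in 4K 8K 16K 32K 64K; do
      echo "--innodb_buffer_pool_size=${bp} --innodb_page_size=${page}"
    done
  done
)
echo "$combos" | head -n 2   # first two cycles' options
```

With three buffer pool sizes and five page sizes, this yields 15 distinct start-option combinations, which matches the "cycle1, cycle2, …" behavior shown in the log above.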

For daily backup actions, I have added the --tag and --show_tags options, which can be quite useful. They help you to tag your backups. Take a full backup:

$ sudo autoxtrabackup --tag="My Full backup" -v
-lf /home/shahriyar.rzaev/autoxtrabackup_2_4_5_7.log
-l DEBUG --defaults_file=/home/shahriyar.rzaev/XB_TEST/server_dir/xb_2_4_ps_5_7.conf --backup

Take an incremental one:

$ autoxtrabackup --tag="First incremental backup" -v
-lf /home/shahriyar.rzaev/autoxtrabackup_2_4_5_7.log
-l DEBUG --defaults_file=/home/shahriyar.rzaev/XB_TEST/server_dir/xb_2_4_ps_5_7.conf --backup

Take a second incremental one:

$ autoxtrabackup --tag="Second incremental backup" -v
-lf /home/shahriyar.rzaev/autoxtrabackup_2_4_5_7.log
-l DEBUG --defaults_file=/home/shahriyar.rzaev/XB_TEST/server_dir/xb_2_4_ps_5_7.conf --backup

Now you can use --show_tags to list the tags:

$ sudo autoxtrabackup --show_tags
Backup              Type    Status  TAG
2017-11-16_20-10-53 Full        OK  'My Full backup'
2017-11-16_20-12-23 Inc         OK  'First incremental backup'
2017-11-16_20-13-39 Inc         OK  'Second incremental backup'
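The listing is easy to post-process with standard tools. A hypothetical helper (the column layout is assumed from the output above, not from the tool's documentation) that picks out the timestamp of the successful full backup:

```shell
# Extract the timestamp (column 1) of rows whose Type is "Full" and
# whose Status is "OK" from --show_tags-style output.
full_backup=$(awk '$2 == "Full" && $3 == "OK" {print $1}' <<'EOF'
2017-11-16_20-10-53 Full        OK  'My Full backup'
2017-11-16_20-12-23 Inc         OK  'First incremental backup'
2017-11-16_20-13-39 Inc         OK  'Second incremental backup'
EOF
)
echo "latest full backup: ${full_backup}"
```

In a real script you would pipe `autoxtrabackup --show_tags` into the awk filter instead of the here-document.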

It would be quite nice if we could prepare those backups with a tag name. In other words, if I have a full backup and five incremental backups, what if I want to prepare until the second or third incremental, or just a full backup?

Pass the tag name with the --prepare option, and it will do the trick:

$ autoxtrabackup --tag="First incremental backup" -v
-lf /home/shahriyar.rzaev/autoxtrabackup_2_4_5_7.log
-l DEBUG --defaults_file=/home/shahriyar.rzaev/XB_TEST/server_dir/xb_2_4_ps_5_7.conf --prepare

It will prepare the full and “First incremental backup” – the remaining incremental backups will be ignored.

autoxtrabackup 1.5.0 also has a --dry_run option, which shows the exact commands without running them. It is described here: --dry_run.

How about autoxtrabackup 1.5.0 installation? You can install it from the source or use pip3:

pip3 install mysql-autoxtrabackup

For more please read: Installation.

Do you want to enable encryption and compression for backups? Yes? You can enable this from the autoxtrabackup config as described here: Config file structure.

You can enable taking partial backups again by editing the config: partial backups.

autoxtrabackup 1.5.0 allows you to perform a partial recovery – i.e., restoring only a single specified table from a full backup. If the table was dropped, autoxtrabackup will try to extract the CREATE TABLE statement from the .frm file using the mysqlfrm tool, and then discard/import the tablespace from the full backup. This is related to the transportable tablespace concept. You can read more here: restoring-single-table-after-drop.

For a full list of available options, read the DOC: autoxtrabackup DOC.

Thanks for reading! If you are going to try autoxtrabackup 1.5.0, don’t hesitate to provide some feedback!


Pepper Turns to Percona to Ensure a Great Customer Experience at, the world’s largest community deal platform, has selected Percona to manage its open source database performance.’s around-the-clock community seeks and finds the best offers in fashion, electronics, traveling and much more. With 500 million page views, over 25 million users and over 65,000 user-submitted deals per month across communities in America, Europe and Asia, Pepper has quickly risen to be the largest community deal platform worldwide.

When’s primary MySQL database administrator left the company, Pepper decided to shift to a managed service to maintain uptime and responsiveness. Having previously attended Percona Live Europe, the premier European open source database conference, as well as being avid readers of the Percona Database Performance Blog, the Pepper team turned to Percona for open source database remote managed service expertise.

“Guaranteeing database performance is key to making sure our web applications are responsive and up-to-date,” said Pavel Genov, Head of Software Development at “Percona Care Ultimate helps us to achieve these objectives.”

Pepper was already using Percona Server for MySQL. Following a Percona Database Performance Audit to review the environment, architecture and setup, Percona XtraBackup was deployed to provide online non-blocking, tightly compressed, highly secure backups.

Check out the case study on and Percona’s engagement to improve and manage Pepper’s database environment.


Webinar Wednesday, September 6, 2017: Percona Roadmap and Software News Update – Q3 2017

Come and listen to Percona CEO Peter Zaitsev on Wednesday, September 6, 2017 at 10am PT / 1pm ET (UTC-7) as he discusses the Percona roadmap, as well as what’s new in Percona open source software.


During this webinar Peter will talk about newly released features in Percona software, show a few quick demos and share with you highlights from the Percona open source software roadmap. This discussion will cover Percona Server for MySQL and MongoDB, Percona XtraBackup, Percona Toolkit, Percona XtraDB Cluster and Percona Monitoring and Management.

Peter will also talk about new developments in Percona commercial services and finish with a Q&A.

Register now before seats fill up for this exciting webinar on Wednesday, September 6, 2017 at 10am PT / 1pm ET (UTC-7).

Peter Zaitsev, Percona CEO and Co-Founder

Peter Zaitsev co-founded Percona and assumed the role of CEO in 2006. As one of the foremost experts on MySQL strategy and optimization, Peter leveraged both his technical vision and entrepreneurial skills to grow Percona from a two-person shop to one of the most respected open source companies in the business. With over 140 professionals in 30+ countries, Peter’s venture now serves over 3000 customers – including the “who’s who” of internet giants, large enterprises and many exciting startups. Percona was named to the Inc. 5000 in 2013, 2014, 2015 and 2016. Peter was an early employee at MySQL AB, eventually leading the company’s High Performance Group. A serial entrepreneur, Peter co-founded his first startup while attending Moscow State University, where he majored in Computer Science. Peter is a co-author of High Performance MySQL: Optimization, Backups, and Replication, one of the most popular books on MySQL performance. Peter frequently speaks as an expert lecturer at MySQL and related conferences, and regularly posts on the Percona Database Performance Blog. Fortune and DZone have both tapped Peter as a contributor, and his recent ebook Practical MySQL Performance Optimization is one of the most popular downloads.

Avoiding the “An optimized (without redo logging) DDL operation has been performed” Error with Percona XtraBackup

This blog discusses newly added options for Percona XtraBackup 2.4.8 and how they can impact your database backups.

To avoid issues with MySQL 5.7 skipping the redo log for DDL, Percona XtraBackup has implemented three new options (xtrabackup --lock-ddl, xtrabackup --lock-ddl-timeout and xtrabackup --lock-ddl-per-table) that can be used to place MDL locks on tables while they are copied.

So why do we need these options? Let’s discuss the process that got us here.

Originally, we found problems while running DDLs: Percona XtraBackup produced corrupted backups, as described in two bug reports.

After experimenting, it was clear that the core cause of those failures was MySQL 5.7 skipping redo logging for some DDLs. This behavior comes from a newly added MySQL feature named Sorted Index Builds.

To prevent this, we introduced a solution: when Percona XtraBackup detects that the redo log is being skipped, it aborts the backup to prevent creating a corrupted backup.

The scary error message you get with this fix is:

[FATAL] InnoDB: An optimized (without redo logging) DDL operation has been performed. All modified pages may not have been flushed to the disk yet.
Percona XtraBackup will not be able to take a consistent backup. Retry the backup operation

We need to avoid aborting the backup with this message. So how do we do that? Let’s create a test case first and reproduce the issue.

Prepare two tables:

sysbench /usr/share/sysbench/oltp_insert.lua --db-driver=mysql --mysql-db=db1 --mysql-user=msandbox --mysql-password=msandbox --table-size=2000000 --mysql-socket=/tmp/mysql_sandbox20393.sock prepare
sysbench /usr/share/sysbench/oltp_insert.lua --db-driver=mysql --mysql-db=db2 --mysql-user=msandbox --mysql-password=msandbox --table-size=2000000 --mysql-socket=/tmp/mysql_sandbox20393.sock prepare

Create a script and place it in the sandbox:

echo "drop table if exists db1.sb1"|./use
echo "create table sb1 as select id,c from sbtest1 where id < 150000;"|./use db1
echo "create unique index ix on sb1 (id)"|./use db1
sleep 1
echo "drop table if exists db2.sb1"|./use
echo "create table sb1 as select id,c from sbtest1 where id < 150000;"|./use db2
echo "create unique index ix on sb1 (id)"|./use db2

Run the script in a loop while the backup is taken:

$ while true; do bash; done

Try to take a backup:

xtrabackup --defaults-file=/home/shahriyar.rzaev/sandboxes/rsandbox_Percona-Server-5_7_18/master/my.sandbox.cnf
--user=msandbox --password='msandbox'  --target-dir=/home/shahriyar.rzaev/backup_dir/ps_5.7_master/full/2017-07-26_11-11-45
--backup --host= --port=20393 --binlog-info=AUTO --galera-info --parallel 4
--check-privileges --no-version-check

You will likely get something like:

InnoDB: An optimized (without redo logging) DDL operation has been performed. All modified pages may not have been flushed to the disk yet.
PXB will not be able to take a consistent backup. Retry the backup operation

Ok, now we have reproduced the error. To avoid it, XtraBackup provides the new options mentioned above. Let’s run the same backup with --lock-ddl:

xtrabackup --defaults-file=/home/shahriyar.rzaev/sandboxes/rsandbox_Percona-Server-5_7_18/master/my.sandbox.cnf
--user=msandbox --password='msandbox'  --target-dir=/home/shahriyar.rzaev/backup_dir/ps_5.7_master/full/2017-07-26_11-16-56
--backup --host= --port=20393 --binlog-info=AUTO --galera-info --parallel 4
--check-privileges --no-version-check --lock-ddl

The new thing you should notice is:

170726 11:16:56 Executing LOCK TABLES FOR BACKUP...

And the backup status:

xtrabackup: Transaction log of lsn (2808294311) to (2808304872) was copied.
170726 11:20:42 completed OK!
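In automation, a backup run like this should be verified rather than assumed successful. A minimal check on log text like the output above (the "completed OK!" marker is taken from that output; the variable and file handling here are illustrative):

```shell
# Treat a run as successful only if the last log line says "completed OK!".
# In a real script, $log would come from the captured xtrabackup output.
log='xtrabackup: Transaction log of lsn (2808294311) to (2808304872) was copied.
170726 11:20:42 completed OK!'
status=failed
printf '%s\n' "$log" | tail -n 1 | grep -q 'completed OK!' && status=succeeded
echo "backup ${status}"
```

A cron wrapper can then alert, or retry, whenever the status is not "succeeded".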

Another new option is --lock-ddl-per-table:

xtrabackup --defaults-file=/home/shahriyar.rzaev/sandboxes/rsandbox_Percona-Server-5_7_18/master/my.sandbox.cnf
--user=msandbox --password='msandbox'  --target-dir=/home/shahriyar.rzaev/backup_dir/ps_5.7_master/full/2017-07-26_11-31-56
--backup --host= --port=20393 --binlog-info=AUTO --galera-info --parallel 4 --check-privileges --no-version-check  --lock-ddl-per-table

The new output will look like this:

170726 11:32:33 [01] Copying ./ibdata1 to /home/shahriyar.rzaev/backup_dir/ps_5.7_master/full/2017-07-26_11-31-56/ibdata1
170726 11:32:33 Locking MDL for db1.sb1
170726 11:32:33 [02] Copying ./db1/sb1.ibd to /home/shahriyar.rzaev/backup_dir/ps_5.7_master/full/2017-07-26_11-31-56/db1/sb1.ibd
170726 11:32:33 Locking MDL for db1.sbtest1
170726 11:32:33 Locking MDL for db2.sb1
170726 11:32:33 [03] Copying ./db1/sbtest1.ibd to /home/shahriyar.rzaev/backup_dir/ps_5.7_master/full/2017-07-26_11-31-56/db1/sbtest1.ibd
170726 11:32:33 [04] Copying ./db2/sb1.ibd to /home/shahriyar.rzaev/backup_dir/ps_5.7_master/full/2017-07-26_11-31-56/db2/sb1.ibd
170726 11:32:33 [04]        ...done
170726 11:32:33 >> log scanned up to (2892754398)
170726 11:32:34 Locking MDL for db2.sbtest1

The result of the backup:

170726 11:35:45 Unlocking MDL for all tables
xtrabackup: Transaction log of lsn (2871333326) to (2892754764) was copied.
170726 11:35:45 completed OK!

Another thing I should mention is using --lock-ddl with servers other than Percona Server for MySQL. For example, using it with MariaDB:

2017-07-26 12:08:32 ERROR    FULL BACKUP FAILED!
2017-07-26 12:08:37 ERROR    170726 12:08:32 Connecting to MySQL server host:, user: msandbox, password: set, port: 10207, socket: /tmp/mysql_sandbox10207.sock
Using server version 10.2.7-MariaDB
170726 12:08:32 Error: LOCK TABLES FOR BACKUP is not supported.

But you can use --lock-ddl-per-table with any server. Use --lock-ddl-per-table with caution, though: under some circumstances it can block updates to tables on highly loaded servers. Let’s explore one:

  connection 1:
  - BEGIN; SELECT * FROM sb1 LIMIT 1;          <--- acquires MDL
  connection 2:
  - UPDATE sb1 SET c = '288' WHERE id = 34;    <--- completes OK
  connection 3:
  - CREATE INDEX sb1_1 ON sb1 (c(10));         <--- WAITING for MDL
  connection 2:
  - UPDATE sb1 SET c = '288' WHERE id = 34;    <--- WAITING for MDL
  connection 1:
  - COMMIT;                                    <--- releases MDL
  connection 2 and 3 are able to complete now

If one connection holds an MDL lock, and another connection issues an ALTER TABLE (CREATE INDEX maps to an ALTER TABLE statement internally), then subsequent updates to that table are blocked until the MDL lock is released.

Testing this with the backup process is quite easy:

Sample table:

CREATE TABLE `sb1` (
  `id` int(11) NOT NULL DEFAULT '0',
  `c` char(120) NOT NULL DEFAULT '',
  UNIQUE KEY `ix` (`id`)
);
select count(*) from sb1;
| count(*) |
|   149999 |
select * from sb1 limit 3;
| id | c                                                                                                                       |
|  1 | 83868641912-28773972837-60736120486-75162659906-27563526494-20381887404-41576422241-93426793964-56405065102-33518432330 |
|  2 | 38014276128-25250245652-62722561801-27818678124-24890218270-18312424692-92565570600-36243745486-21199862476-38576014630 |
|  3 | 33973744704-80540844748-72700647445-87330233173-87249600839-07301471459-22846777364-58808996678-64607045326-48799346817 |

So our “connection 1” is the xtrabackup backup command:

xtrabackup --defaults-file=/home/shahriyar.rzaev/sandboxes/rsandbox_Percona-Server-5_7_18/master/my.sandbox.cnf
--user=msandbox --password='msandbox'  --target-dir=/home/shahriyar.rzaev/backup_dir/ps_5.7_master/full/2017-07-28_07-55-30
--backup --host= --port=20393 --binlog-info=AUTO --galera-info --parallel 4 --check-privileges --no-version-check

So after running the backup command and doing the same steps for “connection 2” and “connection 3,” the result looks something like this in the processlist:

show processlist;
| Id | User     | Host            | db   | Command | Time | State                           | Info                                   | Rows_sent | Rows_examined |
|  4 | root     | localhost       | db1  | Sleep   |   28 |                                 | NULL                                   |         0 |             1 |
| 10 | root     | localhost       | db1  | Query   |   26 | Waiting for table metadata lock | CREATE INDEX sb1_1 ON sb1 (c(10))      |         0 |             0 |
| 11 | root     | localhost       | db1  | Query   |    6 | Waiting for table metadata lock | UPDATE sb1 SET c = '288' WHERE id = 34 |         0 |             0 |
| 12 | root     | localhost       | NULL | Query   |    0 | starting                        | show processlist                       |         0 |             0 |
| 13 | msandbox | localhost:36546 | NULL | Sleep   |   31 |                                 | NULL                                   |         1 |           116 |
| 14 | msandbox | localhost:36550 | NULL | Sleep   |   17 |                                 | NULL                                   |         1 |             1 |
6 rows in set (0.00 sec)

Updates are only completed after the backup finishes, as described. It should be clear now why you should use this option with caution.
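During such a backup, it is worth watching for sessions stuck on metadata locks. A hypothetical monitoring one-liner over processlist text like the listing above (the here-document stands in for live `SHOW PROCESSLIST` output):

```shell
# Count sessions whose State is "Waiting for table metadata lock".
# In practice, pipe `mysql -e "SHOW PROCESSLIST"` output into the grep.
waiting=$(grep -c 'Waiting for table metadata lock' <<'EOF'
| 10 | root | localhost | db1 | Query | 26 | Waiting for table metadata lock | CREATE INDEX sb1_1 ON sb1 (c(10)) |
| 11 | root | localhost | db1 | Query |  6 | Waiting for table metadata lock | UPDATE sb1 SET c = '288' WHERE id = 34 |
EOF
)
echo "sessions waiting on MDL: ${waiting}"
```

If this count climbs while --lock-ddl-per-table is in effect, application traffic is being blocked and you may want to reschedule the backup.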

The last thing we should discuss is this: if you do not want to use any “hacks” such as these locking options, you can address the issue on the MySQL side, for example:

  1. Avoid running the problematic DDL statements while a backup is being taken.
  2. Enable old_alter_table. When this variable is enabled, the server does not use the optimized method of processing an ALTER TABLE operation. It reverts to using a temporary table, copying over the data, and then renaming the temporary table to the original, as MySQL 5.0 and earlier did. In other words, ALTER TABLE uses ALGORITHM=COPY for alters.

In conclusion, we do not have a panacea for this issue, but you can use some of the described tricks to work around the problem. Thanks for reading!
