Oct 11, 2013

Handling long-running queries in MySQL with Percona XtraBackup

I recently had a case where replication lag on a slave was caused by a backup script. My first reaction was to blame the additional pressure on the disks, but the problem turned out to be more subtle: Percona XtraBackup could not execute FLUSH TABLES WITH READ LOCK because of a long-running query, and the server ended up being effectively read-only. Let’s see how we can deal with that kind of situation.

In short

Starting with Percona XtraBackup 2.1.4, you can:

  • Configure a timeout after which the backup will be aborted (and the global lock released) with the --lock-wait-threshold, --lock-wait-query-type and --lock-wait-timeout options
  • Or automatically kill all queries that prevent the lock from being granted with the --kill-long-queries-timeout and --kill-long-query-type options

The full documentation is available in the Percona XtraBackup manual.

Diagnosing the problem

Percona XtraBackup has to run FLUSH TABLES WITH READ LOCK before backing up non-InnoDB tables, and by default it will wait forever for this lock to be granted. So you can end up with the following situation:

  • A backup is started while a long-running query is executing
  • The long-running query holds locks that prevent FLUSH TABLES WITH READ LOCK from completing, so the backup is stalled
  • All new write queries are blocked, waiting for FLUSH TABLES WITH READ LOCK to complete (a way to reproduce this scenario is sketched right after this list)
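
If you want to reproduce the situation on a test server, here is a minimal sketch using the same employees schema as in the rest of this article (the session split is only for illustration):

-- Session 1: a long-running query; the tables it reads cannot be flushed until it finishes
mysql> select count(*) from employees.employees,employees.salaries;

-- Session 2: what innobackupex runs before copying non-InnoDB tables;
-- it hangs in "Waiting for table flush" until session 1 completes
mysql> FLUSH TABLES WITH READ LOCK;

-- Session 3: any new write now hangs in "Waiting for global read lock"
mysql> update employees.employees set gender='M' where emp_no=40000;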

There are several ways to diagnose the problem. First, if you look at the output of innobackupex, you will see that FLUSH TABLES WITH READ LOCK is being requested and that everything is stalled from this point on:

[...]
131002 17:49:58  innobackupex: Starting to lock all tables...
>> log scanned up to (4061999899)
>> log scanned up to (4061999899)
>> log scanned up to (4061999899)
>> log scanned up to (4061999899)
>> log scanned up to (4061999899)
>> log scanned up to (4061999899)
>> log scanned up to (4061999899)
[...]

And with SHOW PROCESSLIST, we can see that the write queries are waiting for the global read lock, while FLUSH TABLES WITH READ LOCK is itself waiting for the long-running query to finish:

+----+------+-----------+-----------+---------+------+------------------------------+-------------------------------------------------------------+
| Id | User | Host      | db        | Command | Time | State                        | Info                                                        |
+----+------+-----------+-----------+---------+------+------------------------------+-------------------------------------------------------------+
|  5 | root | localhost | NULL      | Query   |  244 | Sending data                 | select count(*) from employees.employees,employees.salaries |
|  6 | root | localhost | NULL      | Query   |  206 | Waiting for table flush      | FLUSH TABLES WITH READ LOCK                                 |
|  7 | root | localhost | NULL      | Query   |    0 | NULL                         | show processlist                                            |
| 12 | root | localhost | employees | Query   |    2 | Waiting for global read lock | update employees set gender='M' where emp_no=40000          |
| 13 | root | localhost | employees | Query   |    5 | Waiting for global read lock | update employees set gender='M' where emp_no=30000          |
+----+------+-----------+-----------+---------+------+------------------------------+-------------------------------------------------------------+
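
On a busy server, the raw SHOW PROCESSLIST output can be noisy. A small query against information_schema.PROCESSLIST (MySQL 5.1 and later) makes the lock chain easier to spot; the 60-second threshold below is just an arbitrary example value:

mysql> SELECT id, time, state, info
    ->   FROM information_schema.PROCESSLIST
    ->  WHERE state LIKE 'Waiting for%' OR time > 60
    ->  ORDER BY time DESC;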

Let’s now use the new options of Percona XtraBackup to get better control over this global lock.

Aborting the backup if long queries are running

Here is a way to get a long-running query with the employees database:

mysql> select count(*) from employees.employees,employees.salaries;

Now let’s run a backup that will automatically abort if it cannot take the global lock after waiting for some time:

innobackupex --user=root --password=root --lock-wait-threshold=10 --lock-wait-query-type=all --lock-wait-timeout=5 /root/backup

This command line tells Percona XtraBackup to wait at most 5 seconds for any query that has been running for more than 10 seconds to finish; if such a query is still running when the timeout expires, the backup is aborted.

If we check the output of innobackupex, we will see something like:

[...]
[01] Copying ./employees/employees.ibd to /root/backup/2013-10-07_15-15-38/employees/employees.ibd
[01]        ...done
>> log scanned up to (4062000510)
xtrabackup: Creating suspend file '/root/backup/2013-10-07_15-15-38/xtrabackup_suspended_2' with pid '4313'
131007 15:15:58  innobackupex: Continuing after ibbackup has suspended
131007 15:15:58  innobackupex: Waiting for query 17 (duration 37 sec): select count(*) from employees.employees,employees.salaries
>> log scanned up to (4062000510)
131007 15:15:59  innobackupex: Waiting for query 17 (duration 38 sec): select count(*) from employees.employees,employees.salaries
>> log scanned up to (4062000510)
131007 15:16:00  innobackupex: Waiting for query 17 (duration 39 sec): select count(*) from employees.employees,employees.salaries
>> log scanned up to (4062000510)
131007 15:16:01  innobackupex: Waiting for query 17 (duration 40 sec): select count(*) from employees.employees,employees.salaries
>> log scanned up to (4062000510)
131007 15:16:02  innobackupex: Waiting for query 17 (duration 41 sec): select count(*) from employees.employees,employees.salaries
>> log scanned up to (4062000510)
131007 15:16:03  innobackupex: Waiting for query 17 (duration 42 sec): select count(*) from employees.employees,employees.salaries
>> log scanned up to (4062000510)
innobackupex: Error: Unable to obtain lock. Please try again. at /usr/bin/innobackupex line 3229.
131007 15:16:04  innobackupex: Waiting for ibbackup (pid=4312) to finish

In this case, the backup was aborted because the long-running query did not finish in time. As a side note, because of a bug affecting this feature in earlier versions, it is safer to use Percona XtraBackup 2.1.5+.

Killing long-running queries to allow the backup to complete

Let’s change the settings we give to innobackupex. Instead of aborting the backup if it cannot run FLUSH TABLES WITH READ LOCK, let’s now allow the tool to kill any offending query after waiting for at most 10 seconds:

innobackupex --user=root --password=root --kill-long-queries-timeout=10 --kill-long-query-type=all /root/backup

The output of innobackupex shows that the long-running query is killed and that the whole backup can complete:

[...]
>> log scanned up to (4062000510)
xtrabackup: Creating suspend file '/root/backup/2013-10-07_15-27-05/xtrabackup_suspended_2' with pid '4359'
131007 15:27:25  innobackupex: Continuing after ibbackup has suspended
131007 15:27:25  innobackupex: Starting to lock all tables...
>> log scanned up to (4062000510)
>> log scanned up to (4062000510)
>> log scanned up to (4062000510)
>> log scanned up to (4062000510)
>> log scanned up to (4062000510)
>> log scanned up to (4062000510)
>> log scanned up to (4062000510)
>> log scanned up to (4062000510)
>> log scanned up to (4062000510)
>> log scanned up to (4062000510)
131007 15:27:35  innobackupex: Connecting to MySQL server with DSN 'dbi:mysql:;mysql_read_default_group=xtrabackup' as 'root'  (using password: YES).
131007 15:27:35  innobackupex: Connected to MySQL server
131007 15:27:35  innobackupex: Killing query 17 (duration 734 sec): select count(*) from employees.employees,employees.salaries
131007 15:27:35  innobackupex: Connection to database server closed
Query killing process is finished
131007 15:27:35  innobackupex: All tables locked and flushed to disk
[...]

Conclusion

These new Percona XtraBackup options are a nice way to avoid nasty interactions between backups and long-running queries. However, if you decide to use them, be prepared for the potential consequences:

  • Allowing the backup tool to kill queries can cause unexpected behavior in the application
  • Aborting one backup can be a minor problem, but if the backup is aborted at every run, it becomes a major one! (a basic retry wrapper is sketched right after this list)
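
For instance, if you go with the abort approach, your backup script should at least detect the failure, retry later and alert someone if it keeps failing. Here is a minimal, hypothetical shell sketch; the retry count, sleep time, options and paths are arbitrary example values, and success is detected by looking for the "completed OK!" message that innobackupex prints when a backup finishes successfully:

#!/bin/bash
# Example wrapper: retry the backup if it is aborted because of a long-running query.
# All values below (retries, sleep time, options, paths) are example settings.
LOG=/root/backup/innobackupex.log

for attempt in 1 2 3; do
    innobackupex --user=root --password=root \
        --lock-wait-threshold=10 --lock-wait-query-type=all --lock-wait-timeout=5 \
        /root/backup > "$LOG" 2>&1

    # innobackupex prints "completed OK!" on its last line when the backup succeeded
    if tail -1 "$LOG" | grep -q "completed OK!"; then
        echo "Backup succeeded on attempt $attempt"
        exit 0
    fi

    echo "Backup attempt $attempt failed, retrying in 10 minutes..." >&2
    sleep 600
done

echo "Backup still failing after 3 attempts, please investigate" >&2
exit 1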

