Nov
30
2015
--

Transporting tablespace from MySQL 5.6 to MySQL 5.7 (case study)

Recently, I was working on a MySQL support ticket where a customer was facing an issue while transporting a tablespace from MySQL 5.6 to MySQL 5.7.
After closely reviewing the situation, I saw that while importing the tablespace they were getting errors such as:

ERROR 1808 (HY000): Schema mismatch (Table flags don't match, server table has 0x10 and the meta-data file has 0x1)

After some research, I found that there is a similar bug reported to MySQL for this issue (https://bugs.mysql.com/bug.php?id=76142), but no solution is mentioned. I tested the scenario locally and found a solution that I will detail in this post.

First, I created a table on MySQL 5.6

nilnandan.joshi@bm-support01:~$ mysql -uroot -p --socket=/tmp/mysql_sandbox5624.sock
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 35
Server version: 5.6.24 MySQL Community Server (GPL)
...
mysql> create database nil;
Query OK, 1 row affected (0.02 sec)
mysql> use nil;
Database changed
mysql> create table nil(id int, name varchar(10));
Query OK, 0 rows affected (0.01 sec)
mysql> insert into nil values (1, 'nilnandan'),(2, 'niljoshi'),(3, 'njoshi'),(4,'joshi');
Query OK, 4 rows affected (0.10 sec)
Records: 4 Duplicates: 0 Warnings: 0
mysql> select * from nil;
+------+-----------+
| id   | name      |
+------+-----------+
|    1 | nilnandan |
|    2 | niljoshi  |
|    3 | njoshi    |
|    4 | joshi     |
+------+-----------+
4 rows in set (0.02 sec)

Then I took a backup of the nil database and exported it.

nilnandan.joshi@bm-support01:~$ innobackupex --defaults-file=/home/njoshi/sandboxes/msb_5_6_24/my.sandbox.cnf --user=root --password=msandbox --databases="nil" /home/njoshi/backup/
151127 01:29:14 innobackupex: Starting the backup operation
....
151127 01:29:16 Backup created in directory '/home/njoshi/backup//2015-11-27_01-29-14'
151127 01:29:16 [00] Writing backup-my.cnf
151127 01:29:16 [00] ...done
151127 01:29:16 [00] Writing xtrabackup_info
151127 01:29:16 [00] ...done
xtrabackup: Transaction log of lsn (1695477) to (1695477) was copied.
151127 01:29:16 completed OK!
nilnandan.joshi@bm-support01:~$
nilnandan.joshi@bm-support01:~$ innobackupex --apply-log --export backup/2015-11-27_01-29-14
151127 01:32:25 innobackupex: Starting the apply-log operation
...
xtrabackup: export option is specified.
xtrabackup: export metadata of table 'nil/nil' to file `./nil/nil.exp` (1 indexes)
xtrabackup: name=GEN_CLUST_INDEX, id.low=31, page=3
...
InnoDB: Starting shutdown...
InnoDB: Shutdown completed; log sequence number 1706518
151127 01:32:28 completed OK!
nilnandan.joshi@bm-support01:~$
nilnandan.joshi@bm-support01:~/backup/2015-11-27_01-29-14/nil$ ll
total 140
drwx------ 2 nilnandan.joshi percona 4096 Nov 27 01:32 ./
drwx------ 3 nilnandan.joshi percona 4096 Nov 27 01:32 ../
-rw-r----- 1 nilnandan.joshi percona 65 Nov 27 01:29 db.opt
-rw-r--r-- 1 nilnandan.joshi percona 421 Nov 27 01:32 nil.cfg
-rw-r--r-- 1 nilnandan.joshi percona 16384 Nov 27 01:32 nil.exp
-rw-r----- 1 nilnandan.joshi percona 8586 Nov 27 01:29 nil.frm
-rw-r----- 1 nilnandan.joshi percona 98304 Nov 27 01:29 nil.ibd
nilnandan.joshi@bm-support01:~/backup/2015-11-27_01-29-14/nil$

Now, on MySQL 5.7, I created a nil table, discarded its tablespace, copied the .cfg and .ibd files from the backup to the datadir, and set the proper permissions.

nilnandan.joshi@bm-support01:~/backup/2015-11-27_01-29-14/nil$ mysql -uroot -p --socket=/tmp/mysql_sandbox13454.sock
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 16
Server version: 5.7.9-log MySQL Community Server (GPL)
...
mysql> create database nil;
Query OK, 1 row affected (0.04 sec)
mysql> use nil
Database changed
mysql> create table nil(id int, name varchar(10));
Query OK, 0 rows affected (0.04 sec)
mysql> ALTER TABLE nil DISCARD TABLESPACE;
Query OK, 0 rows affected (0.00 sec)
mysql> quit
Bye
nilnandan.joshi@bm-support01:/home/nilnandan.joshi/sandboxes/rsandbox_mysql-5_7_9/master/data/nil$ sudo cp /home/njoshi/backup/2015-11-27_01-29-14/nil/nil.cfg .
nilnandan.joshi@bm-support01:/home/nilnandan.joshi/sandboxes/rsandbox_mysql-5_7_9/master/data/nil$ sudo cp /home/njoshi/backup/2015-11-27_01-29-14/nil/nil.ibd .
nilnandan.joshi@bm-support01:/home/nilnandan.joshi/sandboxes/rsandbox_mysql-5_7_9/master/data/nil$
nilnandan.joshi@bm-support01:/home/nilnandan.joshi/sandboxes/rsandbox_mysql-5_7_9/master/data/nil$ mysql -uroot -p --socket=/tmp/mysql_sandbox13454.sock
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 17
Server version: 5.7.9-log MySQL Community Server (GPL)
...
mysql> use nil
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Database changed
mysql> show tables;
+---------------+
| Tables_in_nil |
+---------------+
| nil           |
+---------------+
1 row in set (0.00 sec)
mysql> ALTER TABLE nil IMPORT TABLESPACE;
ERROR 1808 (HY000): Schema mismatch (Table flags don't match, server table has 0x5 and the meta-data file has 0x1)

After all that, I got the same error, and the message didn't point me to the actual problem. When I deleted the .cfg file and tried again, I got a more specific error message.

mysql> ALTER TABLE nil IMPORT TABLESPACE;
ERROR 1808 (HY000): Schema mismatch (Table has ROW_TYPE_DYNAMIC row format, .ibd file has ROW_TYPE_COMPACT row format.)

So I dropped the table on MySQL 5.7, created the same table with the "row_format=compact" option, copied both the .cfg and .ibd files again, and this time it worked.

mysql> drop table nil;
Query OK, 0 rows affected (0.00 sec)
mysql> create table nil(id int, name varchar(10)) row_format=compact;
Query OK, 0 rows affected (0.00 sec)
mysql> ALTER TABLE nil DISCARD TABLESPACE;
Query OK, 0 rows affected (0.00 sec)
mysql> quit
Bye
nilnandan.joshi@bm-support01:/home/nilnandan.joshi/sandboxes/rsandbox_mysql-5_7_9/master/data/nil$ sudo cp /home/njoshi/backup/2015-11-27_01-29-14/nil/nil.ibd .
nilnandan.joshi@bm-support01:/home/nilnandan.joshi/sandboxes/rsandbox_mysql-5_7_9/master/data/nil$ sudo cp /home/njoshi/backup/2015-11-27_01-29-14/nil/nil.cfg .
mysql> ALTER TABLE nil IMPORT TABLESPACE;
Query OK, 0 rows affected (0.00 sec)
mysql> select * from nil;
+------+-----------+
| id   | name      |
+------+-----------+
|    1 | nilnandan |
|    2 | niljoshi  |
|    3 | njoshi    |
|    4 | joshi     |
+------+-----------+
4 rows in set (0.01 sec)

I’m not sure whether the same issue occurs every time, but one thing is sure: removing the .cfg file gives you the exact cause of the problem and lets you resolve it.

Why does moving a tablespace from MySQL 5.6 to MySQL 5.7 give you this error? The answer is that the default innodb_file_format was changed in MySQL 5.7 from Antelope to Barracuda.

Important Change: InnoDB: The following changes were made to InnoDB configuration parameter default values:
  • The innodb_file_format default value was changed to Barracuda. The previous default value was Antelope. This change allows tables to use Compressed or Dynamic row formats.

https://dev.mysql.com/doc/relnotes/mysql/5.7/en/news-5-7-7.html
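
If you want to confirm the mismatch up front, you can compare the row format reported on the source and destination servers before copying the files. Here is a quick check I would run on both sides (output will of course differ per server):

mysql> SELECT NAME, FILE_FORMAT, ROW_FORMAT FROM information_schema.INNODB_SYS_TABLES WHERE NAME = 'nil/nil';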

I’ve already submitted a bug report to MySQL for this error and the inappropriate error message: https://bugs.mysql.com/bug.php?id=79469

By the way, here is the link describing how to transport tablespaces with XtraBackup:
https://www.percona.com/doc/percona-xtrabackup/2.3/howtos/recipes_ibkx_partition.html

The post Transporting tablespace from MySQL 5.6 to MySQL 5.7 (case study) appeared first on MySQL Performance Blog.

Nov
24
2014
--

Schema changes in MySQL for OpenStack Trove users

People using OpenStack Trove instances can hit a common issue in the MySQL world: how to perform schema change operations while minimizing the impact on the database server? Let’s explore the options that can allow online schema changes.

Summary

With MySQL 5.5, pt-online-schema-change from Percona Toolkit is your best option for large tables, while regular ALTER TABLE statements are only acceptable for small tables. Also beware of metadata locks.

With MySQL 5.6, almost all types of schema changes can be done online. Metadata locks can also be an issue. pt-online-schema-change can still be worth using as it is also online on read replicas.

Regular ALTER TABLE with MySQL 5.5

If you are still using MySQL 5.5, almost all schema changes will require a table rebuild and MySQL will set a write lock. Therefore all writes to the table that gets modified will be blocked. As soon as the table gets large or if you cannot afford to have a maintenance window, using ALTER TABLE becomes tricky.

The only exception is that secondary indexes can be added or removed without rebuilding the whole table. The table is still write locked during the operation but it is much faster.

You can spot this ‘fast index creation’ process by looking at SHOW PROCESSLIST (see manage keys in the State field):

+----+------+-----------+--------+---------+------+-------------+-----------------------------------------+-----------+---------------+-----------+
| Id | User | Host      | db     | Command | Time | State       | Info                                    | Rows_sent | Rows_examined | Rows_read |
+----+------+-----------+--------+---------+------+-------------+-----------------------------------------+-----------+---------------+-----------+
|  1 | root | localhost | sbtest | Query   |    4 | manage keys | alter table sbtest2 add index idx_k (k) |         0 |             0 |         0 |
+----+------+-----------+--------+---------+------+-------------+-----------------------------------------+-----------+---------------+-----------+

It is helpful for small or even medium tables, but it is not fast enough for large tables.

A common workaround when you have a master-slave configuration is to perform the schema change on the slave first, promote the slave, and then perform the schema change on the old master. However, replication is not available if you are using Icehouse, and even with Juno, replicas are read-only. So this workaround is not usable with Trove.

pt-online-schema-change

A good alternative is to use pt-online-schema-change. In a nutshell the tool creates a new table with the desired structure, creates triggers on the original table, and copies rows in chunks from the original table to the new table.

Nice features of the tool include being able to write to the original table while the schema change is being performed (hence the name of the tool), monitoring replication lag on slaves and automatic throttling to keep lag under a reasonable threshold.

Let’s assume you want to add a new field called ‘city’ in the ‘user’ table. Instead of running:

mysql> ALTER TABLE mydb.user ADD city VARCHAR(50) NOT NULL DEFAULT '';

you will run:

# pt-online-schema-change --progress=percentage,10 --alter="ADD city VARCHAR(50) NOT NULL DEFAULT ''" h=localhost,u=root,D=mydb,t=user --execute

All ALTER TABLE statements can be converted to an equivalent pt-online-schema-change command.
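
If you want to validate the conversion before touching any data, the same statement can first be run with --dry-run, which creates and alters the new table but does not add triggers, copy rows, or swap tables:

# pt-online-schema-change --dry-run --alter="ADD city VARCHAR(50) NOT NULL DEFAULT ''" h=localhost,u=root,D=mydb,t=user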

Is there any tradeoff? The first one is that pt-online-schema-change will be slower than a plain ALTER TABLE. This is expected as the tool has extra code to throttle itself if it adds too much load.

How much slower is it? As always, it depends. On a lightly loaded server the difference will be minimal; for instance, I measured 3 min 24 s for ALTER TABLE and 3 min 30 s for pt-online-schema-change on a test table. However, on a heavily loaded server, it can be something like 5x slower.

The second tradeoff is that pt-online-schema-change adds triggers on the original table to capture data changes. So if you already have triggers, you can’t use the tool (this limitation will be lifted in MySQL 5.7).

Metadata Locks

Metadata locks were introduced in MySQL 5.5 for better transaction isolation.

But one side effect is: if you start a schema change operation on a table while another transaction is reading or writing on the same table, ALTER TABLE will be stuck in the Waiting for metadata lock state.

Another negative side effect of metadata locks is that in some situations ALTER TABLE will also block all queries to the table that is being altered, even reads. In the worst cases, this may create a query pileup situation that will freeze the whole server. For more on this topic, have a look at this post.

What about pt-online-schema-change and metadata locks? It is of course hit by this issue/feature. However the good thing is that the timeout for metadata locks is set by the tool to 60s instead of the default 1 year. So after 60s, pt-online-schema-change will simply retry the operation that failed because of metadata locks.
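
If you run a plain ALTER TABLE yourself, you can get a similar safety net by lowering lock_wait_timeout for your session, so the ALTER errors out instead of queueing behind a long-running transaction. A minimal sketch (60 seconds is just an example value):

mysql> SET SESSION lock_wait_timeout = 60;  -- fail fast instead of waiting up to a year
mysql> ALTER TABLE mydb.user ADD city VARCHAR(50) NOT NULL DEFAULT '';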

MySQL 5.6: Online Schema Changes?

Metadata locks also exist in MySQL 5.6, so the same issues described above can still happen.

However the good news with MySQL 5.6 is that most schema changes can be done online. Now the question is: should you use pt-online-schema-change or a regular online ALTER TABLE statement?

Both have pros and cons:

  • ALTER TABLE is easy to use, while becoming confident with pt-online-schema-change takes some time.
  • There is no way for ALTER TABLE to know whether it is overloading a replica, while pt-online-schema-change monitors replication lag and throttles itself if needed.
  • ALTER TABLE is only online for the master, not for replicas, while pt-online-schema-change is online for all servers.

The last point can be a bit confusing: why wouldn’t an online ALTER TABLE on the master be online on the slave as well? The truth is that unless you are using multi-threaded replication, writes on slaves are serialized. So while the ALTER TABLE is running, the slave will not process any other write, which is not much different from an offline ALTER TABLE.

So if you are running a single server (all Icehouse users for instance), ALTER TABLE is probably the right choice. But if you have read replicas and specifically if the application cannot tolerate replication lag, pt-online-schema-change is definitely a better choice.
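
When you do go with a regular ALTER TABLE on MySQL 5.6, it can be worth adding explicit ALGORITHM and LOCK clauses, so the statement fails immediately rather than silently falling back to a blocking table copy. A sketch using the earlier example:

mysql> ALTER TABLE mydb.user ADD city VARCHAR(50) NOT NULL DEFAULT '', ALGORITHM=INPLACE, LOCK=NONE;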

Conclusion

Performing schema changes is becoming easier with newer versions of MySQL. However potential issues can be found with any version, so always be careful when you need to run ALTER TABLE.

The post Schema changes in MySQL for OpenStack Trove users appeared first on MySQL Performance Blog.

May
15
2014
--

Why ALTER TABLE runs faster on Percona Server 5.5 vs. MySQL 5.5

Some of us Perconians are at the OpenStack Summit this week in Atlanta. Matt Griffin, our director of product management, tweeted about the turbo-hipster CI talk and their experience of ALTER TABLE statements running faster on Percona Server. Oracle’s Morgan Tocker then tweeted in response, asking why this was the case. I decided that the simplest way to answer that was here in this post.

The reason for this is the expand_fast_index_creation feature of Percona Server. I did a quick schema change on MySQL 5.5 and Percona Server 5.5 to demonstrate this (in the talk, the speaker mentioned that these versions were used).

The schema modifications in the talk fall into two categories: the ones that can use fast index creation and the ones that cannot.

I did the following tests on my laptop, on a sysbench table with 300k records.

Vanilla MySQL 5.5:

mysql> alter table sbtest1 add index idx_c(c);
Query OK, 0 rows affected (4.37 sec)

Percona Server 5.5:

mysql> alter table sbtest1 add index idx_c(c);
Query OK, 0 rows affected (3.90 sec)

We know that this used fast index creation from the 0 rows affected. In this case, there is no substantial difference between the two servers; my laptop, with CPU frequency scaling, probably doesn’t have the most consistent performance in the world anyway.

For the second schema change, I added a column, an operation that copies the table.

Vanilla MySQL 5.5:

mysql> alter table sbtest1 add column d int default 0;
Query OK, 300000 rows affected (37.05 sec)
Records: 300000  Duplicates: 0  Warnings: 0

Percona Server 5.5:

mysql> alter table sbtest1 add column d int default 0;
Query OK, 300000 rows affected (9.51 sec)
Records: 300000  Duplicates: 0  Warnings: 0

The reason for this speed difference is that in the case of Percona Server, during the table copy, the new table is created with only the primary key, and the secondary indexes are built at the end of the process (rather than on the fly). For more details, check Alexey’s blog post on this topic.
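
If you want to verify whether this optimization is active on your own Percona Server instance, you can check the variable and enable it for your session if needed (a quick sketch; availability and the default value depend on the Percona Server version):

mysql> SHOW VARIABLES LIKE 'expand_fast_index_creation';
mysql> SET SESSION expand_fast_index_creation = ON;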

This can be tuned further, by tuning innodb_merge_sort_block_size (in Percona Server 5.6, this is replaced by innodb_sort_buffer_size).

mysql> select @@innodb_merge_sort_block_size/1024/1024;
+------------------------------------------+
| @@innodb_merge_sort_block_size/1024/1024 |
+------------------------------------------+
|                               1.00000000 |
+------------------------------------------+
1 row in set (0.00 sec)
mysql> set innodb_merge_sort_block_size=8*1024*1024;
Query OK, 0 rows affected (0.00 sec)
mysql> alter table sbtest1 add column d int default 0;
Query OK, 300000 rows affected (8.61 sec)
Records: 300000  Duplicates: 0  Warnings: 0

So, to be accurate: schema changes are faster in Percona Server if they are table copies and if the tables have secondary indexes.

The post Why ALTER TABLE runs faster on Percona Server 5.5 vs. MySQL 5.5 appeared first on MySQL Performance Blog.

Feb
26
2014
--

How to monitor ALTER TABLE progress in MySQL

While working on a recent support issue as a Percona Support Engineer, I got a question from a customer about how to monitor ALTER TABLE progress. For MySQL 5.5 and prior versions, it’s quite difficult to ALTER a table in a running production environment, especially a large table (with millions of records), because the operation will rebuild and lock the table, affecting performance as well as our users. Therefore, even once we start the ALTER, it’s really important to know when it will finish. Even while creating an index, ALTER TABLE will not rebuild the table if fast_index_creation is ON, but it might still lock the table.

The fast_index_creation feature was introduced in MySQL 5.5; it is also available in MySQL 5.1 with the InnoDB Plugin.

In MySQL 5.6 and later, the “Online DDL” feature has been introduced, which enhances many more types of ALTER TABLE operations to avoid copying the table and locking it. It also allows SELECT, INSERT, UPDATE, and DELETE statements to run while the table is being altered. So in the latest version, we can control the table copy and locking by using the ALGORITHM and LOCK options. But even in MySQL 5.6, there are some ALTER operations that still require a table rebuild, i.e. ADD/DROP column, change data type, ADD/DROP primary key, etc. You can check the table here for more details.

“Summary of Online Status for DDL Operations”: http://dev.mysql.com/doc/refman/5.6/en/innodb-create-index-overview.html

So in any case, if required, we can check the ALTER TABLE progress with the solutions below.

One of the solutions is to use the Percona Toolkit utility pt-online-schema-change, which will ALTER the table without locking it for a long time and will show the progress, i.e.:

nilnandan@Dell-XPS:~$ pt-online-schema-change --alter "ADD COLUMN phone INT" u=root,p=root,D=nil,t=nil_test --execute
No slaves found. See --recursion-method if host Dell-XPS has slaves.
Not checking slave lag because no slaves were found and --check-slave-lag was not specified.
Operation, tries, wait:
copy_rows, 10, 0.25
create_triggers, 10, 1
drop_triggers, 10, 1
swap_tables, 10, 1
update_foreign_keys, 10, 1
Altering `nil`.`nil_test`...
Creating new table...
Created new table nil._nil_test_new OK.
Altering new table...
Altered `nil`.`_nil_test_new` OK.
2014-02-07T12:20:54 Creating triggers...
2014-02-07T12:20:54 Created triggers OK.
2014-02-07T12:20:54 Copying approximately 12583349 rows...
Copying `nil`.`nil_test`: 29% 01:12 remain
Copying `nil`.`nil_test`: 60% 00:38 remain
Copying `nil`.`nil_test`: 91% 00:08 remain
2014-02-07T12:22:33 Copied rows OK.
2014-02-07T12:22:33 Swapping tables...
2014-02-07T12:22:33 Swapped original and new tables OK.
2014-02-07T12:22:33 Dropping old table...
2014-02-07T12:22:33 Dropped old table `nil`.`_nil_test_old` OK.
2014-02-07T12:22:33 Dropping triggers...
2014-02-07T12:22:33 Dropped triggers OK.
Successfully altered `nil`.`nil_test`.
nilnandan@Dell-XPS:~$

While using pt-online-schema-change, I often get questions such as: “What will happen to the data changes (INSERT/UPDATE/DELETE) that are performed during the ALTER TABLE?”

Here, I would like to explain this briefly. If we check the data dir while pt-online-schema-change is running, we see:

root@Dell-XPS:/var/lib/mysql/nil# ll
total 830524
drwx------ 2 mysql mysql 4096 Feb 7 12:20 ./
drwxr-xr-x 5 mysql mysql 4096 Feb 7 12:05 ../
-rw-rw---- 1 mysql mysql 65 Jan 31 12:12 db.opt
-rw-rw---- 1 mysql mysql 8616 Feb 7 12:06 nil_test.frm
-rw-rw---- 1 mysql mysql 822083584 Feb 7 12:18 nil_test.ibd
-rw-rw---- 1 mysql mysql 8648 Feb 7 12:20 _nil_test_new.frm
-rw-rw---- 1 mysql mysql 28311552 Feb 7 12:20 _nil_test_new.ibd
-rw-rw---- 1 mysql mysql 944 Feb 7 12:20 nil_test.TRG
-rw-rw---- 1 mysql mysql 40 Feb 7 12:20 pt_osc_nil_nil_test_del.TRN
-rw-rw---- 1 mysql mysql 40 Feb 7 12:20 pt_osc_nil_nil_test_ins.TRN
-rw-rw---- 1 mysql mysql 40 Feb 7 12:20 pt_osc_nil_nil_test_upd.TRN

We can see that the tool creates triggers (a separate trigger each for INSERT, UPDATE and DELETE) on the original table to apply the corresponding rows to the new table (_nil_test_new). So any modifications that happen to the data in the original table during the copy will be reflected in the new table.
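
You can also inspect the triggers the tool has created while it is still running; the names follow the pt_osc_ convention visible in the file listing above:

mysql> SHOW TRIGGERS FROM nil LIKE 'nil_test'\G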

NOTE: This tool will not work if any triggers are already defined on the table.

But what if we don’t want to use pt-online-schema-change and instead run a regular ALTER TABLE at the mysql prompt? After some research, I found several ways to calculate the progress of ALTER TABLE, especially when innodb_file_per_table is ON. (innodb_file_per_table is ON by default in MySQL 5.6.)

  • Calculate the progress by checking the size of temporary tablespace. 

With innodb_file_per_table=1, when we run ALTER TABLE, InnoDB creates a temporary tablespace within the same data directory, with a random name starting with #sql and ending with .ibd, like #sql-1c80_27.ibd, i.e.:

root@Dell-XPS:/var/lib/mysql/nil# ll                    
...
-rw-rw---- 1 mysql mysql 8682 Feb 7 13:33 nil_test.frm
-rw-rw---- 1 mysql mysql 335544320 Feb 7 13:34 nil_test.ibd
-rw-rw---- 1 mysql mysql 8716 Feb 7 13:35 #sql-1c80_27.frm
-rw-rw---- 1 mysql mysql 23068672 Feb 7 13:35 #sql-1c80_27.ibd

While altering the table, InnoDB reads the original .ibd file (nil_test.ibd) and writes new pages to #sql-1c80_27.ibd. So from the file sizes of nil_test.ibd and the temporary #sql-1c80_27.ibd, we can check the ALTER TABLE progress with something like:

#!/bin/bash
# Report ALTER TABLE progress by comparing the size of the temporary
# #sql-*.ibd tablespace with the size of the original table's .ibd file.
# Run it from the database directory while the ALTER is in progress.
TABLENAME='nil_test'
while true
do
  A=`du -m '#sql'*'.ibd' 2>/dev/null|cut -f 1`
  # When the temporary tablespace disappears, the ALTER has finished.
  if [[ -z "$A" ]]
  then
    echo "Done"
    exit 0
  fi
  B=`du -m $TABLENAME.ibd |cut -f 1`
  echo "ALTER TABLE $TABLENAME...$A MB written to tmp tablespace out of $B MB"
  sleep 10
done

When we run the ALTER in mysql, we can simply run this script in the data dir and check the progress, like:

mysql> ALTER TABLE nil_test ADD COLUMN phone int;
Query OK, 7582912 rows affected (58.54 sec)
Records: 7582912 Duplicates: 0 Warnings: 0
root@Dell-XPS:/var/lib/mysql/nil# ./alter_table_monitor.sh
ALTER TABLE nil_test...23 MB written in tmp file out of 485 MB
ALTER TABLE nil_test...73 MB written in tmp file out of 485 MB
ALTER TABLE nil_test...121 MB written in tmp file out of 485 MB
ALTER TABLE nil_test...173 MB written in tmp file out of 485 MB
ALTER TABLE nil_test...225 MB written in tmp file out of 485 MB
ALTER TABLE nil_test...277 MB written in tmp file out of 485 MB
ALTER TABLE nil_test...333 MB written in tmp file out of 485 MB
ALTER TABLE nil_test...381 MB written in tmp file out of 485 MB
ALTER TABLE nil_test...433 MB written in tmp file out of 485 MB
ALTER TABLE nil_test...481 MB written in tmp file out of 485 MB
Done
root@Dell-XPS:/var/lib/mysql/nil#

There are a couple of things we need to consider while using this script:

  1. We have to change $TABLENAME in the script as per our requirement. It will work for only one ALTER TABLE at a time.
  2. The script should be run from the database dir (i.e. /var/lib/mysql/nil).
  3. These calculations are an approximation, because the new table can be a bit different in size. For example, deleted rows or fragmentation can cause accuracy problems.

I would like to mention here that we cannot check the progress of every ALTER TABLE with the above formula. For example, with fast_index_creation, the server creates the table without any secondary indexes and then adds the secondary indexes after the data is loaded, so it will not create a temporary tablespace (.ibd file) while creating a secondary index. This process is quite interesting, so let me explain it here.

When we add a secondary index with ‘fast_index_creation‘, the server only creates a #sql***.frm file to update the table structure, not a temporary .ibd file. It does some file sorts and then adds the index directly to the original table’s .ibd file. So when you monitor those files, initially you don’t see the file size increasing, but after some time (after the file sorts) the file size starts increasing until the end of the ALTER TABLE, i.e.:

Initially, you’ll see

root@Dell-XPS:/var/lib/mysql/nil# ll
total 409644
drwx------ 2 mysql mysql 4096 Feb 12 10:50 ./
drwxr-xr-x 5 mysql mysql 4096 Feb 12 10:10 ../
-rwxr-xr-x 1 root root 306 Feb 7 13:35 alter_table_monitor.sh*
-rw-rw---- 1 mysql mysql 65 Jan 31 12:12 db.opt
-rw-rw---- 1 mysql mysql 8750 Feb 12 10:26 nil_test.frm
-rw-rw---- 1 mysql mysql 419430400 Feb 12 10:50 nil_test.ibd
-rw-rw---- 1 mysql mysql 8750 Feb 12 10:50 #sql-671_25.frm
...
root@Dell-XPS:/var/lib/mysql/nil# ll
total 409644
drwx------ 2 mysql mysql 4096 Feb 12 10:50 ./
drwxr-xr-x 5 mysql mysql 4096 Feb 12 10:10 ../
-rwxr-xr-x 1 root root 306 Feb 7 13:35 alter_table_monitor.sh*
-rw-rw---- 1 mysql mysql 65 Jan 31 12:12 db.opt
-rw-rw---- 1 mysql mysql 8750 Feb 12 10:26 nil_test.frm
-rw-rw---- 1 mysql mysql 419430400 Feb 12 10:50 nil_test.ibd
-rw-rw---- 1 mysql mysql 8750 Feb 12 10:50 #sql-671_25.frm

Then the .ibd file size suddenly starts increasing, until the end:

root@Dell-XPS:/var/lib/mysql/nil# ll
total 417836
drwx------ 2 mysql mysql 4096 Feb 12 10:50 ./
drwxr-xr-x 5 mysql mysql 4096 Feb 12 10:10 ../
-rwxr-xr-x 1 root root 306 Feb 7 13:35 alter_table_monitor.sh*
-rw-rw---- 1 mysql mysql 65 Jan 31 12:12 db.opt
-rw-rw---- 1 mysql mysql 8750 Feb 12 10:26 nil_test.frm
-rw-rw---- 1 mysql mysql 427819008 Feb 12 10:50 nil_test.ibd
-rw-rw---- 1 mysql mysql 8750 Feb 12 10:50 #sql-671_25.frm
....
root@Dell-XPS:/var/lib/mysql/nil# ll
total 487456
drwx------ 2 mysql mysql 4096 Feb 12 10:51 ./
drwxr-xr-x 5 mysql mysql 4096 Feb 12 10:10 ../
-rwxr-xr-x 1 root root 306 Feb 7 13:35 alter_table_monitor.sh*
-rw-rw---- 1 mysql mysql 65 Jan 31 12:12 db.opt
-rw-rw---- 1 mysql mysql 8750 Feb 12 10:50 nil_test.frm
-rw-rw---- 1 mysql mysql 499122176 Feb 12 10:51 nil_test.ibd
root@Nil-Dell-XPS:/var/lib/mysql/nil#

This happens when there is only one secondary index. If there are multiple secondary indexes, then for each index the process will pause, do the file sorts, and then add the index, so the number of pauses in the file-size growth will be equal to the number of secondary indexes.

  • Calculate the progress by checking the records in information_schema.GLOBAL_TEMPORARY_TABLES 

While the ALTER TABLE is running, we can also check GLOBAL_TEMPORARY_TABLES in information_schema; it gives the count of records inserted into the temporary table so far, i.e.:

mysql> select * from GLOBAL_TEMPORARY_TABLES \G;
*************************** 1. row ***************************
SESSION_ID: 38
TABLE_SCHEMA: nil
TABLE_NAME: #sql-1c80_27
ENGINE: InnoDB
NAME: #sql-696_26
TABLE_ROWS: 623711
AVG_ROW_LENGTH: 42
DATA_LENGTH: 26787840
INDEX_LENGTH: 0
CREATE_TIME: 2014-02-11 10:37:34
UPDATE_TIME: NULL
1 row in set (0.00 sec)
.......
mysql> select * from GLOBAL_TEMPORARY_TABLES \G;
*************************** 1. row ***************************
SESSION_ID: 38
TABLE_SCHEMA: nil
TABLE_NAME: #sql-1c80_27
ENGINE: InnoDB
NAME: #sql-696_26
TABLE_ROWS: 7017408
AVG_ROW_LENGTH: 42
DATA_LENGTH: 299663360
INDEX_LENGTH: 0
CREATE_TIME: 2014-02-11 10:37:34
UPDATE_TIME: NULL
1 row in set (0.01 sec)

  •  Calculate the progress by checking the Handler_read_rnd_next status variable (global counters).  

While running ALTER TABLE, we can also check the Handler_read_rnd_next status variable with “SHOW GLOBAL STATUS LIKE 'Handler_read_rnd%'” or mysqladmin extended-status. Check the 2008 post titled “How to estimate query completion time in MySQL” by Baron Schwartz; he describes it very well.

NOTE: This doesn’t need innodb_file_per_table = ON, but we can use this approach only when there are no other parallel transactions running.
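
A minimal way to use this method (assuming the ALTER is the only significant activity, as the note above requires) is to record the counter just before starting the ALTER and then poll it, comparing the delta against the table’s approximate row count:

mysql> SHOW GLOBAL STATUS LIKE 'Handler_read_rnd_next';
mysql> SELECT TABLE_ROWS FROM information_schema.TABLES WHERE TABLE_SCHEMA='nil' AND TABLE_NAME='nil_test';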

So, in any case, it remains quite difficult to track ALTER TABLE progress unless MySQL itself provides a feature for it, the way MariaDB shows progress in “SHOW PROCESSLIST”: https://mariadb.com/kb/en/show-processlist/

I installed MariaDB 5.5 locally and tried it out. Along with “SHOW PROCESSLIST”, you can also check progress by running a query against the information_schema.processlist table. But I observed that the progress value differs between the two outputs. It looks like the information_schema.processlist table gives the accurate progress for ALTER TABLE.

[root@percona-pxc3 nil]# mysql -uroot -proot -e "select ID, USER, HOST, DB, TIME, STATE, INFO, STAGE, MAX_STAGE, PROGRESS from information_schema.processlist where ID = 2; show processlist"
+----+------+-----------+------+------+-------------------+--------------------------------------------+-------+-----------+----------+
| ID | USER | HOST | DB | TIME | STATE | INFO | STAGE | MAX_STAGE | PROGRESS |
+----+------+-----------+------+------+-------------------+--------------------------------------------+-------+-----------+----------+
| 2 | root | localhost | nil | 3 | copy to tmp table | ALTER TABLE nil_test ADD COLUMN phone1 int | 1 | 2 | 4.279 |
+----+------+-----------+------+------+-------------------+--------------------------------------------+-------+-----------+----------+
+----+------+-----------+------+---------+------+-------------------+--------------------------------------------+----------+
| Id | User | Host | db | Command | Time | State | Info | Progress |
+----+------+-----------+------+---------+------+-------------------+--------------------------------------------+----------+
| 2 | root | localhost | nil | Query | 3 | copy to tmp table | ALTER TABLE nil_test ADD COLUMN phone1 int | 2.140 |
| 29 | root | localhost | NULL | Query | 0 | NULL | show processlist | 0.000 |
+----+------+-----------+------+---------+------+-------------------+--------------------------------------------+----------+
[root@percona-pxc3 nil]#
...............
...........
[root@percona-pxc3 nil]# mysql -uroot -proot -e "select ID, USER, HOST, DB, TIME, STATE, INFO, STAGE, MAX_STAGE, PROGRESS from information_schema.processlist where ID = 2; show processlist"
+----+------+-----------+------+------+-------------------+--------------------------------------------+-------+-----------+----------+
| ID | USER | HOST | DB | TIME | STATE | INFO | STAGE | MAX_STAGE | PROGRESS |
+----+------+-----------+------+------+-------------------+--------------------------------------------+-------+-----------+----------+
| 2 | root | localhost | nil | 25 | copy to tmp table | ALTER TABLE nil_test ADD COLUMN phone1 int | 1 | 2 | 45.613 |
+----+------+-----------+------+------+-------------------+--------------------------------------------+-------+-----------+----------+
+----+------+-----------+------+---------+------+-------------------+--------------------------------------------+----------+
| Id | User | Host | db | Command | Time | State | Info | Progress |
+----+------+-----------+------+---------+------+-------------------+--------------------------------------------+----------+
| 2 | root | localhost | nil | Query | 25 | copy to tmp table | ALTER TABLE nil_test ADD COLUMN phone1 int | 22.807 |
| 34 | root | localhost | NULL | Query | 0 | NULL | show processlist | 0.000 |
+----+------+-----------+------+---------+------+-------------------+--------------------------------------------+----------+
[root@percona-pxc3 nil]#
..............
.........
[root@percona-pxc3 nil]# mysql -uroot -proot -e "select ID, USER, HOST, DB, TIME, STATE, INFO, STAGE, MAX_STAGE, PROGRESS from information_schema.processlist where ID = 2; show processlist"
+----+------+-----------+------+------+-------------------+--------------------------------------------+-------+-----------+----------+
| ID | USER | HOST | DB | TIME | STATE | INFO | STAGE | MAX_STAGE | PROGRESS |
+----+------+-----------+------+------+-------------------+--------------------------------------------+-------+-----------+----------+
| 2 | root | localhost | nil | 54 | copy to tmp table | ALTER TABLE nil_test ADD COLUMN phone1 int | 1 | 2 | 98.300 |
+----+------+-----------+------+------+-------------------+--------------------------------------------+-------+-----------+----------+
+----+------+-----------+------+---------+------+-------------------+--------------------------------------------+----------+
| Id | User | Host | db | Command | Time | State | Info | Progress |
+----+------+-----------+------+---------+------+-------------------+--------------------------------------------+----------+
| 2 | root | localhost | nil | Query | 54 | copy to tmp table | ALTER TABLE nil_test ADD COLUMN phone1 int | 49.157 |
| 39 | root | localhost | NULL | Query | 0 | NULL | show processlist | 0.000 |
+----+------+-----------+------+---------+------+-------------------+--------------------------------------------+----------+
[root@percona-pxc3 nil]#
[root@percona-pxc3 nil]# mysql -uroot -proot -e "select ID, USER, HOST, DB, TIME, STATE, INFO, STAGE, MAX_STAGE, PROGRESS from information_schema.processlist where ID = 2; show processlist"
+----+------+-----------+------+------+-------+------+-------+-----------+----------+
| ID | USER | HOST | DB | TIME | STATE | INFO | STAGE | MAX_STAGE | PROGRESS |
+----+------+-----------+------+------+-------+------+-------+-----------+----------+
| 2 | root | localhost | nil | 0 | | NULL | 0 | 0 | 0.000 |
+----+------+-----------+------+------+-------+------+-------+-----------+----------+
+----+------+-----------+------+---------+------+-------+------------------+----------+
| Id | User | Host | db | Command | Time | State | Info | Progress |
+----+------+-----------+------+---------+------+-------+------------------+----------+
| 2 | root | localhost | nil | Sleep | 1 | | NULL | 0.000 |
| 40 | root | localhost | NULL | Query | 0 | NULL | show processlist | 0.000 |
+----+------+-----------+------+---------+------+-------+------------------+----------+
[root@percona-pxc3 nil]#

Conclusion: As we know, some ALTER TABLE statements can cause problems like a server hang or crash. So whenever we plan to run one on production servers, it has to be well planned. As MySQL doesn’t provide a “progress bar,” we can use the methods explained above to check progress. If anyone knows of any other method aside from the above, please share it in the comments and I will be glad to add it to this post.

The post How to monitor ALTER TABLE progress in MySQL appeared first on MySQL Performance Blog.

Jul
05
2013
--

Schema changes – what’s new in MySQL 5.6?

Among the many improvements you can enjoy in MySQL 5.6, there is one that addresses a huge operational problem that most DBAs and System Administrators encounter: schema changes.

While it is usually not a problem for small tables or those in the early stages of a product’s life cycle, schema changes become a huge pain once your tables get a significant amount of data. Planning for maintenance becomes more and more difficult, and your worldwide users want the service to be up and running 24/7, while on the other hand your developers want to introduce schema changes every week.

PITA

But what is the real problem here? Let me illustrate a very typical case:

Session1> ALTER TABLE revision ADD COLUMN mycol tinyint;
Query OK, 1611193 rows affected (1 min 5.74 sec)
Records: 1611193  Duplicates: 0  Warnings: 0
Session2> INSERT INTO revision SET rev_page=3,rev_comment="test",rev_text_id=1;
Query OK, 1 row affected (48.30 sec)
Session3 > show processlist;
+----+------+-----------+------+---------+------+---------------------------------+----------------------------------------------------------------------+
| Id | User | Host      | db   | Command | Time | State                           | Info                                                                 |
+----+------+-----------+------+---------+------+---------------------------------+----------------------------------------------------------------------+
|  1 | root | localhost | test | Query   |   47 | copy to tmp table               | ALTER TABLE revision ADD COLUMN mycol tinyint                        |
|  2 | root | localhost | test | Query   |   30 | Waiting for table metadata lock | INSERT INTO revision SET rev_page=3,rev_comment="test",rev_text_id=1 |
|  3 | root | localhost | NULL | Query   |    0 | init                            | show processlist                                                     |
+----+------+-----------+------+---------+------+---------------------------------+----------------------------------------------------------------------+
3 rows in set (0.01 sec)

What you see above is how changing a table’s structure works in all pre-5.6 versions of MySQL. Normally the example INSERT statement completes in a matter of microseconds. But once a DBA runs this ALTER TABLE (session1), the application (session2) has to wait for the ALTER to complete before the INSERT and other DML statements can succeed. As a result, the application will stall on writes to this table until the ALTER TABLE completes.

WORKAROUNDS

There are many more or less complicated approaches to the ALTER TABLE problem in MySQL, starting with a well planned downtime window, through master/slave switching techniques, up to advanced tools that let you make the change in the least intrusive way possible. These tools have been out there for years, yet it appears that many MySQL users are still not aware of them. Let me just name here pt-online-schema-change from Percona Toolkit, oak-online-alter-table from Openark Kit, or Facebook OSC.

LONG STORY

Historically, MySQL had to perform a full table copy for all DDL (Data Definition Language) operations, and you could not write to a table being altered during the process. The problem is even more painful because waiting writes can also block subsequent reads.

An important, though small, step forward was made for the InnoDB storage engine when the new InnoDB version (aka the InnoDB plugin) was introduced in MySQL 5.1. Since then, you can create and drop indexes without copying the whole table contents. So at least the operation that is practiced very often for query optimization can be done much, much quicker.

The next significant improvement (or perhaps I should say fix) in the DDL operations area was introduced in MySQL 5.5 as Metadata Locking. Basically, from then on schema changes work properly and consistently with transactions. You will find this explained in detail here.

ONLINE(!) DDL in MySQL 5.6

Fortunately, this is not where the much-desired evolution in this area stopped! With MySQL 5.6, a huge step forward was made: from now on, most ALTER types won’t block writes to the table being changed!

Another improvement is that, in addition to the existing instant ALTER implementations (like changing the default value for a column), you can now also perform the following operations without a full table copy:

  • Change auto-increment value for a column
  • Rename a column (if data type remains the same)*
  • Add/Drop a foreign key constraint

As said before though, the main improvement in MySQL 5.6 is that during the ALTER operation the table being changed can still be fully accessed by clients, so both reads and writes (with few exceptions) are allowed! The complete statement matrix can be found here.

Let’s see some examples in practice

Example 1 – reset auto-increment value for a column

It can happen that a wrong explicit insert sets the value of an auto-increment column higher than necessary. It may even be close to the data type’s limit, and we want to make the table use lower values again. We realize that and delete the high-value row, but…

Session1> SELECT t.TABLE_NAME,c.COLUMN_NAME,c.COLUMN_KEY,t.AUTO_INCREMENT FROM information_schema.tables t JOIN information_schema.columns c USING (TABLE_SCHEMA,TABLE_NAME) WHERE t.TABLE_SCHEMA="test" AND t.TABLE_NAME="revision" AND c.EXTRA="auto_increment";
+------------+-------------+------------+----------------+
| TABLE_NAME | COLUMN_NAME | COLUMN_KEY | AUTO_INCREMENT |
+------------+-------------+------------+----------------+
| revision   | rev_id      | PRI        |        2000002 |
+------------+-------------+------------+----------------+
1 row in set (0.03 sec)
Session1> select max(rev_id) from revision;
+-------------+
| max(rev_id) |
+-------------+
| 1700000     |
+-------------+
1 row in set (0.00 sec)
Session1> insert into revision set rev_page=3,rev_comment="test",rev_text_id=1;
Query OK, 1 row affected (0.02 sec)
Session1> select max(rev_id) from revision;
+-------------+
| max(rev_id) |
+-------------+
| 2000002     |
+-------------+
1 row in set (0.00 sec)

Not possible, since the table’s next AUTO_INCREMENT value is 2000002. So this ALTER is our only help:

Session1> ALTER TABLE revision AUTO_INCREMENT=1700001;
Query OK, 0 rows affected (0.01 sec)
Records: 0 Duplicates: 0 Warnings: 0
Session1> insert into revision set rev_page=4,rev_comment="test",rev_text_id=1;
Query OK, 1 row affected (0.01 sec)
Session1> select max(rev_id) from revision;
+-------------+
| max(rev_id) |
+-------------+
|     1700001 |
+-------------+

Finally, this operation is instant in MySQL 5.6! In previous MySQL versions, such an ALTER causes a full table rewrite and blocks the table for writes for the whole duration. In version 5.5.31 the same ALTER on the same hardware looks like this:

Session1> ALTER TABLE revision AUTO_INCREMENT=1700001;
Query OK, 1611226 rows affected (1 min 3.42 sec)
Records: 1611226  Duplicates: 0  Warnings: 0

Example 2 – DROP COLUMN

Session1> ALTER TABLE revision DROP COLUMN rev_sha1;
Query OK, 0 rows affected (1 min 39.24 sec)
Records: 0 Duplicates: 0 Warnings: 0

While the ALTER is in progress:

Session2> insert into revision set rev_page=3,rev_comment="test",rev_text_id=1,rev_sha1=2;
Query OK, 1 row affected (0.01 sec)

After the ALTER finished:

Session2> insert into revision set rev_page=3,rev_comment="test",rev_text_id=1,rev_sha1=2;
ERROR 1054 (42S22): Unknown column 'rev_sha1' in 'field list'

Great! The column drop was non-blocking; we could use the table without interruption.

Example 3 – RENAME COLUMN

Original column definition was:
rev_deleted tinyint(1) unsigned NOT NULL DEFAULT '0',

Session1> ALTER TABLE revision CHANGE COLUMN rev_deleted rev_deleted1 tinyint(1) unsigned NOT NULL DEFAULT '0';
Query OK, 0 rows affected (0.01 sec)
Records: 0  Duplicates: 0  Warnings: 0

(The same column rename in MySQL 5.5 or earlier copies the whole table.)

But let’s try another column:
rev_timestamp char(14) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL DEFAULT '',

Session1> ALTER TABLE revision CHANGE COLUMN rev_timestamp rev_date char(14) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL DEFAULT '';
Query OK, 1611226 rows affected (1 min 43.34 sec)
Records: 1611226  Duplicates: 0  Warnings: 0

What the heck?

Hmm, let’s try another one:
rev_comment tinyblob NOT NULL,

Session1> ALTER TABLE revision CHANGE COLUMN rev_comment rev_comment1 tinyblob NOT NULL;
Query OK, 1611226 rows affected (2 min 7.91 sec)
Records: 1611226  Duplicates: 0  Warnings: 0

So, by way of writing this blog post I identified a new bug and reported it here: http://bugs.mysql.com/bug.php?id=69580
In short, Online DDL does not work as expected when you rename a column of a binary data type, and also for a char type with binary collation. The bug not only leads to a full table copy but also blocks the table for writes.

Example 4 – NEW ALTER TABLE OPTIONS

In case you are not sure if an ALTER TABLE will copy and/or block the table, and you want to make sure your DDL statement won’t cause such problems, you can add new alter specifications to the statement: ALGORITHM and LOCK.

Session1> ALTER TABLE revision ENGINE=InnoDB, LOCK=NONE;
ERROR 1846 (0A000): LOCK=NONE is not supported. Reason: COPY algorithm requires a lock. Try LOCK=SHARED.

As we can see, a full table rebuild still needs locking.

Session1> ALTER TABLE revision ADD COLUMN mycol3 tinyint, ALGORITHM=COPY, LOCK=NONE;
ERROR 1846 (0A000): LOCK=NONE is not supported. Reason: COPY algorithm requires a lock. Try LOCK=SHARED.

So, ALGORITHM=COPY and LOCK=NONE are mutually exclusive.

Session1> ALTER TABLE revision MODIFY COLUMN rev_id bigint(8) unsigned NOT NULL AUTO_INCREMENT, ALGORITHM=INPLACE;
ERROR 1846 (0A000): ALGORITHM=INPLACE is not supported. Reason: Cannot change column type INPLACE. Try ALGORITHM=COPY.

Changing a column’s data type both locks the table and requires a full table copy.

Session1> ALTER TABLE revision ADD COLUMN mycol3 tinyint, ALGORITHM=INPLACE, LOCK=NONE;
Query OK, 0 rows affected (1 min 38.84 sec)
Records: 0  Duplicates: 0  Warnings: 0

The above example is pretty interesting: the new method is used, so no full table copy takes place, yet the whole operation is still about as expensive as the old way. This is because significant data reorganization is needed inside the existing ibd tablespace.

The ALGORITHM and LOCK options also allow you to force the old method with a full table copy by adding ALGORITHM=COPY, or to protect a table from writes with LOCK=SHARED. The same effect can be achieved with SET old_alter_table=1 before the ALTER.
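
For completeness, forcing the legacy behavior via the session variable mentioned above could look like the following sketch (the column name is hypothetical, used only for illustration):

Session1> SET SESSION old_alter_table = 1;
Session1> ALTER TABLE revision ADD COLUMN mycol_copy tinyint; -- hypothetical column, full table copy expected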

NEW DIAGNOSTICS

I would like to mention that MySQL 5.6 offers new monitoring tools that we can use to see more details about ongoing ALTERs as well.

This is how it looks while an ALTER that adds a column is in progress:

Session3> SELECT * FROM information_schema.innodb_metrics WHERE name LIKE 'ddl%'\G
*************************** 1. row ***************************
NAME: ddl_background_drop_indexes
SUBSYSTEM: ddl
COUNT: 0
MAX_COUNT: NULL
MIN_COUNT: NULL
AVG_COUNT: 0
COUNT_RESET: 0
MAX_COUNT_RESET: NULL
MIN_COUNT_RESET: NULL
AVG_COUNT_RESET: NULL
TIME_ENABLED: 2013-06-04 09:45:36
TIME_DISABLED: NULL
TIME_ELAPSED: 175015
TIME_RESET: NULL
STATUS: enabled
TYPE: counter
COMMENT: Number of indexes waiting to be dropped after failed index creation
*************************** 2. row ***************************
NAME: ddl_background_drop_tables
SUBSYSTEM: ddl
COUNT: 0
MAX_COUNT: NULL
MIN_COUNT: NULL
AVG_COUNT: 0
COUNT_RESET: 0
MAX_COUNT_RESET: NULL
MIN_COUNT_RESET: NULL
AVG_COUNT_RESET: NULL
TIME_ENABLED: 2013-06-04 09:45:36
TIME_DISABLED: NULL
TIME_ELAPSED: 175015
TIME_RESET: NULL
STATUS: enabled
TYPE: counter
COMMENT: Number of tables in background drop table list
*************************** 3. row ***************************
NAME: ddl_online_create_index
SUBSYSTEM: ddl
COUNT: 1
MAX_COUNT: NULL
MIN_COUNT: 0
AVG_COUNT: 0.000005713795960346256
COUNT_RESET: 1
MAX_COUNT_RESET: NULL
MIN_COUNT_RESET: 0
AVG_COUNT_RESET: NULL
TIME_ENABLED: 2013-06-04 09:45:36
TIME_DISABLED: NULL
TIME_ELAPSED: 175015
TIME_RESET: NULL
STATUS: enabled
TYPE: counter
COMMENT: Number of indexes being created online
*************************** 4. row ***************************
NAME: ddl_pending_alter_table
SUBSYSTEM: ddl
COUNT: 1
MAX_COUNT: NULL
MIN_COUNT: 0
AVG_COUNT: 0.000005713795960346256
COUNT_RESET: 1
MAX_COUNT_RESET: NULL
MIN_COUNT_RESET: 0
AVG_COUNT_RESET: NULL
TIME_ENABLED: 2013-06-04 09:45:36
TIME_DISABLED: NULL
TIME_ELAPSED: 175015
TIME_RESET: NULL
STATUS: enabled
TYPE: counter
COMMENT: Number of ALTER TABLE, CREATE INDEX, DROP INDEX in progress
4 rows in set (0.01 sec)

To enable those counters, simply do:

Session3> set global innodb_monitor_enable = module_ddl;
Query OK, 0 rows affected (0.00 sec)
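
When you are done, the counters of this module can be reset and switched off again with the counterpart variables:

Session3> set global innodb_monitor_reset = module_ddl;
Session3> set global innodb_monitor_disable = module_ddl;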

IS ONLINE DDL GOOD ENOUGH IN MySQL 5.6?

I would not say that. Although schema changes are now a lot friendlier, there still seems to be room for improvement in terms of becoming even more “online”. And there is another aspect: we can’t really control how online DDL behaves in terms of server I/O load and replication. It is often the case that a full table rewrite generates enough disk utilization to cause performance problems. Yet we cannot do anything about this with MySQL’s Online DDL: it will copy the table at full available speed, whether or not that saturates the I/O throughput. Here tools like pt-online-schema-change have a big advantage: the table copy is divided into chunks, a given status variable is checked automatically to see whether the load on the server is too high, and the data copy process is paused if so.

When we speak about replication: with Online DDL each ALTER is a single operation, so after the master finishes it, it gets replicated to the slaves. This causes replication lag for the time a slave needs to apply the long ALTER, as well as increased I/O activity on all slaves at the same time. Also in this regard the chunked copy model of pt-online-schema-change has a huge advantage: the schema change is replicated to the slaves, and the tool checks how much the slaves are lagging and pauses the copy process if the lag is too big. The data chunks are also automatically adjusted to fit within a given time limit.
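
For reference, the throttling and lag checks described above map to pt-online-schema-change options such as --max-lag, --check-interval and --chunk-time; a possible invocation (the values here are purely illustrative) could look like this:

# pt-online-schema-change --alter "ADD COLUMN mycol4 bigint" --max-lag=10 --check-interval=5 --chunk-time=0.5 D=test,t=test --execute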

I was curious to compare time required for the same ALTER with ONLINE DDL versus pt-online-schema-change:

[root@centos6-2 ~]# time pt-online-schema-change D=test,t=test --alter "ADD COLUMN mycol4 bigint" --execute
Operation, tries, wait:
  copy_rows, 10, 0.25
  create_triggers, 10, 1
  drop_triggers, 10, 1
  swap_tables, 10, 1
  update_foreign_keys, 10, 1
Altering `test`.`test`...
Creating new table...
Created new table test._test_new OK.
Altering new table...
Altered `test`.`_test_new` OK.
Creating triggers...
Created triggers OK.
Copying approximately 952180 rows...
Copied rows OK.
Swapping tables...
Swapped original and new tables OK.
Dropping old table...
Dropped old table `test`.`_test_old` OK.
Dropping triggers...
Dropped triggers OK.
Successfully altered `test`.`test`.
real    0m22.531s
user    0m0.342s
sys     0m0.095s

versus:

Session1> ALTER TABLE test ADD COLUMN mycol4 bigint, ALGORITHM=INPLACE, LOCK=NONE;
Query OK, 0 rows affected (30.64 sec)
Records: 0  Duplicates: 0  Warnings: 0

pt-online-schema-change wins here clearly, but it’s just one simple example out of many possible scenarios!

OVERHEAD

The fact that a table accepts writes while Online DDL is in progress does not mean those writes will be performed at the same speed.

I tried a simple sysbench test to compare the overall throughput of the MySQL server when the server is idle versus when an online ALTER is in progress. Before the test, I warmed up the InnoDB buffers and ran the sysbench test several times. All data for the sysbench table fit in memory.

Then I started an ALTER statement (on an unrelated table):

Session1> ALTER TABLE revision ADD COLUMN mycol4 tinyint, ALGORITHM=INPLACE, LOCK=SHARED;
Query OK, 0 rows affected (1 min 54.73 sec)
Records: 0  Duplicates: 0  Warnings: 0

And just after that, the same sysbench command:

[root@centos6-2 ~]# sysbench --num-threads=3 --max-requests=5000 --db-driver=mysql --test=oltp --mysql-table-engine=InnoDB --mysql-db=test --mysql-user=root run
sysbench 0.4.12:  multi-threaded system evaluation benchmark
Running the test with following options:
Number of threads: 3
Doing OLTP test.
Running mixed OLTP test
Using Special distribution (12 iterations,  1 pct of values are returned in 75 pct cases)
Using "BEGIN" for starting transactions
Using auto_inc on the id column
Maximum number of requests for OLTP test is limited to 5000
Threads started!
...

Below we can see concurrent threads:

Session3 > show processlist;
+----+------+-----------+------+---------+------+----------------+--------------------------------------------------------------------------------+
| Id | User | Host      | db   | Command | Time | State          | Info                                                                           |
+----+------+-----------+------+---------+------+----------------+--------------------------------------------------------------------------------+
|  1 | root | localhost | test | Sleep   |  282 |                | NULL                                                                           |
|  2 | root | localhost | test | Query   |   29 | altering table | ALTER TABLE revision ADD COLUMN mycol4 tinyint, ALGORITHM=INPLACE, LOCK=SHARED |
|  5 | root | localhost | NULL | Query   |    0 | init           | show processlist                                                               |
| 39 | root | localhost | test | Execute |    0 | Writing to net | SELECT c from sbtest where id=?                                                |
| 40 | root | localhost | test | Execute |    0 | Writing to net | DELETE from sbtest where id=?                                                  |
| 41 | root | localhost | test | Execute |    0 | Writing to net | SELECT c from sbtest where id=?                                                |
+----+------+-----------+------+---------+------+----------------+--------------------------------------------------------------------------------+
6 rows in set (0.00 sec)

To be clear, sysbench was started after the ALTER, and finished before the ALTER was done.

Result? On an idle MySQL instance, the sysbench test scores around 270 transactions per second, with minimal variation between repeated runs.
While the Online DDL on the unrelated table was in progress, the same test scored an average of 110 transactions per second, again with minimal variation between runs.

In the next test, I altered the sbtest table itself, the same table used by the sysbench test, which adds the overhead of recording all the changes made to that table during the ALTER.

Session1> ALTER TABLE sbtest ADD COLUMN mycol4 tinyint, ALGORITHM=INPLACE, LOCK=NONE;
Query OK, 0 rows affected (1 min 5.88 sec)
Records: 0  Duplicates: 0  Warnings: 0

Again, sysbench finished before the ALTER, and this time the score was 90 transactions per second.
Note, though, that this was only a very simple test of one example use case.

BUGS

At the end I’d like to mention bugs regarding Online DDL that in my opinion are important to be aware of. Fortunately there are not many of them. The following were active as of version 5.6.12:

http://bugs.mysql.com/bug.php?id=69151 – in some circumstances, statements needing a table copy (like adding a column) create the temporary table in tmpdir instead of the database directory

http://bugs.mysql.com/bug.php?id=69444 – Replication is not crash safe with DDL statements

http://bugs.mysql.com/bug.php?id=69580 – the bug around column rename that I just reported and mentioned above

CONCLUSION

I think schema changes in MySQL 5.6 are really less painful out of the box, plus we can still choose external tools if needed. Having said that, MySQL operators are in much better shape now. It is really good to see MySQL’s evolution continuing in the right direction.

The post Schema changes – what’s new in MySQL 5.6? appeared first on MySQL Performance Blog.

Feb
01
2013
--

Implications of Metadata Locking Changes in MySQL 5.5

While most of the talk recently has been around the new changes in MySQL 5.6 (and that is understandable), I have lately had some very interesting cases to deal with regarding the Metadata Locking changes that were introduced in MySQL 5.5.3. It appears that the implications of Metadata Locking have not been covered well, and since there are still a large number of MySQL 5.0 and 5.1 installations that will upgrade or are in the process of upgrading to MySQL 5.5, I thought it necessary to discuss what these implications exactly are.

To learn what Metadata Locking exactly is, please read this section in the MySQL manual.

Let’s start off by having a look at the Metadata Locking behavior prior to MySQL 5.5.3.

Metadata Locking behavior prior to MySQL 5.5.3

Prior to MySQL 5.5.3, a statement that opened a table only held metadata locks until the end of the statement, not the end of the transaction. This meant that a transaction was not really isolated, because the same query could return different results if executed twice and a DDL was executed between the two invocations. Let me give you an example via a simple test case where I add a new column to the table while a transaction in REPEATABLE-READ isolation mode is active.

session1 > select @@tx_isolation;
+-----------------+
| @@tx_isolation  |
+-----------------+
| REPEATABLE-READ |
+-----------------+
1 row in set (0.00 sec)

session1 > start transaction;
Query OK, 0 rows affected (0.00 sec)  

session1 > select * from test where id=1;
+----+------+
| id | x    |
+----+------+
|  1 | foo  |
+----+------+
1 row in set (0.00 sec)

session2 > ALTER TABLE test add column c char(32) default 'dummy_text';
Query OK, 2 rows affected (0.57 sec)
Records: 2  Duplicates: 0  Warnings: 0

session1 > select * from test where id=1;
Empty set (0.00 sec)

session1 > rollback;
Query OK, 0 rows affected (0.00 sec)  

session1 > start transaction;
Query OK, 0 rows affected (0.00 sec)  

session1 > select * from test where id=1;
+----+------+------------+
| id | x    | c          |
+----+------+------------+
|  1 | foo  | dummy_text |
+----+------+------------+
1 row in set (0.00 sec)

And you can see how isolation is broken: the SELECT was not repeatable even though the REPEATABLE-READ transaction isolation level was used. This pre-5.5 behavior also means that queries could be written to the binary log in a different order, breaking locking semantics and running contrary to serialization concepts. For example, take a look at the following excerpt from the binary log of a case where an UPDATE transaction is mixed with an ALTER:

# at 536
#130201 11:21:03 server id 1  end_log_pos 658   Query   thread_id=8     exec_time=0     error_code=0
SET TIMESTAMP=1359714063/*!*/;
ALTER TABLE test add column id_2 int(11) default 0 after id
/*!*/;
# at 658
#130201 11:21:39 server id 1  end_log_pos 726   Query   thread_id=7     exec_time=0     error_code=0
SET TIMESTAMP=1359714099/*!*/;
BEGIN
/*!*/;
# at 726
# at 773
#130201 11:21:35 server id 1  end_log_pos 773   Table_map: `test`.`test` mapped to number 17
#130201 11:21:35 server id 1  end_log_pos 829   Update_rows: table id 17 flags: STMT_END_F

BINLOG '
L5cLURMBAAAALwAAAAUDAAAAABEAAAAAAAEABHRlc3QABHRlc3QAAwMD/gL+CQY=
L5cLURgBAAAAOAAAAD0DAAAAABEAAAAAAAEAA///+AIAAAAAAAAAA2JhcvgCAAAAAAAAAANob3A=
'/*!*/;
### UPDATE test.test
### WHERE
###   @1=2 /* INT meta=0 nullable=0 is_null=0 */
###   @2=0 /* INT meta=0 nullable=1 is_null=0 */
###   @3='bar' /* STRING(9) meta=65033 nullable=1 is_null=0 */
### SET
###   @1=2 /* INT meta=0 nullable=0 is_null=0 */
###   @2=0 /* INT meta=0 nullable=1 is_null=0 */
###   @3='hop' /* STRING(9) meta=65033 nullable=1 is_null=0 */
# at 829
#130201 11:21:39 server id 1  end_log_pos 856   Xid = 85
COMMIT/*!*/;

Note how ALTER is logged before the UPDATE, because ALTER did not block waiting for the transaction to commit.

For the reasons described above, the implementation of metadata locking was changed starting with MySQL 5.5.3. Let’s see how this works now.

Metadata Locking behavior starting MySQL 5.5.3

Starting with 5.5.3, DDL statements that modify table metadata are executed in an isolated fashion consistent with transactional behavior. This means that any open transaction holds metadata locks on every table it has accessed for as long as the transaction remains open. Consequently, a DDL operation cannot commence until all transactions that accessed the table have been closed. Let’s see this in effect via a simple test case:

session1 > start transaction;
Query OK, 0 rows affected (0.00 sec)

session1 > select * from test order by id;
+----+------+
| id | x    |
+----+------+
|  1 | foo  |
|  2 | bar  |
+----+------+
2 rows in set (0.00 sec)

session2 > ALTER TABLE test add column c char(32) default 'dummy_text';

session3 > show processlist;
+----+----------+-----------+------+---------+------+---------------------------------+-------------------------------------------------------------+
| Id | User     | Host      | db   | Command | Time | State                           | Info                                                        |
+----+----------+-----------+------+---------+------+---------------------------------+-------------------------------------------------------------+
|  1 | msandbox | localhost | test | Sleep   |  140 |                                 | NULL                                                        |
|  2 | msandbox | localhost | test | Query   |    3 | Waiting for table metadata lock | ALTER TABLE test add column c char(32) default 'dummy_text' |
|  3 | msandbox | localhost | test | Query   |    0 | NULL                            | show processlist                                            |
+----+----------+-----------+------+---------+------+---------------------------------+-------------------------------------------------------------+
3 rows in set (0.00 sec)

You can see how the ALTER blocks because the transaction in session1 is still open. Once the transaction in session1 is closed, the ALTER proceeds successfully:

session1 > rollback;
Query OK, 0 rows affected (0.00 sec)

session2 > ALTER TABLE test add column c char(32) default 'dummy_text';
Query OK, 2 rows affected (46.77 sec)
Records: 2  Duplicates: 0  Warnings: 0

Let’s see where the ALTER spent most of its time:

session2 > show profiles;
+----------+-------------+-------------------------------------------------------------+
| Query_ID | Duration    | Query                                                       |
+----------+-------------+-------------------------------------------------------------+
|        1 | 46.78110075 | ALTER TABLE test add column c char(32) default 'dummy_text' |
+----------+-------------+-------------------------------------------------------------+
1 row in set (0.00 sec)

session2 > show profile for query 1;
+------------------------------+-----------+
| Status                       | Duration  |
+------------------------------+-----------+
| starting                     |  0.000060 |
| checking permissions         |  0.000003 |
| checking permissions         |  0.000003 |
| init                         |  0.000005 |
| Opening tables               |  0.000045 |
| System lock                  |  0.000006 |
| setup                        |  0.000016 |
| creating table               |  0.168283 |
| After create                 |  0.000061 |
| copy to tmp table            |  0.165808 |
| rename result table          | 46.446738 |
| end                          |  0.000035 |
| Waiting for query cache lock |  0.000003 |
| end                          |  0.000006 |
| query end                    |  0.000003 |
| closing tables               |  0.000008 |
| freeing items                |  0.000016 |
| cleaning up                  |  0.000004 |
+------------------------------+-----------+
18 rows in set (0.00 sec)

So the ALTER waited on the metadata lock just after the table with the new structure had been created and populated with data, but before the old table was swapped with the new one. Note that ALTER is a multi-step process: the old table is locked in shared mode, a new table with the new structure is created, the rows are copied with the equivalent of INSERT INTO new_table SELECT * FROM old_table, then old_table is renamed to tmp_table and new_table to old_table, and finally tmp_table is dropped.
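
In other words, a copy-based ALTER is roughly equivalent to the following sequence. This is only a conceptual sketch with illustrative names and column types; the server actually uses internal #sql-... temporary tables and takes the exclusive metadata lock around the rename step:

-- Conceptual sketch of a copy-based ALTER TABLE (illustrative names/types only):
CREATE TABLE new_table (id INT, x CHAR(9), c CHAR(32) DEFAULT 'dummy_text');  -- new structure
INSERT INTO new_table SELECT id, x, 'dummy_text' FROM old_table;              -- row copy; old table still readable
RENAME TABLE old_table TO tmp_table, new_table TO old_table;                  -- this is where the metadata lock wait happens
DROP TABLE tmp_table;
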
Let’s see another example, this time trying a RENAME:

session2 > RENAME TABLE test to test_2;

session3 > show processlist;
+----+----------+-----------+------+---------+------+---------------------------------+-----------------------------+
| Id | User     | Host      | db   | Command | Time | State                           | Info                        |
+----+----------+-----------+------+---------+------+---------------------------------+-----------------------------+
|  1 | msandbox | localhost | test | Sleep   |   49 |                                 | NULL                        |
|  2 | msandbox | localhost | test | Query   |   35 | Waiting for table metadata lock | RENAME TABLE test to test_2 |
|  3 | msandbox | localhost | test | Query   |    0 | NULL                            | show processlist            |
+----+----------+-----------+------+---------+------+---------------------------------+-----------------------------+
3 rows in set (0.00 sec)

And you can see that the RENAME is also blocked, because a transaction that accessed the table “test” is still open.

So we have an interesting conclusion here: the ALTER waits only at the last stage, when it is about to change the table metadata. An ALTER on a big table can keep executing without any hindrance, copying rows from the table with the old structure into the table with the new structure, and blocks only at that final metadata step.
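
Since MySQL 5.5 has no performance_schema.metadata_locks table (that instrumentation arrived in later versions), the practical way to find out who is holding up a DDL is to correlate the process list with the open InnoDB transactions. A rough sketch, assuming the standard information_schema tables; note that innodb_trx only lists transactions that have actually touched InnoDB, so this is a heuristic rather than a definitive answer:

-- Likely holders of the metadata lock a DDL is waiting for:
SELECT p.id, p.user, p.host, p.time, p.state, t.trx_started, t.trx_query
FROM information_schema.processlist p
JOIN information_schema.innodb_trx t ON t.trx_mysql_thread_id = p.id
ORDER BY t.trx_started;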

Let’s see another interesting side-effect of metadata locking.

When can ALTER render the table inaccessible?

Now there is another interesting side-effect: when the ALTER reaches the point where it needs to wait for the metadata lock, it blocks every type of query against the table. We know that writes are blocked anyway for the entire duration of the ALTER, but reads are blocked as well while the ALTER is waiting for the metadata lock. Let’s see this in action via another test case:

session1 > start transaction;
Query OK, 0 rows affected (0.00 sec)

session1 > select * from test_2 order by id;
+----+------+------------+
| id | x    | c          |
+----+------+------------+
|  1 | foo  | dummy_text |
|  2 | bar  | dummy_text |
+----+------+------------+
2 rows in set (0.00 sec)

session6 > ALTER TABLE test_2 DROP COLUMN c;

session7 > select * from test_2 order by id;
session8 > select * from test_2 order by id;
session9 > select * from test_2 order by id;
session10 > select * from test_2 order by id;

session3 > show processlist;
+----+----------+-----------+------+---------+------+---------------------------------+----------------------------------+
| Id | User     | Host      | db   | Command | Time | State                           | Info                             |
+----+----------+-----------+------+---------+------+---------------------------------+----------------------------------+
|  1 | msandbox | localhost | test | Sleep   |  403 |                                 | NULL                             |
|  3 | msandbox | localhost | test | Query   |    0 | NULL                            | show processlist                 |
|  6 | msandbox | localhost | test | Query   |  229 | Waiting for table metadata lock | ALTER TABLE test_2 DROP COLUMN c |
|  7 | msandbox | localhost | test | Query   |  195 | Waiting for table metadata lock | select * from test_2 order by id |
|  8 | msandbox | localhost | test | Query   |  180 | Waiting for table metadata lock | select * from test_2 order by id |
|  9 | msandbox | localhost | test | Query   |  169 | Waiting for table metadata lock | select * from test_2 order by id |
| 10 | msandbox | localhost | test | Query   |   55 | Waiting for table metadata lock | select * from test_2 order by id |
+----+----------+-----------+------+---------+------+---------------------------------+----------------------------------+
7 rows in set (0.00 sec)

And you can see that the table is blocked for any kind of operation. Let’s see the profiling information for one of the queries that was blocked to see where the query spent most of its time:

session10 > show profile for query 1;
+----------------------+------------+
| Status               | Duration   |
+----------------------+------------+
| starting             |   0.000058 |
| checking permissions |   0.000006 |
| Opening tables       | 213.028481 |
| System lock          |   0.000009 |
| init                 |   0.000014 |
| optimizing           |   0.000002 |
| statistics           |   0.000005 |
| preparing            |   0.000006 |
| executing            |   0.000001 |
| Sorting result       |   0.000002 |
| Sending data         |   0.000040 |
| end                  |   0.000003 |
| query end            |   0.000002 |
| closing tables       |   0.000003 |
| freeing items        |   0.000007 |
| logging slow query   |   0.000002 |
| cleaning up          |   0.000002 |
+----------------------+------------+
17 rows in set (0.00 sec)

And you can see how the query spent nearly all its time waiting in the “Opening tables” state. Now, this behavior of the ALTER making the table inaccessible in some cases is not really documented, so I have reported a bug: http://bugs.mysql.com/bug.php?id=67647

Metadata locking behaves differently for queries that are serviced from the query cache; let’s see what happens in that case.

Metadata Locking and Query Cache

How does metadata locking behave with the query cache? That is an important question. If the query cache is enabled and a SELECT can be serviced from it, the SELECT will not block on the ALTER even though the ALTER is waiting for the metadata lock. Why? Because in that case no table open operation has to be performed. Let’s see this scenario via a test case:

session1 > start transaction;
Query OK, 0 rows affected (0.00 sec)

session1 > select * from test_2 order by id;
+----+------+
| id | x    |
+----+------+
|  1 | foo  |
|  2 | bar  |
+----+------+
2 rows in set (0.00 sec)

session6 > RENAME TABLE test_2 to test;

session10 > select * from test_2 order by id;
+----+------+
| id | x    |
+----+------+
|  1 | foo  |
|  2 | bar  |
+----+------+
2 rows in set (0.00 sec)

session3 > show processlist;
+----+----------+-----------+------+---------+------+---------------------------------+-----------------------------+
| Id | User     | Host      | db   | Command | Time | State                           | Info                        |
+----+----------+-----------+------+---------+------+---------------------------------+-----------------------------+
|  1 | msandbox | localhost | test | Sleep   |   22 |                                 | NULL                        |
|  3 | msandbox | localhost | test | Query   |    0 | NULL                            | show processlist            |
|  6 | msandbox | localhost | test | Query   |    3 | Waiting for table metadata lock | RENAME TABLE test_2 to test |
| 10 | msandbox | localhost | test | Sleep   |   37 |                                 | NULL                        |
+----+----------+-----------+------+---------+------+---------------------------------+-----------------------------+
4 rows in set (0.00 sec)

The query proceeds without being blocked on anything while the RENAME is still waiting for metadata locks. Let’s see the profiling information for this query:

session10 > show profile for query 11;
+--------------------------------+----------+
| Status                         | Duration |
+--------------------------------+----------+
| starting                       | 0.000018 |
| Waiting for query cache lock   | 0.000003 |
| checking query cache for query | 0.000007 |
| checking privileges on cached  | 0.000003 |
| checking permissions           | 0.000005 |
| sending cached result to clien | 0.000011 |
| logging slow query             | 0.000002 |
| cleaning up                    | 0.000002 |
+--------------------------------+----------+
8 rows in set (0.00 sec)

You can see that no table open operation was performed and hence no wait.
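
Of course, this shortcut only applies when the query cache is enabled and the statement is actually cacheable. A quick way to confirm that on a 5.5 server (output values will obviously differ per installation):

-- The query cache must be enabled and sized for cached SELECTs to skip the
-- table open step (and with it the metadata lock wait):
SHOW VARIABLES LIKE 'query_cache%';
SHOW GLOBAL STATUS LIKE 'Qcache%';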

Does the fact that the table has already been opened, and its table object is in the table cache, change anything with respect to metadata locks?

Metadata Locking and Table Cache

No matter whether a connection accesses a table that is already in the table cache, any query against a table with a DDL operation waiting will block. Why? Because MySQL sees that the old entries in the table cache have to be invalidated, so any query that accesses the table must reopen the modified table, producing new entries in the table cache. Let’s see this phenomenon in action:

session6 > ALTER TABLE test add column c char(32) default 'dummy_text';
Query OK, 2 rows affected (59.80 sec)

session10 > show status like 'Open%tables';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| Open_tables   | 30    |
| Opened_tables | 0     |
+---------------+-------+
2 rows in set (0.00 sec)

session10 > select * from test order by id;
+----+------+------------+
| id | x    | c          |
+----+------+------------+
|  1 | foo  | dummy_text |
|  2 | bar  | dummy_text |
+----+------+------------+
2 rows in set (53.78 sec)

session10 > show status like 'Open%tables';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| Open_tables   | 30    |
| Opened_tables | 1     |
+---------------+-------+
2 rows in set (0.00 sec)

session10 > show profile for query 18;
+----------------------+-----------+
| Status               | Duration  |
+----------------------+-----------+
| starting             |  0.000059 |
| checking permissions |  0.000010 |
| Opening tables       | 53.786685 |
| System lock          |  0.000009 |
| init                 |  0.000012 |
| optimizing           |  0.000003 |
| statistics           |  0.000007 |
| preparing            |  0.000006 |
| executing            |  0.000001 |
| Sorting result       |  0.000004 |
| Sending data         |  0.000033 |
| end                  |  0.000003 |
| query end            |  0.000002 |
| closing tables       |  0.000004 |
| freeing items        |  0.000009 |
| logging slow query   |  0.000002 |
| cleaning up          |  0.000002 |
+----------------------+-----------+
17 rows in set (0.00 sec)

As you can see, the SELECT query still blocks, and the status counter Opened_tables is incremented once the query finishes.

So much for the explanation; now let’s take a look at the consequences.

Consequences

The consequence of these changes in metadata locking is that if you have some really hot tables (in web applications, for example, it is typical to see a “sessions” table that is accessed on every request), then care should be taken when you have to ALTER the table; otherwise it can easily cause a stall, as many threads pile up waiting for the table metadata lock, bringing down the MySQL server or exhausting all available connections.
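
One practical mitigation is to check for long-running transactions before issuing the DDL, and to cap how long the DDL itself may wait, so that a pile-up cannot exhaust the server. The sketch below is only an illustration: the 60-second threshold, the 5-second timeout, and the sessions table alteration are arbitrary examples.

-- Transactions that have been open for more than 60 seconds and would
-- block a DDL on any table they touched:
SELECT trx_mysql_thread_id, trx_started, trx_query
FROM information_schema.innodb_trx
WHERE trx_started < NOW() - INTERVAL 60 SECOND;

-- Cap how long this session's DDL may wait for the metadata lock
-- (the default lock_wait_timeout is one year):
SET SESSION lock_wait_timeout = 5;
ALTER TABLE sessions ADD COLUMN last_seen DATETIME;  -- illustrative hot-table change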

There are some other interesting consequences as well for applications that were written against MySQL versions prior to 5.5:

  • I remember a customer case with a reporting slave that daily ran a long transaction, often for hours. Every day one of the tables was renamed and swapped, and that table was the one being read by the long-running transaction. When the slave tried to execute the rename query, it would block waiting for the long-running transaction to finish, causing the slave to lag for hours; as you know, the slave is single-threaded, so it cannot apply any other event in the meantime. This was never an issue when the application was using a MySQL version < 5.5 as the datastore.
  • There was another interesting case, this time with how ActiveMQ uses MySQL in HA mode. In HA mode there are two ActiveMQ servers, and both try to do something similar to the following sequence of events:
    session1 > CREATE TABLE t1(i int(11) not null auto_increment primary key) ENGINE=InnoDB;
    Query OK, 0 rows affected (0.09 sec)
    
    session1 > insert into t1 values(null);
    Query OK, 1 row affected (0.21 sec)
    
    session1 > start transaction;
    Query OK, 0 rows affected (0.00 sec)
    
    session1 > select * from t1 where i=1 for update;
    +---+
    | i |
    +---+
    | 1 |
    +---+
    1 row in set (0.00 sec)
    
    session2 > CREATE TABLE t1(i int(11) not null auto_increment primary key) ENGINE=InnoDB;
    

    When using MySQL 5.1, the second CREATE would just fail immediately with the error “ERROR 1050 (42S01): Table ‘t1’ already exists”, but because of how metadata locking works in 5.5 this is no longer the case: the second CREATE simply blocks, with unintended consequences. A workaround is to set the lock_wait_timeout variable to a very low value and then execute the CREATE TABLE; this makes sure that the CREATE fails immediately (although for a different reason):

    session2 > set session lock_wait_timeout=1;CREATE TABLE t1(i int(11) not null auto_increment primary key) ENGINE=InnoDB;
    

    However, I feel that the CREATE TABLE should fail in such a case, when the table already exists and there is no other DDL (like a DROP TABLE) waiting to run on the same table, and as such I have reported a bug: http://bugs.mysql.com/bug.php?id=67873

The post Implications of Metadata Locking Changes in MySQL 5.5 appeared first on MySQL Performance Blog.
