MongoDB transactions: your very first transaction with MongoDB 4.0
MongoDB 4.0 is just around the corner and with rc0 we can get a good idea of what we can expect in the GA version. MongoDB 4.0 will allow transactions to run in a replica set and, in a future release, the MongoDB transaction will be cluster-wide. This is a really big change!
Multi-statement transactions are a big deal for a lot of companies. The transactions feature has been in development since MongoDB version 3.6 when sessions were added. Now, we will be able to see how both sessions and transactions work. In an earlier blog post we highlighted a few details from what was delivered in 3.6 that indicated that 4.0 would have transactions.
There are a few limitations for transactions and some operations are not allowed yet. A detailed list can be found in the MongoDB documentation of the Session.startTransaction() method.
One restriction that we must be aware of is that the collection MUST exist in order to use transactions.
A simple transaction is declared in much the same way as in other databases. The caveat is that we need to start a session before starting a transaction, which means that multi-statement transactions are not the default behavior for writes to the database.
How to use transactions in MongoDB 4.0
Download MongoDB 4.0 RC (or you can install it from the repositories).
wget https://fastdl.mongodb.org/linux/mongodb-linux-x86_64-ubuntu1604-4.0.0-rc1.tgz
Uncompress the files:
tar -xvzf mongodb-linux-x86_64-ubuntu1604-4.0.0-rc1.tgz
Rename the folder to mongo4.0 and create the data folder inside the bin folder:
mv mongodb-linux-x86_64-ubuntu1604-4.0.0-rc1 mongo4.0
cd mongo4.0
cd bin
mkdir data
Start the database process:
Important: in order to use multi-statement transactions, the replica set must be enabled.
./mongod --dbpath data --logpath data/log.log --fork --replSet foo
Initialize the replica set:
> rs.initiate()
foo:PRIMARY> use percona
foo:PRIMARY> db.createCollection('test')
Start a session and then a transaction:
session = db.getMongo().startSession()
session.startTransaction()
session.getDatabase("percona").test.insert({today : new Date()})
session.getDatabase("percona").test.insert({some_value : "abc"})
Then you can decide whether to commit the transaction or abort it:
session.commitTransaction()
session.abortTransaction()
If startTransaction throws an IllegalOperation error, make sure the database is running as a replica set.
Transaction isolation level in MongoDB 4.0: Snapshot Isolation
MongoDB 4.0 implements snapshot isolation for transactions. Pending uncommitted changes are only visible inside the session context (the session that started the transaction) and are not visible outside it. Here is an example:
Connection 1:
foo:PRIMARY> use percona
switched to db percona
foo:PRIMARY> db.createCollection('test')
{
	"ok" : 1,
	"operationTime" : Timestamp(1528903182, 1),
	"$clusterTime" : {
		"clusterTime" : Timestamp(1528903182, 1),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	}
}
foo:PRIMARY> session = db.getMongo().startSession()
session { "id" : UUID("bdd82af7-ab9d-4cd3-9238-f08ee928f31e") }
foo:PRIMARY> session.startTransaction()
foo:PRIMARY> session.getDatabase("percona").test.insert({today : new Date()})
WriteResult({ "nInserted" : 1 })
foo:PRIMARY> session.getDatabase("percona").test.insert({some_value : "abc"})
WriteResult({ "nInserted" : 1 })
Connection 2: starting second transaction in its own session:
foo:PRIMARY> use percona
switched to db percona
foo:PRIMARY> db.test.find()
foo:PRIMARY> db.test.find()
foo:PRIMARY> session = db.getMongo().startSession()
session { "id" : UUID("eb628bfd-425e-450c-a51b-733435474eaa") }
foo:PRIMARY> session.startTransaction()
foo:PRIMARY> session.getDatabase("percona").test.find()
foo:PRIMARY>
Connection 1: commit
foo:PRIMARY> session.commitTransaction()
Connection 2: after connection1 commits:
foo:PRIMARY> db.test.find()
{ "_id" : ObjectId("5b21361252bbe6e5b9a70a4e"), "today" : ISODate("2018-06-13T15:19:46.645Z") }
{ "_id" : ObjectId("5b21361252bbe6e5b9a70a4f"), "some_value" : "abc" }
Outside of the session the new values are visible; inside the still-open session, however, they are not:
foo:PRIMARY> session.getDatabase("percona").test.find()
foo:PRIMARY>
If we now commit the transaction inside connection 2, it commits as well, and we end up with two documents (as there are no conflicts).
Sometimes, however, we may see the transient transaction error when committing or even doing find() inside a session:
foo:PRIMARY> session.commitTransaction()
2018-06-14T21:56:29.111+0000 E QUERY    [js] Error: command failed: {
	"errorLabels" : [
		"TransientTransactionError"
	],
	"operationTime" : Timestamp(1529013385, 1),
	"ok" : 0,
	"errmsg" : "Transaction 0 has been aborted.",
	"code" : 251,
	"codeName" : "NoSuchTransaction",
	"$clusterTime" : {
		"clusterTime" : Timestamp(1529013385, 1),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	}
} :
_getErrorWithCode@src/mongo/shell/utils.js:25:13
doassert@src/mongo/shell/assert.js:18:14
_assertCommandWorked@src/mongo/shell/assert.js:520:17
assert.commandWorked@src/mongo/shell/assert.js:604:16
commitTransaction@src/mongo/shell/session.js:878:17
@(shell):1:1
The MongoDB documentation tells us that we can retry the transaction when we get this error:
If an operation encounters an error, the returned error may have an errorLabels array field. If the error is a transient error, the errorLabels array field contains “TransientTransactionError” as an element and the transaction as a whole can be retried.
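As a minimal sketch (the helper name, the retry limit, and the way the error label is read are our own illustration, not part of the MongoDB API), a retry wrapper in the mongo shell could look like this:

// Hypothetical helper: re-run the whole transaction while the server
// labels the failure as transient. Assumes the shell surfaces the
// errorLabels array on the thrown error; otherwise inspect the raw
// command result.
function runWithRetry(session, txnFunc, maxRetries) {
    for (var attempt = 1; attempt <= maxRetries; attempt++) {
        session.startTransaction();
        try {
            txnFunc(session);
            session.commitTransaction();
            return;                                  // success
        } catch (e) {
            try { session.abortTransaction(); } catch (ignored) {}
            var labels = e.errorLabels || [];
            if (labels.indexOf("TransientTransactionError") === -1) {
                throw e;                             // not transient, give up
            }
            print("transient error, retrying (attempt " + attempt + ")");
        }
    }
    throw new Error("transaction failed after " + maxRetries + " attempts");
}

For example:

runWithRetry(db.getMongo().startSession(), function(s) {
    s.getDatabase("percona").test.insert({today : new Date()});
}, 3);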
MongoDB transactions: conflict
What about transaction conflicts in MongoDB? Let’s say two transactions try to update the same document. Here is the demo:
First we create a document with the field trx set to 0 in the collection:
use percona
db.test.insert({trx : 0})
Then we create session1 and update trx from 0 to 1:
foo:PRIMARY> session = db.getMongo().startSession()
session { "id" : UUID("0b7b8ce0-919a-401a-af01-69fe90876301") }
foo:PRIMARY> session.startTransaction()
foo:PRIMARY> session.getDatabase("percona").test.update({trx : 0}, {trx: 1})
WriteResult({ "nMatched" : 1, "nUpserted" : 0, "nModified" : 1 })
Then (before committing) create another session which will try to change from 0 to 2:
foo:PRIMARY> session = db.getMongo().startSession()
session { "id" : UUID("b312c662-247c-47c5-b0c9-23d77f4e9f6d") }
foo:PRIMARY> session.startTransaction()
foo:PRIMARY> session.getDatabase("percona").test.update({trx : 0}, {trx: 2})
WriteCommandError({
	"errorLabels" : [
		"TransientTransactionError"
	],
	"operationTime" : Timestamp(1529675754, 1),
	"ok" : 0,
	"errmsg" : "WriteConflict",
	"code" : 112,
	"codeName" : "WriteConflict",
	"$clusterTime" : {
		"clusterTime" : Timestamp(1529675754, 1),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	}
})
As we can see, MongoDB catches the conflict and returns the error on the update (even before the commit).
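A side note on the update calls above: passing a plain document as the second argument replaces the matched document entirely. In a real application you would more likely use an update operator such as $set, which changes only the named field:

// Same conflict demo, but only modifying the trx field:
session.getDatabase("percona").test.update({trx : 0}, {$set : {trx : 2}})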
We hope this post, with its simple example of how transactions work, has been useful. Feedback is welcome: you can comment here, catch Adamo on Twitter @AdamoTonete, or talk to the team at @percona.
Introduction into storage engine troubleshooting: Q & A
In this blog, I will provide answers to the Q & A for the “Introduction into storage engine troubleshooting” webinar.
First, I want to thank everybody for attending the July 14 webinar. The recording and slides for the webinar are available here. Below is the list of your questions that I wasn’t able to answer during the webinar, with responses:
Q: At which isolation level do pt-online-schema-change and pt-archiver copy data from a table?
A: Neither tool changes the server’s default transaction isolation level. Use either the default REPEATABLE READ or set the level you need in my.cnf.
Q: Can I create an index to optimize a query which has group by A and order by B, both from different tables, where the A column is from the first table in the two-table join?
A: Do you mean a query like SELECT ... FROM a, b GROUP BY a.A ORDER BY b.B? Yes, this is possible:
mysql> explain select A, B, count(*) from a join b on(a.A=b.id) WHERE b.B < 4 GROUP BY a.A, b.B ORDER BY b.B ASC;
+----+-------------+-------+-------+---------------+------+---------+-----------+------+------------------------------------------------------------+
| id | select_type | table | type  | possible_keys | key  | key_len | ref       | rows | Extra                                                      |
+----+-------------+-------+-------+---------------+------+---------+-----------+------+------------------------------------------------------------+
|  1 | SIMPLE      | b     | range | PRIMARY,B     | B    | 5       | NULL      |   15 | Using where; Using index; Using temporary; Using filesort  |
|  1 | SIMPLE      | a     | ref   | A             | A    | 5       | test.b.id |    1 | Using index                                                |
+----+-------------+-------+-------+---------------+------+---------+-----------+------+------------------------------------------------------------+
2 rows in set (0.00 sec)
Q: Where can I find recommendations on what kind of engine to use for different application types or use cases?
A: Storage engines are always being actively developed, therefore I suggest that you don’t search for generic recommendations. These can be outdated just a few weeks after they are written. Study engines instead. For example, just a few years ago MyISAM was the only engine (among those officially supported) that could work with FULLTEXT indexes and SPATIAL columns. Now InnoDB supports both: FULLTEXT indexes since version 5.6 and GIS features in 5.7. Today I can recommend InnoDB as a general-purpose engine for all installations, and TokuDB for write-heavy workloads when you cannot use high-speed disks.
Alternative storage engines can help to realize specific business needs. For example, CONNECT brings data to your server from many sources, SphinxSE talks to the Sphinx daemon, etc.
Other alternative storage engines increase the speed of certain workloads. Memory, for example, can be a good fit for temporary tables.
Q: Can you please explain how we find the full text of the query when we query the view ‘statements_with_full_table_scans’?
A: Do you mean a view in the sys schema? Sys schema views take information from the summary_* and digest tables in Performance Schema, and therefore do not contain full queries (only digests). The full text of a query can be found in the events_statements_* tables in Performance Schema. Note that even the events_statements_history_long table can be rewritten very quickly, and you may want to save data from it periodically.
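As a sketch (table and column names as documented for Performance Schema in MySQL 5.6+), a periodic job could pull the recent full statements like this:

-- Grab the most recent full statement texts before the history
-- table is rewritten:
SELECT THREAD_ID, EVENT_ID, SQL_TEXT
FROM performance_schema.events_statements_history_long
ORDER BY EVENT_ID DESC
LIMIT 10;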
Q: Hi, is TokuDB for the new document protocol?
A: As Alex Rubin showed in his detailed blog post, the new document protocol just converts NoSQL queries into SQL, and is thus not limited to any storage engine. To use documents and collections, a storage engine must support generated columns (which TokuDB currently does not). So support of X Protocol for TokuDB is limited to relational tables access.
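For reference, a collection created through the X Protocol is backed by an ordinary table built around a generated column, roughly like this simplified sketch (the real X Plugin DDL differs in details):

-- Simplified shape of an X Protocol collection (MySQL 5.7+ syntax);
-- the storage engine must support generated columns for this to work:
CREATE TABLE mycollection (
    doc JSON,
    _id VARCHAR(32) GENERATED ALWAYS AS
        (JSON_UNQUOTE(JSON_EXTRACT(doc, '$._id'))) STORED NOT NULL,
    PRIMARY KEY (_id)
) ENGINE=InnoDB;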
Q: Please comment on “read committed” versus “repeatable read.”
Q: Repeatable read holds the cursor on the result set for the client versus read committed where the cursor is updated after a transaction.
A: READ COMMITTED and REPEATABLE READ are transaction isolation levels, whose details are explained here. I would not correlate locks set on table rows in different transaction isolation modes with the result set. A transaction with isolation level REPEATABLE READ instead creates a snapshot of rows that are accessed by the transaction. Let’s consider a table:
mysql> create table ti(id int not null primary key, f1 int) engine=innodb;
Query OK, 0 rows affected (0.56 sec)

mysql> insert into ti values(1,1), (2,2), (3,3), (4,4), (5,5), (6,6), (7,7), (8,8), (9,9);
Query OK, 9 rows affected (0.03 sec)
Records: 9  Duplicates: 0  Warnings: 0
Then start the transaction and select a few rows from this table:
mysql1> begin;
Query OK, 0 rows affected (0.00 sec)

mysql1> select * from ti where id < 5;
+----+------+
| id | f1   |
+----+------+
|  1 |    1 |
|  2 |    2 |
|  3 |    3 |
|  4 |    4 |
+----+------+
4 rows in set (0.04 sec)
Now let’s update another set of rows in another transaction:
mysql2> update ti set f1 = id*2 where id > 5;
Query OK, 4 rows affected (0.06 sec)
Rows matched: 4  Changed: 4  Warnings: 0

mysql2> select * from ti;
+----+------+
| id | f1   |
+----+------+
|  1 |    1 |
|  2 |    2 |
|  3 |    3 |
|  4 |    4 |
|  5 |    5 |
|  6 |   12 |
|  7 |   14 |
|  8 |   16 |
|  9 |   18 |
+----+------+
9 rows in set (0.00 sec)
You see that the first four rows (which we accessed in the first transaction) were not modified, and the last four were modified. If InnoDB only saved the cursor (as someone answered above), we would expect to see the same result if we re-ran the SELECT * query in our old transaction, but it actually shows the whole table content as it was before the modification:
mysql1> select * from ti;
+----+------+
| id | f1   |
+----+------+
|  1 |    1 |
|  2 |    2 |
|  3 |    3 |
|  4 |    4 |
|  5 |    5 |
|  6 |    6 |
|  7 |    7 |
|  8 |    8 |
|  9 |    9 |
+----+------+
9 rows in set (0.00 sec)
So “snapshot” is a better word than “cursor” for the result set. In the case of READ COMMITTED, the first transaction would see the modified rows:
mysql1> drop table ti;
Query OK, 0 rows affected (0.11 sec)

mysql1> create table ti(id int not null primary key, f1 int) engine=innodb;
Query OK, 0 rows affected (0.38 sec)

mysql1> insert into ti values(1,1), (2,2), (3,3), (4,4), (5,5), (6,6), (7,7), (8,8), (9,9);
Query OK, 9 rows affected (0.04 sec)
Records: 9  Duplicates: 0  Warnings: 0

mysql1> set transaction isolation level read committed;
Query OK, 0 rows affected (0.00 sec)

mysql1> begin;
Query OK, 0 rows affected (0.00 sec)

mysql1> select * from ti where id < 5;
+----+------+
| id | f1   |
+----+------+
|  1 |    1 |
|  2 |    2 |
|  3 |    3 |
|  4 |    4 |
+----+------+
4 rows in set (0.00 sec)
Let’s update all rows in the table this time:
mysql2> update ti set f1 = id*2;
Query OK, 9 rows affected (0.04 sec)
Rows matched: 9  Changed: 9  Warnings: 0
Now the first transaction sees both the modified rows with id >= 5 (which were not in the initial result set) and the modified rows with id < 5 (which were in the initial result set):
mysql1> select * from ti;
+----+------+
| id | f1   |
+----+------+
|  1 |    2 |
|  2 |    4 |
|  3 |    6 |
|  4 |    8 |
|  5 |   10 |
|  6 |   12 |
|  7 |   14 |
|  8 |   16 |
|  9 |   18 |
+----+------+
9 rows in set (0.00 sec)
Implications of Metadata Locking Changes in MySQL 5.5
While most of the recent talk has been about the new changes in MySQL 5.6 (and that is understandable), I have lately had some very interesting cases to deal with concerning the Metadata Locking changes that were introduced in MySQL 5.5.3. It appears that the implications of Metadata Locking have not been covered well, and since a large number of MySQL 5.0 and 5.1 installations still have to upgrade or are in the process of upgrading to MySQL 5.5, I thought it necessary to discuss what these implications exactly are.
To learn what Metadata Locking exactly is, please read this section in the MySQL manual.
Let’s start off by having a look at the metadata locking behavior prior to MySQL 5.5.3.
Metadata Locking behavior prior to MySQL 5.5.3
Prior to MySQL 5.5.3, a statement that opened a table only held metadata locks until the end of the statement, not the end of the transaction. This meant that a transaction was not really isolated, because the same query could return different results if executed twice, with a DDL executed between the invocations. Let me give you an example via a simple test case, where I add a new column to the table while a transaction in REPEATABLE-READ isolation mode is active.
session1 > select @@tx_isolation;
+-----------------+
| @@tx_isolation  |
+-----------------+
| REPEATABLE-READ |
+-----------------+
1 row in set (0.00 sec)

session1 > start transaction;
Query OK, 0 rows affected (0.00 sec)

session1 > select * from test where id=1;
+----+------+
| id | x    |
+----+------+
|  1 | foo  |
+----+------+
1 row in set (0.00 sec)

session2 > ALTER TABLE test add column c char(32) default 'dummy_text';
Query OK, 2 rows affected (0.57 sec)
Records: 2  Duplicates: 0  Warnings: 0

session1 > select * from test where id=1;
Empty set (0.00 sec)

session1 > rollback;
Query OK, 0 rows affected (0.00 sec)

session1 > start transaction;
Query OK, 0 rows affected (0.00 sec)

session1 > select * from test where id=1;
+----+------+------------+
| id | x    | c          |
+----+------+------------+
|  1 | foo  | dummy_text |
+----+------+------------+
1 row in set (0.00 sec)
And you can see how isolation is broken, because the SELECT was not repeatable although the REPEATABLE-READ transaction isolation level was used. This behavior of versions prior to 5.5 also means that queries could be written to the binary log in a different order, breaking locking semantics and contrary to serialization concepts. For example, take a look at the following excerpt from the binary log of a case where an UPDATE transaction is mixed with an ALTER:
# at 536
#130201 11:21:03 server id 1  end_log_pos 658  Query  thread_id=8  exec_time=0  error_code=0
SET TIMESTAMP=1359714063/*!*/;
ALTER TABLE test add column id_2 int(11) default 0 after id
/*!*/;
# at 658
#130201 11:21:39 server id 1  end_log_pos 726  Query  thread_id=7  exec_time=0  error_code=0
SET TIMESTAMP=1359714099/*!*/;
BEGIN
/*!*/;
# at 726
# at 773
#130201 11:21:35 server id 1  end_log_pos 773  Table_map: `test`.`test` mapped to number 17
#130201 11:21:35 server id 1  end_log_pos 829  Update_rows: table id 17 flags: STMT_END_F

BINLOG '
L5cLURMBAAAALwAAAAUDAAAAABEAAAAAAAEABHRlc3QABHRlc3QAAwMD/gL+CQY=
L5cLURgBAAAAOAAAAD0DAAAAABEAAAAAAAEAA///+AIAAAAAAAAAA2JhcvgCAAAAAAAAAANob3A=
'/*!*/;
### UPDATE test.test
### WHERE
###   @1=2 /* INT meta=0 nullable=0 is_null=0 */
###   @2=0 /* INT meta=0 nullable=1 is_null=0 */
###   @3='bar' /* STRING(9) meta=65033 nullable=1 is_null=0 */
### SET
###   @1=2 /* INT meta=0 nullable=0 is_null=0 */
###   @2=0 /* INT meta=0 nullable=1 is_null=0 */
###   @3='hop' /* STRING(9) meta=65033 nullable=1 is_null=0 */
# at 829
#130201 11:21:39 server id 1  end_log_pos 856  Xid = 85
COMMIT/*!*/;
Note how the ALTER is logged before the UPDATE, because the ALTER did not block waiting for the transaction to commit.
For the reasons described above, the implementation of Metadata Locking was changed starting with MySQL 5.5.3. Let’s see how this works now.
Metadata Locking behavior starting with MySQL 5.5.3
Starting with 5.5.3, DDL statements that modify table metadata are executed in an isolated fashion consistent with transactional behavior. This means that any open transaction holds metadata locks on the tables it has accessed for as long as the transaction is open. Since an open transaction retains metadata locks on all tables it has opened, no DDL operation can commence until all transactions that accessed the table have finished. Let’s see this in effect via a simple test case:
session1 > start transaction;
Query OK, 0 rows affected (0.00 sec)

session1 > select * from test order by id;
+----+------+
| id | x    |
+----+------+
|  1 | foo  |
|  2 | bar  |
+----+------+
2 rows in set (0.00 sec)

session2 > ALTER TABLE test add column c char(32) default 'dummy_text';

session3 > show processlist;
+----+----------+-----------+------+---------+------+---------------------------------+--------------------------------------------------------------+
| Id | User     | Host      | db   | Command | Time | State                           | Info                                                         |
+----+----------+-----------+------+---------+------+---------------------------------+--------------------------------------------------------------+
|  1 | msandbox | localhost | test | Sleep   |  140 |                                 | NULL                                                         |
|  2 | msandbox | localhost | test | Query   |    3 | Waiting for table metadata lock | ALTER TABLE test add column c char(32) default 'dummy_text'  |
|  3 | msandbox | localhost | test | Query   |    0 | NULL                            | show processlist                                             |
+----+----------+-----------+------+---------+------+---------------------------------+--------------------------------------------------------------+
3 rows in set (0.00 sec)
You can see how the ALTER blocks because the transaction in session1 is still open. Once the transaction in session1 is closed, the ALTER proceeds successfully:
session1 > rollback;
Query OK, 0 rows affected (0.00 sec)

session2 > ALTER TABLE test add column c char(32) default 'dummy_text';
Query OK, 2 rows affected (46.77 sec)
Records: 2  Duplicates: 0  Warnings: 0
Let’s see where the ALTER spent most of its time:
session2 > show profiles;
+----------+-------------+--------------------------------------------------------------+
| Query_ID | Duration    | Query                                                        |
+----------+-------------+--------------------------------------------------------------+
|        1 | 46.78110075 | ALTER TABLE test add column c char(32) default 'dummy_text'  |
+----------+-------------+--------------------------------------------------------------+
1 row in set (0.00 sec)

session2 > show profile for query 1;
+------------------------------+-----------+
| Status                       | Duration  |
+------------------------------+-----------+
| starting                     |  0.000060 |
| checking permissions         |  0.000003 |
| checking permissions         |  0.000003 |
| init                         |  0.000005 |
| Opening tables               |  0.000045 |
| System lock                  |  0.000006 |
| setup                        |  0.000016 |
| creating table               |  0.168283 |
| After create                 |  0.000061 |
| copy to tmp table            |  0.165808 |
| rename result table          | 46.446738 |
| end                          |  0.000035 |
| Waiting for query cache lock |  0.000003 |
| end                          |  0.000006 |
| query end                    |  0.000003 |
| closing tables               |  0.000008 |
| freeing items                |  0.000016 |
| cleaning up                  |  0.000004 |
+------------------------------+-----------+
18 rows in set (0.00 sec)
So the ALTER waited on the metadata locks just after the table with the new structure had been created and populated with data, but before the old table was swapped with the new one. Note that ALTER is a multi-step process: the old table is locked in shared mode, and then something similar to the following steps is taken: a new table with the new structure is created, then INSERT INTO new_table SELECT * FROM old_table is done, then RENAME old_table TO tmp_table, new_table TO old_table, and finally DROP tmp_table.
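Spelled out as SQL, the copy algorithm looks roughly like this (a simplified sketch using the demo table’s columns id and x, not the server’s literal internal steps):

-- 1. Create a table with the new structure:
CREATE TABLE new_test (id INT, x CHAR(9), c CHAR(32) DEFAULT 'dummy_text');
-- 2. Copy the rows; the old table stays readable during this phase:
INSERT INTO new_test (id, x) SELECT id, x FROM test;
-- 3. Swap the tables; this is the step that needs the metadata lock:
RENAME TABLE test TO tmp_test, new_test TO test;
-- 4. Drop the old copy:
DROP TABLE tmp_test;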
Let’s see another example, this time trying a RENAME:
session2 > RENAME TABLE test to test_2;

session3 > show processlist;
+----+----------+-----------+------+---------+------+---------------------------------+-----------------------------+
| Id | User     | Host      | db   | Command | Time | State                           | Info                        |
+----+----------+-----------+------+---------+------+---------------------------------+-----------------------------+
|  1 | msandbox | localhost | test | Sleep   |   49 |                                 | NULL                        |
|  2 | msandbox | localhost | test | Query   |   35 | Waiting for table metadata lock | RENAME TABLE test to test_2 |
|  3 | msandbox | localhost | test | Query   |    0 | NULL                            | show processlist            |
+----+----------+-----------+------+---------+------+---------------------------------+-----------------------------+
3 rows in set (0.00 sec)
And you can see that the RENAME is also blocked, because a transaction that accessed the table “test” is still open.
So we come to an interesting conclusion: the ALTER waits only at the last stage, when it is making changes to the table metadata. An ALTER on a big table can keep executing without any hindrance, copying rows from the table with the old structure to the table with the new structure, and will only wait at the last step, when it is about to change the table metadata.
Let’s see another interesting side effect of metadata locking.
When can ALTER render the table inaccessible?
Now there is another interesting side effect: when the ALTER reaches the state where it needs to wait for metadata locks, it blocks any type of query against the table. We know that writes would be blocked anyhow for the entire duration of the ALTER, but reads are blocked as well while the ALTER is waiting for metadata locks. Let’s see this in action via another test case:
session1 > start transaction;
Query OK, 0 rows affected (0.00 sec)

session1 > select * from test_2 order by id;
+----+------+------------+
| id | x    | c          |
+----+------+------------+
|  1 | foo  | dummy_text |
|  2 | bar  | dummy_text |
+----+------+------------+
2 rows in set (0.00 sec)

session6 > ALTER TABLE test_2 DROP COLUMN c;

session7 > select * from test_2 order by id;
session8 > select * from test_2 order by id;
session9 > select * from test_2 order by id;
session10 > select * from test_2 order by id;

session3 > show processlist;
+----+----------+-----------+------+---------+------+---------------------------------+----------------------------------+
| Id | User     | Host      | db   | Command | Time | State                           | Info                             |
+----+----------+-----------+------+---------+------+---------------------------------+----------------------------------+
|  1 | msandbox | localhost | test | Sleep   |  403 |                                 | NULL                             |
|  3 | msandbox | localhost | test | Query   |    0 | NULL                            | show processlist                 |
|  6 | msandbox | localhost | test | Query   |  229 | Waiting for table metadata lock | ALTER TABLE test_2 DROP COLUMN c |
|  7 | msandbox | localhost | test | Query   |  195 | Waiting for table metadata lock | select * from test_2 order by id |
|  8 | msandbox | localhost | test | Query   |  180 | Waiting for table metadata lock | select * from test_2 order by id |
|  9 | msandbox | localhost | test | Query   |  169 | Waiting for table metadata lock | select * from test_2 order by id |
| 10 | msandbox | localhost | test | Query   |   55 | Waiting for table metadata lock | select * from test_2 order by id |
+----+----------+-----------+------+---------+------+---------------------------------+----------------------------------+
7 rows in set (0.00 sec)
And you can see that the table is blocked for any kind of operation. Let’s see the profiling information for one of the queries that was blocked to see where the query spent most of its time:
session10 > show profile for query 1;
+----------------------+------------+
| Status               | Duration   |
+----------------------+------------+
| starting             |   0.000058 |
| checking permissions |   0.000006 |
| Opening tables       | 213.028481 |
| System lock          |   0.000009 |
| init                 |   0.000014 |
| optimizing           |   0.000002 |
| statistics           |   0.000005 |
| preparing            |   0.000006 |
| executing            |   0.000001 |
| Sorting result       |   0.000002 |
| Sending data         |   0.000040 |
| end                  |   0.000003 |
| query end            |   0.000002 |
| closing tables       |   0.000003 |
| freeing items        |   0.000007 |
| logging slow query   |   0.000002 |
| cleaning up          |   0.000002 |
+----------------------+------------+
17 rows in set (0.00 sec)
And you can see how the query spent nearly all its time waiting in the “Opening tables” state. This behavior, with ALTER making the table inaccessible in some cases, is not really documented, and as such I have reported a bug: http://bugs.mysql.com/bug.php?id=67647
Metadata locking behaves differently for queries that are serviced from the Query Cache; let’s see what happens in that case.
Metadata Locking and Query Cache
How does metadata locking behave with the query cache? That is an important question. If the Query Cache is enabled and the SELECT can be serviced from it, then the SELECT will not block on the ALTER even though the ALTER is waiting for metadata locks. Why? Because in such a case no table open operation has to be performed. Let’s see this scenario via a test case:
session1 > start transaction;
Query OK, 0 rows affected (0.00 sec)

session1 > select * from test_2 order by id;
+----+------+
| id | x    |
+----+------+
|  1 | foo  |
|  2 | bar  |
+----+------+
2 rows in set (0.00 sec)

session6 > RENAME TABLE test_2 to test;

session10 > select * from test_2 order by id;
+----+------+
| id | x    |
+----+------+
|  1 | foo  |
|  2 | bar  |
+----+------+
2 rows in set (0.00 sec)

session3 > show processlist;
+----+----------+-----------+------+---------+------+---------------------------------+-----------------------------+
| Id | User     | Host      | db   | Command | Time | State                           | Info                        |
+----+----------+-----------+------+---------+------+---------------------------------+-----------------------------+
|  1 | msandbox | localhost | test | Sleep   |   22 |                                 | NULL                        |
|  3 | msandbox | localhost | test | Query   |    0 | NULL                            | show processlist            |
|  6 | msandbox | localhost | test | Query   |    3 | Waiting for table metadata lock | RENAME TABLE test_2 to test |
| 10 | msandbox | localhost | test | Sleep   |   37 |                                 | NULL                        |
+----+----------+-----------+------+---------+------+---------------------------------+-----------------------------+
4 rows in set (0.00 sec)
The query proceeds without being blocked on anything while the RENAME is still waiting for metadata locks. Let’s see the profiling information for this query:
session10 > show profile for query 11;
+--------------------------------+----------+
| Status                         | Duration |
+--------------------------------+----------+
| starting                       | 0.000018 |
| Waiting for query cache lock   | 0.000003 |
| checking query cache for query | 0.000007 |
| checking privileges on cached  | 0.000003 |
| checking permissions           | 0.000005 |
| sending cached result to clien | 0.000011 |
| logging slow query             | 0.000002 |
| cleaning up                    | 0.000002 |
+--------------------------------+----------+
8 rows in set (0.00 sec)
You can see that no table open operation was performed and hence no wait.
Does the fact that the table has already been opened and its table object is in the table cache change anything with respect to metadata locks?
Metadata Locking and Table Cache
Even if a connection accesses a table that is already in the Table Cache, any query against a table that has a pending DDL operation will block. Why? Because MySQL sees that the old entries in the Table Cache have to be invalidated: any query that accesses the table will have to reopen the modified table, and there will be new entries in the Table Cache. Let’s see this phenomenon in action:
session6 > ALTER TABLE test add column c char(32) default 'dummy_text';
Query OK, 2 rows affected (59.80 sec)

session10 > show status like 'Open%tables';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| Open_tables   | 30    |
| Opened_tables | 0     |
+---------------+-------+
2 rows in set (0.00 sec)

session10 > select * from test order by id;
+----+------+------------+
| id | x    | c          |
+----+------+------------+
|  1 | foo  | dummy_text |
|  2 | bar  | dummy_text |
+----+------+------------+
2 rows in set (53.78 sec)

session10 > show status like 'Open%tables';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| Open_tables   | 30    |
| Opened_tables | 1     |
+---------------+-------+
2 rows in set (0.00 sec)

session10 > show profile for query 18;
+----------------------+-----------+
| Status               | Duration  |
+----------------------+-----------+
| starting             |  0.000059 |
| checking permissions |  0.000010 |
| Opening tables       | 53.786685 |
| System lock          |  0.000009 |
| init                 |  0.000012 |
| optimizing           |  0.000003 |
| statistics           |  0.000007 |
| preparing            |  0.000006 |
| executing            |  0.000001 |
| Sorting result       |  0.000004 |
| Sending data         |  0.000033 |
| end                  |  0.000003 |
| query end            |  0.000002 |
| closing tables       |  0.000004 |
| freeing items        |  0.000009 |
| logging slow query   |  0.000002 |
| cleaning up          |  0.000002 |
+----------------------+-----------+
17 rows in set (0.00 sec)
As you can see, the SELECT query still blocks, and the status counter Opened_tables is incremented once the query finishes.
So much for the explanation, now let’s take a look at the consequences.
Consequences
The consequence of these changes in metadata locking is that if you have some really hot tables (in web applications, for example, it is typical to see a “sessions” table that is accessed on every request), care should be taken when you have to ALTER the table. Otherwise it can easily cause a stall, as many threads pile up waiting for the table metadata lock, bringing down the MySQL server or depleting all the available connections.
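One practical precaution (a sketch; the INNODB_TRX table is available from MySQL 5.5 on) is to check for long-running transactions before issuing a DDL against a hot table:

-- Transactions open for more than 60 seconds; a copying ALTER issued
-- now would queue behind their metadata locks and block other queries:
SELECT trx_id, trx_started, trx_mysql_thread_id
FROM information_schema.INNODB_TRX
WHERE trx_started < NOW() - INTERVAL 60 SECOND;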
There are some other interesting consequences as well for applications that were developed against MySQL versions prior to 5.5:
- I remember a customer case with a reporting slave that daily runs a long transaction, one that tends to run for hours. Every day, one of the tables read by that long-running transaction was renamed and swapped. When the slave tried to execute the rename query, it would simply block, waiting for the long-running transaction to finish; this would cause the slave to lag for hours waiting for the transaction to complete (as you know, the slave is single-threaded, so it cannot apply any other event in the meantime). This was never an issue when the application was using a MySQL version earlier than 5.5 as the datastore.
- There was another interesting case, this time with how ActiveMQ uses MySQL in HA mode. In HA mode there are two ActiveMQ servers, and both try to do something similar to the following sequence of events:
session1 > CREATE TABLE t1(i int(11) not null auto_increment primary key) ENGINE=InnoDB;
Query OK, 0 rows affected (0.09 sec)

session1 > insert into t1 values(null);
Query OK, 1 row affected (0.21 sec)

session1 > start transaction;
Query OK, 0 rows affected (0.00 sec)

session1 > select * from t1 where i=1 for update;
+---+
| i |
+---+
| 1 |
+---+
1 row in set (0.00 sec)

session2 > CREATE TABLE t1(i int(11) not null auto_increment primary key) ENGINE=InnoDB;
When using MySQL 5.1, the second CREATE would just fail immediately with the error “ERROR 1050 (42S01): Table ‘t1’ already exists”, but because of how metadata locking works in 5.5 this is no longer the case: the second CREATE simply blocks, with unintended consequences. A workaround here is to set the lock_wait_timeout variable to a very low value and then execute the CREATE TABLE; this makes sure that the CREATE fails immediately (although for a different reason):
session2 > set session lock_wait_timeout=1;
session2 > CREATE TABLE t1(i int(11) not null auto_increment primary key) ENGINE=InnoDB;
However, I feel that the CREATE TABLE should fail in such a case, when the table already exists and there is no other DDL (such as a DROP TABLE) waiting to run on the same table, and as such I have reported a bug: http://bugs.mysql.com/bug.php?id=67873