Sep 25, 2017

Avoid Shared Locks from Subqueries When Possible

In this blog post, we’ll look at how to avoid shared locks from subqueries.

I’m pretty sure most of you have seen an UPDATE statement matching rows returned from a SELECT query:

update ibreg set k=1 where id in (select id from ibcmp where id > 90000);
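If you want to follow along, the examples assume two minimal InnoDB tables along these lines (a guessed schema, since the original definitions are not shown; populate them with ids beyond 90000):

mysql1> create table ibcmp (id int unsigned not null primary key) engine=innodb;
mysql1> create table ibreg (id int unsigned not null primary key, k int not null default 0) engine=innodb;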

The UPDATE above, when executed with autocommit=1, is normally harmless: the statement is a transaction of its own, so the shared locks it takes are released as soon as the statement commits. However, it can have bad effects when combined with other statements in the same transaction, which keeps holding the shared locks taken by the SELECT subquery. But first, let me explain why the SELECT would hold locks in the first place.

Due to InnoDB’s ACID properties, the server has to acquire a shared lock on the rows matched by the SELECT subquery so that the outer UPDATE statement has a consistent view of them: no other thread may modify those rows while the transaction holds the locks. To demonstrate, let’s take two transactions executed in the specific order below:

mysql1> begin;
mysql1> update ibreg set k=1 where id in (select id from ibcmp where id > 90000);
mysql2> begin;
mysql2> delete from ibcmp where id > 90000;

As soon as the second session executes its DELETE, it blocks in a LOCK WAIT state (as confirmed from INFORMATION_SCHEMA):

mysql1> select * from information_schema.innodb_trx \G
*************************** 1. row ***************************
                    trx_id: 3932449
                 trx_state: LOCK WAIT
               trx_started: 2017-09-06 00:20:05
     trx_requested_lock_id: 3932449:13:1354:31
          trx_wait_started: 2017-09-06 00:20:05
                trx_weight: 2
       trx_mysql_thread_id: 9
                 trx_query: delete from test.ibcmp where id > 90000
       trx_operation_state: starting index read
...
mysql1> select * from information_schema.innodb_locks \G
*************************** 1. row ***************************
    lock_id: 3932449:13:1354:31
lock_trx_id: 3932449
  lock_mode: X
  lock_type: RECORD
 lock_table: `test`.`ibcmp`
 lock_index: PRIMARY
 lock_space: 13
  lock_page: 1354
   lock_rec: 31
  lock_data: 90001
*************************** 2. row ***************************
    lock_id: 3932174:13:1354:31
lock_trx_id: 3932174
  lock_mode: S
  lock_type: RECORD
 lock_table: `test`.`ibcmp`
 lock_index: PRIMARY
 lock_space: 13
  lock_page: 1354
   lock_rec: 31
  lock_data: 90001

INFORMATION_SCHEMA.INNODB_LOCKS confirms that our first transaction holds a shared lock on the rows matched by its SELECT subquery. This can be bad for a number of reasons:

  1. As the number of rows matched by the SELECT grows, deadlocks and lock wait timeouts become more frequent (see the example after this list)
  2. As a consequence, ROLLBACKs also increase, and they are expensive operations
  3. Your users can become unhappy, especially if the errors are not handled gracefully by the application
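For example, once the second session has waited longer than innodb_lock_wait_timeout (50 seconds by default), its statement fails with error 1205; if the two sessions end up waiting on each other instead, InnoDB aborts one of them with error 1213:

mysql2> delete from ibcmp where id > 90000;
ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction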

If you really need the view to stay consistent between the table being read and the table being updated, the lock is necessary and unavoidable. In that case deadlocks and lock wait timeouts can be minimized, but not totally avoided.
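One way to soften the impact, assuming your application can retry failed statements, is to make blocked sessions give up quickly instead of queueing for the default 50 seconds:

mysql2> set session innodb_lock_wait_timeout = 5;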

On the other hand, if you’re not worried about view consistency, there are two ways you can avoid such problems: by using variables, or by making sure the SELECT becomes a transient read inside the transaction (i.e., by dumping the results into an OUTFILE). Here is the variable-based approach:

mysql1> begin;
mysql1> select group_concat(id) into @ids from ibcmp where id > 90000;
mysql1> update ibreg set k=1 where find_in_set(id, @ids);
mysql2> begin;
mysql2> delete from ibcmp where id > 90000;

The first method matches the IDs with FIND_IN_SET, because IN (@ids) would compare id against the whole concatenated string as a single value. It is bound by the group_concat_max_len variable: if you expect only a small number of resulting IDs, small enough to fit into group_concat_max_len, this is a good solution.
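You can check the current limit (1024 bytes by default) and raise it for the session if necessary:

mysql1> select @@group_concat_max_len;
mysql1> set session group_concat_max_len = 1024 * 1024;

If the ID list can outgrow any sensible limit, the OUTFILE-based approach below is safer: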

mysql1> begin;
mysql1> select id into outfile '/tmp/id.csv' from ibcmp where id > 90000;
mysql1> create temporary table t (id int unsigned not null) engine=innodb;
mysql1> load data infile '/tmp/id.csv' into table t;
mysql1> update ibreg inner join t on ibreg.id = t.id set ibreg.k=1;
mysql2> begin;
mysql2> delete from ibcmp where id > 90000;

The second approach is only meant to overcome the limitation of the GROUP_CONCAT method.
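Note that SELECT ... INTO OUTFILE and LOAD DATA INFILE require the FILE privilege, and recent MySQL versions restrict file reads and writes to the directory named by secure_file_priv, so check that setting before choosing a path:

mysql1> select @@secure_file_priv;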

Again, these two approaches only work if you do not care whether the rows matched by the SELECT change in the other table between the BEGIN statement and the UPDATE within the transaction.
