Mar 29, 2012

Percona XtraBackup 1.9.2 released (Beta)

Percona is glad to announce the release of Percona XtraBackup 1.9.2 on March 28th, 2012. Downloads are available from our download site here. This is a late BETA release on the way to Percona XtraBackup 2.0.0.

This is a BETA-quality release and is not intended for production use. If you want a high-quality, Generally Available release, you should use the current Stable version, which at the time of writing is 1.6.5 in the 1.6 series.

The 1.9.x version numbers will be used to distinguish pre-release versions of Percona XtraBackup 2.0 from the Generally Available final release.

If you wish to run pre-release Percona software, you can use our experimental software repositories or download packages from our normal download site.

The full release notes can be seen in our online documentation.

Mar 29, 2012

Percona Server 5.5.21-25.1 released!

Percona is glad to announce the release of Percona Server 5.5.21-25.1 on March 30, 2012 (Downloads are available here and from the Percona Software Repositories).

Based on MySQL 5.5.21, including all the bug fixes in it, Percona Server 5.5.21-25.1 is now the current stable release in the 5.5 series. All of Percona's software is open source and free; all the details of the release can be found in the 5.5.21-25.1 milestone at Launchpad.

We are doing another release based on MySQL 5.5.21 as we have a couple of important bug fixes that we wish to make available.

Bugs Fixed:

  • Fixed a memory corruption regression introduced in 5.5.18-23.0. Bug fixed #915814 (Alexey Kopytov).
  • Fixed InnoDB compilation warnings on CentOS 5. Bug fixed #962940 (Laurynas Biveinis).
  • Fixed MySQL upstream bug #64160 that was causing issues on upgrade to 5.5.20 and 5.5.21. Bug fixed #966844 (Stewart Smith).

Full release notes are available in the documentation.

Mar 29, 2012

Introducing the pmp-check-mysql-status Nagios Plugin

Most of the Nagios plugins included with the Percona Monitoring Plugins are purpose-built for specific scenarios that I found in my research to be frequent, severe, or subtle causes of problems in MySQL systems. They are intentionally not generic because I wanted to focus on doing one thing with each plugin, and doing it excellently. This makes the plugins easier to set up and use, and I believe it should help avoid some of the common problems such as spammy alerts.

There is one “generic, flexible” plugin, however, which you can use for fairly arbitrary checks of status counters and variables. This is the pmp-check-mysql-status plugin, which can perform computations on status variables. By default, I recommend using it only for two purposes:

  • Check whether the Uptime variable is too small, indicating the server has been restarted
  • Check whether Threads_connected is approaching max_connections, which can be inconvenient to resolve in some environments once the limit is reached (a rough SQL equivalent of this check is sketched below)

However, there’s no reason not to use the plugin for other purposes if you want. Here are some of the things you could do:

  • Compare a counter to a threshold
  • Compare a counter to a variable or another counter
  • Add, subtract, multiply, and divide counters or variables
  • Transform a ratio into a percentage
  • Compute the rate of change over time for one or more counters (e.g. queries per second)

The documentation contains examples of how to do all of these things, if you so desire.
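
As a concrete illustration of the second recommended check, here is a rough SQL equivalent of what a Threads_connected versus max_connections comparison computes. This is only a sketch of the underlying arithmetic; the plugin reads the same status and system variables and applies whatever warning and critical thresholds you configure.

-- Express the current number of connections as a percentage of the configured
-- limit (MySQL 5.1/5.5 expose status counters in INFORMATION_SCHEMA).
SELECT gs.VARIABLE_VALUE                                            AS threads_connected,
       @@global.max_connections                                     AS max_connections,
       ROUND(100 * gs.VARIABLE_VALUE / @@global.max_connections, 1) AS pct_used
FROM INFORMATION_SCHEMA.GLOBAL_STATUS gs
WHERE gs.VARIABLE_NAME = 'Threads_connected';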

Mar 29, 2012

Congratulations to Percona Live Ticket Winners!

It’s taken a little while to sort out the contest logistics, but we’re ready now, so I want to congratulate the three first-prize winners of our Percona Live MySQL Conference and Expo contest:

  • David Juntgen
  • Jeff Hansen
  • Domenick Petrella

For helping us to promote the conference, they’re receiving full conference passes, including tutorials. I hope you enjoy the conference, guys, and look forward to seeing you there!

Our ten second-prize winners receive a copy of High Performance MySQL Third Edition, and I’m still working through that.

By the way, if you’re not registered for the conference yet, a lot of things are getting sold out, so you might want to just go ahead and do that. Yeahhhh. Some tutorials are full. The hotel is long since sold out, although there’s apparently still space at the Hilton across the road, and maybe at the Joie De Vivre Avatar down the street. As a further incentive for signing up soon, we’re giving away more books and Kindle Fires to those who register today.

Mar 27, 2012

InnoDB’s gap locks

One of the most important features of InnoDB is row-level locking. This feature provides better concurrency under heavy write load, but it needs additional precautions to avoid phantom reads and to keep statement-based replication consistent. To accomplish that, row-level locking databases also acquire gap locks.

What is a Phantom Read

A phantom read happens when, within a running transaction, two identical statements return different results because some other transaction has modified the table’s rows in the meantime. For example:


transaction1> START TRANSACTION;
transaction1> SELECT * FROM t WHERE i > 20 FOR UPDATE;
+------+
| i    |
+------+
|   21 |
|   25 |
|   30 |
+------+


transaction2> START TRANSACTION;
transaction2> INSERT INTO t VALUES(26);
transaction2> COMMIT;

transaction1> select * from t where i > 20 FOR UPDATE;
+------+
| i    |
+------+
|   21 |
|   25 |
|   26 |
|   30 |
+------+

Phantom reads do not occur if you’re only doing plain SELECTs. They only occur if you do UPDATE, DELETE or SELECT ... FOR UPDATE. InnoDB provides REPEATABLE READ semantics for read-only SELECTs, but for all writing queries it behaves as if you were using READ COMMITTED, regardless of your chosen transaction isolation level (considering only the two most common isolation levels, REPEATABLE READ and READ COMMITTED).

What is a gap lock?

A gap lock is a lock on the gap between index records. Thanks to this gap lock, when you run the same query twice you get the same result, regardless of modifications made to that table by other sessions. This makes reads consistent and therefore makes the replication between servers consistent. If you execute SELECT * FROM t WHERE id > 1000 FOR UPDATE twice, you expect to get the same result both times. To accomplish that, InnoDB locks all index records found by the WHERE clause with an exclusive lock and locks the gaps between them with a shared gap lock.

These locks don’t affect only SELECT ... FOR UPDATE. Here is an example with a DELETE statement:

transaction1 > SELECT * FROM t;
+------+
| age  |
+------+
|   21 |
|   25 |
|   30 |
+------+

Start a transaction and delete the record 25:

transaction1 > START TRANSACTION;
transaction1 > DELETE FROM t WHERE age=25;

At this point one might assume that only the record with value 25 is locked. Then we try to insert other values from the second session:

transaction2 > START TRANSACTION;
transaction2 > INSERT INTO t VALUES(26);
ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction
transaction2 > INSERT INTO t VALUES(29);
ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction
transaction2 > INSERT INTO t VALUES(23);
ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction
transaction2 > INSERT INTO t VALUES(31);
Query OK, 1 row affected (0.00 sec)

After running the DELETE statement in the first session, not only has the affected index record been locked, but also the gaps before and after it, preventing other sessions from inserting values into those ranges.

How to troubleshoot gap locks?

It is possible to detect these gap locks using SHOW ENGINE INNODB STATUS:

---TRANSACTION 72C, ACTIVE 755 sec
4 lock struct(s), heap size 1248, 3 row lock(s), undo log entries 1
MySQL thread id 3, OS thread handle 0x7f84a78ba700, query id 163 localhost msandbox
TABLE LOCK table `test`.`t` trx id 72C lock mode IX
RECORD LOCKS space id 19 page no 4 n bits 80 index `age` of table `test`.`t` trx id 72C lock_mode X
RECORD LOCKS space id 19 page no 3 n bits 80 index `GEN_CLUST_INDEX` of table `test`.`t` trx id 72C lock_mode X locks rec but not gap
RECORD LOCKS space id 19 page no 4 n bits 80 index `age` of table `test`.`t` trx id 72C lock_mode X locks gap before rec
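
If you prefer querying tables to parsing that output, InnoDB also exposes lock information through INFORMATION_SCHEMA. The sketch below assumes MySQL 5.5; note that these tables are only populated while one transaction is actually waiting on another (for example, while transaction2's INSERT above is blocked):

-- Show which thread is waiting on which, and on what kind of lock.
SELECT r.trx_mysql_thread_id AS waiting_thread,
       b.trx_mysql_thread_id AS blocking_thread,
       l.lock_table, l.lock_index, l.lock_mode
FROM INFORMATION_SCHEMA.INNODB_LOCK_WAITS w
JOIN INFORMATION_SCHEMA.INNODB_TRX   r ON r.trx_id  = w.requesting_trx_id
JOIN INFORMATION_SCHEMA.INNODB_TRX   b ON b.trx_id  = w.blocking_trx_id
JOIN INFORMATION_SCHEMA.INNODB_LOCKS l ON l.lock_id = w.requested_lock_id;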

If your transactions take a lot of gap locks, hurting concurrency and performance, you can disable them in two different ways:

1. Change the isolation level to READ COMMITTED. At this isolation level it is normal and expected that query results can change during a transaction, so there is no need for gap locks to prevent that from happening.
2. Set innodb_locks_unsafe_for_binlog = 1. This disables gap locks except for foreign-key constraint checking and duplicate-key checking.

The most important difference between these two options is that the second is a global variable that affects all sessions and requires a server restart to change. Both options allow phantom reads (non-repeatable reads), so to avoid replication problems you should also change the binary log format to “row”.
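
For example, applying the first option for a single session could look roughly like this (a sketch; setting binlog_format per session requires the SUPER privilege, and the equivalent my.cnf settings are transaction-isolation = READ-COMMITTED and binlog-format = ROW):

-- Use READ COMMITTED for this session only, and log its changes row-based so
-- that the relaxed locking does not break replication.
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
SET SESSION binlog_format = 'ROW';

START TRANSACTION;
DELETE FROM t WHERE age = 25;  -- locks the matching record, but not the gaps
COMMIT;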

Depending on the statement, the behavior of these locks can differ. The following link is a good source of information:

http://dev.mysql.com/doc/refman/5.1/en/innodb-locks-set.html.

Conclusion

MySQL uses REPEATABLE READ as the default isolation level, so it needs to lock the index records and the gaps to avoid phantom reads and to keep statement-based replication consistent. If your application can deal with phantom reads and your binary log is in row format, changing the isolation level to READ COMMITTED will help you avoid all those extra locks. As a final piece of advice, keep your transactions short :)

Mar 26, 2012

How to diagnose errors in the MySQL error log

I frequently see questions about how to understand the nature of errors in MySQL’s error log. Now, there is a lot of complexity to this — the flowchart would be quite large, as with any nontrivial piece of software. But there is one particular class of errors that is relatively easy to diagnose, if you pay close attention to the error message.

Often an error has a little number in it, as in this example from our forums:


120326 16:56:45 [ERROR] /usr/sbin/mysqld: Incorrect key file for table '/tmp/#sql_21b2_0.MYI'; try to repair it
120326 16:56:45 [ERROR] Got an error from unknown thread, storage/myisam/mi_write.c:223
120326 16:56:45 [ERROR] /usr/sbin/mysqld: Sort aborted: Error writing file '/tmp/MYK74Kpi' (Errcode: 28)

The gem in all of that mess is this bit: (Errcode: 28). Now, MySQL includes a little perror utility to translate this into something a human can understand:

$ perror 28
OS error code  28:  No space left on device

That’s the key to understanding what really happened. MySQL created a temporary table, using the MyISAM storage engine, and when the disk filled up, MyISAM got an error. Little-known secret: MyISAM doesn’t handle a disk-full error gracefully ;-)

Sometimes I’ve seen people having trouble with error codes that they don’t believe. Good examples are error 13 (permission denied) or 2 (No such file or directory). The typical reaction is “but it certainly has permission!” or “I can see the file/directory myself, of course it exists!” In cases such as these you need to dig more deeply. The error code is not wrong, I promise you. If it says permission is denied, then permission is denied; there is no bug — that error comes from the operating system. Typical causes are things like SELinux or AppArmor interfering.
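
If you want to double-check what those codes mean on your own system, the same perror utility translates them; the output below is what perror typically prints, with the exact wording coming from your operating system’s error strings:

$ perror 13
OS error code  13:  Permission denied
$ perror 2
OS error code   2:  No such file or directory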

So, if you see an error code, try to diagnose with that first. It’s often the easiest route to a solution.

PS: You may wish to get a copy of MySQL Troubleshooting by Sveta Smirnova — I have not read it yet, but I’ve heard really good things about it.

Mar 26, 2012

Community events around Percona Live MySQL Conference and Expo 2012

We’re very pleased with the community support for the Percona Live MySQL Conference and Expo. We’ve got wonderful speakers providing a phenomenal session lineup for the conference this year. We have the community helping by spreading the word about the conference, picking talks, and making all kinds of other wonderful contributions.

We also have a series of events organized by the community and various organizations, taking place around the conference, which you surely should not miss:

Pythian is stepping in to organize the traditional MySQL Community Dinner during the tutorial day of the conference (April 10th).

SkySQL and MariaDB are hosting the SkySQL Solutions Day (April 13), which will feature keynotes by Michael (Monty) Widenius and David Axmark.

Drizzle will have its 4th annual Drizzle Day (April 13).

Finally, there is the first Sphinx Search Day, hosted by Sphinx Technologies (April 13).

All these events are free to attend (registration required), whether or not you’re attending the Percona Live MySQL Conference and Expo.

Furthermore, the Birds of a Feather sessions at the Percona Live MySQL Conference and Expo, taking place Tuesday and Wednesday evenings, are free for anyone to attend.

Mar 23, 2012

How FLUSH TABLES WITH READ LOCK works with Innodb Tables

Many backup tools, including Percona XtraBackup, mylvmbackup and others, use FLUSH TABLES WITH READ LOCK to temporarily make MySQL read only. In many cases the period for which the server has to be read only is very short, just a few seconds, yet the impact of FLUSH TABLES WITH READ LOCK can be quite large because of the time it may take to complete the statement itself. Let’s look at what the problem is.

As of MySQL 5.5, FLUSH TABLES WITH READ LOCK does not work as optimally as you might think. Even though, by general lock-compatibility rules, a read lock should not conflict with another read lock, it does for this statement: it has to wait for any running SELECT statement to complete before it can finish locking the tables. This means that if your workload includes some very long SELECT queries, you can potentially wait hours for this statement to complete. Here is an example of how it can look:

mysql> show processlist;
+-------+------+-----------+----------+---------+------+-------------------------+-----------------------------+-----------+---------------+-----------+
| Id    | User | Host      | db       | Command | Time | State                   | Info                        | Rows_sent | Rows_examined | Rows_read |
+-------+------+-----------+----------+---------+------+-------------------------+-----------------------------+-----------+---------------+-----------+
| 10219 | root | localhost | dumptest | Query   |  324 | Sending data            | select count(*) from A,B    |         0 |             0 |   2359297 |
| 10290 | root | localhost | NULL     | Query   |  317 | Waiting for table flush | flush tables with read lock |         0 |             0 |         1 |
| 10291 | root | localhost | dumptest | Query   |    0 | NULL                    | show processlist            |         0 |             0 |         2 |
+-------+------+-----------+----------+---------+------+-------------------------+-----------------------------+-----------+---------------+-----------+
3 rows in set (0.00 sec)

As you can see, FLUSH TABLES WITH READ LOCK is waiting for that very nasty “full join” SELECT to complete. What is worse, once the statement has started executing, all writes to the server are blocked, which in turn also blocks the “field list” operation used by the mysql client among other tools, so it may feel as if you can’t connect to the database at all (use mysql -A in this case to prevent it from fetching the field list for all tables).

Reads, though, are not blocked, i.e. there is no MyISAM-like table lock priority problem where a pending WRITE lock blocks any further READ queries on the table.

mysql> show processlist;
+-------+------+-----------+----------+------------+------+------------------------------+------------------------------+-----------+---------------+-----------+
| Id    | User | Host      | db       | Command    | Time | State                        | Info                         | Rows_sent | Rows_examined | Rows_read |
+-------+------+-----------+----------+------------+------+------------------------------+------------------------------+-----------+---------------+-----------+
| 10219 | root | localhost | dumptest | Query      |  688 | Sending data                 | select count(*) from A,B     |         0 |             0 |   2359297 |
| 10290 | root | localhost | NULL     | Query      |  681 | Waiting for table flush      | flush tables with read lock  |         0 |             0 |         1 |
| 10291 | root | localhost | dumptest | Query      |   74 | Waiting for global read lock | insert into C values ("a",1) |         0 |             0 |         1 |
| 10304 | root | localhost | dumptest | Field List |   15 | Waiting for table flush      |                              |         0 |             0 |         4 |
| 10305 | root | localhost | dumptest | Query      |    0 | NULL                         | show processlist             |         0 |             0 |         1 |
+-------+------+-----------+----------+------------+------+------------------------------+------------------------------+-----------+---------------+-----------+
5 rows in set (0.00 sec)

As a result, a single runaway SELECT can effectively cause downtime if you use a backup solution which does FLUSH TABLES WITH READ LOCK, which is very unfortunate.

As the real solution, I’d love to see this problem fixed. There is no reason (other than code design) for a read lock to have to wait for another read lock in order to be set.

As a workaround, you can consider having scripts which check for situations like the one above and either kill FLUSH TABLES WITH READ LOCK and fail the backup, or kill the long-running SELECT queries to let the backup proceed, resolving the server gridlock one way or another.
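
Such a script could start from a query like the following (a sketch; the 60-second threshold and the decision about which side to kill are up to you):

-- Find SELECT statements that have been running for more than 60 seconds;
-- a watchdog script could then issue KILL <Id> against the chosen victim.
SELECT ID, USER, TIME, INFO
FROM INFORMATION_SCHEMA.PROCESSLIST
WHERE COMMAND = 'Query'
  AND TIME > 60
  AND INFO LIKE 'SELECT%';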

If you’re using only InnoDB tables and you’re not actively changing users, stored procedures, etc. (which are stored in MyISAM tables anyway), you can consider using the --no-lock option with Percona XtraBackup. Many other tools have a similar option. This option can be especially helpful when backing up data from a slave with the --safe-slave-backup option, as writes to the slave can be prevented by simply pausing replication.
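
Under those assumptions, a backup taken from a slave might be invoked roughly like this (a sketch; /data/backups is just a placeholder destination):

# Skip FLUSH TABLES WITH READ LOCK entirely; the SQL thread is paused instead
# while the non-InnoDB files and replication coordinates are copied.
innobackupex --no-lock --safe-slave-backup /data/backups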

Mar 23, 2012

Speaking at POSSCON

I’m speaking at POSSCON in Columbia, SC next week (March 28). My talk will be about the open source software Percona produces: Percona Server, Percona XtraBackup, Percona Toolkit, and Percona XtraDB Cluster. If you want to get a bird’s-eye view of the open source software Percona has created, this is a great talk to attend.

We’re also sponsoring the event this year, so we encourage you to come by and say hi to us at our booth in the expo hall.

If you’re also attending and would like to meet up to have a chat, please drop me a note.

Mar 23, 2012

Best kept MySQLDump Secret

Many people use mysqldump --single-transaction to get a consistent backup of their InnoDB tables without making the database read only. In most cases it works, but did you know there are cases when a table can go entirely missing from the backup if you use this technique?

The problem comes from how MySQL’s transactions work with DDL, in particular ALTER TABLE. In many cases, when ALTER TABLE is performed, it creates a temporary table with the modified structure, copies the data into that table, then drops the original table and renames the temporary table to the original name.

How does data visibility work in this case? DDL is not transactional, so the running transaction will not see the contents of the old table once it is dropped; the transaction will, however, see the new table that was created after the transaction started, including a table created by an ALTER TABLE statement. Transaction visibility does apply to the DATA stored in that table, so the rows inserted into it after the transaction started (by the ALTER TABLE statement) will not be visible. In the end we get the new structure in the dump, but no data.

Here is an example:

SESSION1:
mysql> show tables;
+--------------------+
| Tables_in_dumptest |
+--------------------+
| A                  |
| B                  |
| C                  |
+--------------------+
3 rows in set (0.00 sec)

mysql> select count(*) from A;
+----------+
| count(*) |
+----------+
|  2359296 |
+----------+
1 row in set (1.73 sec)

mysql> select * from C;
+------+
| t    |
+------+
| test |
+------+
1 row in set (0.00 sec)

SESSION2:
root@ubuntu:~/dump# mysqldump --single-transaction dumptest > dump.sql

SESSION1:  (before dump has completed)
mysql> alter table C add i int not null;
Query OK, 1 row affected (0.65 sec)
Records: 1  Duplicates: 0  Warnings: 0

SESSION2:

root@ubuntu:~/dump# tail -29 dump.sql

DROP TABLE IF EXISTS `C`;
/*!40101 SET @saved_cs_client     = @@character_set_client */;
/*!40101 SET character_set_client = utf8 */;
CREATE TABLE `C` (
  `t` char(255) NOT NULL,
  `i` int(11) NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
/*!40101 SET character_set_client = @saved_cs_client */;

--
-- Dumping data for table `C`
--

LOCK TABLES `C` WRITE;
/*!40000 ALTER TABLE `C` DISABLE KEYS */;
/*!40000 ALTER TABLE `C` ENABLE KEYS */;
UNLOCK TABLES;
/*!40103 SET TIME_ZONE=@OLD_TIME_ZONE */;

/*!40101 SET SQL_MODE=@OLD_SQL_MODE */;
/*!40014 SET FOREIGN_KEY_CHECKS=@OLD_FOREIGN_KEY_CHECKS */;
/*!40014 SET UNIQUE_CHECKS=@OLD_UNIQUE_CHECKS */;
/*!40101 SET CHARACTER_SET_CLIENT=@OLD_CHARACTER_SET_CLIENT */;
/*!40101 SET CHARACTER_SET_RESULTS=@OLD_CHARACTER_SET_RESULTS */;
/*!40101 SET COLLATION_CONNECTION=@OLD_COLLATION_CONNECTION */;
/*!40111 SET SQL_NOTES=@OLD_SQL_NOTES */;

-- Dump completed on 2012-03-23  7:27:18

SESSION1:

mysql> select * from C;
+------+---+
| t    | i |
+------+---+
| test | 0 |
+------+---+
1 row in set (0.00 sec)

As you can see, because we altered table C while mysqldump was running, we got an empty table with the new structure in the dump instead of a table with the valuable data.

This is a pretty edge-case scenario; nevertheless, it can be a problem for workloads which run ALTER TABLE regularly during normal operation. I also hope that if you get some empty tables in your mysqldump --single-transaction backups, you will now know a potential cause.

What are the potential solutions for this problem? You can use mysqldump --lock-all-tables instead, which does not have this problem, at the cost of keeping the database read only for the duration of the operation. You can also use Percona XtraBackup, LVM snapshots or another backup approach which does not rely on transaction visibility.
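
For the first option, the invocation from the example above would simply change to the following (a sketch, reusing the same dumptest schema):

# Holds a global read lock for the whole dump, so a concurrent ALTER TABLE
# cannot make a table's data vanish from the backup; writes are blocked
# until the dump finishes.
root@ubuntu:~/dump# mysqldump --lock-all-tables dumptest > dump.sql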
