Oct
10
2011
--

MySQL versions shootout

As part of the work on “High Performance MySQL, 3rd edition”, Baron asked me to compare different MySQL versions in a simple benchmark, but on decent hardware.

So why not.

I took our Cisco UCS C250 and ran a simple sysbench OLTP read-write workload, with all data fitting in memory.
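For reference, the runs looked roughly like the following (sysbench 0.4 syntax; the table size, socket and run length here are illustrative, not the exact values used):

# prepare the test table (InnoDB, sized to fit in the buffer pool)
sysbench --test=oltp --mysql-table-engine=innodb --oltp-table-size=1000000 \
         --mysql-user=root --mysql-socket=/tmp/mysql.sock prepare

# read-write OLTP run, repeated for 1, 2, 4, ... 64 threads
sysbench --test=oltp --oltp-table-size=1000000 --oltp-read-only=off \
         --num-threads=16 --max-time=300 --max-requests=0 \
         --mysql-user=root --mysql-socket=/tmp/mysql.sock run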

Versions in question:

  • MySQL 4.1
  • MySQL 5.0
  • MySQL 5.1 (with built-in InnoDB)
  • MySQL 5.1 with InnoDB-plugin
  • MySQL 5.5
  • MySQL 5.6

All versions are vanilla MySQL, not Percona Server.

Here are the results:

There is nothing unexpected: starting with the InnoDB plugin, MySQL scales much better on multiple cores, with a gain of up to 1.7x in throughput.

What is more interesting is the single-thread results, so let’s take a look at those:

You can see that each new version is slower in a single thread than the previous one.
This is actually easy to explain: each new version adds more features, and those features bring more overhead.
Profiling MySQL 5.0 against MySQL 4.1 a couple of years ago, I saw that the biggest performance hit came
from the query parser: in MySQL 5.0 the grammar became much bigger because of stored procedures and triggers.

And the raw results, in tps (more is better):

threads MySQL 4.1 MySQL 5.0 MySQL 5.1 MySQL 5.1+InnoDB-plugin MySQL 5.5 MySQL 5.6.2
1 685.52 639.73 596.01 593.51 531.07 525.80
2 1306.50 1221.40 1139.60 1139.20 1076.59 1018.64
4 2274.83 2167.92 2031.80 2042.51 1937.64 1830.80
8 3879.11 3746.29 3606.02 3681.36 3522.65 3320.10
16 4373.82 4527.04 4392.81 6130.70 5881.40 5572.50
32 4591.42 4864.46 4698.39 7762.32 7548.93 7138.54
64 4688.41 5078.02 4910.21 7535.65 7269.47 6994.04

Scripts and results are also available in our Launchpad Benchmark project.



Oct
10
2011
--

Infinite Replication Loop

Last week I helped two different customers with infinite replication loops, so I decided to write a blog post about these infinite loops of binary log statements in MySQL replication: what they are, how to identify them… and how to fix them.

An infinite replication loop is one or more transactions that replicate over and over in a multi-master replication relationship. This happens because those transactions carry a server-id different from that of either of the two actual masters.

Let’s have a look first at the normal flow:

Replication flow: [1]

When you set up MySQL replication, you define the global variable server_id; this value must be unique for each host. In the illustration above, arrows represent the replication flow, so 1 replicates to 2 and to 3, and 2 replicates to 1.
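A minimal sketch of the relevant my.cnf settings (the values are illustrative; the only requirement is that server-id differs on every host):

# my.cnf on server 1
[mysqld]
server-id = 1
log-bin   = mysql-bin

# server 2 would use server-id = 2, server 3 would use server-id = 3, and so on.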

When your active master (having more than one master receiving simultaneous writes causes problems and is not beneficial in 99% of all cases) commits a transaction, the DML statements from that transaction are written to the binary log. They are called binary log events. A binary log event contains the server-id of the server where it was executed. The event is then replicated to the slaves (the slave I/O thread skips events whose server-id matches its own; it is also possible to change that behaviour by enabling replicate_same_server_id, but that is not today’s focus[2]).

As we are in multi-master replication, the other master should also replicate events:

Closer look:

 

Infinite Replication Loop flow:

If you have master-master with slaves, or if you want to back up one master, you need log_slave_updates[3] to be enabled. This is the case, for example, if you are using MMM with slaves.

But if, for one of the reasons I explain later, one or more binary log events with a server-id different from those of the two current masters are still in the relay log, they are simply applied, then written with their original server-id to the binary log, and the slaves (including the other master) will replicate them… and write them to their binary log… and the loop is created.

Look at the following process; notice that the server-id has changed on our server 2:

And that’s the loop! The event for server-id 2 gets replicated over and over…

How can you create loops?

  • You restore a backup from another machine and forget to change the server-id in my.cnf.
  • You change the server-id interactively with “SET GLOBAL server_id = …”, which is something you should never do on a slave.
  • You promote a slave to master to replace one of the masters while replication was still catching up, so some events from the replaced master still had to be executed on the slaves.
  • And recently I saw a weird problem: my.cnf was correct on all the machines, but the value in SHOW GLOBAL VARIABLES was always 1! A hidden character in my.cnf was invalidating the value, so it fell back to 1.

 

How to identify an infinite replication loop?

When you have an infinite replication loop, Seconds_behind_master shows many random-looking values. Seconds_behind_master is calculated from the timestamp included in the last executed SQL statement. Since some statements are looping, their timestamps do not increase, causing the value to flip between 0 and the time elapsed since those looping statements were first inserted.

Another way to see it is by monitoring your MySQL server with Cacti: you will immediately notice the problem. Look at these graphs:

You can see the CPU usage and the replication lag… once again Cacti is very helpful!

 

How to fix it then?

There are several ways to fix this problem; the right one depends on the situation.

  • If you know that you have only one or two statements in the loop, you can try to skip them by stopping the slave, setting SQL_SLAVE_SKIP_COUNTER, and starting the slave again. This method is not really recommended because no writes should happen during the process.
  • You can change the server_id on the passive master again, but that could also create another loop if the replication of events with the current server_id hasn’t caught up.
  • There is also the possibility to use IGNORE_SERVER_IDS with CHANGE MASTER[4] (see the sketch after this list).
  • The last solution, my favorite one, is to stop replication on the active master, wait until the passive master has executed the looping statements, pick up a position from SHOW MASTER STATUS on the master that is still replicating, and then point the master with the write role (where replication was stopped) at that position. Of course you must be sure that there are no direct writes to the passive master during the process. As I mentioned before, if both your masters receive writes simultaneously (two active masters) you should review your setup, and this method won’t work.
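For the IGNORE_SERVER_IDS route (available from MySQL 5.5), a minimal sketch, assuming the looping events carry server-id 2:

mysql> STOP SLAVE;
mysql> CHANGE MASTER TO IGNORE_SERVER_IDS = (2);
mysql> START SLAVE;

Remember to clear the list again later with CHANGE MASTER TO IGNORE_SERVER_IDS = (), so legitimate events from a reused server-id are not silently dropped.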

Example on MMM:

1. Check the MMM mode and set it to passive to disable any automatic failover during the process, because high availability tools like MMM can trigger a failover if you stop replication.

# mmm_control @production set_passive (to avoid automatic failover)

2. Stop replication on the master having the writer role (mysql1).

3. Wait until some statements are replicated.

4. Change the replication master position on the active master to a newer position from the passive master (get the position from show master status\G on mysql 4).

 

Illustration of the recommended solution:

Server 1 is the active master (receives the writes)

Server 4 is the passive master

Server 3 is a normal slave

Step 1: Stop the replication on the active master (server 1):

mysql> stop slave;

Step 2: there are writes on the active master; the looping statements are still applied from the relay log, but they are not replicated again.

 

Step 3: the binary log events with the wrong server-id are applied on the two slaves (mysql 3 and mysql 4), and the new events (with server-id 1) are also replicated.

Step 4: the event with server-id 2 on mysql 4 (the passive master) is written to the binary log, and the new events are also applied on the two slaves.

Step 5: the new statements (with server-id 1) are also saved in the binary log of mysql 4 and the master position increases…

Step 6: the slave position on the active master (mysql 1) is changed to the position returned by SHOW MASTER STATUS on mysql 4 (this position will keep increasing as writes are performed on the active master, so just take a position once you are sure the looping statements have been executed), and the slave is restarted on mysql 1.
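Put together as commands, a minimal sketch of steps 1 and 6 (the binary log file name and position are placeholders; take the real values from SHOW MASTER STATUS on mysql 4 once the looping statements have been applied there):

-- Step 1, on mysql 1 (active master):
mysql> STOP SLAVE;

-- Step 6, on mysql 4 (passive master), once the looping events have gone through:
mysql> SHOW MASTER STATUS\G

-- Step 6, back on mysql 1: point replication at that file/position and restart
mysql> CHANGE MASTER TO MASTER_LOG_FILE = 'mysql-bin.000042', MASTER_LOG_POS = 107;
mysql> START SLAVE;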

 

Note: you can also check all the statements that were in the loop using mysqlbinlog.

For example, what was the loop of events from server-id 2?

# mysqlbinlog mysql-relay-bin.000x | grep 'id 2' | sort | uniq

 

Special thanks to Kenny[5], who helped me on the cases and reviewed this post.

[1] http://dev.mysql.com/doc/refman/5.1/en/replication-implementation-details.html

[2] http://dev.mysql.com/doc/refman/5.1/en/replication-options-slave.html#option_mysqld_replicate-same-server-id

[3] http://dev.mysql.com/doc/refman/5.1/en/replication-options-slave.html#option_mysqld_log-slave-updates

[4] http://dev.mysql.com/doc/refman/5.1/en/change-master-to.html

[5] http://www.percona.com/about-us/our-team/kenny-gryp/

Oct
07
2011
--

Multiple MySQL instances on Fusion-io ioDrive

It is known that, due to internal limitations, MySQL is not able to utilize
all the CPU and IO resources available on modern hardware.
The idea is to run multiple instances of MySQL to get better performance out of a Fusion-io ioDrive card.

The full report is available as a PDF.


For the tests we used the tpcc-mysql package, which generates a TPC-C-like workload on MySQL systems.

  • Server hardware: Dell PowerEdge R815
  • Storage: Fusion-io ioDrive Duo 640GB MLC. Fusion-io driver version: 2.3.1 build 123; Firmware v5.0.7, rev 101971
  • Software: Percona Server 5.5.15
  • Client hardware: IBM x3650

Fusion-io ioDrive Duo 640GB MLC card was provided by Fusion-io.

More details on the hardware and software configuration are in the full report.

tpcc-mysql tests were run for the following combinations:

2400W, big buffer pool

  • 1 MySQL instance, 2400 warehouses (220GB of data), 120GB buffer pool
  • 2 MySQL instances, 1200 warehouses (110GB of data) each, 60GB buffer pool each
  • 4 MySQL instances, 600 warehouses (55GB of data) each, 30GB buffer pool each

2400W, small buffer pool

  • 1 MySQL instance, 2400 warehouses (220GB of data), 64GB buffer pool
  • 2 MySQL instances, 1200 warehouses (110GB of data) each, 32GB buffer pool each
  • 4 MySQL instances, 600 warehouses (55GB of data) each, 16GB buffer pool each

1200W, big buffer pool

  • 1 MySQL instance, 1200 warehouses (110GB of data), 120GB buffer pool
  • 2 MySQL instances, 600 warehouses (55GB of data) each, 60GB buffer pool each

1200W, small buffer pool

  • 1 MySQL instance, 1200 warehouses (110GB of data), 64GB buffer pool
  • 2 MySQL instances, 600 warehouses (55GB of data) each, 32GB buffer pool each

The purpose of the different combinations of data and memory sizes was to check
how the data/memory ratio affects the results.
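As an aside, here is a minimal sketch of how two instances could be laid out on one card using mysqld_multi (ports, paths and buffer pool sizes are illustrative; the exact mechanism used for the benchmark is described in the full report):

# /etc/my_multi.cnf
[mysqld1]
port                    = 3306
socket                  = /mnt/fio/mysql1/mysql.sock
datadir                 = /mnt/fio/mysql1
innodb_buffer_pool_size = 60G

[mysqld2]
port                    = 3307
socket                  = /mnt/fio/mysql2/mysql.sock
datadir                 = /mnt/fio/mysql2
innodb_buffer_pool_size = 60G

# start both instances
mysqld_multi --defaults-file=/etc/my_multi.cnf start 1,2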

We used 48 user sessions and performed a 2700-second run, gathering New Order transaction throughput every 10 seconds.

That is, for each set of user sessions we took 270 throughput measurements.

From these we computed the median throughput over the last 900 seconds, to avoid warm-up influence on the results.
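For illustration, a run of the 1200-warehouse case would look roughly like this (the argument order varies a little between tpcc-mysql versions, and the ramp-up value here is an assumption):

# load 1200 warehouses into database tpcc1200: host db user password warehouses
./tpcc_load 127.0.0.1 tpcc1200 root "" 1200

# run: host db user password warehouses connections rampup(sec) measure(sec)
./tpcc_start 127.0.0.1 tpcc1200 root "" 1200 48 0 2700 > tpcc_1200w_48c.log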

In conclusion I can highlight:

  • Running multiple instances shows a good improvement in throughput: 1.4x-1.8x for 2 instances and 1.6x-2.4x for 4 instances.
  • If you have a sharded environment that allows you to separate the database into multiple instances, you may try a 2-4 instance setup to get better overall throughput from your MySQL setup.

Recently Fusion-io announced the new ioDrive2 card with a new driver, which promises even better performance for MySQL. I am looking forward to testing it.

Disclaimer: This benchmark is sponsored by Fusion-io, but this post is totally independent and fully reflects our opinion.



Oct
07
2011
--

Keep your MySQL backend online no matter what + TGIF Contest!

On Oct. 25th I will be presenting my conference session “Keep your MySQL backend online no matter what” at the Percona Live London conference. Considering there is a variety of highly knowledgeable MySQL experts speaking at this event, I wanted to give my talk a special plug so you won’t miss THIS knowledgeable MySQL expert speak! ;)

My talk focuses on an architecture that will help keep your MySQL backends online when you simply can’t afford for them to go down. This is not about one single solution, but a combination of different technologies working together. The architecture is built on MMM, Heartbeat, LVS, ldirectord and some custom scripts. As a real-world example I will demonstrate how this solution was applied to a payment processor website behind an Alexa top 50 website, where data consistency was important. In addition I will show an example from another Alexa top 500 website, where downtime and customer satisfaction were the number 1 priority.

For a chance to hear me speak along with all the other MySQL experts, watch our @Percona Twitter stream and retweet the contest to win a free ticket! We have celebrated several winners so far, so don’t miss your opportunity to win! The conference is in T-minus 16 days, so register now! Just to prove how great this talk is going to be, use the discount code “Istvanrules” when registering to save 40 pounds off your ticket! See you in London!

Oct
06
2011
--

When Does InnoDB Update Table Statistics? (And When It Can Bite)

InnoDB table statistics are used for JOIN optimization and for helping the MySQL optimizer choose the appropriate index for a query. If a table’s statistics or index cardinality becomes outdated, you might see queries which previously performed well suddenly show up in the slow query log until InnoDB updates the statistics again. But when does InnoDB perform these updates, aside from the first opening of the table or manually running ANALYZE TABLE on it? The two cases below are documented in the MySQL and InnoDB plugin manuals:

  1. Metadata commands like SHOW INDEX, SHOW TABLE STATUS and SHOW [FULL] TABLES (or their corresponding queries from INFORMATION_SCHEMA.TABLES and INFORMATION_SCHEMA.STATISTICS)
  2. When 1/16th of the table or 2 billion rows have been modified, whichever comes first (see row/row0mysql.c:row_update_statistics_if_needed).

If you or an application constantly executes SHOW [FULL] TABLES or SHOW TABLE STATUS against many large tables, this can affect your server, especially if the random index dives have to be read from disk. Here is an example of SHOW TABLE STATUS FROM schema taking about 12 seconds to execute:
# Time: 110923 1:48:17
# User@Host: user1[user1] @ [172.20.6.1]
# Thread_id: 10140441 Schema: db1 Last_errno: 0 Killed: 0
# Query_time: 12.277786 Lock_time: 0.000068 Rows_sent: 294 Rows_examined: 294 Rows_affected: 0 Rows_read: 294
# Bytes_sent: 34187 Tmp_tables: 1 Tmp_disk_tables: 0 Tmp_table_sizes: 0
SET timestamp=1316767697;
show table status from `db1`;

As you can see, db1 has about 294 tables and it took the server 12 seconds to update all the tables’ statistics. Luckily, this can be controlled in Percona Server with the variable innodb_stats_auto_update (from MySQL Community 5.1.17 a similar variable exists, called innodb_stats_on_metadata; in Percona Server, where both variables exist, both should be 0 if you want to disable the feature). When set to 0, automatic updates to the table statistics (items 1 and 2 above) are disabled, except when ANALYZE TABLE is run or when the table is first opened.
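A minimal sketch of how to check and disable this behaviour (variable availability depends on your server version and flavor, so check the first statement’s output before relying on either setting; the table name is just an example):

-- see which of the statistics-related variables your build exposes
mysql> SHOW GLOBAL VARIABLES LIKE 'innodb_stats%';

-- MySQL 5.1.17+ and Percona Server: stop updating statistics on metadata commands
mysql> SET GLOBAL innodb_stats_on_metadata = 0;

-- Percona Server: also disable the automatic update triggered by row modifications
mysql> SET GLOBAL innodb_stats_auto_update = 0;

-- statistics can still be refreshed explicitly whenever you choose
mysql> ANALYZE TABLE db1.some_table;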

In my next post, I will discuss the effects of disabling the feature and how you or your application should compensate.

Oct
03
2011
--

Percona Server 5.5.15 + Galera 21.1-beta2

The Codership team published beta2 of MySQL 5.5.15 with Galera replication:
https://launchpad.net/codership-mysql
and we made a port to Percona Server:

source code:
lp:~percona-dev/percona-server/percona-server-galera-5.5.15
binaries for RedHat/CentOS 6:
http://www.percona.com/downloads/TESTING/Galera/Percona-XtraDB-Galera-5.5.15.tar.gz

What is the difference between Percona Server+Galera and MySQL 5.5.15 with Galera?
First, of course, Percona Server+Galera is based on our XtraDB engine.
Second, we provide the wsrep_sst_xtrabackup script, which allows you to use Percona XtraBackup for node provisioning.
Percona Server+Galera is still at an early stage, and we make it available so you can play with it and gain some hands-on experience.
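As an illustration, a per-node configuration might look roughly like this (option names follow the wsrep patches of that era; the provider path, addresses and credentials are placeholders):

[mysqld]
binlog_format            = ROW                   # Galera requires row-based events
innodb_autoinc_lock_mode = 2
wsrep_provider           = /usr/lib64/libgalera_smm.so
wsrep_cluster_name       = percona_galera_test
wsrep_cluster_address    = gcomm://192.168.0.11  # empty gcomm:// bootstraps the first node
wsrep_sst_method         = xtrabackup            # calls the wsrep_sst_xtrabackup script
wsrep_sst_auth           = sst_user:sst_password # MySQL account used by XtraBackup for SST
wsrep_slave_threads      = 4                     # parallel applying, see point 3 below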

So what is Percona Server+Galera, in the end?
I wrote about this previously, but I want to highlight some points again.

1. It is a new high availability + scalability solution for MySQL.
And this solution is radically different from regular MySQL replication.

You can actually think of an N-node Percona Server+Galera setup as a cluster,
where each node is active (accepts both reads and writes).
You can perform writes on ANY node.

Setting up 3-master active MySQL replication, by contrast, is practically impossible.

2. In contrast to MySQL replication, in the Percona Server+Galera scheme
all nodes are CONSISTENT. A transaction is either committed on all nodes or
not committed at all. Forget about slaves being out of sync with the master.

3. Nodes are able to apply events in parallel. And this is true parallel replication, not “per-schema” as in MySQL 5.6.

4. A new node automatically joins the cluster. There is no manual cloning of a slave and copying it to a new box. Using Percona XtraBackup as the transport to transfer data between nodes gives you minimal locking time.

You are welcome to try it.



Oct
03
2011
--

Percona Welcomes Patrick Crews

I am very happy to welcome Patrick Crews to the Percona development team. Patrick joins Percona at a very exciting time for the development team. We are getting regular releases of Percona Server and Percona XtraBackup out the door, we have been heavily using the Jenkins continuous integration system to maintain and improve the quality of the products we ship, and we just upgraded our documentation publishing platform for both Percona Server (5.1 and 5.5) and Percona XtraBackup.

We are at the natural point to expand our QA efforts – and that’s where Patrick joins us.

Patrick has been doing QA in the MySQL world for a while now and has extensive experience with both MySQL and Drizzle. His work has included the use of a variety of testing tools, such as the randgen (random query generator) project, to which he contributes.

As a Drizzle developer, he saw the code get to its first GA release. This included testing a completely rewritten replication system and drizzledump’s evolution into a migration tool, as well as creating a new pluggable testing system for the project (dbqp; expect to hear a lot more about this in the months to come).

Patrick’s role will have him working on Percona Server, XtraDB, and XtraBackup. This will include creating more advanced tests and test systems for our development needs, which will naturally also improve the testing of Drizzle thanks to the shared common code.

Patrick has a blog over at www.wc220.com where he writes about Drizzle QA and other topics.

Oct
02
2011
--

Updated Percona Server and Percona XtraBackup documentation

We’ve just gone live with a new way of publishing and maintaining documentation for Percona Server and Percona XtraBackup. We are now using Sphinx to generate the documentation that we publish on the web site. Sphinx was originally created for the new Python documentation, and has won us over due to the simple markup, ease of cross-referencing, ability to keep our documentation source along with our source code and support for multiple output formats.

This moves our documentation workflow to be the same as for source code. A developer (or writer) will create a bzr branch on launchpad and submit a merge request. This means documentation review now goes through the same system as code review.

We have also set up a Jenkins job to automatically update the documentation on the percona.com web site when changes are made to the source repositories. This means that bug fixes and improvements to the documentation can be published transparently and automatically, with minimal turnaround.

An added benefit for XtraBackup is that it will enable us to easily share documentation with the Drizzle version of XtraBackup, as Drizzle uses Sphinx for its documentation too. The new setup for Percona documentation was based on that of the Drizzle project.

The end result? Better and more frequently updated documentation.
