MySQL 5.6 – InnoDB Memcached Plugin as a caching layer

A common practice to offload traffic from MySQL 5.6 is to use a caching layer to store expensive result sets or objects.  Some typical use cases include:

  • Complicated query result set (search results, recent users, recent posts, etc)
  • Full page output (relatively static pages)
  • Full objects (user or cart object built from several queries)
  • Infrequently changing data (configurations, etc)

In pseudo-code, here is the basic approach:

data = fetchCache(key)
if (data) {
  return data
}
data = callExpensiveFunction(params)
storeCache(data, key)
return data

Memcached is a very popular (and proven) option used in production as a caching layer.  While very fast, one major potential shortcoming of memcached is that it is not persistent.  While a common design consideration when using a cache layer is that “data in cache may go away at any point”, this can result in painful warmup time and/or costly cache stampedes.

Cache stampedes can be mitigated through application approaches (semaphores, pre-expiring and re-populating, etc.), but those approaches target single-key expiration or eviction. They can’t help with overall warmup time when the entire cache is cleared (think restarting a memcached node). This is where a persistent cache can be invaluable.
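A minimal sketch of the semaphore-style mitigation mentioned above, assuming a per-key lock so that only one caller recomputes on a miss while the rest wait (all names here are illustrative, not from the post):

```python
import threading
import time

_cache = {}                       # key -> (value, expires_at)
_locks = {}                       # key -> lock guarding recomputation
_locks_guard = threading.Lock()   # protects the _locks dict itself

def _lock_for(key):
    with _locks_guard:
        return _locks.setdefault(key, threading.Lock())

def get_with_lock(key, recompute, ttl=3600):
    hit = _cache.get(key)
    if hit is not None and hit[1] > time.time():
        return hit[0]
    with _lock_for(key):          # only one thread recomputes this key
        hit = _cache.get(key)     # re-check: another thread may have filled it
        if hit is not None and hit[1] > time.time():
            return hit[0]
        value = recompute()
        _cache[key] = (value, time.time() + ttl)
        return value
```

With this in place, a burst of concurrent misses on one key results in a single expensive call instead of a stampede; the waiting threads pick up the freshly stored value. As the post notes, though, this only protects individual keys, not a cold cache.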

Enter MySQL 5.6 with the memcached plugin

As part of the standard MySQL 5.6 GA distribution, there is a memcached plugin included in the base plugin directory (/usr/lib64/mysql/plugin/) that can be stopped and started at runtime.  In a nutshell, here is how one would start the memcached plugin:

mysql> install plugin daemon_memcached soname "libmemcached.so";

In an effort to not re-invent the wheel, refer to the official MySQL documentation for the full details of setting up the plugin.

As a quick benchmark, I ran some batches of fetch and store against both a standard memcached instance and a minimally tuned MySQL 5.6 instance running the memcached plugin.  Here are some details about the test:

  • Minimal hardware (vBox instances on MacBook Pro)
    • Centos 6.4
    • Single core VM
    • 528M RAM
    • Host-Only network
    • 1 Box with http/php, 1 box with memcache or mysql started
  • PHP script
    • Zend framework
    • libmemcached PECL module
    • Zend_Cache_Backend_Libmemcached

Here is the rough code for this benchmark:

// Identical config/code for memcached vs InnoDB
$frontendOpts = array(
  'caching' => true,
  'lifetime' => 3600,
  'automatic_serialization' => true,
);

$memcacheOpts = array(
  'servers' => array(
    array(
      'host'   => '',
      'port'   => 11211,
      'weight' => 1,
    ),
  ),
  'client' => array(
    'compression' => true,
  ),
);

$cache = Zend_Cache::factory('Core', 'Libmemcached', $frontendOpts, $memcacheOpts);

// Store 100,000 objects
$timer->start();
for ($i = 0; $i < 100000; $i++) {
  $cache->save(new Object(), "key_$i");
}
$totalTimeStore = $timer->stop();
$avgTimeStore = $totalTimeStore / 100000;

// Fetch each key 10 times (1,000,000 fetches total)
$timer->start();
for ($i = 0; $i < 10; $i++) {
  for ($j = 0; $j < 100000; $j++) {
    $obj = $cache->load("key_$j");
  }
}
$totalTimeFetch = $timer->stop();
$avgTimeFetch = $totalTimeFetch / 1000000;

While this benchmark doesn’t show any multi-threading or other advanced operation, it is using identical code to eliminate variation due to client libraries.  The only change between runs is on the remote server (stop/start memcached, stop/start plugin).

As expected, there is a slowdown for write operations when using the InnoDB version.  But there is also a slight increase in the average fetch time.  Here are the raw results from this test run (100,000 store operations, 1,000,000 fetch operations):

Standard Memcache:

Storing [100,000] items:

60486 ms total
0.60486 ms per/cmd
0.586 ms min per/cmd
0.805 ms max per/cmd
0.219 ms range per/cmd

Fetching [1,000,000] items:
288257 ms total
0.288257 ms per/cmd
0.2843 ms min per/cmd
0.3026 ms max per/cmd
0.0183 ms range per/cmd

InnoDB Memcache:

Storing [100,000] items:

233863 ms total
2.33863 ms per/cmd
1.449 ms min per/cmd
7.324 ms max per/cmd
5.875 ms range per/cmd

Fetching [1,000,000] items:
347181 ms total
0.347181 ms per/cmd
0.3208 ms min per/cmd
0.4159 ms max per/cmd
0.0951 ms range per/cmd

InnoDB MySQL Select (same table):

Fetching [1,000,000] items:

441573 ms total
0.441573 ms per/cmd
0.4327 ms min per/cmd
0.5129 ms max per/cmd
0.0802 ms range per/cmd

Keep in mind that the entire data set fits into the buffer pool, so there are no reads from disk.  However, there is write activity stemming from the fact that this is using InnoDB under the hood (redo logs, etc).

Based on the above numbers, here are the relative differences:

  • InnoDB store operation was ~287% higher (~1.73 ms/op)
  • InnoDB fetch operation was 20% higher (~.06 ms/op)
  • MySQL Select showed 27% increase over InnoDB fetch (~.09 ms/op)
    • This replaced $cache->load() with $db->query("SELECT * FROM memcached.container WHERE id='key_id'");
    • id is PK of the container table
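These percentages can be re-derived from the raw per-command averages reported above (values in ms; small differences from the bullet figures are due to rounding):

```python
# Per-command averages copied from the benchmark output above, in ms.
memcached_store, innodb_store = 0.60486, 2.33863
memcached_fetch, innodb_fetch = 0.288257, 0.347181
mysql_select_fetch = 0.441573

store_increase = (innodb_store - memcached_store) / memcached_store * 100
fetch_increase = (innodb_fetch - memcached_fetch) / memcached_fetch * 100
select_vs_plugin = (mysql_select_fetch - innodb_fetch) / innodb_fetch * 100

print(f"store:  +{store_increase:.0f}% (+{innodb_store - memcached_store:.2f} ms/op)")
print(f"fetch:  +{fetch_increase:.0f}% (+{innodb_fetch - memcached_fetch:.2f} ms/op)")
print(f"select: +{select_vs_plugin:.0f}% over the plugin fetch")
```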

While there are increases in both operations, there are some tradeoffs to consider:

  • Cost of additional memcached hardware
  • Cost of operations time to maintain an additional system
  • Impact of warmup time to application
  • Cost of disk space on database server

Now, there are definitely other NoSQL options for persistent cache out there (Redis, Couchbase, etc), but they are outside the scope of this investigation and would require different client libraries and benchmark methodology.

My goal here was to compare a transparent switch (in terms of code) and experiment with the memcached plugin.  Even the use of HandlerSocket would require coding changes (which is why it was also left out of the discussion).

The post MySQL 5.6 – InnoDB Memcached Plugin as a caching layer appeared first on MySQL Performance Blog.


Network at the Percona Live MySQL Conference and Expo

We’re very pleased with the speaker lineup for the Percona Live MySQL Conference and Expo. We have outstanding MySQL experts speaking on a variety of topics during the breakout sessions and tutorials. And the keynote speakers are all top notch, MySQL technology and industry experts who will share their views of the ecosystem.

Beyond the speakers and sessions, the Percona Live MySQL Conference and Expo offers a unique opportunity to network with the largest gathering of MySQL community members at a single conference. The extra day added to this year’s conference means more opportunities to connect and expand your professional network.

Percona Live MySQL Conference and Expo Logo

There are some great evening activities where you can network with the community:

  • Monday features a Welcome Reception in the exhibit hall following the day’s tutorial sessions. Have drinks and appetizers while mingling with fellow attendees in a laid back atmosphere.
  • Tuesday features the Birds of a Feather sessions where you can join attendees with similar interests to discuss common topics of interest and forge new contacts with other MySQL enthusiasts. The annual Community Dinner will follow the BOFs – stay tuned for more details on the dinner and other Tuesday night activities.
  • Wednesday is the Community Networking Reception in the exhibit hall. Mingle with attendees over drinks and food while enjoying the evening’s entertainment. Watch the Lightning Talks for 5-minute snapshots of interesting MySQL topics. Cheer on the winners during the MySQL Community Awards presentations. Or just make the rounds of our sponsor booths and get your Passport Program card stamped to qualify for the Thursday raffle drawing of valuable sponsor prizes.

Even if you can’t stay for the evening activities, opportunities to network abound at the Percona Live MySQL Conference and Expo. Connect with MySQL community members in the hallways between talks. Or have lunch during the conference with some new friends.

The Percona Live MySQL Conference and Expo will once again have a jobs board where sponsors can post open positions. If you or someone you know are open for new opportunities, be sure to stop by during the conference and see what’s available.

Join us in Santa Clara from April 22nd through the 25th for the premier MySQL community gathering of the year and expand your network of contacts. If you plan to come but haven’t purchased a conference pass or booked a hotel room, I urge you to do so now. The Santa Clara Hyatt is filling quickly and prices have already gone up. If you need to purchase a Percona Live MySQL Conference and Expo pass, you can save 15% by using discount code PERCONA15 when you register. I hope to see you in Santa Clara!

The post Network at the Percona Live MySQL Conference and Expo appeared first on MySQL Performance Blog.


Why MySQL Performance at Low Concurrency is Important

A few weeks ago I wrote about “MySQL Performance at High Concurrency” and why it is important, which was followed up by Vadim’s post on ThreadPool in Percona Server providing some great illustration on the topic. This time I want to target an opposite question: why MySQL performance at low concurrency is important for you.

I decided to write about this topic as a number of recent blog posts and articles look at MySQL performance starting with a certain concurrency as the low point. For example, the MySQL 5.6 DBA and Developer Guide has a number of graphs starting with 32 connections at the low point. Mark Callaghan also starts with a concurrency of 8 in some of the results he publishes. All of this may make it look as though single-thread performance is no longer a concern. I believe it still is!

First, performance at a concurrency of 1 serves as an important baseline: it offers the best response times, since at this point the MySQL server does not use multiple threads to execute a query in parallel. Response times at various concurrency levels can be compared against this baseline to see how they are affected and how the system scales as concurrency increases.
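To make the baseline idea concrete, here is a small illustration (the response-time numbers below are hypothetical, purely to show the comparison, not measurements from any post):

```python
def scaling_efficiency(baseline_rt_ms, rt_ms):
    """How close response time at higher concurrency stays to the
    single-thread baseline (1.0 = no degradation at all)."""
    return baseline_rt_ms / rt_ms

# Hypothetical per-query response times (ms) at each concurrency level.
measured = {1: 2.0, 8: 2.2, 32: 3.1, 128: 6.4}
baseline = measured[1]

for concurrency, rt in measured.items():
    eff = scaling_efficiency(baseline, rt)
    qps = concurrency / (rt / 1000.0)   # throughput at this concurrency
    print(f"concurrency={concurrency:>3}  rt={rt}ms  efficiency={eff:.2f}  qps={qps:.0f}")
```

Without the concurrency-1 row, there is no way to tell how much of the higher-concurrency response time is queuing and contention versus inherent query cost.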

Second, there are many cases in which single-threaded execution is exactly what will happen — many batch jobs are written to be single-threaded. MySQL replication is single-threaded too; MySQL 5.6 brings some parallel replication abilities, but they apply only to certain use cases. Many database maintenance operations such as ALTER TABLE also run single-threaded.

In fact, chances are your system actually runs at low concurrency most of the time. The majority of systems I see in the wild have Threads_running averaging 5 or less, at least while the system is performing well, pointing to rather low concurrency in normal operation. Much higher concurrency is in most cases seen when the system is overloaded and suffering.

So what is the problem with looking at performance at just high-concurrency values? This might mislead you on how your production will be impacted. Think about, for example, a system that is slower at low concurrency but a lot faster at high concurrency (which is the typical situation). If you just look at performance at high concurrency you might not understand why the system is showing worse response times when you are actually running it in real life with low concurrency.

When it comes to benchmarks I would love to see results published starting from a concurrency of 1 if possible, so we can see the full picture. In terms of benchmarks, Mark Callaghan published an example where 8 sysbench tables are used, which means the lowest concurrency you can run with to compare apples to apples is 8. Running a comparable benchmark with a concurrency of 1 and the same tool is not possible.

Want to talk more about performance? Come to Percona Live MySQL Conference and Expo April 22-25 in Santa Clara, California, where “performance” is going to be a big topic :)

The post Why MySQL Performance at Low Concurrency is Important appeared first on MySQL Performance Blog.


Repair MySQL 5.6 GTID replication by injecting empty transactions

In a previous post I explained how to repair MySQL 5.6 GTID replication using two different methods. I didn’t mention the famous SET GLOBAL SQL_SLAVE_SKIP_COUNTER = n for a simple reason: it doesn’t work anymore if you are using MySQL GTID. Then the question is:

Is there any easy way to skip a single transaction?

There is! Injecting empty transactions. Let’s imagine that replication on the slave server is not working because of an error:

Last_SQL_Error: Error 'Duplicate entry '4' for key 'PRIMARY'' on query. Default database: 'test'. Query: 'insert into t VALUES(NULL,'salazar')'
Retrieved_Gtid_Set: 7d72f9b4-8577-11e2-a3d7-080027635ef5:1-5
Executed_Gtid_Set: 7d72f9b4-8577-11e2-a3d7-080027635ef5:1-4

There are different ways to find the failed transaction. You can examine the binary logs, or you can check Retrieved_Gtid_Set and Executed_Gtid_Set from the SHOW SLAVE STATUS output, as we can see in the example. This slave server has retrieved transactions 1 to 5 but has only executed 1 to 4. That means that transaction 5 is the one causing the problems.
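The comparison described above can be sketched in a few lines of Python; this handles only the simple single-UUID "uuid:M-N" form shown in the example (a real GTID set can contain several UUIDs and interval lists):

```python
def parse_gtid_set(gtid_set):
    """Parse "uuid:M-N" (or "uuid:N") into (uuid, set of transaction ids)."""
    uuid, _, rng = gtid_set.partition(":")
    if "-" in rng:
        lo, hi = rng.split("-")
    else:
        lo = hi = rng
    return uuid, set(range(int(lo), int(hi) + 1))

def missing_transactions(retrieved, executed):
    """Transaction ids retrieved by the slave but not yet executed."""
    r_uuid, r_ids = parse_gtid_set(retrieved)
    e_uuid, e_ids = parse_gtid_set(executed)
    assert r_uuid == e_uuid, "sketch assumes a single source UUID"
    return sorted(r_ids - e_ids)

retrieved = "7d72f9b4-8577-11e2-a3d7-080027635ef5:1-5"
executed  = "7d72f9b4-8577-11e2-a3d7-080027635ef5:1-4"
print(missing_transactions(retrieved, executed))   # [5]
```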

Since SQL_SLAVE_SKIP_COUNTER doesn’t work with GTID, we need another way to ignore that transaction. The way to do it is by creating a new empty transaction with the GTID we want to skip.

STOP SLAVE;
SET GTID_NEXT="7d72f9b4-8577-11e2-a3d7-080027635ef5:5";
BEGIN; COMMIT;
SET GTID_NEXT="AUTOMATIC";
START SLAVE;

Retrieved_Gtid_Set: 7d72f9b4-8577-11e2-a3d7-080027635ef5:1-5
Executed_Gtid_Set: 7d72f9b4-8577-11e2-a3d7-080027635ef5:1-5

After the START SLAVE, the slave checks that transaction 5 is already in its own binary log, which means it has been executed.

This is an easy way to skip some transactions, but take into account that by doing this you could end up with data inconsistencies between the master and slave servers. pt-table-checksum, found in Percona Toolkit for MySQL, can help you check for them.

Last week I gave a talk at Percona MySQL University @Toronto about GTID. It includes an overview of MySQL 5.6 GTID that can help people start working with this new feature. Here are the slides from that session; I hope you find them useful: MySQL 5.6 GTID in a nutshell

The post Repair MySQL 5.6 GTID replication by injecting empty transactions appeared first on MySQL Performance Blog.


MySQL 5.6 Compatible Percona Toolkit 2.2 Released

A new Percona Toolkit series has been released: Percona Toolkit 2.2 for MySQL 5.6. Several months in the making, with many new features, changes, and improvements, this is a great new series. It replaces the 2.1 series, for which we plan to do only one more bug-fix release (2.1.10) before retiring it. 2.2 is largely backward-compatible with 2.1, but some tools were changed significantly (e.g. pt-query-digest, pt-upgrade, and pt-online-schema-change).  Here are some highlights:

Official support for MySQL 5.6 and Percona XtraDB Cluster

We started beta support for MySQL 5.6 in 2.1.8 when 5.6 was still beta. Now that MySQL 5.6 is GA, so is our support for it. Check out the Percona Toolkit supported platforms and versions. When you upgrade to MySQL 5.6, be sure to upgrade to Percona Toolkit 2.2, too.

We also started beta support for Percona XtraDB Cluster in 2.1.8, but now that support is official in 2.2 because we have had many months to work with PXC and figure out which tools work with it and how. There’s still one noticeable omission: pt-table-sync. It’s still unclear if or how one would sync a cluster that, in theory, doesn’t become out-of-sync. As Percona XtraDB Cluster develops, Percona Toolkit will continue to evolve to support it.

pt-online-schema-change (pt-osc) is much more resilient

pt-online-schema-change 2.1 has been a great success, and people have been using it for evermore difficult and challenging tasks. Consequently, we needed to make it “try harder”, even though it already tried pretty hard to keep working despite recoverable errors and such. Whereas pt-osc 2.1 only retries certain operations, pt-osc 2.2 retries every critical operation, and its number of tries and the wait time between tries are configurable for all operations. Also, we removed --lock-wait-timeout (which set innodb_lock_wait_timeout) because it now conflicts with, or is at least confused with, lock_wait_timeout (introduced in MySQL 5.5) for metadata locks. Now --set-vars is used to set both of these (or any) system variables. For a quick intro to metadata locks and how they may affect you, see Ovais’s article.

In short: pt-online-schema-change 2.2 is far more resilient out of the box. It’s also aware of metadata locks now, whereas 2.1 was not really aware of them. And it’s highly configurable, so you can make the tool try very hard to keep working.

pt-upgrade is brand-new

pt-upgrade was written once long ago and not updated since.  Now that we have four base versions of MySQL (5.0, 5.1, 5.5, and 5.6), plus at least four major forks (Percona Server, MariaDB, Percona XtraDB Cluster, and MariaDB Galera Cluster), upgrades are fashionable, so to speak. Problem is: “original” pt-upgrade was too noisy and too complex. pt-upgrade 2.2 is far simpler and far easier to use. It’s basically what you expect from such a tool. Moreover, it has a really helpful new feature: “reference results”, i.e. saved results from running queries on a server. Granted, this can take a lot of disk space, but it allows you to “run now, compare later.”

If you’re thinking about upgrading, give pt-upgrade a try. It also reads every type of log now (slow, general, binary, and tcpdump), so you shouldn’t have a problem finding queries to run and compare.

pt-query-digest is simpler

pt-query-digest 2.2 has fewer options now. Basically, we re-focused it on its primary objective: analyzing MySQL query logs. So the ability to parse memcached, Postgres, Apache, and other logs was removed. We also removed several options that probably nobody ever used, and changed/renamed other options to be more logical. The result is a simpler, more focused tool, i.e. less overwhelming.

Also, pt-query-digest 2.2 can save results in JSON format (--output=json). This feature is still in development while we determine the optimal JSON structure.

Version check is on by default

In 2.1.4, released September/October 2012, we introduced a feature called “version check” into most tools. It’s like a lot of software that automatically checks for updates, but it’s also more: it’s a free service from Percona that advises when certain programs (Percona Toolkit tools, MySQL, Perl, etc.) are either out of date or are known bad versions. For example, there are two versions of the DBD::mysql Perl module that have problems. And there are certain versions of MySQL that have critical bugs. Version check will warn you about these if your system is running them.

What’s new in 2.2 is that, whereas this feature (specifically, the --version-check option in tools) was off by default, now it’s on by default. If the IO::Socket::SSL Perl module is installed (easily available through your package manager), it will use a secure (https) connection over the web, else it will use a standard (http) connection.

pt-stalk and pt-mysql-summary have built-in MySQL options

No more “pt-stalk -- -h db1 -u me”. pt-stalk 2.2 and pt-mysql-summary 2.2 have all the standard MySQL options built in, like other tools: --user, --host, --port, --password, --socket, --defaults-file. So now the command line is what you expect: pt-stalk -h db1 -u me.

pt-stalk --no-stalk is no longer magical

Originally, pt-stalk --no-stalk was meant to simulate pt-collect, i.e. collect once and exit. To do that, the tool magically set some options and clobbered others, resulting in no way to do repeated collections at intervals. Now --no-stalk means only that: don’t stalk, just collect, respecting --interval and --iterations as usual. So to collect once and exit: pt-stalk --no-stalk --iterations 1.

pt-fk-error-logger and pt-deadlock-logger are standardized

Similar to the pt-stalk --no-stalk changes, pt-fk-error-logger and pt-deadlock-logger received mini overhauls in 2.2 to make their run-related options (--run-time, --interval, --iterations) standard. If you hadn’t noticed, one tool would run forever by default, while the other would run once and exit. And each treated their run-related options a little differently. This magic is gone now: both tools run forever by default, so specify --iterations or --run-time to limit how long they run.

There are more changes in addition to those highlights.  For example, three tools were removed, and there were several bug fixes. See the release notes for the full list.  If you upgrade from 2.1 to 2.2, be sure to re-read the tools’ documentation to see what has changed.

As the first release in a new series, 2.2 features are not yet finalized. In other words, we may change things like the pt-query-digest --output=json format in future releases after receiving real-world feedback.

In summary, Percona Toolkit 2.2 for MySQL is an exciting release with many new and helpful features. Users are encouraged to begin upgrading, particularly given that, except for the forthcoming 2.1.10 release, no more work will be done on 2.1 (unless you’re a Percona customer with a support contract or other agreement).

The post MySQL 5.6 Compatible Percona Toolkit 2.2 Released appeared first on MySQL Performance Blog.


5 Percona Toolkit Tools for MySQL That Could Save Your Day: April 3 Webinar

On April 3 at 10 a.m. PST, I’ll be giving a webinar titled “5 Percona Toolkit Tools for MySQL That Could Save Your Day.” In this presentation you’ll learn how to perform typical but challenging MySQL database administration tasks.

My focus will be on the following tools:

  • pt-query-digest, to select the queries you should try to improve to get optimal response times
  • pt-archiver, to efficiently purge data from huge tables
  • pt-table-checksum/pt-table-sync, to check if data on replicas is in sync with data on the master
  • pt-stalk, to gather data when performance problems happen randomly or are very short
  • pt-online-schema-change, to run ALTER TABLE statements on large tables without downtime

You can reserve your spot for this free MySQL webinar now. If you can’t attend you’ll receive a link to the recording and the slides after the webinar.

More info on “5 Percona Toolkit Tools for MySQL That Could Save Your Day”:

Percona Toolkit for MySQL is a must-have set of tools that can help serious MySQL administrators perform tasks that are common but difficult to do manually. Years of deployments by tens of thousands of users, including some of the best-known Internet sites, have proven the reliability of the Percona Toolkit tools for MySQL.

In this presentation, you will learn how you can use some of these tools to solve typical, real-world MySQL database administration challenges, such as:

  • Selecting which queries you should try to optimize to get better response times
  • How to efficiently purge data from a huge table without putting too much load on your server
  • How to check which tables are affected when someone accidentally wrote to a replica and fix the problem without rebuilding the tables
  • How to gather data for performance problems that happen randomly and last only a few seconds
  • How to run ALTER TABLE on your largest tables without downtime

REGISTER NOW for this MySQL webinar
DOWNLOAD Percona Toolkit for MySQL

The post 5 Percona Toolkit Tools for MySQL That Could Save Your Day: April 3 Webinar appeared first on MySQL Performance Blog.


Percona Live MySQL Conference and Expo 2013: It feels like 2007 again

I actually don’t remember exactly whether it was in 2006, 2007 or 2008 — but around that time the MySQL community had one of the greatest MySQL conferences put on by O’Reilly and MySQL. It was a good, stable, predictable time.

Shortly thereafter, the MySQL world saw acquisitions, forks, times of uncertainty, more acquisitions, more forks, rumors (“Oracle is going to kill MySQL and the whole Internet”) and just a lot of drama and politics.

And now, after all this time some 6 or 7 years later, it feels like a MySQL Renaissance. All of the major MySQL players are coming to the Percona Live MySQL Conference and Expo 2013. I am happy to see Oracle’s engineers coming with talks — and now with a great MySQL 5.6 release — and I have great confidence in MySQL’s future with Oracle’s commitment level, coupled with the fact that Percona Server and MariaDB are continuing to mature.

For this year’s conference I am going to give a tutorial titled “Percona XtraBackup: Old and New Features.” I actually traded my other talk, “MySQL 5.6 improvements from InnoDB internals perspective,” with Oracle’s Sunny Bains and his talk, “MySQL 5.6: What’s New in InnoDB.” Sunny honestly knows more about InnoDB internals than I do.

There are going to be a lot of great talks, and I would have a hard time listing all of the ones on my list to attend — but you can check all of them out for yourself. But I will say that the greatest part of the conference is not talks, but what happens in between them — in the conference center halls and behind the curtains.

If you are lucky you can grab engineers from Facebook, Twitter, Google, Box, Etsy, Pinterest, Tumblr,… and ask them any question and actually get an answer. You can even catch interesting rumors… and spread some of your own. Basically just be part of the MySQL crowd and really feel the great vibe (especially after a few drinks during the Community Networking Reception).

I am personally looking forward to another lengthy discussion with Dimitri Kravtchuk on how benchmarks should be run, what the next bottleneck in MySQL will be, and how we, with Percona Server, are going to beat MySQL in performance. And you can actually go and ask Sunny anything about InnoDB internals: He is one of the few engineers in the world who knows everything about InnoDB.

If you seriously work with MySQL, then you should not miss the Percona Live MySQL Conference and Expo 2013. It runs April 22-25 in Santa Clara, California. But just remember: Advanced Rate tickets end on March 24th, so make sure you register this week.

The post Percona Live MySQL Conference and Expo 2013: It feels like 2007 again appeared first on MySQL Performance Blog.


Testing the Virident FlashMAX II

Approximately 11 months ago, Vadim reported some test results from the Virident FlashMax 1400M, an MLC PCIe SSD device. Since that time, Virident has released the FlashMAX II, which promises both increased capacity and increased performance over the previous model. In this post, we present some benchmark results comparing this new model to its predecessor, and we find that indeed, the FlashMax II is a significant upgrade.

For reference, all of the FlashMax II benchmarks were performed on a Dell R710 with 192GB of RAM. This is a dual-socket Xeon E5-2660 machine with 16 physical and 32 virtual cores. (I had originally planned to use the Cisco UCS C250 that is often used for our test runs, but that machine ran into some unrelated hardware difficulties and was ultimately unavailable.) The operating system in use was CentOS 6.3, and the filesystem used for the test was XFS, mounted with the noatime and nodiratime options. The card was physically formatted back to factory default settings in between the synchronous and asynchronous test suites. Note that factory default settings for the FlashMax II will cause it to be formatted in “maxcapacity” mode rather than “maxperformance” mode (maxperformance reserves some additional space internally to provide better write performance). In “maxcapacity” mode, the device tested provides approximately 2200GB of space. In “maxperformance” mode, it’s a bit less than 1900GB.

Without further ado, then, here are the numbers.

First, asynchronous random writes:


There is a warmup period of around 18 minutes or so, and after about 45 minutes the performance stabilizes and remains effectively constant, as shown by the next graph.


Once the write performance reaches equilibrium, it does so at just under 780MiB/sec, which is approximately 40% higher than the 550MiB/sec exhibited by the FlashMax 1400M.

Asynchronous random read is up next:


The behavior of the FlashMax II is very similar to that of the FlashMax 1400M in terms of predictable performance; the standard deviation on the asynchronous random read throughput measurement is only 5.7MiB/sec. However, the overall read throughput is over 1000MiB/sec better with the FlashMax II: we see a read throughput of approximately 2580MiB/sec vs. 1450MiB/sec with the previous generation of hardware, an improvement of roughly 80%.

Finally, we take a look at synchronous random read.


At 256 threads, read throughput tops out at 2090MiB/sec, which is about 20% less than the asynchronous results; given the small bump in throughput going from 128 to 256 threads and the doubling of latency that was also introduced there, this is likely about as good as it is going to get.

For comparison, the FlashMax 1400M synchronous random read test stopped after 64 threads, reaching a synchronous random read throughput of 1345MiB/sec and a 95th-percentile latency of 1.49ms. With those same 64 threads, the FlashMax II reaches 1883MiB/sec with a 95th-percentile latency of 1.105ms: approximately 40% more throughput with 25% lower latency.

In every area tested, the FlashMax II outperforms the original FlashMax 1400M by a significant margin, and can be considered a worthy successor.

The post Testing the Virident FlashMAX II appeared first on MySQL Performance Blog.


MySQL 5.6 Compatible Percona XtraBackup 2.0.6 Released


Percona is glad to announce the release of Percona XtraBackup 2.0.6 for MySQL 5.6 on March 20, 2013. Downloads are available from our download site here and Percona Software Repositories.

This release is the current GA (Generally Available) stable release in the 2.0 series.

New Features:

  • XtraBackup 2.0.6 has implemented basic support for MySQL 5.6, Percona Server 5.6 and MariaDB 10.0. Basic support means that these versions are recognized by XtraBackup, and that backup/restore works as long as no 5.6-specific features are used (such as GTID, remote/transportable tablespaces, separate undo tablespace, 5.6-style buffer pool dump files).

Bugs Fixed:

  • Individual InnoDB tablespaces with size less than 1MB were extended to 1MB on the backup prepare operation. This led to a large increase in disk usage in cases when there are many small InnoDB tablespaces. Bug fixed #950334 (Daniel Frett, Alexey Kopytov).
  • Fixed the issue that caused databases corresponding to inaccessible datadir subdirectories to be ignored by XtraBackup without warning or error messages. This was happening because InnoDB code silently ignored datadir subdirectories it could not open. Bug fixed #664986 (Alexey Kopytov).
  • Under some circumstances XtraBackup could fail to copy a tablespace with a high --parallel option value and a low innodb_open_files value. Bug fixed #870119 (Alexey Kopytov).
  • Fix for the bug #711166 introduced a regression that caused individual partition backups to fail when used with --include option in innobackupex or the --tables option in xtrabackup. Bug fixed #1130627 (Alexey Kopytov).
  • innobackupex didn’t add the file-per-table setting for table-independent backups. Fixed by making XtraBackup auto-enable innodb_file_per_table when the --export option is used. Bug fixed #930062 (Alexey Kopytov).
  • Under some circumstances XtraBackup could fail on a backup prepare with innodb_flush_method=O_DIRECT. Bug fixed #1055547 (Alexey Kopytov).
  • innobackupex did not pass the --tmpdir option to the xtrabackup binary resulting in the server’s tmpdir always being used for temporary files. Bug fixed #1085099 (Alexey Kopytov).
  • XtraBackup for MySQL 5.6 has improved the error reporting for unrecognized server versions. Bug fixed #1087219 (Alexey Kopytov).
  • Fixed the missing rpm dependency for Perl Time::HiRes package that caused innobackupex to fail on minimal CentOS installations. Bug fixed #1121573 (Alexey Bychko).
  • innobackupex would fail when --no-lock and --rsync were used in conjunction. Bug fixed #1123335 (Sergei Glushchenko).
  • Fix for the bug #1055989 introduced a regression that caused xtrabackup_pid file to remain in the temporary dir after execution. Bug fixed #1114955 (Alexey Kopytov).
  • Unnecessary debug messages have been removed from the XtraBackup output. Bug fixed #1131084 (Alexey Kopytov).

Other bug fixes: bug fixed #1153334 (Alexey Kopytov), bug fixed #1098498 (Laurynas Biveinis), bug fixed #1132763 (Laurynas Biveinis), bug fixed #1142229 (Laurynas Biveinis), bug fixed #1130581 (Laurynas Biveinis).

Release notes with all the bugfixes for Percona XtraBackup 2.0.6 for MySQL 5.6 are available in our online documentation. Bugs can be reported on the launchpad bug tracker.

The post MySQL 5.6 Compatible Percona XtraBackup 2.0.6 Released appeared first on MySQL Performance Blog.


Percona MySQL University coming to Toronto this Friday!

Percona CEO Peter Zaitsev leads a track at the inaugural Percona MySQL University event in Raleigh, N.C. on Jan. 29, 2013.


Percona MySQL University, Toronto is taking place this Friday and I’m very excited about this event because it is a special opportunity to fit a phenomenal number of specific and focused MySQL technical talks all into one day, for free.

Over the course of the day we will cover some of the hottest topics in the MySQL space. There will be talks covering topics like MySQL 5.6, MySQL in the Cloud and High Availability for MySQL, as well as Percona XtraDB Cluster for MySQL. We have talks planned for nearly every MySQL developer and DBA, from anyone  just starting with MySQL all the way to those of you who are seasoned MySQL Experts.

In addition to the conference presentations, we are providing you the opportunity to spend 45 complimentary minutes one-on-one with a Percona MySQL consulting expert.  Do not delay in reserving your time with a Percona expert, as the number of available slots is very limited. Reserve your slot from the Percona MySQL University event registration page.

Finally, we’re going to have a raffle at the end of the day. The prizes include a ticket to next month’s Percona Live MySQL Conference and Expo 2013 – the largest event of the year in the MySQL ecosystem – as well as a full week of Percona Training, a Percona MySQL Support subscription, and more valuable prizes.

Percona MySQL University will be held at the FreshBooks offices at 35 Golden Avenue, Suite 105, Toronto, Ontario M6R 2J5. Directions, along with the day’s complete agenda and registration details, are on our Eventbrite page.

We hope you’ll join us and invite your friends and colleagues!

The post Percona MySQL University coming to Toronto this Friday! appeared first on MySQL Performance Blog.
