Great work Innodb Team

I thought I should praise the Innodb team for all the work they have been doing recently. We see a lot of cool stuff happening, especially in the area of our interest, which is performance and scalability.

Innodb Plugin 1.0.4 had a lot of great performance improvements, and 1.0.5/1.0.6 go even further, addressing the long-standing caching issue of a large full table scan wiping out the cache. The group commit bug is also finally fixed now.

The Innodb Plugin documentation now seems to be merged with the MySQL documentation, which makes it much more usable, and the Innodb Plugin is shipped with MySQL 5.1, which makes it much easier to use and means wider community testing.

A number of patches suggested by the community, such as those from Google and Percona, were reworked and included in the plugin during recent months.

We can also see Innodb Plugin 1.0.5 is named a Release Candidate, which hopefully means it will soon be considered ready for general use – almost 2 years from the date of the initial public release, and probably 4 years since work started on Innodb Plugin features such as compression and online index creation.

Entry posted by peter

More on table_cache

In my previous post I looked into how a large table_cache can actually decrease performance. The "miss" path gets more expensive very quickly as the table cache grows, so if you're going to have a high miss ratio anyway, you're better off with a small table cache.

What I have not checked, though, is how the table_cache (or table_open_cache in newer versions) size affects the hit path.

I started with the same test as last time – I created 100,000 tables and read all of them once to make sure all table cache entries were populated. Then I tried repeatedly reading 1000 empty tables with a table_cache of 20000 and of 2000. With a table cache of 2000 I got about 16400 selects/sec; with a table cache of 20000, 13500 selects/sec. So there is some slowdown in the hit path as well, though it is not as large as in the miss path.

I also tested with a very small table cache of 5, which causes opening 1000 tables in the loop to generate misses. This way I got 8000 selects/sec – note this is basically the best case scenario for opening tables, as the MyISAM table headers being modified were fully cached in this case.

What does this tell us? If you can fit your tables in the table cache and eliminate (or almost eliminate) table cache misses, it is best to size table_cache large enough – even in the best case scenario for misses, a hit from a large table_cache is faster.

There are also possible optimizations for the "hit" path with a large table cache, though this should only affect someone who runs a really large number of simple queries – this is when opening a table can be a significant contributor to total query time.
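To make the comparison concrete, here is the arithmetic behind these numbers as a quick Python sketch (the rates are the ones measured above; per-operation costs are just their reciprocals):

```python
# Measured rates from the tests above (selects/sec).
hit_small_cache = 16400   # table_cache=2000, all hits
hit_large_cache = 13500   # table_cache=20000, all hits
miss_best_case = 8000     # table_cache=5, all misses, headers fully cached

def us_per_op(rate):
    """Convert selects/sec into microseconds per SELECT."""
    return 1e6 / rate

print(round(us_per_op(hit_small_cache)))  # ~61 us
print(round(us_per_op(hit_large_cache)))  # ~74 us
print(round(us_per_op(miss_best_case)))   # ~125 us

# Even the best-case miss costs more than a hit from an oversized cache,
# which is why sizing table_cache to eliminate misses still wins.
assert us_per_op(miss_best_case) > us_per_op(hit_large_cache) > us_per_op(hit_small_cache)
```

The gap would only widen in production, where a miss also pays for uncached table headers.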

Entry posted by peter


MySQL Performance Blog was down today

MySQL Performance Blog (and too) were down today because the switch in our rack died completely. It took a while to fix using a secondary switch we had. The provider was not willing to do it as remote hands, so I had to drive to the data center to fix it.

We got a number of calls and messages from customers and friends about the web site going down, so we probably have to invest in making the infrastructure more redundant – currently we were quite cheap, and a lot of servers have a single network card (so you can't use trunking to eliminate the switch as a single point of failure).

The customer case management systems were not affected by this outage.

Entry posted by peter

Paul McCullagh answers your questions about PBXT

Following on from our earlier announcement, Paul McCullagh has responded with the answers to your questions – as well as a few I gathered from other Percona folks, and attendees of OpenSQL Camp. Thank you Paul!

What’s the “ideal” use case for the PBXT engine, and how does it compare in performance?  When would I use PBXT instead of a storage engine like MyISAM, InnoDB or XtraDB?

Unfortunately it is not possible to point to a specific category of applications and say, "PBXT will be better here, so try it".  PBXT is a general purpose transactional storage engine, designed to perform well on a broad range of tasks, much like InnoDB.  However, PBXT's log-based architecture makes its performance characteristics different from both MyISAM and InnoDB/XtraDB. Tests show that PBXT's performance is similar to InnoDB but, depending on your database design and the application, it can be faster.

PBXT is a community project and, of course, we depend on users trying it out. In the long run, this will determine to what extent we are able to continue to develop and improve the engine.  So, despite this rather vague answer, we are hoping that more people try it out, and work with us to improve the engine as necessary.  My thanks to all who are already doing this! :)

I think I remember reports that PBXT (at an early stage) outperformed InnoDB with INSERTs and UPDATEs (but not SELECTs). That would make PBXT very interesting for non-SELECT-intensive applications (finance, production management, etc.) in my opinion.  Is this the case, and do you have any recent benchmarks available?

This is no longer necessarily the case. For example, a test by Mark Callaghan shows that PBXT can actually outperform InnoDB with SELECTs under some circumstances.

The implementation of full-durability has changed the performance characteristics of PBXT from “MyISAM-like” to more InnoDB-like. Originally PBXT was conceived as an engine that would be somewhere between MyISAM and InnoDB in both performance and features. The early version of PBXT was not fully durable (equivalent to innodb_flush_log_at_trx_commit=2).

A major change was completed at the beginning of last year with the implementation of full-durability. In doing this it was important to keep the log-based architecture which was the reason for the high write performance of earlier versions.

Traditional transactional implementations suffer from the problem that a backlog of asynchronous writes accumulates until it swamps the engine. There has been a lot of work on both InnoDB and XtraDB to solve this problem. The key words here are fuzzy and adaptive checkpointing (the former originally implemented by Heikki for InnoDB, and the latter an excellent addition to XtraDB).

Both methods improve the management of the asynchronous writes. The idea behind the log-based solution, on the other hand, is to avoid accumulating a backlog of asynchronous writes by writing synchronously.

Although write performance is comparable with InnoDB, I am not entirely convinced that PBXT’s implementation of the log-based I/O is optimal at this stage. This is ongoing work for PBXT 1.5.

Morgan notes: As well as Adaptive Checkpointing, Oracle has also been working on Adaptive Flushing for the InnoDB Plugin. The engine being ‘swamped’ problem that Paul is referring to is best described visually – see this post for more info.

What were the hard decisions or trade-offs that you had to make when designing PBXT?

If you read the white paper from 2006, you will notice that the original design was uncompromisingly MVCC-based.  Some of this has been changed to make PBXT more InnoDB-like, but other principles have remained.

Pure MVCC does not do any locking. Read locks are not required because each transaction effectively gets its own snapshot of the database. And write locks are not acquired when updating. Instead, the application can hit an "optimistic lock error" if a record is updated by another user.

Now PBXT does acquire locks for 2 reasons: to support SELECT FOR UPDATE, and to avoid optimistic locking errors. This makes PBXT’s behavior identical to InnoDB in REPEATABLE READ mode.

On the other hand, there are currently no plans to implement InnoDB style “gap locking”. Gap locking effectively involves locking rows that do not exist. This, in turn, means that PBXT transactions are not SERIALIZABLE.  A result of this is that statement-based replication is not supported by the engine.

Another hard decision was not to implement clustered indexes, which I discuss in more detail later.

How does online backup work in PBXT, and is incremental backup possible?

A recent version of PBXT (1.0.09) supports the MySQL Backup API which was originally implemented in MySQL 6.0. This feature is now scheduled for an upcoming version of MySQL 5.4.

The Backup API makes it possible to pull a consistent snapshot of an entire database even when tables use different engine types. The API does not yet support incremental backup, but this is planned.

Internally this feature is implemented by PBXT using an MVCC-based consistent snapshot.

Does PBXT have a maintenance thread like InnoDB’s main thread?

PBXT has several system threads that are responsible for various maintenance tasks. The most important of these are the "Writer", the "Sweeper" and the "Checkpointer":

  • The Writer transfers data from the transaction log to the database table files. This task runs constantly and is the main source of asynchronous I/O in the engine.
  • The Sweeper is a thread unique to the PBXT architecture. Its job is to clean up after a transaction. Basically it recognizes which record versions are no longer needed and deletes them from the database.
  • The Checkpointer flushes the data written by the Writer to disk. It is also responsible for writing consistent snapshots of the index files to disk. The frequency of checkpoints, as with other engines like InnoDB, determines the recovery time.

Does PBXT support clustered indexes?

No, currently it does not. This is one of the original design decisions (as raised by a previous question).  Two things contributed to this decision:

  • Supporting clustered indexes would have made the implementation of PBXT more complicated than I would have liked. My goal was to make PBXT as simple as possible, so that it is easy to maintain the code and add features.
  • I believe clustered indexes are becoming less relevant with the rise of Solid State technology. As random read access times decrease clustering of data will become less and less important.

What is the page size in PBXT, and can it be tuned?

PBXT uses 16K pages for the index data and (approximately) 32K pages for the table data. Both sizes can be set using compile time switches. However, if the index page size is changed, then the indices need to be rebuilt, which can be done by REPAIR TABLE.  The table data page size does not require a rebuild because a page of records in the table is just a group of records (not an actual fixed length page).

Are there any differences in the PBXT implementation of MVCC that might surprise experienced InnoDB DBAs?  Also – In MVCC does it keep the versions in indexes, and can PBXT use MVCC for index scans?

If you are using InnoDB in REPEATABLE READ mode, then there is essentially no difference in the isolation paradigm between the two engines.

REPEATABLE READ is often preferred over SERIALIZABLE mode because it allows a greater degree of concurrency while still providing the necessary transaction isolation. So I do not consider the lack of serializability a serious deficit in the engine. And fortunately, MySQL 5.1 supports row-based replication, which makes it possible to use replication while running in REPEATABLE READ.

PBXT does use MVCC to do index scans. Basically this means that all types of SELECTs can be done without locking.

Morgan notes: Indexes not using MVCC is one of the main differences in the Falcon storage engine.

PBXT supports row-level locking and foreign keys. Does this create any additional locking overhead that we should be aware of?

Firstly, PBXT does not acquire read locks. A normal SELECT does not lock at all. In addition, an UPDATE or DELETE only acquires a temporary row-lock. This lock is released when the row is updated or deleted because the MVCC system can detect that a row has been changed (and is therefore write locked).

This means that PBXT does not normally need to maintain long lists of row-level locks. This is also the case when a foreign key leads to cascading operations which can affect thousands of rows.

The only case you need to be aware of is SELECT FOR UPDATE. This operation acquires and holds a row-level lock for each row returned by the SELECT. These locks are all stored in RAM. The format is quite compact (especially when row IDs are consecutive) but this can become an issue if millions of rows are selected in this manner.

I should also mention that the consequence of this is that SELECT … LOCK IN SHARE MODE is currently not supported.

When I evaluate a storage engine my key acceptance criteria are things like backup, concurrency, ACID compliance and crash recovery. As a storage engine developer, what other criteria do you think I should be adding?

Yes, I think you have mentioned the most important criteria. What I can add to this list are 3 things that make developing a storage engine extremely demanding: performance, stability and data integrity.

Of course, as a DBA or database user these aspects are so basic that they are taken for granted.

But engine developers need to keep performance, stability and data integrity in mind constantly. The problem is, they compete with each other: increasing performance often causes instabilities that then have to be fixed. How to optimize the program without compromising data integrity is a constant question.

Relative to maintaining performance, stability and data integrity, adding features to an engine is easy. So I would say that these are the criteria that concern a developer the most.

MySQL supports a “pluggable storage engine API”, but it seems that not all the storage engine vendors are able to keep all their code at that layer (Infobright had to make major changes to MySQL itself).  What war stories can you report on in plugging into MySQL?

Unfortunately the “war” continues. I have already received several e-mails that PBXT does not compile with the recently released MySQL 5.1.41!

Any dot release can lead to this problem, and I think PBXT is fairly moderate with its integration into MySQL.

My main advantage: I have been able to avoid modifying any part of MySQL to make the engine work. This means that PBXT runs with the standard MySQL/MariaDB distribution.

But this has required quite a bit of creative work, in other words, hacks.

One of the main problems has been running into global locks when calling back into MySQL to do things like open a table, create a session structure (THD) or create a .frm file.

One extreme example of this is PBXT recovery. When MySQL calls the engine “init” method on startup it is holding the global LOCK_plugin lock. In init, the engine needs to do recovery. In PBXT’s case this means opening tables (reading a .frm file), which requires creating a THD. The code to create a THD in turn tries to acquire LOCK_plugin!

Unfortunately a thread hangs if it tries to acquire the same mutex twice, so this just does not work!

We went through quite a few iterations (the MySQL code was also changing during the development of 5.1) before we came up with the current solution: create a background thread to do recovery asynchronously. The thread can then wait for LOCK_plugin to be unlocked before it continues.

The effect is that the init function returns quickly, but the first queries that access PBXT tables may hang waiting for recovery to complete.
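The pattern Paul describes – deferring work that needs a lock the caller already holds to a background thread – can be sketched in Python (a language-neutral illustration only; the real code is C++ inside the plugin API, and all names here are made up):

```python
import threading

LOCK_plugin = threading.Lock()   # non-reentrant, like a pthread mutex
recovery_done = threading.Event()

def do_recovery():
    # Runs in the background: blocks until init has released the lock,
    # then takes it to "open tables" safely.
    with LOCK_plugin:
        recovery_done.set()

def engine_init():
    # MySQL calls init while holding LOCK_plugin. Calling do_recovery()
    # inline here would try to take LOCK_plugin a second time and hang.
    with LOCK_plugin:
        t = threading.Thread(target=do_recovery)
        t.start()            # defer instead of recursing into the lock
        # init returns quickly; recovery has not necessarily finished yet
    t.join()                 # only for this demo, so we can check the result

engine_init()
print(recovery_done.is_set())  # True

def query_pbxt_table():
    # First queries against PBXT tables would block here until recovery ends.
    recovery_done.wait()
```

The `t.join()` is just for the demo; in the real engine the init call returns and the first queries do the waiting, exactly as described above.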

PBXT seems to have very few configuration parameters.  Was this an intentional design decision, and do you see it creating opportunities for you in organizations with less internal IT-expertise?

No, this is not by design.

While I try to only add tuning parameters that are absolutely necessary, PBXT is not specifically designed to be self-tuning, because I believe that is a very hard problem to solve in general.

Tuning parameters are often added to an engine in response to performance problems in particular configurations. This is not necessarily a bad thing, because it provides DBAs with the tools they need.

My goal for PBXT in this regard is twofold:

  1. I hope that most installations can get away with setting a minimum of tuning parameters, in particular, just the cache values.
  2. On the other hand, I aim to provide expert tuning parameters for installations that need to extract maximum performance from the hardware. This work is ongoing.

Morgan notes: There are more in InnoDB/XtraDB now than there were three years ago. This is probably something that emerges over time as we get to understand more about an engine.

Entry posted by Morgan Tocker


Rare evil MySQL Bug

There is a rare bug which I run into every so often. The last time I saw it was about 3 years ago, on MySQL 4.1, and I hoped it had long since been fixed… but it looks like it has not. I now get to see MySQL 5.4.2 in this funny state.

When this bug happens, you will see the MySQL log flooded with error messages like this:

091119 23:03:34 [ERROR] Error in accept: Resource temporarily unavailable
091119 23:03:34 [ERROR] Error in accept: Resource temporarily unavailable
091119 23:03:34 [ERROR] Error in accept: Resource temporarily unavailable
091119 23:03:34 [ERROR] Error in accept: Resource temporarily unavailable

filling up disk space.

Depending on the case, you may be able to connect to MySQL through the Unix socket or TCP/IP, or neither.
It also looks like there is a correlation between having a lot of tables and this condition.

Previously I was unlucky enough to see this issue in production, so we had to restart MySQL quickly; currently I have a test MySQL server showing this behavior.

Here is what strace tells me:

[percona@test9 msb_5_4_2]$ strace -f -p 19229
Process 19229 attached with 23 threads – interrupt to quit
[pid 19286] rt_sigtimedwait([HUP QUIT ALRM TERM TSTP],
[pid 19285] futex(0x165962ec, FUTEX_WAIT_PRIVATE, 1, NULL
[pid 19284] select(0, NULL, NULL, NULL, {0, 765000}
[pid 19283] select(0, NULL, NULL, NULL, {0, 412000}
[pid 19248] futex(0x1781193c, FUTEX_WAIT_PRIVATE, 1, NULL
[pid 19247] futex(0x178118bc, FUTEX_WAIT_PRIVATE, 1, NULL
[pid 19246] futex(0x1781183c, FUTEX_WAIT_PRIVATE, 1, NULL
[pid 19245] futex(0x178117bc, FUTEX_WAIT_PRIVATE, 1, NULL
[pid 19244] futex(0x1781173c, FUTEX_WAIT_PRIVATE, 1, NULL
[pid 19232] futex(0x1781113c, FUTEX_WAIT_PRIVATE, 165, NULL
[pid 19229] accept(16392,
[pid 19241] futex(0x178115bc, FUTEX_WAIT_PRIVATE, 319, NULL
[pid 19243] futex(0x178116bc, FUTEX_WAIT_PRIVATE, 1, NULL
[pid 19242] futex(0x1781163c, FUTEX_WAIT_PRIVATE, 1, NULL
[pid 19240] futex(0x1781153c, FUTEX_WAIT_PRIVATE, 3, NULL
[pid 19239] futex(0x178114bc, FUTEX_WAIT_PRIVATE, 7, NULL
[pid 19238] futex(0x1781143c, FUTEX_WAIT_PRIVATE, 1, NULL
[pid 19237] futex(0x178113bc, FUTEX_WAIT_PRIVATE, 1, NULL
[pid 19236] futex(0x1781133c, FUTEX_WAIT_PRIVATE, 1, NULL
[pid 19235] futex(0x178112bc, FUTEX_WAIT_PRIVATE, 7, NULL
[pid 19234] futex(0x1781123c, FUTEX_WAIT_PRIVATE, 1, NULL
[pid 19233] futex(0x178111bc, FUTEX_WAIT_PRIVATE, 207, NULL
[pid 19231] futex(0x178110bc, FUTEX_WAIT_PRIVATE, 1, NULL
[pid 19229] <... accept resumed> 0x7fffffac70b0, [18423225245113516048]) = -1 EAGAIN (Resource temporarily unavailable)
[pid 19229] accept(16392, 0x7fffffac70b0, [18423225245113516048]) = -1 EAGAIN (Resource temporarily unavailable)
[pid 19229] accept(16392, 0x7fffffac70b0, [18423225245113516048]) = -1 EAGAIN (Resource temporarily unavailable)
[pid 19229] accept(16392, 0x7fffffac70b0, [18423225245113516048]) = -1 EAGAIN (Resource temporarily unavailable)
[pid 19229] fcntl(16392, F_SETFL, O_RDWR) = 0
[pid 19229] fcntl(16392, F_SETFL, O_RDWR) = 0
[pid 19229] select(16394, [1025 1040 1042 1044
8 9709 9714 9716 9717 15368 15369], NULL, NULL, NULL*** buffer overflow detected ***: strace terminated
======= Backtrace: =========

So as you can see, accept gets a pretty high socket number – probably because of the large innodb_open_files I tested with in this case – all of which can be used during recovery.

Note the accept() call on the socket fails with EAGAIN (which is not well described in the manual), and later there is a select() call with socket numbers above 16384. That does not seem to be a healthy number for select() to work with – fd_set is normally limited to FD_SETSIZE (typically 1024) descriptors, which matches the buffer overflow strace ran into – and it is quite possible something goes wrong because of it.
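The EAGAIN itself is easy to reproduce in isolation: a non-blocking listening socket whose queue is empty fails accept() immediately (a Python sketch; mysqld normally only calls accept() after select() reports the socket readable, which is what makes the flood above abnormal):

```python
import errno
import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))   # any free port
srv.listen(5)
srv.setblocking(False)       # mysqld's listening socket is non-blocking too

caught = False
try:
    srv.accept()             # no pending connection -> EAGAIN / EWOULDBLOCK
except BlockingIOError as e:
    caught = e.errno in (errno.EAGAIN, errno.EWOULDBLOCK)
finally:
    srv.close()

print(caught)  # True
```

In a healthy server this error path is hit rarely if ever; the bug is that mysqld spins on it without ever seeing the socket become readable.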

If you have any ideas about what could be going wrong in this case, let me know.

Entry posted by peter

How innodb_open_files affects performance

Recently I looked at table_cache sizing, which showed that a larger table cache does not always provide the best performance. So I decided to look at yet another similar variable – innodb_open_files, which defines how many files Innodb will keep open while working in innodb_file_per_table mode.

Unlike MyISAM, Innodb does not have to keep a file descriptor open when a table is open – an open table is a purely logical state, and the corresponding .ibd file may be open or closed. Furthermore, besides the MySQL table_cache, Innodb maintains its own cache (called the data dictionary) which keeps all tables ever accessed since server start – there is no variable to control its size, and it can take a significant amount of memory in some edge cases. The Percona patches, though, provide innodb_dict_size_limit to restrict growth of the data dictionary.

So I started with the same series of tests, creating 100,000 tables with a single integer column. The process of creating the tables took about 45 minutes, which is a lot more than MyISAM, and the total size on disk was 12GB in .ibd files plus some space allocated in the system tablespace. So if you create Innodb tables, you had better store some data in them, otherwise there will be a huge waste of space.

I used MySQL 5.4.2 for the tests, which should be as good as it gets in terms of optimizations in this space.

To keep the test aligned with my previous experiments I was running with table_open_cache=64, and tried innodb_open_files=64 and 16384.

Reading all 100,000 tables the first time after a MySQL restart takes about 500 seconds (so some 200 tables/sec) – on the first pass Innodb actually populates the data dictionary. The second time, the same operation takes about 25 seconds (4,000 tables/sec), which is quite a difference. As we can see, even when the table is fully in the Innodb data dictionary, the operation is slower than with MyISAM tables. Though the difference can be related to the size of the set of empty tables, which is about 10 times smaller for MyISAM.

I found no significant difference whatever the limit of open files was, which is not surprising, as the logical operation of opening a file is rather fast on a local file system – one can open/close a file hundreds of thousands of times per second.

To verify this I tried doing the "open table" test for only 10K out of the 100K tables – performance was about the same, taking 1 sec (on the second pass). Whether innodb_open_files was 64 (virtually all misses) or 16384 (all hits), performance was the same.

As I mentioned, the data dictionary can take a considerable amount of memory – in my case, after reading all tables I got "Dictionary memory allocated 392029720", which means a very simple single table takes about 4KB of space in the data dictionary. More complicated tables can take a bit more.
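The per-table figure is simple division over the status output above:

```python
dict_bytes = 392_029_720   # "Dictionary memory allocated" after reading all tables
tables = 100_000           # number of trivial single-column tables in the test

per_table = dict_bytes / tables
print(round(per_table))    # 3920 bytes, i.e. close to 4KB per trivial table
```

So a server tracking a million such tables would hold on the order of 4GB of dictionary memory, which is why the Percona innodb_dict_size_limit patch matters at that scale.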

So innodb_open_files does not affect performance much on reads – what about writes? I again tried a very simple test, inserting a row into each of the 100K tables. This test ran about 180 sec the first time and about 260 sec the second time (with innodb_flush_log_at_trx_commit=0), giving 550 and 380 inserts/sec respectively. Why was the second run slower and not faster? Because on the second run there were a lot of dirty pages in the Innodb buffer pool which had to be flushed before recycling. The first run, however, was done with a clean buffer pool (after reading all tables once).

Same as in the select case, I could not see any measurable difference between the two tested innodb_open_files values.

Finally I decided to test crash recovery – does it make any difference there?

The crash recovery in Innodb is nasty if you have a lot of tables:

091118 18:43:36 InnoDB: Database was not shut down normally!
InnoDB: Starting crash recovery.
InnoDB: Reading tablespace information from the .ibd files…
InnoDB: Restoring possible half-written data pages from the doublewrite
InnoDB: buffer…
InnoDB: Doing recovery: scanned up to log sequence number 12682768136
091118 18:47:44 InnoDB: Starting an apply batch of log records to the database…

If Innodb detects it was not shut down properly, it will scan all .ibd files, which took a bit over 4 minutes for 100K tables but which obviously can take a lot longer if there are more tables or they are less cached than in this case. This part of crash recovery does not depend on the number of log records which need to be applied, just on the number of tables.

With innodb_open_files=64 I got a bunch of warning messages during recovery:

091118 18:47:44 InnoDB: Warning: too many (67) files stay open while the maximum
InnoDB: allowed value would be 64.
InnoDB: You may need to raise the value of innodb_max_files_open in
InnoDB: my.cnf.
InnoDB: fil_sys open file LRU len 0
091118 18:47:44 InnoDB: Warning: too many (68) files stay open while the maximum
InnoDB: allowed value would be 64.
InnoDB: You may need to raise the value of innodb_max_files_open in
InnoDB: my.cnf

So we can see Innodb may wish to keep that many files open during the recovery stage, and it will open more files than allowed if needed.

Both scanning the open files and applying the logs took about 9 minutes in this setup. This number of course can change a lot depending on hardware, log file size, workload, and even when the crash happens (how many unflushed changes there were).

Repeating the test with innodb_open_files=16384 I got about the same crash recovery speed, though with no warnings.

So it looks like the default innodb_open_files=300 is not that large a liability even with a large number of tables, and you can also safely increase this number if you like – there are no surprises such as sudden slowdowns when replacing open files in the list. I guess Heikki knows how to implement an LRU after all :)

Entry posted by peter

5.0.87-build20 Percona binaries

Dear Community,

We are pleased to present the 20th build of MySQL server with Percona patches.

Compared to the previous release, it has the following new features:

  • The build is based on MySQL-5.0.87
  • innodb_rw_lock.patch is ported from InnoDB Plugin 1.0.3
  • To be compatible with RedHat RPM repository, the naming scheme has changed to
    <rpm name>-<mysql version>-<percona build version>.<buildnumber>.<redhat version>.<architecture>.rpm



See the release notes for earlier changes.

Since build 20, the MySQL server with Percona patches is available in the Percona RPM repository via YUM. To make it work, add a file Percona.repo in /etc/yum.repos.d with the following content:


  [percona]
  name=CentOS-$releasever – Percona
  baseurl=$releasever/os/$basearch/
  gpgcheck=0

As usual, you can download the binaries and sources with the patches here.

There is also a Debian packages repository available. See the release page for configuration and usage guidance.

The Percona patches live on Launchpad, and you can report bugs to the Launchpad bug system. The documentation is available on our Wiki.

For general questions use our Percona-discussions group, and for development questions the Percona-dev group.

For support, commercial and sponsorship inquiries contact Percona.


Entry posted by Aleksandr Kuzminsky


table_cache negative scalability

A couple of months ago there was a post by FreshBooks on getting great performance improvements by lowering the table_cache variable. So I decided to investigate what is really happening here.

The "common sense" approach to tuning caches is to make them as large as you can if you have enough resources (such as memory). With MySQL, however, common sense does not always work – we've seen performance issues with a large query_cache_size, and sort_buffer_size and read_buffer_size may not give you better performance if you increase them. I found this also applies to some other buffers.

Even with this previous experience of surprising behavior, I did not expect such a table_cache issue – LRU for cache management is a classic problem, and there are scalable algorithms to deal with it. I would expect Monty to have implemented one of them.
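For illustration (this is not MySQL's actual implementation), an LRU whose hit and miss paths both cost O(1), independent of cache size, can be sketched in a few lines of Python:

```python
from collections import OrderedDict

class LRUCache:
    """Toy LRU: both the hit and the miss path are O(1),
    no matter how large the cache is."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()   # insertion order == recency order

    def get(self, key, open_table):
        if key in self.entries:
            self.entries.move_to_end(key)     # hit: O(1) relink to MRU end
            return self.entries[key]
        value = open_table(key)               # miss: "open" the table
        self.entries[key] = value
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # evict least recently used
        return value

cache = LRUCache(2)
cache.get("t1", lambda k: k.upper())
cache.get("t2", lambda k: k.upper())
cache.get("t3", lambda k: k.upper())  # evicts t1
print(list(cache.entries))            # ['t2', 't3']
```

If the server's cache behaved like this, growing it would never slow the hit path down; the measurements below show that it does.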

To do the test, I created 100,000 empty tables containing a single integer column and no indexes, and then ran SELECT * FROM tableN in a loop. Each table in this case is accessed only once, and on any but the first run each access requires a table replacement in the table cache based on LRU logic.
MySQL Sandbox helped me to test this with different servers easily.
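A sketch of that test harness (Python for illustration; the real runs used MySQL Sandbox – the table count and the single-int-column shape come from the post, while the identifier names here are made up):

```python
NUM_TABLES = 100_000   # as in the test; lower it for a quick local try

def create_statements(n=NUM_TABLES):
    # one empty MyISAM table per statement: single integer column, no indexes
    for i in range(1, n + 1):
        yield f"CREATE TABLE t{i} (c INT) ENGINE=MyISAM;"

def scan_statements(n=NUM_TABLES):
    # every table is touched exactly once per pass, so with a cache smaller
    # than n each open is an LRU replacement -- the worst case measured below
    for i in range(1, n + 1):
        yield f"SELECT * FROM t{i};"

print(next(scan_statements(3)))  # SELECT * FROM t1;
```

Feeding each generator's output into a mysql client against sandboxes of the different server versions reproduces the runs described next.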

I did test on CentOS 5.3, Xeon E5405, 16GB RAM and EXT3 file system on the SATA hard drive.

MySQL 5.0.85 created the 100,000 tables in around 3 min 40 sec, which is about 450 tables/sec – this indicates "fsync" is lying on this test system, as the default sync_frm option is used.

With the default table_cache=64, accessing all the tables takes 12 sec, which is almost 8500 tables/sec – a great speed. We can note significant writes to disk during this read-only benchmark. Why? Because for MyISAM tables the table header has to be modified each time the table is opened. In this case performance was so great because the data for all 100,000 tables (the first block of the index) was placed close together on disk as well as fully cached, which made the updates to the headers very fast. On production systems with table headers not in the OS cache you will often see significantly lower numbers – 100 opens/sec or less.

With a significantly larger table_cache=16384 (and an appropriately adjusted number of open files), the same operation takes 660 seconds, which is 151 tables/sec – around 50 times slower. Wow. This is the slowdown. We can see the load becomes very CPU bound in this case, and it looks like some of the table_cache algorithms do not scale well.

The absolute numbers are also very interesting – 151 tables/sec is not that bad as an absolute number. So if tuning the table cache is the "normal" case for you, and you are able to bring your miss rate (opened_tables) down to 10/sec or less by using a large table_cache, you should do so. However, if you have so many tables that you still see 100+ misses/sec while your data (at least the table headers) is well cached – so the cost of a table cache miss is not very high – you may be better off with a significantly reduced table cache size.

The next step for me was to see if the problem is fixed in MySQL 5.1 – in this version the table cache was significantly redone and split into table_open_cache and table_definition_cache, so I assumed the behavior might be different as well.

MySQL 5.1.40
I started testing with the default table_open_cache=64 and table_definition_cache=256 – the read took about 12 seconds, very close to MySQL 5.0.85.
As I increased table_definition_cache to 16384 the result remained the same, so this variable is not causing the bottleneck. However, increasing table_open_cache to 16384 causes the scan to take about 780 sec, which is a bit worse than MySQL 5.0.85. So the problem is not fixed in MySQL 5.1; let's see how MySQL 5.4 behaves.

MySQL 5.4.2
MySQL 5.4.2 has a higher default table_open_cache, so I took it down to 64 so we can compare apples to apples. It performs the same as MySQL 5.0 and MySQL 5.1 with a small table cache.
With table_open_cache increased to 16384 the test took 750 seconds, so the problem exists in MySQL 5.4 as well.

So the problem is real, and it is not fixed even in the performance-focused MySQL 5.4. As we can see, large table_cache (or table_open_cache) values can indeed cause significant performance problems. Interestingly enough, Innodb has a very similar task of managing its own cache of file descriptors (set by innodb_open_files). As time allows, I should test whether Heikki knows how to implement an LRU properly so it does not have problems with large numbers. We'll see.

Entry posted by peter


Chicago & Atlanta – the last two training locations of the year

A final update on training for this year – registration for the InnoDB/XtraDB training in Chicago (8th December) and Atlanta (10th December) is now open.

While in Atlanta I’ll be giving a talk at the Atlanta PHP User Group on Optimizing MySQL Performance (details to be posted to their website shortly).  If you’re in Chicago and would like me to speak at your group on 7th-8th December, let me know!

Entry posted by Morgan Tocker


Interviews for InfiniDB and TokuDB are next

I forwarded on a list of questions about PBXT to Paul McCullagh today.  While Paul’s busy answering them, I’d like to announce that Robert Dempsey (InfiniDB storage engine) and Bradley C. Kuszmaul (TokuDB storage engine) have also accepted an interview. If you have any questions about either storage engine, please post them here by Friday 20th November.

Entry posted by Morgan Tocker
