Feb 09, 2010

Introducing percona-patches for 5.1

Our patches for 5.0 have attracted significant interest. You can read about SecondLife’s experience here, as well as what Flickr had to say on their blog. The main improvements are performance gains and better diagnostics (such as the enhanced slow log output and INDEX_STATISTICS).

Despite having many requests to port these patches to 5.1, we simply haven’t had the bandwidth, as our main focus has been on developing XtraDB and XtraBackup. Thankfully a customer (who prefers to stay unnamed) has stepped up and sponsored the work to move the patches to 5.1.

To refresh, the most interesting patches are:

Two new features that were not available for 5.0:

  • In the slow log, a stored procedure call now shows profiling for each individual query executed inside the procedure, not just the call to storproc() itself
  • With userstat you also get THREADS_STATISTICS, which shows similar information to USER/CLIENT_STATISTICS but at per-thread granularity (useful if you have a connection pool); see the example below
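
A minimal sketch of how this is used (the userstat_running variable name and the INFORMATION_SCHEMA.THREAD_STATISTICS table are assumptions based on the userstat patch and may differ slightly between patch versions):

  -- enable statistics collection (off by default)
  SET GLOBAL userstat_running = 1;
  -- per-thread counters, analogous to USER_STATISTICS / CLIENT_STATISTICS
  SELECT * FROM INFORMATION_SCHEMA.THREAD_STATISTICS;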

At this stage the patches are available only as source code; you can get them from Launchpad at https://code.launchpad.net/~percona-dev/percona-patches/5.1.43. Binaries are also on the way and will be ready soon: we are running intensive stress-testing loads on them to provide stable, quality packages.

And to finish, here are the results of a tpce-like benchmark, where I compare MySQL-5.1.43 vs percona-5.1.43.

The results were made with a TPCE configuration of 2000 customers, 300 trade days, and 16 concurrent users on our R900 server. The dataset is about 25GB and fully fits into the buffer_pool, so disk does not really matter, but the data was stored on a FusionIO 320GB MLC card.

The chart shows the number of TradeResults transactions per 10 sec during a 3600-second session (more is better):
[chart: tpce-like_2000c_300d]

As you can see, with the percona patches you get about a 10x improvement.
Yeah, that sounds too good to be true, so let me explain where the difference comes from.

As I mentioned in the tpce workload details, the load is very SELECT intensive, and these SELECTs are mainly scans by secondary keys (not primary keys), so it hits problems in the InnoDB rw-lock implementation and in buffer_pool mutex contention, which are already fixed in the percona patches (and in XtraDB and the InnoDB plugin as well).

So you are welcome to try it!


Entry posted by Vadim | 18 comments


Oct 14, 2009

Tuning for heavy writing workloads

On my previous post, there was a comment suggesting that I test Dimitri’s db_STRESS benchmark on XtraDB. So I tested and tuned for the benchmark, and I will show you the tunings. They should also serve as a tuning procedure for general heavy writing workloads.

First, <tuning peak performance>. Then, <tuning purge operation> to stabilize performance and avoid performance degradation.

<test condition>

Server:
PowerEdge R900, Four Quad Core E7320 Xeon, 2.13GHz, 32GB Memory, 16X2GB, 667MHz

db_STRESS:
32 sessions, RW=1, dbsize = 1000000, no thinktime

XtraDB: (mysql-5.1.39 + XtraDB-1.0.4-current)
innodb_io_capacity = 4000
innodb_support_xa = false
innodb_file_per_table = true
innodb_buffer_pool_size = 16G
innodb_read_io_threads = 8
innodb_write_io_threads = 8
innodb_flush_log_at_trx_commit = 2
innodb_log_buffer_size = 128M
innodb_log_file_size = 512M
innodb_log_files_in_group = 2
innodb_max_dirty_pages_pct = 90
innodb_flush_method = O_DIRECT
(the following are XtraDB-specific settings)
innodb_ibuf_active_contract = 1
innodb_adaptive_flushing = false
innodb_adaptive_checkpoint = estimate

<tuning peak performance>

First, tune the peak performance to use the CPU and I/O resources more effectively. Avoiding mutex/lock contention is the key to making good use of the CPU resources on a many-CPU machine.

[graph: purge_thread_test_1ST_TUNE]

This graph shows the peak performance in tps of db_STRESS.

With the current settings, “base” in the graph is the performance. We can roughly confirm the mutex/lock contention from the SEMAPHORES section of the SHOW INNODB STATUS output.
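
For reference, a minimal way to pull that output from the server:

  SHOW ENGINE INNODB STATUS\G

The waits listed under the SEMAPHORES heading point at the hottest mutexes and rw-locks; in this run the notable one was: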

“xx-lock on RW-latch at 0x7f2ff40a3dc0 created in file dict/dict0dict.c line 1627”

Looking at the source file, it is index->lock (and it may be the one for the HISTORY table). This is the lock on each index tree. We may be able to disperse the lock by using MySQL partitioning, so I added the following clause to the HISTORY table definition:

“PARTITION BY HASH(REF_OBJECT) PARTITIONS 16”
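
Applied to an existing table, that would look roughly like the following (a sketch only; the HISTORY table and the REF_OBJECT column come from the db_STRESS schema, and MySQL partitioning requires the partitioning column to be part of every unique key of the table):

  ALTER TABLE HISTORY
    PARTITION BY HASH(REF_OBJECT)
    PARTITIONS 16;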

Now the performance became “+partitioned” in the graph. Looking at the SEMAPHORES section again,

“has waited at handler/ha_innodb.cc line 7275 for 0.0000 seconds the semaphore:
X-lock on RW-latch at 0xd30320 created in file dict/dict0dict.c line 623”

may be the line that appears the most times (it is dict_operation_lock). This may be partition-specific lock contention. The current XtraDB has a variable to tune away that contention:

innodb_stats_update_need_lock = 0 (default 1)

It skips the statistics update that needs the lock (it only affects the “Data_free:” value of the table stats). With it, the performance became “+skip_stats” in the graph. Then, the next contention in the SEMAPHORES section is…

“Mutex at 0x1b3e3e78 created file trx/trx0rseg.c line 167”

may be the remarkable one (it is rseg->mutex). There is one such mutex per rollback segment, so we can increase the number of rollback segments to solve the contention problem. XtraDB can increase the rsegs:

innodb_extra_rsegments = 64 (affects the initialization of InnoDB)

I recreated the database files with this parameter, and the performance became “+rsegs64”. At last, the next contention may be “Mutex at 0x28ce8e0 created file srv/srv0srv.c line 982”. It is kernel_mutex, and currently we don’t have a proper solution for that. These settings seem to be enough for now.
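
Putting the peak-tuning steps together, the extra lines on top of the base configuration above would look roughly like this in my.cnf (a sketch only; innodb_extra_rsegments takes effect only when the InnoDB data files are created, so the database has to be reinitialized after setting it, and the HISTORY partitioning is applied with the ALTER TABLE above rather than in my.cnf):

  # skip the statistics update that takes dict_operation_lock
  innodb_stats_update_need_lock = 0
  # 64 extra rollback segments (requires recreating the InnoDB data files)
  innodb_extra_rsegments = 64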

<tuning purge operation>

Next, let's look at the sequential result over a longer term.

The next problem is the “History list length” growing to a huge size. The value is the number of entries in the rollback segments. The entries are used for consistent reads by older transactions, and an entry can be removed once no transaction refers to it anymore. This removal of entries is called “purge” in InnoDB. The purge operation needs to keep up, because a huge history list hurts performance.
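
As a reference point, the history list length can be sampled from the TRANSACTIONS section of the same status output (the number below is only illustrative):

  SHOW ENGINE INNODB STATUS\G
  -- in the TRANSACTIONS section, look for a line like:
  --   History list length 1234567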

Basically, the purging is done by the master_thread (the general background thread of InnoDB). A huge history list makes the purge operation slow, and it interferes with the other tasks of the master_thread (e.g. flushing dirty blocks, handling the insert buffer, etc.). Dimitri implemented a purge_thread devoted to purging, and XtraDB has a similar purge_thread. Though it seems to stabilize the throughput, it is still not enough for heavy update workloads: a single purge_thread on one CPU cannot keep up with updates coming from user threads on all of the other CPUs.

XtraDB can increase the number of purge threads starting from the next release.

innodb_use_purge_thread = 4

seems to be enough for this workload on the server.

The first of the following graphs is the sequential throughput [tps] up to 3500 sec.

[graph: purge_thread_test_TPS]

The next one tracks the “History list length” over the same period.

[graph: purge_thread_test_HIST_LENGTH]

"Norm 1.0.4": Normal InnoDB Plugin 1.0.4 without XtraDB specific options
"xtra p_t 0": XtraDB 1.0.4-new (no purge_thread)
"xtra p_t 1": XtraDB 1.0.4-new (single purge_thread similar to Dimitri's)
"xtra p_t 4": XtraDB 1.0.4-new (4 purge threads)

The graphs show…

  • The purge thread (> 0) helps to stabilize the throughput greatly.
  • Increasing the purge threads suppresses the strong growth of the history list
  • The adaptive checkpoint “estimate” needs the purge_thread more than adaptive_flushing does…

And the average tps over the last 300 seconds:

"Norm 1.0.4": 5725.47
"xtra p_t 0": 4699.33
"xtra p_t 1": 7130.3
"xtra p_t 4": 9118    (about 60%up from Normal Plugin 1.0.4)

In the end, faster and more stable performance on the db_STRESS benchmark is obtained with these XtraDB tunings.


Entry posted by Yasufumi | 9 comments

