Jan 29, 2014
--

Percona Server: Thread Pool Improvements for Transactional Workloads

In a previous thread pool post, I mentioned that in Percona Server we use an open source implementation of MariaDB’s thread pool and have enhanced it further. Below I would like to describe some of these improvements for transactional workloads.

When we were evaluating MariaDB’s thread pool implementation, we observed that it improves scalability for AUTOCOMMIT statements. However, it does not scale well with multi-statement transactions. The UPDATE_NO_KEY test, run both as an AUTOCOMMIT statement and inside a transaction, gave the following results:

[Figure: UPDATE_NO_KEY throughput with MariaDB thread pool, AUTOCOMMIT vs. in-transaction]

After analysis, we identified the major cause of that inefficiency: high latency between the individual statements of a transaction. This looks quite similar to the case where transactions are executed in parallel without a thread pool – latency is high there as well, though the cause of the high latency differs between the two cases.

  • In the “one-thread-per-connection” case, with 1000 connections, the higher latency is caused by increased contention on shared MySQL server resources (structures, locks, etc.).
  • In the “pool-of-threads” case, 1000 client connections are organized into thread_pool_size groups (or, to be more specific, into thread_pool_size queues). Latency here does not come from contention, as there is a much smaller number of parallel threads; it comes from the execution order of the individual statements of a transaction. Suppose you have 100 identical transactions (each with 4 statements) in a thread group queue. As transactions are processed and executed sequentially, the statements of transaction T1 will land at positions 1…101…201…301…401 in the thread pool queue. So with a uniform workload, the distance between consecutive statements of a transaction is 100. This way transaction T1 may hold server resources while all the statements queued between positions 1 and 401 execute (the toy simulation below reproduces this arithmetic), and that has a negative impact on performance.

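The following toy simulation (plain C++, not Percona Server code) reproduces that arithmetic: 100 clients each submit a transaction of 4 statements plus a COMMIT to a single FIFO thread-group queue, and whenever an event finishes, the client’s next event is appended at the tail, behind every other connection’s pending work.

#include <cstdio>
#include <deque>
#include <utility>

int main() {
    const int kTxns = 100, kEventsPerTxn = 5;  // 4 statements + COMMIT
    std::deque<std::pair<int, int>> queue;     // {transaction id, event number}
    for (int t = 1; t <= kTxns; ++t) queue.push_back({t, 1});

    int position = 0;                          // global execution order
    while (!queue.empty()) {
        auto [txn, event] = queue.front();
        queue.pop_front();
        ++position;
        if (txn == 1)
            std::printf("T1 event %d runs at position %d\n", event, position);
        if (event < kEventsPerTxn)             // the client sends its next statement,
            queue.push_back({txn, event + 1}); // which lands at the queue tail
    }
}

It prints positions 1, 101, 201, 301 and 401: T1 stays open from position 1 until its COMMIT at position 401, holding its locks and read view while hundreds of other statements execute in between.
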
In an ideal world, the number of concurrent transactions would not matter, as long as we keep the number of concurrently executing statements sufficiently low. Reality is different, though. An open transaction that is not currently executing a statement may still block other connections by holding metadata or row-level locks. On top of that, any MVCC implementation must examine the state of all open transactions, and thus may perform less efficiently with a large number of them (we blogged about InnoDB-specific problems here and here).
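
To see why, consider a minimal read-view-style visibility check (an illustrative C++ sketch of the general MVCC idea; the field names are assumptions loosely modeled on InnoDB, not its actual structures). Building such a snapshot requires copying the ids of all open transactions, and lookups in the in-between range must search that list, so both operations get slower as open transactions accumulate.

#include <algorithm>
#include <cstdint>
#include <vector>

struct ReadView {
    std::uint64_t up_limit_id;   // smallest id still active at snapshot time
    std::uint64_t low_limit_id;  // smallest id not yet assigned at snapshot time
    std::vector<std::uint64_t> active_ids;  // transactions open at snapshot time, sorted

    bool is_visible(std::uint64_t trx_id) const {
        if (trx_id < up_limit_id) return true;     // committed before the snapshot
        if (trx_id >= low_limit_id) return false;  // started after the snapshot
        // In between, we must consult the list of open transactions, which had
        // to be copied from the global transaction list when the view was built.
        return !std::binary_search(active_ids.begin(), active_ids.end(), trx_id);
    }
};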

In order to help transactions execute as quickly as possible, we introduced high and low priority queues in the thread pool. With the default thread pool settings, we now check every incoming statement: if it belongs to an already started transaction, it is placed in the high priority queue; otherwise it goes into the low priority queue.
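
Conceptually, the dispatch looks like the sketch below (a simplified illustration; names and structure are made up, and this is not the actual Percona Server implementation). A worker thread always drains the high priority queue before picking up new work, so started transactions reach their COMMIT quickly and release their resources.

#include <deque>
#include <optional>

struct ThreadGroupQueues {
    std::deque<int> high_prio;  // statements of already started transactions
    std::deque<int> low_prio;   // statements that would start a new transaction

    void enqueue(int event_id, bool in_transaction) {
        if (in_transaction) high_prio.push_back(event_id);
        else                low_prio.push_back(event_id);
    }

    std::optional<int> next() {  // called by a worker thread
        std::deque<int>& q = !high_prio.empty() ? high_prio : low_prio;
        if (q.empty()) return std::nullopt;
        int event = q.front();
        q.pop_front();
        return event;
    }
};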

[Figure: statement reordering with the high priority queue]

This reordering notably reduces latency and results in very good scalability up to a very high number of threads. You can find details about this optimization in our documentation.

Now let’s check how these changes affect the workload mentioned earlier in this article.

The next graphs show the results of the UPDATE_NO_KEY test, run as an AUTOCOMMIT statement and inside a transaction, for three configurations: MariaDB with thread_pool, Percona Server with thread_pool_high_prio_mode=statements (which behaves very much like MariaDB’s thread pool), and Percona Server with thread_pool_high_prio_mode=transactions (the optimization that reorders the statements of transactions in the thread pool queue).

[Figure: UPDATE_NO_KEY throughput, MariaDB thread pool vs. Percona Server thread_pool_high_prio_mode=statements/transactions]

This works even more efficiently on larger transactions like OLTP_RW from sysbench. See the graphs below for the same set of servers:

IO bound: sysbench dataset 32 tables/12M rows each (~100GB), InnoDB buffer pool=25GB
[Figure: OLTP_RW throughput, IO-bound workload]

As seen, we get nearly flat throughput with thread_pool_high_prio_mode=transactions even at very high numbers of user connections.


Apr 15, 2013
--

Memory allocators: MySQL performance improvements in Percona Server 5.5.30-30.2

In addition to the problem with the trx_list scan we discussed in Friday’s post, there is another issue in InnoDB transaction processing that notably affects MySQL performance: for every transaction, InnoDB creates a read view and allocates the memory for this structure from a heap. The problem is that the heap used for that allocation is destroyed on each commit, so the read view memory has to be reallocated for the next transaction.

There are two aspects of this problem:

1) memory allocation is a costly operation, and if the memory allocator has scalability problems (like the allocator from glibc), it will notably slow down transaction creation: many threads get stuck in glibc/kernel syscalls, which in turn results in contention on kernel_mutex (trx_sys->mutex in 5.6), as the memory allocation occurs under that mutex. See an example of such an issue. Related bugs: BUG#54982, BUG#49169.

2) the memory allocation for the read view structure is not a direct malloc() call, but goes through the InnoDB heap layer: InnoDB allocates a heap area and then creates the requested block(s) inside it. This optimization helps avoid fragmentation in the case of many small allocations and allows all blocks of a heap to be freed at once. But when memory is needed for only a single block, this two-layer approach is quite inefficient and in some cases can cause a notable MySQL performance drop, as the sketch below illustrates.
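
The toy model below illustrates both aspects at once (function names such as heap_create are made-up stand-ins loosely inspired by InnoDB’s heap layer, not the real API, and error/bounds checks are omitted): for every transaction, two allocator calls plus bookkeeping are paid while a hot mutex is held, only to obtain a single block that is thrown away again at commit.

#include <cstdlib>
#include <cstring>
#include <mutex>

std::mutex kernel_mutex;  // stand-in for kernel_mutex (trx_sys->mutex in 5.6)

struct MemHeap {          // minimal arena: one malloc'ed region, bump-allocated
    char*  base;
    size_t used, size;
};

MemHeap* heap_create(size_t size) {
    MemHeap* h = static_cast<MemHeap*>(std::malloc(sizeof(MemHeap)));
    h->base = static_cast<char*>(std::malloc(size));  // layer 1: the heap area
    h->used = 0;
    h->size = size;
    return h;
}

void* heap_alloc(MemHeap* h, size_t n) {  // layer 2: a block inside the area
    void* p = h->base + h->used;
    h->used += n;
    return p;
}

void heap_free(MemHeap* h) { std::free(h->base); std::free(h); }

void begin_transaction() {
    std::lock_guard<std::mutex> guard(kernel_mutex);  // aspect 1: allocation happens
    MemHeap* heap = heap_create(1024);                // while the global mutex is held
    void* read_view = heap_alloc(heap, 256);          // the only block ever needed
    std::memset(read_view, 0, 256);                   // ... snapshot open transactions ...
    heap_free(heap);   // in the real server this happens at commit; either way the
}                      // whole heap is recreated for the next transaction (aspect 2)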

Now, in Percona Server, we use a preallocated read view structure for each connection, reuse that memory for the entire connection lifetime, and free it at disconnect. If some transaction requires a larger amount of memory, we simply reallocate the structure to fit its needs.
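
A minimal sketch of that reuse scheme (again illustrative, not the actual Percona Server code):

#include <cstdlib>

struct Connection {
    void*  view_buf  = nullptr;  // preallocated read view memory, one per connection
    size_t view_size = 0;
};

void* get_read_view(Connection* conn, size_t needed) {
    if (conn->view_size < needed) {  // grow only when a transaction needs
        conn->view_buf  = std::realloc(conn->view_buf, needed);  // a larger snapshot
        conn->view_size = needed;
    }
    return conn->view_buf;           // reused by every transaction of the connection
}

void on_disconnect(Connection* conn) { std::free(conn->view_buf); }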

To demonstrate the difference, we ran the sysbench POINT_SELECT test with the glibc and jemalloc allocators.

[Figure: POINT_SELECT throughput (QPS), glibc vs. jemalloc]

Observations:

= MySQL 5.5.30
– the throughput of MySQL 5.5.30 with glibc is limited first of all by the inefficiency of transaction list handling (see our previous post), and also by the poor scalability of glibc malloc itself
– jemalloc fixes the malloc scalability issues for MySQL 5.5.30, but scanning the transaction list still causes a performance drop

= MySQL 5.6.10
– in autocommit mode, 5.6.10 has no problem with transaction list scanning (thanks to the read-only transactions optimization), but it still allocates/frees memory for the read view structure, which causes drops at high thread counts with glibc. jemalloc helps to solve that.

= Percona Server 5.5.30-30.2
– both issues are solved in our recent release, so there is almost no difference between the glibc and jemalloc runs


Mar 08, 2013
--

MySQL performance: Impact of memory allocators (Part 2)

Last time I wrote about memory allocators and how they can affect MySQL performance in general. This time I would like to explore the topic from a slightly different angle: what impact does the number of processor cores have on different memory allocators, and what difference does that make to MySQL performance?

Let me share a conclusion first: if you have a server with more than 8 cores, you should use something other than the default glibc memory allocator. We recommend jemalloc or tcmalloc.

In my tests I used a Dell R720 box (spec), CentOS 6.3, the upcoming Percona Server 5.5.30, and 3 allocators: stock glibc 2.13, jemalloc-3.1.0, and the latest tcmalloc from the svn repo. Regarding my choice of jemalloc version, see my notes at the end of this post.

The test box has 2x Intel E5 @ 2.2GHz with 8 physical cores per socket; 16 physical cores with hyper-threading enabled give us 32 vcpu in total. In my tests I didn’t see any notable difference between the allocators up to 4 vcpu, so the charts below highlight results from 4 to 32 vcpu.

As the test workload I used the same 2 sysbench tests, OLTP_RO and POINT_SELECT, that I used before.
Sysbench dataset: 16 tables, 5M rows each, uniform distribution.

The OLTP_RO test consists of 5 select queries: select_ranges, select_order_ranges, select_distinct_ranges, select_sum_ranges, and point_select. Processing these queries involves a notable amount of malloc()/free() operations, so allocator efficiency is the key factor in achieving high throughput in this test.


[Figure: OLTP_RO throughput by vcpu count and allocator]

Observations:

  • 4 vcpu – results are almost identical for all allocators (~2500tps)
  • 8 vcpu – results doubled (~5000tps) for jemalloc and tcmalloc, but with glibc malloc there is a drop at 64/128 threads to ~3500tps
  • 16vcpu – throughput increases further and is quite stable for jemalloc and tcmalloc up to 4096 threads (~6300tps); glibc again drops after 16 threads, to ~4000tps
  • 32vcpu – throughput for jemalloc and tcmalloc jumps to ~12500tps and stays at this level up to 1024 threads; tps then decreases slightly but still looks OK. For glibc, tps drops below what we observed for 8/16 vcpu (~3100tps).

So the difference between glibc and jemalloc/tcmalloc in the OLTP_RO test with 32 vcpu is ~4x.

POINT_SELECT is a very simple query – SELECT c FROM sbtest WHERE id=N. A workload built from this query generates a significant load and lets us check server behavior under very high pressure.

[Figure: POINT_SELECT throughput by vcpu count and allocator]

Observations:

  • 4 vcpu – again no difference between the allocators (~50,000qps)
  • 8 vcpu – with all allocators we got ~100,000qps. Results for jemalloc/tcmalloc are stable up to 4096 threads; for glibc malloc there is a decrease to ~80,000qps at 2048/4096 threads.
  • 16vcpu – with all allocators we got ~140,000qps: for jemalloc/tcmalloc up to 4096 threads, for glibc up to 512 threads, after which throughput decreases to ~100,000qps.
  • 32vcpu – with all allocators we got up to ~240,000qps. Then every allocator shows a drop in throughput, but at a different point and to a different level:
    – for glibc malloc the drop happens after 256 threads; qps falls below the level observed for 8/16 vcpu (~80,000qps)
    – for tcmalloc the drop happens after 1024 threads; at 2048 threads qps is very close to the 16vcpu results, and at 4096 threads qps is ~17,000
    – for jemalloc the drop happens after 1024 threads as well; at 2048 threads qps is very close to the 16vcpu results, and at 4096 threads qps is slightly better than the 4vcpu results (~60,000qps)

As you can see, with very high concurrency and a notable amount of small/medium allocations, jemalloc/tcmalloc give quite poor results – even worse than glibc. This is a very specific case where the overhead of the advanced techniques used in these allocators (techniques that normally speed up allocation, purge dirty pages, and minimize the impact of memory fragmentation) becomes so significant that it turns into a bottleneck for query processing. I believe both allocators can be tuned to handle such cases better, for instance by allocating more arenas, but that may notably increase the memory footprint.

Conclusion:
– if your box has 8 cores or less, there is almost no difference between glibc malloc and the alternative allocators
– if your box has more than 8 cores, you should try/evaluate the alternative allocators; they can notably boost your MySQL server at no cost. An alternative allocator is also a must if you run benchmarks on such a configuration; otherwise performance will be limited by glibc malloc rather than by MySQL itself.

A note regarding the jemalloc version used in my tests: I noticed a notable drop in MySQL performance starting with version 3.2.0 (see the raw results below), so I used jemalloc-3.1.0. I suppose that some of the changes in 3.2.0, for instance those concerning page run allocation and dirty page purging, may correlate with the decreasing performance in MySQL workloads.

# Test: POINT_SELECT:throughput, QPS
#
# Set 1 - 5.5.30pre-jemalloc-3.0.0
# Set 2 - 5.5.30pre-jemalloc-3.1.0
# Set 3 - 5.5.30pre-jemalloc-3.2.0
# Set 4 - 5.5.30pre-jemalloc-3.3.0
#
# Threads        Set 1     Set 2     Set 3     Set 4
       1024  236575.74 236862.59 211203.42 215098.20
       2048  154829.26 154348.16 135607.69 137162.29

