Mar 01, 2017

Open Source Databases on Big Machines: Disk Speed and innodb_io_capacity

In this blog post, I’ll look for the bottleneck that prevented the tests in my previous post from achieving better results.

The powerful machine I used in the tests in my previous post has a comparatively slow disk, and therefore I expected my tests would hit a point where I couldn’t increase performance further due to the disk speed.

Hardware configuration:

Processors: physical = 4, cores = 72, virtual = 144, hyperthreading = yes
Memory: 3.0T
Disk speed: about 3K IOPS
OS: CentOS 7.1.1503
File system: XFS

Versions tested and configuration: same as in the first post of this series (check the post for specifics).

Even though I expected my tests would stop increasing in performance due to the disk speed, I did not observe high IO rates in the iostat output. I had already tested with a full data set that fits in memory; in that case, write performance only affects data flushes and log writes, but we should still see a visible decrease in speed. So I decided to try the RW tests totally in memory: I created a ramdisk and put the MySQL datadir on it. Surprisingly, results on the SSD and the ramdisk did not differ.

I asked my colleagues from “Postgres Professional” to test PostgreSQL with the ramdisk. They got similar results.

It’s interesting that the value of innodb_io_capacity does not have any effect in this situation. Data for the graph below was taken when I ran the tests on the ramdisk. I wanted to see if I could use this variable to control the IO activity of a disk that is extremely fast by default.
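
As a rough illustration only (this is not the harness I used; the host, credentials, and the list of values below are placeholders), such a sweep could be scripted along these lines, assuming mysql-connector-python and a user allowed to change global variables:

# Hypothetical sketch: change innodb_io_capacity between benchmark runs and
# watch whether disk IO activity changes. Connection details are placeholders.
import mysql.connector  # pip install mysql-connector-python

conn = mysql.connector.connect(host="127.0.0.1", user="root", password="secret")
cur = conn.cursor()

# placeholder values, kept at or below innodb_io_capacity_max
for io_capacity in (200, 500, 1000, 2000):
    cur.execute("SET GLOBAL innodb_io_capacity = %s", (io_capacity,))
    cur.execute("SELECT @@GLOBAL.innodb_io_capacity")
    print("innodb_io_capacity is now", cur.fetchone()[0])
    # ... run the benchmark workload and collect iostat output here ...

cur.close()
conn.close()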

This lack of effect totally contradicts all my past experience with smaller machines. Percona re-purposed the machine with a faster disk (which I used before, described in this post), so I used a similar one with a slower disk.

Hardware configuration:

Processors: physical = 2, cores = 12, virtual = 24, hyperthreading = yes
Memory: 47.2G
Disk speed: about 3K IOPS
OS: Ubuntu 14.04.5 LTS (trusty)
File system: ext4

Again, in this case the innodb_io_capacity benchmarks on the machine with a smaller number of CPU cores showed more predictable results.

Conclusion:

Both MySQL and PostgreSQL on a machine with a large number of CPU cores hit CPU resource limits before disk speed can start affecting performance. We only tested one scenario, however. With other scenarios, the results might be different.


Oct 30, 2013

InnoDB adaptive flushing in MySQL 5.6: checkpoint age and io capacity

In MySQL 5.6, InnoDB has a dedicated thread (page_cleaner) that is responsible for performing flushing operations. The page_cleaner flushes dirty pages from the buffer pool based on two factors:
access pattern – the least recently used pages are flushed by the LRU flusher from the LRU_list when the buffer pool has no free pages left;
age – the oldest modified, not-yet-flushed pages are part of the flush_list structure and are flushed by the flush_list flusher based on several heuristics.

There is a good overview of the page_cleaner, and here you can also find some details about flushing in MySQL 5.6. Below I describe several additional aspects of flush_list flushing that have not really been covered yet.

flush_list flushing and checkpoint age

The number of aged pages that can be kept in the flush_list is limited by the combined size of the InnoDB log files. So the main purpose of flush_list flushing is to flush pages from this list at a rate that always leaves enough free space in the log files. On the other hand, too-aggressive flushing means less write combining and unnecessary load on the I/O subsystem, which in the end undoes the performance benefits of having larger redo logs. In MySQL 5.6, the number of pages to flush is calculated in the InnoDB adaptive flushing routine, based on the current checkpoint age, with the following formula:

 
percentage of the IO capacity that should be used for flushing =
        ((srv_max_io_capacity / srv_io_capacity) * (lsn_age_factor * sqrt(lsn_age_factor))) / 7.5;
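
Just to make the shape of that curve concrete, here is a small sketch (mine, not the R model mentioned below) that evaluates the formula for a few checkpoint-age factors, with arbitrary example settings innodb_io_capacity = 200 and innodb_io_capacity_max = 2000:

# Evaluate the MySQL 5.6 adaptive-flushing formula quoted above for a few
# example lsn_age_factor values (checkpoint age as a percentage of the
# async-flush limit). The io_capacity values are arbitrary examples.
from math import sqrt

srv_io_capacity = 200        # example innodb_io_capacity
srv_max_io_capacity = 2000   # example innodb_io_capacity_max

for lsn_age_factor in (10, 25, 50, 75, 100):
    pct = ((srv_max_io_capacity / srv_io_capacity)
           * (lsn_age_factor * sqrt(lsn_age_factor))) / 7.5
    print(f"lsn_age_factor={lsn_age_factor:3d} -> {pct:7.1f}% of innodb_io_capacity")

With these example settings the result passes 100% of innodb_io_capacity at an age factor below 20, which shows how steep the stock curve is.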

We modeled that formula in R and found that it is possible to improve it in such a way that the curve becomes flatter and, as a result, flushing becomes less aggressive. The new formula is enabled by default in Percona Server 5.6.


flush_list flushing and io_capacity

InnoDB provides two variables that allow control of the background flushing rate – innodb_io_capacity and innodb_io_capacity_max. There is quite a detailed description of these variables in the documentation. However, several things are not really covered there:

innodb_io_capacity_max is the most important variable in the case of adaptive flushing, as it is the only variable that actually limits the flushing rate. See the formula and charts above.

innodb_io_capacity is used to limit IO operations during insert buffer merging, and to limit flushing when the server is inactive or shutting down.
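
A simplified numeric model of that division of roles (consistent with the description above, but not the exact server code, and ignoring the dirty-page heuristic and averaging the page_cleaner also applies): the percentage produced by the adaptive formula is applied to innodb_io_capacity, and the resulting page count is capped, which is why innodb_io_capacity_max is the effective limit.

# Simplified model (not the exact server code) of why innodb_io_capacity_max
# bounds adaptive flushing. Example values: io_capacity=200, io_capacity_max=2000.
def pages_to_flush(pct, io_capacity=200, io_capacity_max=2000):
    n_pages = io_capacity * pct / 100.0   # pct comes from the adaptive formula above
    return min(n_pages, io_capacity_max)  # cap that makes io_capacity_max decisive

print(pages_to_flush(50))    # 100.0 pages: well below the cap
print(pages_to_flush(1333))  # 2000 pages: capped at innodb_io_capacity_max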

For practical needs, the above means the following:

– if the MySQL server is in an active state (serving user requests), you need to adjust innodb_io_capacity_max to increase/decrease the flushing rate (see the sketch after this list);
– if the MySQL server is idle or performing a shutdown, flushing of pages from the flush_list is limited by the innodb_io_capacity value only;

– if change_buffering is ON and the server is in an active state, insert buffer merging is allowed to use either 5% of innodb_io_capacity, or a rate varying from 5% to 55% if more than 50% of the insert buffer size is already used;
– if change_buffering is ON and the server is idle, merge operations can use 100% of innodb_io_capacity.
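
As a rough illustration of the first point (adjusting innodb_io_capacity_max on an active server), the sketch below raises the variable and samples the Innodb_buffer_pool_pages_flushed status counter to watch the resulting flushing rate. This is not from the original post; the connection details, the chosen value, and the sampling interval are placeholders.

# Hypothetical sketch: raise innodb_io_capacity_max on a busy server and watch
# the page flushing rate via the Innodb_buffer_pool_pages_flushed counter.
import time
import mysql.connector  # pip install mysql-connector-python

conn = mysql.connector.connect(host="127.0.0.1", user="root", password="secret")
cur = conn.cursor()

def pages_flushed():
    cur.execute("SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_flushed'")
    return int(cur.fetchone()[1])

cur.execute("SET GLOBAL innodb_io_capacity_max = 4000")  # placeholder value

before = pages_flushed()
time.sleep(60)  # placeholder sampling interval
after = pages_flushed()
print("flush rate:", (after - before) / 60.0, "pages/s")

cur.close()
conn.close()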

