May 02, 2019

MySQL Memory Management, Memory Allocators and Operating System

When users experience memory usage issues with any software, including MySQL®, their first response is to think that it’s a symptom of a memory leak. As this story will show, this is not always the case.

This story is about a bug.

All Percona Support customers are eligible for bug fixes, but their options vary. For example, Advanced+ customers are offered a HotFix build prior to the public release of software with the patch. Premium customers do not even have to use Percona software: we may port our patches upstream for them. But for Percona products, all Support levels are entitled to a fix.

Even so, this does not mean we will fix every unexpected behavior, even if we accept that behavior to be a valid bug. One reason for such a decision might be that, while the behavior is clearly wrong, fixing it in Percona products would effectively be a feature request.

A bug as a case study

A good recent example of such a case is PS-5312. The bug is repeatable with upstream MySQL and is reported at bugs.mysql.com/95065.

This reports a situation whereby access to InnoDB fulltext indexes leads to growth in memory usage. It starts when someone queries a fulltext index, grows to a maximum, and the memory is not freed for quite a long time.

Yura Sorokin from the Percona Engineering Team investigated whether this is a memory leak and found that it is not.

When InnoDB resolves a fulltext query, it creates a memory heap in the function fts_query_phrase_search. This heap may grow up to 80MB. Additionally, it has a big number of blocks (mem_block_t) which are not always used continuously and this, in turn, leads to memory fragmentation.

In the function exit, the memory heap is freed. InnoDB does this for each of the allocated blocks. At the end of the function, it calls free(), which belongs to one of the memory allocator libraries, such as malloc or jemalloc. From the mysqld point of view, everything is done correctly: there is no memory leak.

However, while free() should release memory when called, it is not required to return it to the operating system. If the memory allocator decides that the same memory blocks will be required soon, it may keep them for the mysqld process. This explains why you might see that mysqld still uses a lot of memory after the job is finished and all de-allocations are done.

In practice this is not a big issue and should not cause any harm. But if you need the memory to be returned to the operating system more quickly, you could try alternative memory allocators, such as jemalloc. The latter was proven to resolve the issue in PS-5312.
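
If you want to try this, a minimal sketch of how to preload jemalloc for mysqld is shown below. The library path is an example and varies by distribution; both approaches require a restart of MySQL:

# Option A: let mysqld_safe preload the library (add to my.cnf)
[mysqld_safe]
malloc-lib = /usr/lib64/libjemalloc.so.1

# Option B: for a systemd-managed mysqld, add an override
# (sudo systemctl edit mysqld, then restart the service)
[Service]
Environment="LD_PRELOAD=/usr/lib64/libjemalloc.so.1"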

Another factor which improves memory management is the number of CPU cores: the more we used for the test, the faster the memory was returned to the operating system. This can probably be explained by the fact that with multiple CPUs, the memory allocator can dedicate one of them just to releasing memory to the operating system.
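
If you use jemalloc and want it to return memory to the operating system sooner, newer jemalloc releases (5.x) expose purging knobs through the MALLOC_CONF environment variable. This is only a sketch of the idea; the option names and defaults depend on the jemalloc version you install:

# jemalloc 5.x: purge unused dirty pages more aggressively
# background_thread:true - dedicated threads release memory to the OS
# dirty_decay_ms:10000   - discard unused dirty pages after ~10 seconds
export MALLOC_CONF="background_thread:true,dirty_decay_ms:10000"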

As our engineer Yura Sorokin found, the very first implementation of InnoDB fulltext indexes introduced this flaw.

Options to fix

We have a few options to fix this:

  1. Change the implementation of the InnoDB fulltext index
  2. Use a custom memory library such as jemalloc

Both have their advantages and disadvantages.

Option 1 means we would introduce an incompatibility with upstream, which may lead to strange bugs in future versions. It would also mean a full rewrite of the InnoDB fulltext code, which is always risky in GA versions used by our customers.

Option 2 means we may hit flaws in the jemalloc library, which is designed for performance rather than for the safest possible memory allocation.

So we have to choose between two less-than-ideal solutions.

Since option 1 may lead to a situation where Percona Server becomes incompatible with upstream, we prefer option 2 and look forward to an upstream fix of this bug.

Conclusion

If you are seeing high memory usage by the mysqld process, it is not always a symptom of a memory leak. You can use the memory instrumentation in Performance Schema to find out how allocated memory is used. Try alternative memory libraries for better handling of allocations and freeing of memory. Search the user manual for LD_PRELOAD to find out how to set it up.
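
For example, you can ask Performance Schema which instrumented consumers currently hold the most memory. A minimal sketch, assuming the relevant memory instruments are enabled in performance_schema.setup_instruments (connection options are placeholders):

# Top 10 current memory consumers according to Performance Schema
mysql -u root -p -e "
  SELECT event_name,
         ROUND(current_number_of_bytes_used/1024/1024, 1) AS current_mb
  FROM performance_schema.memory_summary_global_by_event_name
  ORDER BY current_number_of_bytes_used DESC
  LIMIT 10;"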

Jul 03, 2018

Linux OS Tuning for MySQL Database Performance

In this post we will review the most important Linux settings to adjust for performance tuning and optimization of a MySQL database server. We’ll note how some of the Linux parameter settings used for OS tuning may vary according to different system types: physical, virtual, or cloud. Other posts have addressed MySQL parameters, like Alexander’s blog MySQL 5.7 Performance Tuning Immediately After Installation. That post remains highly relevant for the latest versions of MySQL, 5.7 and 8.0. Here we will focus more on the Linux operating system parameters that can affect database performance.

Server and Operating System

Here are some Linux parameters that you should check and consider modifying if you need to improve database performance.

Kernel – vm.swappiness

The value represents the tendency of the kernel to swap out memory pages. On a database server with ample amounts of RAM, we should keep this value as low as possible: the extra I/O caused by swapping can slow down or even render the service unresponsive. A value of 0 disables swapping completely, while 1 causes the kernel to perform the minimum amount of swapping. In most cases the latter setting should be OK:

# Set the swappiness value as root
echo 1 > /proc/sys/vm/swappiness
# Alternatively, using sysctl
sysctl -w vm.swappiness=1
# Verify the change
cat /proc/sys/vm/swappiness
1
# Alternatively, using sysctl
sysctl vm.swappiness
vm.swappiness = 1

The change should also be persisted in /etc/sysctl.conf:

vm.swappiness = 1
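
To apply the persisted value without a reboot, reload the configuration (a small sketch; on systems that use /etc/sysctl.d/ drop-in files the file layout differs):

# Reload kernel parameters from /etc/sysctl.conf
sudo sysctl -p
# Or reload every configured location on systemd-based systems
sudo sysctl --system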

Filesystems – XFS/ext4/ZFS
XFS

XFS is a high-performance, journaling file system designed for high scalability. It provides near-native I/O performance even when the file system spans multiple storage devices. XFS has features that make it suitable for very large file systems, supporting files up to 8EiB in size. It offers fast recovery, fast transactions, delayed allocation for reduced fragmentation, and near-raw I/O performance with Direct I/O.

The default options for mkfs.xfs are good for optimal speed, so the simple command:

# Use default mkfs options
mkfs.xfs /dev/target_volume

will provide the best performance while ensuring data safety. Regarding mount options, the defaults should fit most cases. On some filesystems you can see a performance increase by adding the noatime mount option to /etc/fstab. For XFS filesystems the default atime behaviour is relatime, which has almost no overhead compared to noatime and still maintains sane atime values. If you create an XFS file system on a LUN that has a battery-backed, non-volatile cache, you can further increase the performance of the filesystem by disabling the write barrier with the mount option nobarrier. This helps you to avoid flushing data more often than necessary. If a BBU (backup battery unit) is not present, however, or you are unsure about it, leave barriers on, otherwise you may jeopardize data consistency. With these options on, an /etc/fstab file should look like the one below:

/dev/sda2              /datastore              xfs     defaults,nobarrier
/dev/sdb2              /binlog                 xfs     defaults,nobarrier
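
After editing /etc/fstab you can check which options are actually in effect for a mount point. A small sketch using the example mount point above; note that some options, such as barrier settings, may only take effect after an unmount/mount or a reboot:

# Show the active mount options for the data volume
findmnt /datastore
# Or, equivalently
mount | grep datastore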

ext4

ext4 has been developed as the successor to ext3 with added performance improvements. It is a solid option that will fit most workloads. We should note here that it supports files up to 16TB in size, a smaller limit than xfs. This is something you should consider if extreme table space size/growth is a requirement. Regarding mount options, the same considerations apply. We recommend the defaults for a robust filesystem without risks to data consistency. However, if an enterprise storage controller with a BBU cache is present, the following mount options will provide the best performance:

/dev/sda2              /datastore              ext4     noatime,data=writeback,barrier=0,nobh,errors=remount-ro
/dev/sdb2              /binlog                 ext4     noatime,data=writeback,barrier=0,nobh,errors=remount-ro

Note: The data=writeback option results in only metadata being journaled, not actual file data. This carries the risk of corrupting recently modified files in the event of a sudden power loss, a risk which is minimised by the presence of a BBU-enabled controller. nobh only works with the data=writeback option enabled.

ZFS

ZFS is a combined filesystem and volume manager: an enterprise storage solution with extended protection against data corruption. There are certainly cases where the rich feature set of ZFS makes it an essential option to consider, most notably when advanced volume management is a requirement. ZFS tuning for MySQL can be a complex topic and falls outside the scope of this blog. For further reference, there is a dedicated blog post on the subject by Yves Trudeau.

Disk Subsystem – I/O scheduler 

Most modern Linux distributions come with the noop or deadline I/O scheduler by default, both providing better performance than the cfq and anticipatory ones. However, it is always good practice to check the scheduler for each device, and if the value shown is different from noop or deadline, the policy can be changed without rebooting the server:

# View the I/O scheduler setting. The value in square brackets shows the running scheduler
cat /sys/block/sdb/queue/scheduler
noop deadline [cfq]
# Change the setting (the redirection must run with root privileges, so use tee)
echo noop | sudo tee /sys/block/sdb/queue/scheduler

To make the change persistent, you must modify the GRUB configuration file:

# Change the line:
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
# to:
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash elevator=noop"
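
After editing /etc/default/grub, the bootloader configuration must be regenerated for the elevator setting to take effect on the next boot. A sketch; the exact command depends on the distribution:

# Debian/Ubuntu
sudo update-grub
# RHEL/CentOS 7
sudo grub2-mkconfig -o /boot/grub2/grub.cfg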

AWS Note: There are cases where the I/O scheduler has a value of none, most notably in AWS VM instance types where EBS volumes are exposed as NVMe block devices. This is because the setting has no use in modern PCIe/NVMe devices: they have a very large internal queue and they bypass the I/O scheduler altogether. The setting in this case is none, and it is optimal for such disks.
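
A quick way to confirm this on an NVMe-backed instance (the device name is an example, and the list of available schedulers varies by kernel version):

# The active scheduler is shown in square brackets; on NVMe devices it is typically none
cat /sys/block/nvme0n1/queue/scheduler
[none] mq-deadline kyber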

Disk Subsystem – Volume optimization

Ideally different disk volumes should be used for the OS installation, binlog, data and the redo log, if this is possible. The separation of OS and data partitions, not just logically but physically, will improve database performance. The RAID level can also have an impact: RAID-5 should be avoided as the checksum needed to ensure integrity is costly. The best performance without making compromises to redundancy is achieved by the use of an advanced controller with a battery-backed cache unit and preferably RAID-10 volumes spanned across multiple disks.

AWS Note: For further information about EBS volumes and AWS storage optimisation, Amazon has documentation at the following links:

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/nvme-ebs-volumes.html

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/storage-optimized-instances.html

Database settings

System Architecture – NUMA settings

Non-uniform memory access (NUMA) is a memory design where a processor in an SMP system can access its own local memory faster than non-local memory (memory local to another CPU). This may result in suboptimal database performance and potentially swapping. When the buffer pool memory allocation is larger than the amount of RAM available locally to the node, and the default memory allocation policy is selected, swapping occurs. A NUMA-enabled server will report different node distances between CPU nodes. A uniform one will report a single distance:

# NUMA system
numactl --hardware
available: 4 nodes (0-3)
node 0 cpus: 0 1 2 3 4 5 6 7
node 0 size: 65525 MB
node 0 free: 296 MB
node 1 cpus: 8 9 10 11 12 13 14 15
node 1 size: 65536 MB
node 1 free: 9538 MB
node 2 cpus: 16 17 18 19 20 21 22 23
node 2 size: 65536 MB
node 2 free: 12701 MB
node 3 cpus: 24 25 26 27 28 29 30 31
node 3 size: 65535 MB
node 3 free: 7166 MB
node distances:
node   0   1   2   3
  0:  10  20  20  20
  1:  20  10  20  20
  2:  20  20  10  20
  3:  20  20  20  10
# Uniformed system
numactl --hardware
available: 1 nodes (0)
node 0 cpus: 0 1 2 3 4 5 6 7
node 0 size: 64509 MB
node 0 free: 4870 MB
node distances:
node   0
  0:  10

In the case of a NUMA system, where numactl shows different distances across nodes, the MySQL variable innodb_numa_interleave should be enabled to ensure memory interleaving. Percona Server provides improved NUMA support by introducing the flush_caches variable. When enabled, it will help with allocation fairness across nodes. To determine whether or not allocation is equal across nodes, you can examine numa_maps for the mysqld process with this script:

# The perl script numa_maps.pl will report memory allocation per CPU node:
# 3595 is the pid of the mysqld process
perl numa_maps.pl < /proc/3595/numa_maps
N0        :     16010293 ( 61.07 GB)
N1        :     10465257 ( 39.92 GB)
N2        :     13036896 ( 49.73 GB)
N3        :     14508505 ( 55.35 GB)
active    :          438 (  0.00 GB)
anon      :     54018275 (206.06 GB)
dirty     :     54018275 (206.06 GB)
kernelpagesize_kB:         4680 (  0.02 GB)
mapmax    :          787 (  0.00 GB)
mapped    :         2731 (  0.01 GB)
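
Enabling the interleave behaviour mentioned above is a one-line change in the MySQL configuration. A minimal sketch; innodb_numa_interleave is available in recent MySQL and Percona Server releases and requires a restart of mysqld:

# /etc/my.cnf
[mysqld]
# Interleave buffer pool allocation across NUMA nodes at startup
innodb_numa_interleave = ON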

Conclusion

In this blog post we examined a few important OS related settings and explained how they can be tuned for better database performance.



Mar 08, 2013

MySQL performance: Impact of memory allocators (Part 2)

Last time I wrote about memory allocators and how they can affect MySQL performance in general. This time I would like to explore the topic from a slightly different angle: what impact does the number of processor cores have on different memory allocators, and what difference will we see in MySQL performance in this scenario?

Let me share the conclusion first: if you have a server with more than 8 cores, you should use something other than the default glibc memory allocator. We recommend jemalloc or tcmalloc.

In my test I will use a Dell R720 box, CentOS 6.3, the upcoming Percona Server 5.5.30, and 3 allocators – stock glibc 2.13, jemalloc-3.1.0, and the latest tcmalloc from the svn repo. Regarding my selection of the jemalloc version, see my notes at the end of this post.

The test box has 2x Intel E5 2.2GHz CPUs with 8 real cores per socket – 16 real cores in total, and with hyper-threading enabled that gives us 32 vcpu. In my tests I didn’t see any notable difference between allocators up to 4 vcpu, so in the charts below I will highlight results from 4 to 32 vcpu.

As the test workload I will use the same two sysbench tests – OLTP_RO and POINT_SELECT – that I used before. Sysbench dataset: 16 tables, 5M rows each, uniform distribution.
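
For readers who want to reproduce a similar workload, here is a sketch using the current sysbench 1.0 syntax (the original benchmarks used the older --test=... invocation; user, password, thread count and runtime below are placeholders):

# Prepare 16 tables of 5M rows each
sysbench oltp_read_only --mysql-user=sbtest --mysql-password=secret \
  --mysql-db=sbtest --tables=16 --table-size=5000000 prepare

# Read-only OLTP run; the point-select workload uses oltp_point_select instead
sysbench oltp_read_only --mysql-user=sbtest --mysql-password=secret \
  --mysql-db=sbtest --tables=16 --table-size=5000000 \
  --threads=64 --time=300 --report-interval=10 run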

The OLTP_RO test consists of 5 select query types – select_ranges, select_order_ranges, select_distinct_ranges, select_sum_ranges, point_select. Processing these queries involves a notable amount of malloc()/free() operations, so allocator efficiency is a key factor in achieving high throughput in this test.


[Chart: OLTP_RO throughput (tps) for glibc malloc, jemalloc and tcmalloc at 4–32 vcpu]

Observations:

  • 4 vcpu – results are almost identical for all allocators (~2500 tps)
  • 8 vcpu – results doubled (~5000 tps) for jemalloc and tcmalloc, but with glibc malloc we have a drop at 64/128 threads to ~3500 tps
  • 16 vcpu – increase in throughput and quite stable results for jemalloc and tcmalloc up to 4096 threads (~6300 tps), and again a drop after 16 threads for glibc, to ~4000 tps
  • 32 vcpu – throughput for jemalloc and tcmalloc jumped to ~12500 tps; results stay at this level up to 1024 threads and then tps slightly decreases, but still looks OK. For glibc, tps drops below the results we observed for 8/16 vcpu (~3100 tps).

So the difference in the OLTP_RO test between glibc and jemalloc/tcmalloc in the 32 vcpu case is ~4x.

POINT_SELECT – a very simple query: SELECT c FROM sbtest WHERE id=N. A test workload with this query allows us to generate significant load and check server behavior under very high pressure.

[Chart: POINT_SELECT throughput (qps) for glibc malloc, jemalloc and tcmalloc at 4–32 vcpu]

Observations:

  • 4 vcpu – again no difference between allocators (~50,000 qps)
  • 8 vcpu – with all allocators we got ~100,000 qps. Results for jemalloc/tcmalloc are stable up to 4096 threads; for glibc malloc there is a decrease in qps at 2048/4096 threads to ~80,000 qps.
  • 16 vcpu – with all allocators we got ~140,000 qps: for jemalloc/tcmalloc up to 4096 threads, for glibc up to 512 threads, then a decrease in throughput to ~100,000 qps.
  • 32 vcpu – with all allocators we got up to ~240,000 qps. Then for every allocator we see a drop in throughput, but at a different point and to a different level:
    – for glibc malloc the drop happened after 256 threads; qps falls below the level for 8/16 vcpu (~80,000 qps);
    – for tcmalloc the drop happened after 1024 threads; at 2048 threads qps is very close to the results for 16 vcpu, and at 4096 threads qps is ~17,000;
    – for jemalloc the drop happened after 1024 threads as well; at 2048 threads qps is very close to the results for 16 vcpu, and at 4096 threads qps is slightly better than the results for 4 vcpu (~60,000 qps).

As you can see, in the case of very high concurrency and a notable amount of small/medium allocations, we get quite poor results for jemalloc/tcmalloc – even worse than for glibc. This is the very specific case when the overhead from the advanced techniques used in these allocators – techniques that should speed up allocation, purge dirty pages, and minimize the impact of memory fragmentation – is so significant that it becomes a bottleneck for query processing. I believe that both allocators can be tuned to handle such cases better, for instance by allocating more arenas, but that may notably increase the memory footprint.

Conclusion:
– if your box has 8 cores or fewer, there is almost no difference between glibc malloc and the alternative allocators
– if your box has more than 8 cores, you should try/evaluate alternative allocators; it can notably boost your MySQL server at no cost. An alternative allocator must also be used if you run benchmarks in this configuration, otherwise performance will be limited by glibc malloc and not by MySQL.
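
If you switch allocators (for example via LD_PRELOAD or mysqld_safe’s malloc-lib option), it is worth confirming which library the running mysqld actually loaded. A quick sketch:

# Check whether jemalloc or tcmalloc is mapped into the running mysqld process
grep -E 'jemalloc|tcmalloc' /proc/$(pidof mysqld)/maps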

A note regarding the jemalloc version I used in my tests: I noticed a notable impact on MySQL performance starting with version 3.2.0 (see the raw results below), so I used jemalloc-3.1.0 in my tests. I suppose that some changes in 3.2.0, for instance the changes to page run allocation and dirty page purging, may correlate with the decreased performance in MySQL workloads.

# Test: POINT_SELECT:throughput, QPS
#
# Set 1 - 5.5.30pre-jemalloc-3.0.0
# Set 2 - 5.5.30pre-jemalloc-3.1.0
# Set 3 - 5.5.30pre-jemalloc-3.2.0
# Set 4 - 5.5.30pre-jemalloc-3.3.0
#
# Threads        Set 1     Set 2     Set 3     Set 4
       1024  236575.74 236862.59 211203.42 215098.20
       2048  154829.26 154348.16 135607.69 137162.29

