Jul 13, 2018
--

On MySQL and Intel Optane performance

Recently, Dimitri published the results of measuring MySQL 8.0 on an Intel Optane storage device. In this blog post, I wanted to look at this in more detail and explore the performance of MySQL 8, MySQL 5.7 and Percona Server for MySQL using a similar setup. The Intel Optane is a very capable device, so I was puzzled that Dimitri chose MySQL options that are either unsafe or not recommended for production workloads.

Since we have an Intel Optane in our labs, I wanted to run a similar benchmark, but using settings that we would recommend our customers to use, namely:

  • use innodb_checksum
  • use innodb_doublewrite
  • use binary logs with sync_binlog=1
  • enable (by default) Performance Schema

I still used charset=latin1 (even though the default is utf8mb4 in MySQL 8), and I set the total size of the InnoDB log files to 30GB (as in Dimitri’s benchmark). This allocates big InnoDB log files to ensure there is no pressure from adaptive flushing. Though I have concerns about how adaptive flushing works in MySQL 8, that is a topic for future research.
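In my.cnf terms, a minimal sketch of these settings might look like this (the checksum algorithm, the binary log name and the redo log split into two 15GB files are my assumptions; the rest follows the list above):

[mysqld]
# page checksums and the doublewrite buffer stay enabled for data safety
innodb_checksum_algorithm = crc32
innodb_doublewrite = 1
# durable binary logging
log_bin = binlog
sync_binlog = 1
# Performance Schema remains at its default (enabled)
performance_schema = ON
# as in Dimitri's benchmark
character_set_server = latin1
# 30GB total of InnoDB redo log: two files of 15GB each
innodb_log_file_size = 15G
innodb_log_files_in_group = 2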

So let’s see how MySQL 8.0 performed with these settings, and compare it with MySQL 5.7 and Percona Server for MySQL 5.7.

I used an Intel Optane SSD 905P 960GB device on the server with 2 socket Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz CPUs.

To highlight the performance difference I wanted to show, I used a single case: sysbench with 8 tables of 50M rows each (about 120GB of data) and a 32GB buffer pool. I ran sysbench oltp_read_write with 128 threads.
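For reference, the workload can be reproduced with something like the following (connection options and run time here are my assumptions, not the exact ones used):

sysbench oltp_read_write --tables=8 --table-size=50000000 --threads=128 \
  --time=3600 --report-interval=10 --mysql-user=sbtest --mysql-password=sbtest run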

First, let’s review the results for MySQL 8 vs MySQL 5.7

After reaching a steady state, we can see that MySQL 8 does not show ANY performance improvement over MySQL 5.7.

Let’s compare this with Percona Server for MySQL 5.7

MySQL 8 versus Percona Server with heavy IO application on Intel Optane

Percona Server for MySQL 5.7 shows about 60% performance improvement over both MySQL 5.7 and MySQL 8.

How did we achieve this? All our improvements are described here: https://www.percona.com/doc/percona-server/LATEST/performance/xtradb_performance_improvements_for_io-bound_highly-concurrent_workloads.html. In short:

  1. Parallel doublewrite. In both MySQL 5.7 and MySQL 8, writes are serialized by the doublewrite buffer.
  2. Multi-threaded LRU flusher. We reported the issue and proposed a solution in https://bugs.mysql.com/bug.php?id=70500. However, Oracle has not incorporated the solution upstream.
  3. Single page eviction. This is another problematic area in MySQL’s flushing algorithm. The bug https://bugs.mysql.com/bug.php?id=81376 was reported over two years ago, but unfortunately it is still overlooked.

Summarizing performance findings:

  • For Percona Server for MySQL during this workload, I observed 1.4 GB/sec reads and 815 MB/sec writes
  • For MySQL 5.7 and MySQL 8, the numbers are 824 MB/sec reads and 530 MB/sec writes

My opinion is that Oracle focused on addressing the wrong performance problems in MySQL 8 and did not address the real issues. In this benchmark, using real production settings, MySQL 8 does not show any significant performance benefits over MySQL 5.7 for workloads characterized by heavy IO writes.

With this, I should admit that Intel Optane is a very performant storage device. By comparison, on an Intel 3600 SSD under the same workload, for Percona Server I am able to achieve only 2000 tps, which is 2.5x slower than with Intel Optane.

Drawing some conclusions

So there are a few outcomes I can highlight:

  • Intel Optane is a very capable drive; it is easily the fastest of those we’ve tested so far
  • MySQL 8 is not able to utilize all the power of Intel Optane, unless you use unsafe settings (which to me is the equivalent of driving 200 MPH on a highway without working brakes)
  • Oracle has focused on addressing the wrong IO bottlenecks and has overlooked the real ones
  • To get all the benefits of Intel Optane performance, use a proper server—Percona Server for MySQL—which is able to utilize more IOPS from the device.

The post On MySQL and Intel Optane performance appeared first on Percona Database Performance Blog.

Jul 12, 2018
--

Why MySQL Stored Procedures, Functions and Triggers Are Bad For Performance

MySQL stored procedures, functions and triggers are tempting constructs for application developers. However, as I discovered, there can be an impact on database performance when using MySQL stored routines. Not being entirely sure of what I was seeing during a customer visit, I set out to create some simple tests to measure the impact of triggers on database performance. The outcome might surprise you.

Why stored routines are not optimal performance-wise: the short version

Recently, I worked with a customer to profile the performance of triggers and stored routines. Here is what I learned about stored routines: “dead” code (code in a branch that will never run) can still significantly slow down the response time of a function, procedure or trigger. We need to be careful to clean up what we do not need.

Profiling MySQL stored functions

Let’s compare these four simple stored functions (in MySQL 5.7):

Function 1:

CREATE DEFINER=`root`@`localhost` FUNCTION `func1`() RETURNS int(11)
BEGIN
	declare r int default 0;
RETURN r;
END

This function simply declares a variable and returns it. It is a dummy function.

Function 2:

CREATE DEFINER=`root`@`localhost` FUNCTION `func2`() RETURNS int(11)
BEGIN
    declare r int default 0;
    IF 1=2 THEN
		select levenshtein_limit_n('test finc', 'test func', 1000) into r;
    END IF;
RETURN r;
END

This function calls another function, levenshtein_limit_n (which calculates the Levenshtein distance). But wait: this code will never run, as the condition IF 1=2 will never be true. So this is effectively the same as function 1.

Function 3:

CREATE DEFINER=`root`@`localhost` FUNCTION `func3`() RETURNS int(11)
BEGIN
    declare r int default 0;
    IF 1=2 THEN
		select levenshtein_limit_n('test finc', 'test func', 1) into r;
    END IF;
    IF 2=3 THEN
		select levenshtein_limit_n('test finc', 'test func', 10) into r;
    END IF;
    IF 3=4 THEN
		select levenshtein_limit_n('test finc', 'test func', 100) into r;
    END IF;
    IF 4=5 THEN
		select levenshtein_limit_n('test finc', 'test func', 1000) into r;
    END IF;
RETURN r;
END

Here there are four conditions, and none of them will ever be true: there are four calls of “dead” code. The result of calling function 3 is the same as for function 2 and function 1.

Function 4:

CREATE DEFINER=`root`@`localhost` FUNCTION `func3_nope`() RETURNS int(11)
BEGIN
    declare r int default 0;
    IF 1=2 THEN
		select does_not_exit('test finc', 'test func', 1) into r;
    END IF;
    IF 2=3 THEN
		select does_not_exit('test finc', 'test func', 10) into r;
    END IF;
    IF 3=4 THEN
		select does_not_exit('test finc', 'test func', 100) into r;
    END IF;
    IF 4=5 THEN
		select does_not_exit('test finc', 'test func', 1000) into r;
    END IF;
RETURN r;
END

This is the same as function 3, but the function we are calling does not exist. That does not matter, as the select does_not_exit will never run.

So all the functions will always return 0. We would expect the performance of these functions to be the same or very similar. Surprisingly, that is not the case! To measure the performance I used the BENCHMARK() function to run the same function 1M times. Here are the results:

+-----------------------------+
| benchmark(1000000, func1()) |
+-----------------------------+
|                           0 |
+-----------------------------+
1 row in set (1.75 sec)
+-----------------------------+
| benchmark(1000000, func2()) |
+-----------------------------+
|                           0 |
+-----------------------------+
1 row in set (2.45 sec)
+-----------------------------+
| benchmark(1000000, func3()) |
+-----------------------------+
|                           0 |
+-----------------------------+
1 row in set (3.85 sec)
+----------------------------------+
| benchmark(1000000, func3_nope()) |
+----------------------------------+
|                                0 |
+----------------------------------+
1 row in set (3.85 sec)

As we can see, func3 (with four dead code calls that will never be executed, but otherwise identical to func1) runs more than 2x slower than func1(); func3_nope() is identical in response time to func3().

Visualizing all system calls from functions

To figure out what is happening inside the function calls, I used performance_schema and the sys schema to create a trace with the ps_trace_thread() procedure:

  1. Get the thread_id for the MySQL connection:
    mysql> select THREAD_ID from performance_schema.threads where processlist_id = connection_id();
    +-----------+
    | THREAD_ID |
    +-----------+
    |        49 |
    +-----------+
    1 row in set (0.00 sec)
  2. Run ps_trace_thread in another connection passing the thread_id=49:
    mysql> CALL sys.ps_trace_thread(49, concat('/var/lib/mysql-files/stack-func1-run1.dot'), 10, 0, TRUE, TRUE, TRUE);
    +--------------------+
    | summary            |
    +--------------------+
    | Disabled 0 threads |
    +--------------------+
    1 row in set (0.00 sec)
    +---------------------------------------------+
    | Info                                        |
    +---------------------------------------------+
    | Data collection starting for THREAD_ID = 49 |
    +---------------------------------------------+
    1 row in set (0.00 sec)
  3. At that point I switched to the original connection (thread_id=49) and ran:
    mysql> select func1();
    +---------+
    | func1() |
    +---------+
    |       0 |
    +---------+
    1 row in set (0.00 sec)
  4. The sys.ps_trace_thread procedure collected the data for 10 seconds (during which I ran select func1()), then finished its collection and created the dot file:

    +-----------------------------------------------------------------------+
    | Info                                                                  |
    +-----------------------------------------------------------------------+
    | Stack trace written to /var/lib/mysql-files/stack-func3nope-new12.dot |
    +-----------------------------------------------------------------------+
    1 row in set (9.21 sec)
    +-------------------------------------------------------------------------------+
    | Convert to PDF                                                                |
    +-------------------------------------------------------------------------------+
    | dot -Tpdf -o /tmp/stack_49.pdf /var/lib/mysql-files/stack-func3nope-new12.dot |
    +-------------------------------------------------------------------------------+
    1 row in set (9.21 sec)
    +-------------------------------------------------------------------------------+
    | Convert to PNG                                                                |
    +-------------------------------------------------------------------------------+
    | dot -Tpng -o /tmp/stack_49.png /var/lib/mysql-files/stack-func3nope-new12.dot |
    +-------------------------------------------------------------------------------+
    1 row in set (9.21 sec)
    Query OK, 0 rows affected (9.45 sec)

I repeated these steps for all the functions above and then created charts of the commands.

Here are the results:

Func1()

Execution map for func1()

Func2()

Execution map for func2()

Func3()

Execution map for func3()

As we can see, there is an sp/jump_if_not call for every “if” check, followed by an opening tables statement (which is quite interesting). So parsing the “IF” condition makes a difference.

For MySQL 8.0 we can also look at the MySQL source code documentation for stored routines, which describes how this is implemented. It reads:

Flow Analysis Optimizations

After code is generated, the low level sp_instr instructions are optimized. The optimization focuses on two areas:

  • Dead code removal,
  • Jump shortcut resolution.

These two optimizations are performed together, as they both are a problem involving flow analysis in the graph that represents the generated code.

The code that implements these optimizations is sp_head::optimize().

However, this does not explain why it executes “opening tables”. I have filed a bug.

When slow functions actually make a difference

Well, if we do not plan to run one million of those stored functions, we will never even notice the difference. However, where it will make a difference is inside a trigger. Let’s say that we have a trigger on a table: every time we update that table, it executes the trigger to update another field. Here is an example: let’s say we have a table called “form”, and we simply need to update its creation date:

mysql> update form set form_created_date = NOW() where form_id > 5000;
Query OK, 65536 rows affected (0.31 sec)
Rows matched: 65536  Changed: 65536  Warnings: 0

That is good and fast. Now we create a trigger which will call our dummy func1():

CREATE DEFINER=`root`@`localhost` TRIGGER `test`.`form_AFTER_UPDATE`
AFTER UPDATE ON `form`
FOR EACH ROW
BEGIN
	declare r int default 0;
	select func1() into r;
END

Now repeat the update. Remember: it does not change the result of the update as we do not really do anything inside the trigger.

mysql> update form set form_created_date = NOW() where form_id > 5000;
Query OK, 65536 rows affected (0.90 sec)
Rows matched: 65536  Changed: 65536  Warnings: 0

Adding the dummy trigger nearly tripled the update time (0.31 sec to 0.90 sec). Even the next trigger, which does not call any function at all, introduces a slowdown:

CREATE DEFINER=`root`@`localhost` TRIGGER `test`.`form_AFTER_UPDATE`
AFTER UPDATE ON `form`
FOR EACH ROW
BEGIN
	declare r int default 0;
END
mysql> update form set form_created_date = NOW() where form_id > 5000;
Query OK, 65536 rows affected (0.52 sec)
Rows matched: 65536  Changed: 65536  Warnings: 0

Now, let’s use func3 (which contains “dead” code and is equivalent to func1):

CREATE DEFINER=`root`@`localhost` TRIGGER `test`.`form_AFTER_UPDATE`
AFTER UPDATE ON `form`
FOR EACH ROW
BEGIN
	declare r int default 0;
	select func3() into r;
END
mysql> update form set form_created_date = NOW() where form_id > 5000;
Query OK, 65536 rows affected (1.06 sec)
Rows matched: 65536  Changed: 65536  Warnings: 0

However, running the code from func3 inline inside the trigger (instead of calling the function) speeds up the update:

CREATE DEFINER=`root`@`localhost` TRIGGER `test`.`form_AFTER_UPDATE`
AFTER UPDATE ON `form`
FOR EACH ROW
BEGIN
    declare r int default 0;
    IF 1=2 THEN
		select levenshtein_limit_n('test finc', 'test func', 1) into r;
    END IF;
    IF 2=3 THEN
		select levenshtein_limit_n('test finc', 'test func', 10) into r;
    END IF;
    IF 3=4 THEN
		select levenshtein_limit_n('test finc', 'test func', 100) into r;
    END IF;
    IF 4=5 THEN
		select levenshtein_limit_n('test finc', 'test func', 1000) into r;
    END IF;
END
mysql> update form set form_created_date = NOW() where form_id > 5000;
Query OK, 65536 rows affected (0.66 sec)
Rows matched: 65536  Changed: 65536  Warnings: 0

Memory allocation

Even if the code will never run, MySQL still needs to parse the stored routine (or trigger) code for every execution, which can potentially lead to a memory leak, as described in this bug.

Conclusion

Stored routines and trigger events are parsed when they are executed. Even “dead” code that will never run can significantly affect the performance of bulk operations (e.g. when it runs inside a trigger). That also means that disabling a trigger by setting a “flag” (e.g. if @trigger_disable = 0 then ...) can still affect the performance of bulk operations.
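For illustration, such a “flag-disabled” trigger might look like the sketch below (the @trigger_disable variable is hypothetical); even when the flag short-circuits the body, the trigger is still parsed for every affected row:

CREATE DEFINER=`root`@`localhost` TRIGGER `test`.`form_AFTER_UPDATE`
AFTER UPDATE ON `form`
FOR EACH ROW
BEGIN
	declare r int default 0;
	-- with @trigger_disable = 1 this branch never executes,
	-- but the body is still parsed for every row of a bulk update
	IF @trigger_disable = 0 THEN
		select func1() into r;
	END IF;
END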

The post Why MySQL Stored Procedures, Functions and Triggers Are Bad For Performance appeared first on Percona Database Performance Blog.

Jul 09, 2018
--

Percona Live Europe 2018 Call for Papers is Now Open

Announcing the opening of the call for papers for the Percona Live Europe Open Source Database Conference 2018 in Frankfurt, Germany. The call for papers will be open from now until August 10, 2018.

Our theme this year is
Connect. Accelerate. Innovate.

As a speaker at Percona Live Europe, you’ll have the opportunity to CONNECT with your peers—open source database experts and enthusiasts who share your commitment to improving knowledge and exchanging ideas. ACCELERATE your projects and career by presenting at the premier open source database event, a great way to build your personal and company brands. And influence the evolution of the open source software movement by demonstrating how you INNOVATE!

Community initiatives remain core to the open source ethos, and we are proud of the contribution we make with Percona Live Europe in showcasing thought leading practices in the open source database world.

With a nod to innovation, for the first time, this year we are introducing a business track to benefit those business leaders who are exploring the use of open source and are interested in learning more about its costs and benefits.

Speaking Opportunities

The Percona Live Europe Open Source Database Conference 2018 Call for Papers is open until August 10, 2018. We invite you to submit your speaking proposal for breakout, tutorial or lightning talk sessions. Classes and talks are invited for Foundation (either entry level or of general interest to all), Core (intermediate) and Masterclass (advanced) levels.

If selected, you will receive a complimentary full conference pass.

  • Breakout Session. Broadly cover a technology area using specific examples. Sessions should be either 25 minutes or 50 minutes in length (including Q&A).
  • Tutorial Session. Present a technical session that aims for a level between a training class and a conference breakout session. We encourage attendees to bring and use laptops for working on detailed and hands-on presentations. Tutorials will be three or six hours in length (including Q&A).
  • Lightning Talk. Give a five-minute presentation focusing on one key point that interests the open source community: technical, lighthearted or entertaining talks on new ideas, a successful project, a cautionary story, a quick tip or demonstration.

Topics and Tracks

We want proposals that cover the many aspects of application development using all open source databases, as well as new and interesting ways to monitor and manage database environments. Did you just embrace open source databases this year? What are the technical and business values of moving to or using open source databases? How did you convince your company to make the move? Was there tangible ROI?

Best practices and current trends, including design, application development, performance optimization, HA and clustering, cloud, containers and new technologies, as well as new and interesting ways to monitor and manage database environments—what’s holding your focus? Share your case studies, experiences and technical knowledge with an engaged audience of open source peers.

In the submission entry you will be asked to indicate which of these tracks your proposal best fits: tutorial; business needs; case studies/use cases; operations; or developer.

A few ideas

The conference committee is looking for proposals that cover the many aspects of using, deploying and managing open source databases, including:

  • Open source – Describe the technical and business values of moving to or using open source databases. How did you convince your company to make the move? Was there tangible ROI?
  • Security – All of us have experienced security challenges. Whether they are initiated by legislature (GDPR), bugs (Meltdown/Spectre), experience (external attacks) or due diligence (planning for the worst), when do you have ‘enough’ security? Are you finding that security requirements are preventing your ability to be agile?
  • Serverless, Cloud or On-Premise – The technology landscape is no longer a simple one, and mixing infrastructures has almost become the norm. Are you designing data architectures for the new landscape, and eager to share your experience? Have microservices become an important part of your plans?
  • MySQL – Do you have an opinion on what is new and exciting in MySQL? With the release of MySQL 8.0, are you using the latest features? How and why? Are they helping you solve any business issues, or making deployment of applications and websites easier, faster or more efficient? Did the new release get you to change to MySQL? What do you see as the biggest impact of the MySQL 8.0 release? Do you use MySQL in conjunction with other databases in your environment?
  • MongoDB – How has the 3.6 release improved your experience in application development or time-to-market? How are the new features making your database environment better? What is it about MongoDB 4.0 that excites you? What are your experiences with Atlas? Have you moved to it, and has it lived up to its promises? Do you use MongoDB in conjunction with other databases in your environment?
  • PostgreSQL – Why do you use PostgreSQL as opposed to other SQL options? Have you done a comparison or benchmark of PostgreSQL vs. other types of databases related to your tasks? Why and what were the results? How does PostgreSQL help you with application performance or deployment? How do you use PostgreSQL in conjunction with other databases in your environment?
  • SQL, NewSQL, NoSQL – It’s become a perennial question without an easy answer. How do databases compare, how do you choose the right technology for the job, how do you trade off between features and their benefits in comparing databases? If you have ever tried a hybrid database approach in a single application, how did that work out? How nicely does MongoDB play with MySQL in the real world? Do you have anything to say about using SQL with NoSQL databases?
  • High Availability – What choices are you making to ensure high availability? How do you find the balance between redundancy and cost? Are you using hot backups, and if so, what happened when you needed to rollback on them?
  • Scalability – When did you recognize you needed to address data scale? Did your data growth take you by surprise or were you always in control? Did it take a degradation in performance to get your management to sit up and take notice? How do you plan for scale if you can’t predict demand?
  • What the Future Holds – What do you see as the “next big thing”? What new and exciting features are going to be released? What’s in your next release? What new technologies will affect the database landscape? AI? Machine learning? Blockchain databases? Let us know about innovations you see on the way.

How to respond to the call for papers

For information on how to submit your proposal, visit our call for papers page. The conference web pages will be updated throughout the next few weeks, and bios, synopses and slides will be published on those pages after the event.

Sponsorship

If you would like to obtain a sponsor pack for Percona Live Europe Open Source Database Conference 2018, you will find more information including a prospectus on our sponsorship page. You are welcome to contact me, Bronwyn Campbell, directly.

The post Percona Live Europe 2018 Call for Papers is Now Open appeared first on Percona Database Performance Blog.

Jul 03, 2018
--

Linux OS Tuning for MySQL Database Performance

In this post we will review the most important Linux settings to adjust for performance tuning and optimization of a MySQL database server. We’ll note how some of the Linux parameter settings used in OS tuning may vary according to different system types: physical, virtual or cloud. Other posts have addressed MySQL parameters, like Alexander’s blog MySQL 5.7 Performance Tuning Immediately After Installation. That post remains highly relevant for the latest versions of MySQL, 5.7 and 8.0. Here we will focus more on the Linux operating system parameters that can affect database performance.

Server and Operating System

Here are some Linux parameters that you should check and consider modifying if you need to improve database performance.

Kernel – vm.swappiness

The value represents the tendency of the kernel to swap out memory pages. On a database server with ample amounts of RAM, we should keep this value as low as possible: the extra I/O can slow down or even render the service unresponsive. A value of 0 disables swapping completely, while 1 causes the kernel to perform the minimum amount of swapping. In most cases the latter setting should be OK:

# Set the swappiness value as root
echo 1 > /proc/sys/vm/swappiness
# Alternatively, using sysctl
sysctl -w vm.swappiness=1
# Verify the change
cat /proc/sys/vm/swappiness
1
# Alternatively, using sysctl
sysctl vm.swappiness
vm.swappiness = 1

The change should be also persisted in /etc/sysctl.conf:

vm.swappiness = 1

Filesystems – XFS/ext4/ZFS
XFS

XFS is a high-performance, journaling file system designed for high scalability. It provides near-native I/O performance even when the file system spans multiple storage devices. XFS has features that make it suitable for very large file systems, supporting files up to 8EiB in size, and offers fast recovery, fast transactions, delayed allocation for reduced fragmentation, and near-raw I/O performance with direct I/O.

The default options for mkfs.xfs are good for optimal speed, so the simple command:

# Use default mkfs options
mkfs.xfs /dev/target_volume

will provide the best performance while ensuring data safety. Regarding mount options, the defaults should fit most cases. On some filesystems you can see a performance increase by adding the noatime mount option to /etc/fstab. For XFS filesystems the default atime behaviour is relatime, which has almost no overhead compared to noatime and still maintains sane atime values. If you create an XFS file system on a LUN that has a battery-backed, non-volatile cache, you can further increase the performance of the filesystem by disabling the write barrier with the mount option nobarrier. This helps you to avoid flushing data more often than necessary. If a BBU (battery backup unit) is not present, however, or you are unsure about it, leave barriers on, otherwise you may jeopardize data consistency. With these options, an /etc/fstab file should look like the one below:

/dev/sda2              /datastore              xfs     defaults,nobarrier
/dev/sdb2              /binlog                 xfs     defaults,nobarrier

ext4

ext4 has been developed as the successor to ext3, with added performance improvements. It is a solid option that will fit most workloads. We should note here that it supports files up to 16TB in size, a smaller limit than XFS. This is something you should consider if extreme tablespace size/growth is a requirement. Regarding mount options, the same considerations apply. We recommend the defaults for a robust filesystem without risks to data consistency. However, if an enterprise storage controller with a BBU cache is present, the following mount options will provide the best performance:

/dev/sda2              /datastore              ext4     noatime,data=writeback,barrier=0,nobh,errors=remount-ro
/dev/sdb2              /binlog                 ext4     noatime,data=writeback,barrier=0,nobh,errors=remount-ro

Note: The data=writeback option results in only metadata being journaled, not actual file data. This carries the risk of corrupting recently modified files in the event of a sudden power loss, a risk which is minimized with the presence of a BBU-enabled controller. nobh only works with the data=writeback option enabled.

ZFS

ZFS is a filesystem and LVM combined in an enterprise storage solution, with extended protection against data corruption. There are certainly cases where the rich feature set of ZFS makes it an essential option to consider, most notably when advanced volume management is a requirement. ZFS tuning for MySQL can be a complex topic and falls outside the scope of this blog. For further reference, there is a dedicated blog post on the subject by Yves Trudeau.

Disk Subsystem – I/O scheduler 

Most modern Linux distributions come with the noop or deadline I/O scheduler by default, both providing better performance than the cfq and anticipatory ones. However, it is always good practice to check the scheduler for each device; if the value shown is different from noop or deadline, the policy can be changed without rebooting the server:

# View the I/O scheduler setting. The value in square brackets shows the running scheduler
cat /sys/block/sdb/queue/scheduler
noop deadline [cfq]
# Change the setting (a plain "sudo echo noop > ..." would fail, as the redirection runs without root)
echo noop | sudo tee /sys/block/sdb/queue/scheduler

To make the change persistent, you must modify the GRUB configuration file:

# Change the line:
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
# to:
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash elevator=noop"
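On Debian/Ubuntu, the new kernel command line takes effect after regenerating the GRUB configuration and rebooting (on RHEL/CentOS the equivalent command is grub2-mkconfig):

# Rebuild the GRUB configuration so the elevator setting is applied at next boot
sudo update-grub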

AWS Note: There are cases where the I/O scheduler has a value of none, most notably in AWS VM instance types where EBS volumes are exposed as NVMe block devices. This is because the setting has no use in modern PCIe/NVMe devices: they have a very large internal queue and bypass the I/O scheduler altogether. For such disks, none is the optimal setting.

Disk Subsystem – Volume optimization

Ideally different disk volumes should be used for the OS installation, binlog, data and the redo log, if this is possible. The separation of OS and data partitions, not just logically but physically, will improve database performance. The RAID level can also have an impact: RAID-5 should be avoided as the checksum needed to ensure integrity is costly. The best performance without making compromises to redundancy is achieved by the use of an advanced controller with a battery-backed cache unit and preferably RAID-10 volumes spanned across multiple disks.

AWS Note: For further information about EBS volumes and AWS storage optimisation, Amazon has documentation at the following links:

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/nvme-ebs-volumes.html

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/storage-optimized-instances.html

Database settings

System Architecture – NUMA settings

Non-uniform memory access (NUMA) is a memory design in which an SMP system’s processors can access their own local memory faster than non-local memory (memory local to another CPU). This may result in suboptimal database performance and potentially swapping. When the buffer pool memory allocation is larger than the amount of RAM available local to the node, and the default memory allocation policy is selected, swapping occurs. A NUMA-enabled server will report different node distances between CPU nodes. A uniform one will report a single distance:

# NUMA system
numactl --hardware
available: 4 nodes (0-3)
node 0 cpus: 0 1 2 3 4 5 6 7
node 0 size: 65525 MB
node 0 free: 296 MB
node 1 cpus: 8 9 10 11 12 13 14 15
node 1 size: 65536 MB
node 1 free: 9538 MB
node 2 cpus: 16 17 18 19 20 21 22 23
node 2 size: 65536 MB
node 2 free: 12701 MB
node 3 cpus: 24 25 26 27 28 29 30 31
node 3 size: 65535 MB
node 3 free: 7166 MB
node distances:
node   0   1   2   3
  0:  10  20  20  20
  1:  20  10  20  20
  2:  20  20  10  20
  3:  20  20  20  10
# Uniform system
numactl --hardware
available: 1 nodes (0)
node 0 cpus: 0 1 2 3 4 5 6 7
node 0 size: 64509 MB
node 0 free: 4870 MB
node distances:
node   0
  0:  10

In the case of a NUMA system, where numactl shows different distances across nodes, the MySQL variable innodb_numa_interleave should be enabled to ensure memory interleaving. Percona Server provides improved NUMA support by introducing the flush_caches variable. When enabled, it will help with allocation fairness across nodes. To determine whether or not allocation is equal across nodes, you can examine numa_maps for the mysqld process with this script:

# The perl script numa_maps.pl will report memory allocation per CPU node:
# 3595 is the pid of the mysqld process
perl numa_maps.pl < /proc/3595/numa_maps
N0        :     16010293 ( 61.07 GB)
N1        :     10465257 ( 39.92 GB)
N2        :     13036896 ( 49.73 GB)
N3        :     14508505 ( 55.35 GB)
active    :          438 (  0.00 GB)
anon      :     54018275 (206.06 GB)
dirty     :     54018275 (206.06 GB)
kernelpagesize_kB:         4680 (  0.02 GB)
mapmax    :          787 (  0.00 GB)
mapped    :         2731 (  0.01 GB)
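To enable interleaving, a my.cnf fragment like this can be used (a minimal sketch; innodb_numa_interleave is available in MySQL 5.7 and later when built with NUMA support):

[mysqld]
# interleave buffer pool allocation across NUMA nodes at startup
innodb_numa_interleave = ON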

Conclusion

In this blog post we examined a few important OS related settings and explained how they can be tuned for better database performance.

While you are here …

You might also find value in this recorded webinar Troubleshooting Best Practices: Monitoring the Production Database Without Killing Performance

The post Linux OS Tuning for MySQL Database Performance appeared first on Percona Database Performance Blog.

Jun 29, 2018
--

MySQL 8.0 Hot Rows with NOWAIT and SKIP LOCKED

In MySQL 8.0 there are two new features designed to support lock handling: NOWAIT and SKIP LOCKED. In this post, we’ll look at how MySQL 8.0 handles hot rows. Up until now, how have you handled locks that are part of an active transaction or are hot rows? It’s likely that you have the application attempt to access the data, and if there is a lock on the requested rows, you incur a timeout and have to retry the transaction. These two new features help you to implement sophisticated lock handling scenarios allowing you to handle timeouts better and improve the application’s performance.

To demonstrate I’ll use this product table.

mysql> select @@version;
+-----------+
| @@version |
+-----------+
| 8.0.11    |
+-----------+
1 row in set (0.00 sec)

CREATE TABLE `product` (
`p_id` int(11) NOT NULL AUTO_INCREMENT,
`p_name` varchar(255) DEFAULT NULL,
`p_cost` decimal(19,4) NOT NULL,
`p_availability` enum('YES','NO') DEFAULT 'NO',
PRIMARY KEY (`p_id`),
KEY `p_cost` (`p_cost`),
KEY `p_name` (`p_name`)
) ENGINE=InnoDB AUTO_INCREMENT=8 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci;

Let’s run through an example. The transaction below will lock rows 2 and 3 if they are not already locked. The rows will be released when our transaction does a COMMIT or a ROLLBACK. Autocommit is enabled by default for any transaction, and can be disabled either by using the START TRANSACTION clause or by setting autocommit to 0.
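For example, either of the following keeps row locks held until an explicit COMMIT or ROLLBACK:

-- option 1: open an explicit transaction
START TRANSACTION;
-- option 2: disable autocommit for the session
SET autocommit = 0;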

Session 1:

mysql> START TRANSACTION;SELECT * FROM mydb.product WHERE p_cost >=20 and p_cost <=30 FOR UPDATE;
Query OK, 0 rows affected (0.00 sec)
+------+--------+---------+----------------+
| p_id | p_name | p_cost  | p_availability |
+------+--------+---------+----------------+
|    2 | Item2  | 20.0000 | YES            |
|    3 | Item3  | 30.0000 | YES            |
+------+--------+---------+----------------+
2 rows in set (0.00 sec)

InnoDB performs row-level locking in such a way that when it searches or scans a table index, it sets shared or exclusive locks on the index records it encounters. Thus, the row-level locks are actually index-record locks.

We can get the details of a transaction, such as the transaction id and row lock count, using the innodb engine status command or by querying the performance_schema.data_locks table. The result from the innodb engine status command can, however, be confusing, as we can see below. Our query only locked rows 2 and 3, but the output reports 5 row locks (locked PRIMARY index records + locked secondary index records + the supremum pseudo-record). We can see that the row right next to the rows that we selected is also reported as locked. This is expected and documented behavior. Since the table is small, with only 5 rows, a full scan of the table is much faster than an index search. This causes all rows, or most rows, of the table to end up locked as a result of our query.

Innodb Engine Status :-

---TRANSACTION 205338, ACTIVE 22 sec
3 lock struct(s), heap size 1136, 5 row lock(s)
MySQL thread id 8, OS thread handle 140220824467200, query id 28 localhost root

performance_schema.data_locks (another new feature in 8.0.1):

mysql> SELECT ENGINE_TRANSACTION_ID,
 CONCAT(OBJECT_SCHEMA, '.',
 OBJECT_NAME)TBL,
 INDEX_NAME,count(*) LOCK_DATA
FROM performance_schema.data_locks
where LOCK_DATA!='supremum pseudo-record'
GROUP BY ENGINE_TRANSACTION_ID,INDEX_NAME,OBJECT_NAME,OBJECT_SCHEMA;
+-----------------------+--------------+------------+-----------+
| ENGINE_TRANSACTION_ID | TBL          | INDEX_NAME | LOCK_DATA |
+-----------------------+--------------+------------+-----------+
|                205338 | mydb.product | p_cost     |         3 |
|                205338 | mydb.product | PRIMARY    |         2 |
+-----------------------+--------------+------------+-----------+
2 rows in set (0.04 sec)

mysql> SELECT ENGINE_TRANSACTION_ID as ENG_TRX_ID,
 object_name,
 index_name,
 lock_type,
 lock_mode,
 lock_data
FROM performance_schema.data_locks WHERE object_name = 'product';
+------------+-------------+------------+-----------+-----------+-------------------------+
| ENG_TRX_ID | object_name | index_name | lock_type | lock_mode | lock_data               |
+------------+-------------+------------+-----------+-----------+-------------------------+
|     205338 | product     | NULL       | TABLE     | IX        | NULL                    |
|     205338 | product     | p_cost     | RECORD    | X         | 0x800000000000140000, 2 |
|     205338 | product     | p_cost     | RECORD    | X         | 0x8000000000001E0000, 3 |
|     205338 | product     | p_cost     | RECORD    | X         | 0x800000000000320000, 5 |
|     205338 | product     | PRIMARY    | RECORD    | X         | 2                       |
|     205338 | product     | PRIMARY    | RECORD    | X         | 3                       |
+------------+-------------+------------+-----------+-----------+-------------------------+
6 rows in set (0.00 sec)

Session 1:

mysql> COMMIT;
Query OK, 0 rows affected (0.00 sec)


SELECT FOR UPDATE with innodb_lock_wait_timeout:

The innodb_lock_wait_timeout setting is one mechanism used to handle lock conflicts. The variable has a default value of 50 seconds and causes any transaction that waits for a lock for longer than that to terminate and post a timeout message to the user. The parameter is configurable based on the requirements of the application.
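It can be adjusted per session or globally, for example:

-- for the current session only
SET SESSION innodb_lock_wait_timeout = 5;
-- for all new sessions
SET GLOBAL innodb_lock_wait_timeout = 5;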

Let’s look at how this feature works using an example with a select for update query.

mysql> select @@innodb_lock_wait_timeout;
+----------------------------+
| @@innodb_lock_wait_timeout |
+----------------------------+
|                         50 |
+----------------------------+
1 row in set (0.00 sec)

Session 1:

mysql> START TRANSACTION;SELECT * FROM mydb.product WHERE p_cost >=20 and p_cost <=30 FOR UPDATE;
Query OK, 0 rows affected (0.00 sec)
+------+--------+---------+----------------+
| p_id | p_name | p_cost  | p_availability |
+------+--------+---------+----------------+
|    2 | Item2  | 20.0000 | YES            |
|    3 | Item3  | 30.0000 | YES            |
+------+--------+---------+----------------+
2 rows in set (0.00 sec)

Session 2:

mysql> select now();SELECT * FROM mydb.product WHERE p_id=3 FOR UPDATE;select now();
+---------------------+
| now()               |
+---------------------+
| 2018-06-19 05:29:48 |
+---------------------+
1 row in set (0.00 sec)
ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction
+---------------------+
| now()               |
+---------------------+
| 2018-06-19 05:30:39 |
+---------------------+
1 row in set (0.00 sec)
mysql>

Autocommit is enabled (by default) and, as expected, the transaction waited for the lock wait timeout and exited.

Session 1:

mysql> COMMIT;
Query OK, 0 rows affected (0.00 sec)


NOWAIT:

The NOWAIT clause causes a query to terminate immediately if any candidate rows are already locked. Considering the previous example, if the application’s requirement is not to wait for locks to be released, or for a timeout, the NOWAIT clause is the perfect solution. (Setting innodb_lock_wait_timeout=1 in the session has a similar effect.)

Session 1:

mysql> START TRANSACTION;SELECT * FROM mydb.product WHERE p_cost >=20 and p_cost <=30 FOR UPDATE;
Query OK, 0 rows affected (0.00 sec)
+------+--------+---------+----------------+
| p_id | p_name | p_cost  | p_availability |
+------+--------+---------+----------------+
|    2 | Item2  | 20.0000 | YES            |
|    3 | Item3  | 30.0000 | YES            |
+------+--------+---------+----------------+
2 rows in set (0.00 sec)

Session 2:

mysql>  SELECT * FROM mydb.product WHERE p_id = 3 FOR UPDATE NOWAIT;
ERROR 3572 (HY000): Statement aborted because lock(s) could not be acquired immediately and NOWAIT is set.
mysql>

Session 1:

mysql> COMMIT;
Query OK, 0 rows affected (0.00 sec)


SKIP LOCKED:

The SKIP LOCKED clause asks MySQL to non-deterministically skip the locked rows and process the remaining rows based on the WHERE clause. Let’s look at how this works using some examples:

Session 1:

mysql> START TRANSACTION;SELECT * FROM mydb.product WHERE p_cost >=20 and p_cost <=30 FOR UPDATE;
Query OK, 0 rows affected (0.00 sec)
+------+--------+---------+----------------+
| p_id | p_name | p_cost  | p_availability |
+------+--------+---------+----------------+
|    2 | Item2  | 20.0000 | YES            |
|    3 | Item3  | 30.0000 | YES            |
+------+--------+---------+----------------+
2 rows in set (0.00 sec)

Session 2:

mysql> SELECT * FROM mydb.product WHERE p_cost = 30 FOR UPDATE SKIP LOCKED;
Empty set (0.00 sec)
mysql>

mysql> SELECT * from mydb.product where p_id IN (1,2,3,4,5) FOR UPDATE SKIP LOCKED;
+------+--------+---------+----------------+
| p_id | p_name | p_cost  | p_availability |
+------+--------+---------+----------------+
|    1 | Item1  | 10.0000 | YES            |
|    5 | Item5  | 50.0000 | YES            |
+------+--------+---------+----------------+
2 rows in set (0.00 sec)

Session 1:

mysql> COMMIT;
Query OK, 0 rows affected (0.00 sec)

The first transaction selected rows 2 and 3 FOR UPDATE (i.e., locked them). The second transaction skips these rows and returns the remaining rows when the SKIP LOCKED clause is used.
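A common application of SKIP LOCKED is a job-queue pattern, in which concurrent workers each claim a different unclaimed row instead of blocking on one another. A sketch, using a hypothetical jobs table:

START TRANSACTION;
-- each worker grabs one pending job; rows locked by other workers are skipped
SELECT job_id FROM jobs
WHERE state = 'pending'
ORDER BY job_id
LIMIT 1
FOR UPDATE SKIP LOCKED;
-- ... process the job and mark it done ...
COMMIT;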

Important Notes: As the SELECT … FOR UPDATE clause affects concurrency, it should only be used when absolutely necessary. Make sure to index the columns used in the WHERE clause, as SELECT … FOR UPDATE is likely to lock the whole table if proper indexes are not set up. When an index is used, only the candidate rows are locked.

The post MySQL 8.0 Hot Rows with NOWAIT and SKIP LOCKED appeared first on Percona Database Performance Blog.

Jun 15, 2018
--

This Week in Data with Colin Charles 42: Security Focus on Redis and Docker a Timely Reminder to Stay Alert

Join Percona Chief Evangelist Colin Charles as he covers happenings, gives pointers and provides musings on the open source database community.

Much of last week, there was a lot of talk around this article: New research shows 75% of ‘open’ Redis servers infected. It turns out that one should always read beyond the headlines, because they tend to be more sensationalist than you would expect. From the author of Redis, I highly recommend reading Clarifications on the Incapsula Redis security report, because in this case it is not just the headline: the content is also suspect. Antirez had to write this to help the press (we totally need to help keep reportage accurate).

Not to depart from the Redis world just yet, but Antirez also had some collaboration with the Apple Information Security Team with regards to the Redis Lua subsystem. The details are pretty interesting as documented in Redis Lua scripting: several security vulnerabilities fixed because you’ll note that the Alibaba team also found some other issues. Antirez also ensured that the Redis cloud providers (notably: Redis Labs, Amazon, Alibaba, Microsoft, Google, Heroku, Open Redis and Redis Green) got notified first (and in the comments, compose.io was missing, but now added to the list). I do not know if Linux distributions were also informed, but they will probably be rolling out updates soon.

In the “be careful where you get your software” department: some criminals have figured out they could host crypto-currency mining software that you would get pre-installed if you used their Docker containers. They’ve apparently made over $90,000. It is good to note that the backdoored images, downloaded 5 million times, have finally been removed from Docker Hub. However, they were up on the Docker Hub for ten months and managed to get over 5 million downloads across 17 images. Know what images you are pulling. Maybe this is again more reason for software providers to run their own registries?

James Turnbull is out with a new book: Monitoring with Prometheus. It just got released, I’ve grabbed it, but a review will come shortly. He’s managed all this while pulling off what seems to be yet another great O’Reilly Velocity San Jose Conference.

Releases

A quiet week on this front.

Link List

  • INPLACE upgrade from MySQL 5.7 to MySQL 8.0
  • PostgreSQL relevant: What’s is the difference between streaming replication vs hot standby vs warm standby ?
  • A new paper on Amazon Aurora is out: Amazon Aurora: On Avoiding Distributed Consensus for I/Os, Commits, and Membership Changes. It was presented at SIGMOD 2018, and an abstract: “One of the more novel differences between Aurora and other relational databases is how it pushes redo processing to a multi-tenant scale-out storage service, purpose-built for Aurora. Doing so reduces networking traffic, avoids checkpoints and crash recovery, enables failovers to replicas without loss of data, and enables fault-tolerant storage that heals without database involvement. Traditional implementations that leverage distributed storage would use distributed consensus algorithms for commits, reads, replication, and membership changes and amplify cost of underlying storage.” Aurora, as you know, avoids distributed consensus under most circumstances. Short 8-page read.
  • Dormando is blogging again, and this was of particular interest — Caching beyond RAM: the case for NVMe. This is done in the context of memcached, which I am certain many use.
  • It is particularly heartening to note that not only does MongoDB use Linkbench for some of their performance testing, they’re also contributing to making it better via a pull request.

Industry Updates

Trying something new here… To cover fundraising, and people on the move in the database industry.

  • Kenny Gorman — who has been on the program committee for several Percona Live conferences, and spoken at the event multiple times before — is the founder and CEO of Eventador, a stream-processing as a service company built on Apache Kafka and Apache Flink, has just raised $3.8 million in funding to fuel their growth. They are also naturally spending this on hiring. The full press release.
  • Jimmy Guerrero (formerly of MySQL and InfluxDB) is now VP Marketing & Community at YugaByte DB. YugaByte was covered in column 13 as having raised $8 million in November 2017.

Upcoming appearances

  • DataOps Barcelona – Barcelona, Spain – June 21-22, 2018 – code dataopsbcn50 gets you a discount
  • OSCON – Portland, Oregon, USA – July 16-19, 2018
  • Percona webinar on MariaDB Server 10.3 – June 26, 2018

Feedback

I look forward to feedback/tips via e-mail at colin.charles@percona.com or on Twitter @bytebot.

The post This Week in Data with Colin Charles 42: Security Focus on Redis and Docker a Timely Reminder to Stay Alert appeared first on Percona Database Performance Blog.

Jun 01, 2018
--

This Week in Data with Colin Charles 40: a Peek at Blockchain, Lots of MariaDB News, then Back on the Road

Join Percona Chief Evangelist Colin Charles as he covers happenings, gives pointers and provides musings on the open source database community.

Shortly after the last dispatch, I jetted off for a spot of vacation (which really meant I was checking out the hype behind Blockchain with a database developer lens at the Blockchain Week NYC), and then some customer visits in Seoul, which explains the short hiatus. Here’s to making this more regular as the summer approaches.

I am about to embark on a fairly long trip, covering a few upcoming appearances: Lisbon for the Percona Engineering meeting, SouthEastLinuxFest in Charlotte, the Open Source Data Centre Conference in Berlin and then the DataOps Barcelona event. I have some discount codes: 50% discount for OSDC with the code OSDC_FOR_FRIENDS, and 50% discount for DataOps Barcelona with the code dataopsbcn50. Expect this column to reflect my travels over the next few weeks.

There has been a lot of news on the MariaDB front: MariaDB 10.3.7 went stable/GA! You might have noticed more fanfare around the release name MariaDB TX 3.0, but the reality is you can still get the download from the usual MariaDB Foundation site. It is worth noting that the MariaDB Foundation’s 2017 financials have also been released. Some may have noticed, a couple of months back, a press release titled Report “State of the Open-Source DBMS Market, 2018” by Gartner Includes Pricing Comparison With MariaDB. This led to a Gartner report on the State of the Open-Source DBMS Market, 2018, although the report has since been pulled. Hopefully we see it surface again.

In the meantime, please do try out MariaDB 10.3.7; it would be great to hear your feedback. I also have an upcoming Percona webinar on MariaDB Server 10.3 on June 26, 2018; when the sign-up link appears, I will be sure to include it here.

Well written, and something worth discussing: Should Red Hat Buy or Build a Database?. The Twitter discussion is also worth looking at.

Releases

Link List

Upcoming appearances

Feedback

I look forward to receiving feedback/tips via e-mail at colin.charles@percona.com or on Twitter @bytebot.

The post This Week in Data with Colin Charles 40: a Peek at Blockchain, Lots of MariaDB News, then Back on the Road appeared first on Percona Database Performance Blog.

May 14, 2018
--

Installing MySQL 8.0 on Ubuntu 16.04 LTS in Five Minutes

Do you want to install MySQL 8.0 on Ubuntu 16.04 LTS? In this quick tutorial, I show you exactly how to do it in five minutes or less.

This tutorial assumes you don’t have MySQL or MariaDB installed. If you do, it’s necessary to uninstall them or follow a slightly more complicated upgrade process (not covered here).

Step 1: Install MySQL APT Repository

Ubuntu 16.04 LTS, also known as Xenial, comes with a choice of MySQL 5.7 and MariaDB 10.0.

If you want to use MySQL 8.0, you need to install the MySQL/Oracle Apt repository first:

wget https://dev.mysql.com/get/mysql-apt-config_0.8.10-1_all.deb
dpkg -i mysql-apt-config_0.8.10-1_all.deb

The MySQL APT repository installation package allows you to pick which MySQL version you want to install, as well as whether you want access to preview versions. Let’s leave them all as default.

Step 2: Update repository configuration and install MySQL Server

apt-get update
apt-get install mysql-server

Note: Do not forget to run “apt-get update”, otherwise you may get an old version of MySQL installed from the Ubuntu repository.

The installation process asks you to set a password for the root user.

I recommend you set a root password for increased security. If you do not set a password for the root account, “auth_socket” authentication is enabled. This ensures only the operating system’s “root” user can connect to MySQL Server without a password.

Next, the installation script asks you whether to use Strong Password Encryption or Legacy Authentication.

While using strong passwords is recommended for security purposes, not all applications and drivers support this new authentication method. Going with Legacy Authentication is therefore the safer choice.

All Done

You should have MySQL 8.0 Server running. You can test it by connecting to it with a command line client:

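For example (a quick sanity check; your exact version string will differ):

$ mysql -u root -p
mysql> SELECT @@version;
+-----------+
| @@version |
+-----------+
| 8.0.11    |
+-----------+
1 row in set (0.00 sec)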

As you can see, it takes just a few simple steps to install MySQL 8.0 on Ubuntu 16.04 LTS. Go ahead and give it a try!

The post Installing MySQL 8.0 on Ubuntu 16.04 LTS in Five Minutes appeared first on Percona Database Performance Blog.

May 11, 2018
--

This Week in Data with Colin Charles 39: a valuable time spent at rootconf.in

Join Percona Chief Evangelist Colin Charles as he covers happenings, gives pointers and provides musings on the open source database community.

rootconf.in 2018 just ended, and it was very enjoyable to be in Bangalore for the conference. The audience was large, the conversations were great, and overall I think this is a rather important conference if you’re into the “DevOps” movement (or are a sysadmin!). From the data store world, Oracle MySQL was a sponsor, as was MyDBOPS (blog), and Elastic. There were plenty more, including Digital Ocean/GoJek/Walmart Labs — many MySQL users.

I took a handful of pictures with people, and here are some of the MyDBOPS team and myself. They have over 20 employees, and serve the Indian market at rates that are more palatable than straight-up USD rates. Traveling through Asia, I find that many businesses seek local partners and offer local pricing; this becomes more complex in the SaaS space (where everyone generally pays the same rate) and also in the services space.

Colin at Rootconf with Oracle
Some of the Oracle MySQL team who were exhibiting were very happy they got a good amount of traffic to the booth based on stuff discussed at the talk and BOF.

From a talk standpoint, I did a keynote for an hour and also a BoF session for another hour (great discussion, lots of blog post ideas from there), and we had a Q&A session for about 15 minutes. There were plenty of good conversations in the hallway track.

A quick observation that I notice happens everywhere: many people don’t realize what features have existed in MySQL since 5.6/5.7, so they are truly surprised by what is in 8.0 as well. It is clear there is a huge market that would thrive on education: not just around feature checklists, but also around how to use features. Sometimes this feels like the MySQL of the mid-2000s; getting apps to use new features would be a great thing.

Releases

This seems to have been a quiet week on the releases front.

Are you a user of Amazon Aurora MySQL? There is now the Amazon Aurora Backtrack feature, which allows you to go back in time. It is described as working like this:

Aurora uses a distributed, log-structured storage system (read Design Considerations for High Throughput Cloud-Native Relational Databases to learn a lot more); each change to your database generates a new log record, identified by a Log Sequence Number (LSN). Enabling the backtrack feature provisions a FIFO buffer in the cluster for storage of LSNs. This allows for quick access and recovery times measured in seconds.

Link List

Upcoming appearances

Feedback

I look forward to feedback/tips via e-mail at colin.charles@percona.com or on Twitter @bytebot.

The post This Week in Data with Colin Charles 39: a valuable time spent at rootconf.in appeared first on Percona Database Performance Blog.

May 04, 2018
--

How Binary Logs Affect MySQL 8.0 Performance

As part of my benchmarks of binary logs, I decided to check how the performance of the recently released MySQL 8.0 is affected in similar scenarios, especially as binary logs are enabled by default. It is also interesting to check how MySQL 8.0 performs against the claimed performance improvements in the redo log subsystem.

I will use a similar setup as in my last blog with MySQL 8.0, using the utf8mb4 charset.

I have a few words about MySQL 8.0 tuning. Dimitri recommends in his blog posts using innodb_undo_log_truncate=off and innodb_doublewrite=0. However, in my opinion, using these settings is the same as participating in a car race without working brakes: you will drive very fast, but it will end badly. So, contrary to Dimitri’s recommendations, I used innodb_undo_log_truncate=on and innodb_doublewrite=1.
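In my.cnf terms, the safety-related part of the configuration looks like this (a sketch; the rest of the setup follows my previous post):

[mysqld]
# keep undo log truncation and the doublewrite buffer enabled for safety
innodb_undo_log_truncate = ON
innodb_doublewrite = 1
# when binary logs are enabled, sync on every commit
sync_binlog = 1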

Servers Comparison

For the first run, let’s check the results without binary logs vs. with binary logs enabled, but with sync_binlog=1 for Percona Server for MySQL 5.7 vs. MySQL 8.0.


MySQL 8.0 Performance

In tabular form:

Binary log Buffer pool, GB MYSQL8 PS57 Ratio PS57/MySQL8
binlog 5 768.0375 771.5532 1.00
binlog 10 1224.535 1245.496 1.02
binlog 20 1597.48 1625.153 1.02
binlog 30 1859.603 1979.328 1.06
binlog 40 2164.329 2388.804 1.10
binlog 50 2572.827 2942.082 1.14
binlog 60 3158.408 3528.791 1.12
binlog 70 3883.275 4535.281 1.17
binlog 80 4390.69 5246.567 1.19
nobinlog 5 788.9388 783.155 0.99
nobinlog 10 1290.035 1294.098 1.00
nobinlog 20 1745.464 1743.759 1.00
nobinlog 30 2109.301 2158.267 1.02
nobinlog 40 2508.28 2649.695 1.06
nobinlog 50 3061.196 3334.766 1.09
nobinlog 60 3841.92 4168.089 1.08
nobinlog 70 4772.747 5140.316 1.08
nobinlog 80 5727.795 5947.848 1.04

Binary Log Effect

MySQL 8.0 Performance 2

In tabular form:

Buffer pool, GB server binlog nobinlog Ratio nobinlog / binlog
5 MYSQL8 768.0375 788.9388 1.03
5 PS57 771.5532 783.155 1.02
10 MYSQL8 1224.535 1290.0352 1.05
10 PS57 1245.496 1294.0983 1.04
20 MYSQL8 1597.48 1745.4637 1.09
20 PS57 1625.153 1743.7586 1.07
30 MYSQL8 1859.603 2109.3005 1.13
30 PS57 1979.328 2158.2668 1.09
40 MYSQL8 2164.329 2508.2799 1.16
40 PS57 2388.804 2649.6945 1.11
50 MYSQL8 2572.827 3061.1956 1.19
50 PS57 2942.082 3334.7656 1.13
60 MYSQL8 3158.408 3841.9203 1.22
60 PS57 3528.791 4168.0886 1.18
70 MYSQL8 3883.275 4772.7466 1.23
70 PS57 4535.281 5140.316 1.13
80 MYSQL8 4390.69 5727.795 1.30
80 PS57 5246.567 5947.8477 1.13

Conclusions

It seems that binary logs have quite an effect on MySQL 8.0: we see up to a 30% performance penalty, as opposed to 13% for Percona Server for MySQL 5.7.

In general, for in-memory workloads, Percona Server for MySQL 5.7 outperforms MySQL 8.0 by 10-20% with binary logs enabled, and 4-9% without binary logs enabled.

For IO-bound workloads (buffer pool size <= 30GB), the performance numbers for Percona Server for MySQL and MySQL are practically identical.

Hardware spec

Supermicro server:

  • Intel(R) Xeon(R) CPU E5-2683 v3 @ 2.00GHz
  • 2 sockets / 28 cores / 56 threads
  • Memory: 256GB of RAM
  • Storage: SAMSUNG SM863 1.9TB Enterprise SSD
  • Filesystem: ext4/xfs
  • Percona-Server-5.7.21-20
  • OS: Ubuntu 16.04.4, kernel 4.13.0-36-generic

Extra Raw Results, Scripts and Config

My goal is to provide fully repeatable benchmarks. I have shared all scripts and settings I used in the following GitHub repo:

https://github.com/Percona-Lab-results/201805-sysbench-tpcc-mysql8

The post How Binary Logs Affect MySQL 8.0 Performance appeared first on Percona Database Performance Blog.
