Aug
11
2017
--

Learning MySQL 5.7: Q & A

In this post I’ll answer questions I received in my Wednesday, July 19, 2017, webinar Learning MySQL 5.7!

First, thank you all who attended the webinar. The link to the slides and the webinar recording can be found here.

I received a number of interesting questions in the webinar that I’ve followed up with below.

Would there be a big difference in passing from 5.1 to 5.6 before going to 5.7, or, at this point, would it be roughly the same?

The biggest risk of jumping between versions, in this case from 5.1 to 5.6, is reverting in case of problems. Rollbacks don’t happen often, but they do happen, and you have to make sure you have the infrastructure in place before you decide to execute. These upgrade steps are not officially supported by Oracle, nor even recommended here at Percona. Having said that, as long as your tests (checksums, pt-upgrade) and rollback plan work, this shouldn’t be a problem.

One unforgettable issue I have personally encountered is an upgrade from 5.1 to 5.6 via dump and reload. The 5.6 server ran with the ROW binlog format, preventing replication back to 5.1 because of the limitation with TIMESTAMP columns. Similarly, downgrading without replication means you have to deal with changes to the MySQL system schema, which obviously requires some form of downtime.

Additionally, replication from 5.7 to 5.5 will not work because of the additional metadata information that 5.7 creates (i.e., GTID even when GTID is disabled).

After an in-place upgrade of a Percona XtraDB Cluster from 5.5 to 5.7 (through 5.6), innodb_file_per_table is enabled by default and the database is now almost twice the size. It was a 40 GB database; now it’s 80 GB, because every table has its own file but ibdata1 is still 40 GB. Is there any solution for this (that doesn’t involve mysqldump and drop tables), and how can this be avoided in future upgrades?

The reason this might be the case is that after upgrading, a number of tables (or possibly all of them) were [re]created. This would obviously create a separate tablespace for each. One way I can think of reclaiming that disk space is through a familiar upgrade path:

  1. Detach one of the nodes and make it an async replica of the remaining nodes in the cluster
  2. Dump and reload data from this node, then resume replication
  3. Join the other nodes from the cluster as additional nodes of a new cluster using the async replica
  4. Once there is only one node remaining in the original cluster, you can switch to the new cluster for production
  5. Rejoin the last node from the original cluster into the new cluster to complete the process

Depending on the semantics of your switch, it may or may not involve downtime. For example, if you use ProxySQL, this should be a transparent operation.

One way to avoid this problem is by testing. Testing the upgrade process in a lab will expose this kind of information even before deploying the new version into production, allowing you to adjust your process accordingly.

What is a possible impact on upgrades going from the old table format to Barracuda?

So far I am not aware of any negative impact – except if you upgrade and need to downgrade, but have since created indexes with prefixes larger than what was supported on the previous version (see the innodb_large_prefix and Barracuda documentation).

Upgrading to Barracuda and one of the supported row formats specifically allows memory-constrained systems to save a little more. With BLOB/TEXT columns stored off-page, they will not fill the buffer pool unless they are needed.
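
As a minimal sketch (the table name is hypothetical; innodb_file_format=Barracuda and innodb_file_per_table=ON are assumed, and both are the defaults in MySQL 5.7), moving a table to an off-page-capable row format looks like this:

mysql> ALTER TABLE app.documents ROW_FORMAT=DYNAMIC;

ROW_FORMAT=COMPRESSED is the other Barracuda format, trading some CPU for smaller on-disk pages.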

How do you run mysql_upgrade in parallel?

Good question, I actually wrote about it here.

Can you elaborate on the ALTER progress features, and is it also applicable to an “Optimization” query?

I was not able to get more details on the “Optimization” part of this question. I can only assume this too was meant to be a table rebuild via OPTIMIZE TABLE. First, I would like to point out that OPTIMIZE has been an online DDL operation since 5.6 (with a few limitations). As such, there is almost no point in monitoring it. Also, for the cases where online DDL does not apply to OPTIMIZE, under the hood it is ALTER TABLE .. FORCE – a full table rebuild.

Now, for the actual ALTER process doing a table copy/rebuild, MySQL 5.7 provides some form of progress indication of how much work has been done. However, it does not necessarily provide an estimate of the actual time it will take to complete. Each ALTER process has different phases, which can vary under different conditions. Alternatively, you can also employ other ways of monitoring progress, as described in the post.
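
For reference, here is a minimal sketch of reading that progress indication through performance_schema stage events, as documented for MySQL 5.7 (run the SELECT from a separate session while the ALTER is executing):

mysql> UPDATE performance_schema.setup_instruments SET ENABLED = 'YES' WHERE NAME LIKE 'stage/innodb/alter table%';
mysql> UPDATE performance_schema.setup_consumers SET ENABLED = 'YES' WHERE NAME LIKE 'events_stages_%';
mysql> SELECT EVENT_NAME, WORK_COMPLETED, WORK_ESTIMATED FROM performance_schema.events_stages_current;

WORK_COMPLETED versus WORK_ESTIMATED gives a rough percentage of the copy/rebuild, with the caveats about time estimates mentioned above.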

We migrated from Percona Server 5.7.11 to 5.7.17 and are facing “Column 1 of table 'x.x' cannot be converted from type 'varchar(100)' to type 'varchar(100)'”.

This is interesting – what we have seen so far are errors with different data types or sizes, which most likely means the table structures are inconsistent, assuming the error is coming from replication. We will need more information on what steps were taken during the upgrade to tell what happened here. Our forums would be the best place to continue this conversation. To begin with, perhaps slave_type_conversions might help while you verify whether the table structures on the master and the replica really are the same.
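
If it turns out the difference is one that allows safe attribute promotion, a sketch of that mitigation on the replica (whether it is appropriate depends on the actual difference between the two table definitions):

mysql> STOP SLAVE SQL_THREAD;
mysql> SET GLOBAL slave_type_conversions = 'ALL_NON_LOSSY';
mysql> START SLAVE SQL_THREAD;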

Is the Boost Geometry almost on par with Postgres GIS functions?

I cannot answer this with authority or certainty. I’ve used the GIS functions in MySQL, but have not developed code with them. Although Boost::Geometry was chosen because of its well-designed API, rapid development and license compatibility, that does not necessarily mean it is more mature than PostGIS (which is widely adopted).

What is the best bulk insert method for MySQL 5.7?

The best option depends on the situation, so let me give some example scenarios and what might work best:

  • On an upgrade process where you are doing a full dump and reload, parallelizing the process by using mydumper/myloader or mysqlpump can save a lot of time, depending on the hardware resources available.
  • For bulk INSERTs from your application that happen at regular intervals, multi-row inserts are always ideal to reduce disk writes per insert. LOAD DATA INFILE is also a popular option if you can use it (see the sketch below).
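
A minimal sketch of both approaches (the events table and the CSV path are hypothetical):

INSERT INTO events (event_date, profile_id) VALUES
  ('2017-07-19 10:00:00', 1),
  ('2017-07-19 10:00:01', 2),
  ('2017-07-19 10:00:02', 3);

LOAD DATA INFILE '/tmp/events.csv'
  INTO TABLE events
  FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
  LINES TERMINATED BY '\n'
  (event_date, profile_id);

Keep in mind that LOAD DATA INFILE is restricted by secure_file_priv, so the file has to be readable by the server and located in the directory that variable points to.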

Again, thank you for attending the webinar – if you have additional questions head on out to the Percona Forums!

Jul
14
2017
--

A Little Trick Upgrading to MySQL 5.7

In this blog post, I’ll look at a trick we use at Percona when upgrading to MySQL 5.7.

I’ll be covering this subject (and others) in my webinar Learning MySQL 5.7 on Wednesday, July 19, 2017.

We’ve been doing upgrades for quite a while here at Percona, and we try to optimize, standardize and improve this process to save time. When upgrading to MySQL 5.7, more often than not you need to run REPAIR or ALTER via mysql_upgrade on a number of MySQL tables. Sometimes a few hundred, sometimes hundreds of thousands.

One way to cut some time from testing or executing mysql_upgrade is to combine it with mysqlcheck, which identifies tables that need to be rebuilt or repaired. The first step is to capture the output of this process:

revin@acme:~$ mysqlcheck --check-upgrade --all-databases > mysql-check.log

This provides a lengthy output of what needs to be done to successfully upgrade our tables. On my test data, I get error reports like the ones below. I’ll need to take the specified action against them:

ads.agency
error    : Table upgrade required. Please do "REPAIR TABLE `agency`" or dump/reload to fix it!
store.categories
error    : Table rebuild required. Please do "ALTER TABLE `categories` FORCE" or dump/reload to fix it!

Before we run through this upgrade, let’s get an idea of how long it would take for a regular mysql_upgrade to complete on this dataset:

revin@acme:~$ time mysql_upgrade
Enter password:
Checking if update is needed.
Checking server version.
Running queries to upgrade MySQL server.
Checking system database.
mysql.columns_priv                                 OK
mysql.db                                           OK
...
mysql.user                                         OK
Upgrading the sys schema.
Checking databases.
ads.account_preference_assoc         OK
...
Repairing tables
...
ads.agency
Note     : TIME/TIMESTAMP/DATETIME columns of old format have been upgraded to the new format.
status   : OK
...
`store`.`categories`
Running  : ALTER TABLE `store`.`categories` FORCE
status   : OK
...
Upgrade process completed successfully.
Checking if update is needed.
real	25m57.482s
user	0m0.024s
sys	0m0.072s

On a cold server, my baseline above took about 25 minutes.

The second step in our time-saving process is to identify the tables that need some action (in this case, REPAIR and ALTER … FORCE), generate the commands to run them, and put them into a single file:

revin@acme:~$ for t in $(cat mysql-check.log |grep -B1 REPAIR | egrep -v 'REPAIR|--');
	do echo "mysql -e 'REPAIR TABLE $t;'" >> upgrade.sql; done
revin@acme:~$ for t in $(cat mysql-check.log |grep -B1 ALTER | egrep -v 'ALTER|--');
	do echo "mysql -e 'ALTER TABLE $t FORCE;'" >> upgrade.sql; done

My upgrade.sql file will have something like this:

mysql -e 'ALTER TABLE store.categories FORCE;'
mysql -e 'REPAIR TABLE ads.agency;'

Now we should be ready to run these commands in parallel as the third step in the process:

revin@acme:~$ time parallel -j 4 -- < upgrade.sql
...
real	17m31.448s
user	0m1.388s
sys	0m0.616s

Getting some parallelization is not bad: the process improved by about a third. If we are talking about multi-terabyte data sets, that is already a big gain.

On the other hand, my dataset has a few tables that are much bigger than the rest. Since the commands are processed in the order mysqlcheck reported them, one thread ended up processing most of the large tables instead of the work being spread evenly by size. To fix this, we need an idea of the size of each table we will be processing. We can use a query against INFORMATION_SCHEMA.TABLES for this purpose:

revin@acme:~$ for t in $(cat mysql-check.log |grep -B1 ALTER | egrep -v 'ALTER|--');
	do d=$(echo $t|cut -d'.' -f1); tbl=$(echo $t|cut -d'.' -f2);
	s=$(mysql -BNe "select sum(index_length+data_length) from information_schema.tables where table_schema='$d' and table_name='$tbl';");
	echo "$s |mysql -e 'ALTER TABLE $t FORCE;'" >> table-sizes.sql; done
revin@acme:~$ for t in $(cat mysql-check.log |grep -B1 REPAIR | egrep -v 'REPAIR|--');
	do d=$(echo $t|cut -d'.' -f1); tbl=$(echo $t|cut -d'.' -f2);
	s=$(mysql -BNe "select sum(index_length+data_length) from information_schema.tables where table_schema='$d' and table_name='$tbl';");
	echo "$s |mysql -e 'REPAIR TABLE $t;'" >> table-sizes.sql; done

Now my table-sizes.sql file will have contents like below, which I can sort and pass to the parallel command again and cut even more time!

32768 |mysql -e 'REPAIR TABLE ads.agency;'
81920 |mysql -e 'ALTER TABLE store.categories FORCE;'

revin@acme:~$ cat table-sizes.sql |sort -rn|cut -d'|' -f2 > upgrade.sql
revin@acme:~$ time parallel -j 4 -- < upgrade.sql
...
real	8m1.116s
user	0m1.260s
sys	0m0.624s

This go-around, my total execution time is 8 minutes – a good 65% improvement. To wrap it up, we will need to run mysql_upgrade one last time so that the system tables are also upgraded and the tables are checked again, and then restart the MySQL server as instructed by the manual:

revin@acme:~$ time mysql_upgrade --force

The whole process should be easy to automate and script, depending on your preference. Lastly: YMMV. If you have one table that is more than half the size of your total data set, there might not be big gains.

If you want to learn more about upgrading to MySQL 5.7, come to my webinar on Wednesday, July 19: Learning MySQL 5.7. This process is only one of the phases in a multi-step upgrade process when moving to 5.7. I will discuss them in more detail next week. Register now from the link below, and I’ll talk to you soon!

Oct
18
2016
--

Upgrading to MySQL 5.7? Beware of the new STRICT mode

This blog post discusses the ramifications of STRICT mode in MySQL 5.7.

In short

By default, MySQL 5.7 is much “stricter” than older versions of MySQL. That can make your application fail. To temporarily fix this, change the SQL_MODE to NO_ENGINE_SUBSTITUTION (same as in MySQL 5.6):

mysql> set global SQL_MODE="NO_ENGINE_SUBSTITUTION";
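
Note that SET GLOBAL only lasts until the next restart (and only affects connections opened after the change). To make it permanent, add the setting to my.cnf, for example:

[mysqld]
sql_mode = NO_ENGINE_SUBSTITUTION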

MySQL 5.7, dates and default values

The default SQL_MODE in MySQL 5.7 is:

ONLY_FULL_GROUP_BY,STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION

That makes MySQL operate in “strict” mode for transactional tables.

“Strict mode controls how MySQL handles invalid or missing values in data-change statements such as INSERT or UPDATE. A value can be invalid for several reasons. For example, it might have the wrong data type for the column, or it might be out of range. A value is missing when a new row to be inserted does not contain a value for a non-NULL column that has no explicit DEFAULT clause in its definition. (For a NULL column, NULL is inserted if the value is missing.) Strict mode also affects DDL statements such as CREATE TABLE.”
http://dev.mysql.com/doc/refman/5.7/en/sql-mode.html#sql-mode-strict

That also brings up an interesting problem with the default value for the date/datetime column. Let’s say we have the following table in MySQL 5.7, and want to insert a row into it:

mysql> CREATE TABLE `events_t` (
-> `id` int(11) NOT NULL AUTO_INCREMENT,
-> `event_date` datetime NOT NULL,
-> `profile_id` int(11) DEFAULT NULL,
-> PRIMARY KEY (`id`),
-> KEY `event_date` (`event_date`),
-> KEY `profile_id` (`profile_id`)
-> ) ENGINE=InnoDB DEFAULT CHARSET=latin1
-> ;
Query OK, 0 rows affected (0.02 sec)
mysql> insert into events_t (profile_id) values (1);
ERROR 1364 (HY000): Field 'event_date' doesn't have a default value

The event_date column does not have a default value, and we are inserting a row without a value for event_date. That causes an error in MySQL 5.7. If we can’t use NULL, we will have to create a default value. In strict mode, we can’t use “0000-00-00” either:

mysql> alter table events_t change event_date event_date datetime NOT NULL default '0000-00-00 00:00:00';
ERROR 1067 (42000): Invalid default value for 'event_date'
mysql> alter table events_t change event_date event_date datetime NOT NULL default '2000-00-00 00:00:00';
ERROR 1067 (42000): Invalid default value for 'event_date'

We have to use a real date:

mysql> alter table events_t change event_date event_date datetime NOT NULL default '2000-01-01 00:00:00';
Query OK, 0 rows affected (0.00 sec)
Records: 0 Duplicates: 0 Warnings: 0
mysql> insert into events_t (profile_id) values (1);
Query OK, 1 row affected (0.00 sec)

Or, most likely a much better approach is to change the application logic to:

  • allow NULLs, or
  • always insert the real dates (i.e., use the NOW() function), or
  • change the table field to a timestamp and update it automatically if no value has been assigned (see the sketch below)
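
A minimal sketch of that last option, reusing the events_t table from above (note that since MySQL 5.6.5 a DATETIME column can also use DEFAULT CURRENT_TIMESTAMP, so switching the type is optional):

mysql> alter table events_t change event_date event_date datetime NOT NULL default CURRENT_TIMESTAMP;
mysql> insert into events_t (profile_id) values (1);
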
Further reading

Read Morgan Tocker’s article on how to transition to MySQL 5.7, and check the full sql_mode documentation.

Oct
12
2016
--

MySQL 5.7 Performance Tuning Immediately After Installation

This blog updates Stephane Combaudon’s blog on MySQL performance tuning, and covers MySQL 5.7 performance tuning immediately after installation.

A few years ago, Stephane Combaudon wrote a blog post on Ten MySQL performance tuning settings after installation that covers the (now) older versions of MySQL: 5.1, 5.5 and 5.6. In this post, I will look into what to tune in MySQL 5.7 (with a focus on InnoDB).

The good news is that MySQL 5.7 has significantly better default values. Morgan Tocker created a page with a complete list of features in MySQL 5.7, and it is a great reference point for checking which variables now ship with sensible defaults.

In MySQL 5.7, there are only four really important variables that need to be changed. However, there are other InnoDB and global MySQL variables that might need to be tuned for a specific workload and hardware.

To start, add the following settings to my.cnf under the [mysqld] section. You will need to restart MySQL:

[mysqld]
# other variables here
innodb_buffer_pool_size = 1G # (adjust value here, 50%-70% of total RAM)
innodb_log_file_size = 256M
innodb_flush_log_at_trx_commit = 1 # may change to 2 or 0
innodb_flush_method = O_DIRECT

Description:

  • innodb_buffer_pool_size: start with 50%-70% of total RAM; it does not need to be larger than the database size
  • innodb_flush_log_at_trx_commit: 1 (the default); 0 or 2 give more performance but less reliability
  • innodb_log_file_size: 128M to 2G (does not need to be larger than the buffer pool)
  • innodb_flush_method: O_DIRECT (avoids double buffering)

 

What is next?

These are a good starting point for any new installation. There are a number of other variables that can increase MySQL performance for some workloads. Usually, I would set up a MySQL monitoring/graphing tool (for example, the Percona Monitoring and Management platform) and then check the MySQL dashboard to perform further tuning.

What can we tune further based on the graphs?

InnoDB buffer pool size. Look at the graphs:

As we can see, we can probably benefit from increasing the InnoDB buffer pool size a bit to ~10G, as we have RAM available and the number of free pages is small compared to the total buffer pool.
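
Since MySQL 5.7 the InnoDB buffer pool can be resized online, so the increase suggested above can be applied without a restart and then persisted in my.cnf (a sketch; the server rounds the value up to a multiple of innodb_buffer_pool_chunk_size * innodb_buffer_pool_instances):

mysql> SET GLOBAL innodb_buffer_pool_size = 10 * 1024 * 1024 * 1024;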

InnoDB log file size. Look at the graph:

As we can see here, InnoDB usually writes 2.26 GB of data per hour, which exceeds the total size of the log files (2G). We can now increase the innodb_log_file_size variable and restart MySQL. Alternatively, use “show engine InnoDB status” to calculate a good InnoDB log file size.
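
One common way to do that calculation (a sketch; the log sequence numbers below are made up to match the ~2.26 GB/hour rate above) is to sample the log sequence number twice, 60 seconds apart:

mysql> pager grep 'Log sequence number'
mysql> SHOW ENGINE INNODB STATUS\G SELECT SLEEP(60); SHOW ENGINE INNODB STATUS\G
Log sequence number 3836410803
Log sequence number 3874110803

The difference between the two samples is the amount of redo written in that minute (about 37.7 MB here, or roughly 2.26 GB per hour); size the combined log files to hold around an hour of writes.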

Other variables

There are a number of other InnoDB variables that can be further tuned:

innodb_autoinc_lock_mode

Setting innodb_autoinc_lock_mode=2 (interleaved mode) can remove the need for the table-level AUTO-INC lock (and can increase performance when multi-row insert statements are used to insert values into tables with an auto_increment primary key). This requires binlog_format=ROW or MIXED (and ROW is the default in MySQL 5.7).
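
A minimal my.cnf sketch of that combination (innodb_autoinc_lock_mode is not dynamic, so a restart is required):

[mysqld]
innodb_autoinc_lock_mode = 2
binlog_format = ROW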

innodb_io_capacity and innodb_io_capacity_max

This is more advanced tuning, and it only makes sense when you are performing a lot of writes all the time (it does not apply to reads, i.e., SELECTs). If you really need to tune it, the best method is to know how many IOPS the system can do. For example, if the server has one SSD drive, we can set innodb_io_capacity_max=6000 and innodb_io_capacity=3000 (50% of the max). It is a good idea to run sysbench or another benchmark tool to measure the disk throughput.
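
Both variables are dynamic, so the illustrative values above can be applied at runtime and then persisted in my.cnf; raise innodb_io_capacity_max first, since innodb_io_capacity may not exceed it:

mysql> SET GLOBAL innodb_io_capacity_max = 6000;
mysql> SET GLOBAL innodb_io_capacity = 3000;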

But do we need to worry about this setting? Look at the graph of buffer pool’s “dirty pages“:

In this case, the total amount of dirty pages is high, and it looks like InnoDB can’t keep up with flushing them. If we have a fast disk subsystem (i.e., SSD), we might benefit from increasing innodb_io_capacity and innodb_io_capacity_max.

Conclusion or TL;DR version

The new MySQL 5.7 defaults are much better for general-purpose workloads. At the same time, we still need to configure InnoDB variables to take advantage of the amount of RAM on the box. After installation, follow these steps:

  1. Add InnoDB variables to my.cnf (as described above) and restart MySQL
  2. Install a monitoring system (e.g., the Percona Monitoring and Management platform)
  3. Look at the graphs and determine if MySQL needs to be tuned further
Jul
19
2016
--

Upcoming Webinar Wednesday July 20, 11 am PDT: Practical MySQL Performance Optimization

Are you looking to improve your MySQL performance? Application success is often limited by poor MySQL performance. Please join Percona CEO and Founder Peter Zaitsev for this exclusive webinar on Wednesday, July 20th, 2016 at 11:00 AM PDT (UTC – 7) as he presents “Practical MySQL Performance Optimization“.

Peter Zaitsev discusses how to get excellent MySQL performance while being practical. In other words, spending time on what gives you the best return. The webinar updates Peter’s ever-popular Practical MySQL Performance Optimization presentation. It covers the important points for improving MySQL performance. It also includes a discussion of the new tools and features in the latest MySQL 5.7 release, as well as their most important aspects – so you can employ them for maximum database performance.

Areas covered:

  • Hardware
  • MySQL Configuration
  • Schema and Queries
  • Application Architecture
  • MySQL 5.7 New Tools and Features

Peter will highlight practical approaches and techniques for optimizing your time. He will also focus on the queries that are most important for your application. At the end of this webinar, you will know how to optimize MySQL performance in the most practical way.

Peter Zaitsev, CEO

Peter Zaitsev co-founded Percona and assumed the role of CEO in 2006. As one of the foremost experts on MySQL strategy and optimization, Peter leveraged both his technical vision and entrepreneurial skills to grow Percona from a two-person shop to one of the most respected open source companies in the business. With over 150 professionals in 20 plus countries, Peter’s venture now serves over 3000 customers – including the “who’s who” of internet giants, large enterprises and many exciting startups. Percona was named to the Inc. 5000 in 2013, 2014 and 2015.

Peter was an early employee at MySQL AB, eventually leading the company’s High Performance Group. A serial entrepreneur, Peter co-founded his first startup while attending Moscow State University where he majored in Computer Science. Peter is a co-author of High Performance MySQL: Optimization, Backups, and Replication, one of the most popular books on MySQL performance. Peter frequently speaks as an expert lecturer at MySQL and related conferences, and regularly posts on the Percona Data Performance Blog. He has also been tapped as a contributor to Fortune and DZone, and his recent ebook Practical MySQL Performance Optimization Volume 1 is one of percona.com’s most popular downloads. Peter lives in North Carolina with his wife and two children. In his spare time, Peter enjoys travel and spending time outdoors.

Jun
07
2016
--

Severe performance regression in MySQL 5.7 crash recovery

In this post, we’ll discuss some insight I’ve gained regarding severe performance regression in MySQL 5.7 crash recovery.

While working on different InnoDB log file sizes for my previous post, What is a big innodb_log_file_size?, I tried to understand how we can make InnoDB crash recovery faster, but found a rather surprising 5.7 crash recovery regression.

Basically, crash recovery in MySQL 5.7 is two times slower, due to this issue: https://bugs.mysql.com/bug.php?id=80788. InnoDB now performs the log scan twice, compared to a single scan in MySQL 5.6 (no surprise that there is performance degradation).

Fortunately, there is a proposed patch for MySQL 5.7, so I hope it will be improved soon.

As for general crash recovery improvement, my opinion is that it would be much improved by making it multi-threaded. Right now it is significantly limited by the single thread that reads and processes log entries one-by-one. With the current hardware, consisting of tens of cores and fast SSD, we can improve crash recovery by utilizing all the resources we have.

One small improvement that can be made is to disable PERFORMANCE_SCHEMA during recovery (these stats are not needed anyway).
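
A sketch of that workaround: performance_schema is not dynamic, so it has to be disabled before starting the instance that will perform recovery (and re-enabled with another restart once recovery completes):

[mysqld]
performance_schema = OFF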

Jun
03
2016
--

MySQL 5.7 By Default 1/3rd Slower Than 5.6 When Using Binary Logs

Researching a performance issue, we came to a startling discovery:

MySQL 5.7 + binlogs is by default 37-45% slower than MySQL 5.6 + binlogs when otherwise using the default MySQL settings

Test server MySQL versions used:
i7, 8 threads, SSD, CentOS 7.2.1511
mysql-5.6.30-linux-glibc2.5-x86_64
mysql-5.7.12-linux-glibc2.5-x86_64

mysqld --options:

--no-defaults --log-bin=mysql-bin --server-id=2

Run details:
Sysbench version 0.5, 4 threads, socket file connection

Sysbench Prepare: 

sysbench --test=/usr/share/doc/sysbench/tests/db/parallel_prepare.lua --oltp-auto-inc=off --mysql-engine-trx=yes --mysql-table-engine=innodb --oltp_table_size=1000000 --oltp_tables_count=1 --mysql-db=test --mysql-user=root --db-driver=mysql --mysql-socket=/path_to_socket_file/your_socket_file.sock prepare

Sysbench Run:

sysbench --report-interval=10 --oltp-auto-inc=off --max-time=50 --max-requests=0 --mysql-engine-trx=yes --test=/usr/share/doc/sysbench/tests/db/oltp.lua --init-rng=on --oltp_index_updates=10 --oltp_non_index_updates=10 --oltp_distinct_ranges=15 --oltp_order_ranges=15 --oltp_tables_count=1 --num-threads=4 --oltp_table_size=1000000 --mysql-db=test --mysql-user=root --db-driver=mysql --mysql-socket=/path_to_socket_file/your_socket_file.sock run

Results:

5.6.30: transactions: 7483 (149.60 per sec.)
5.7.12: transactions: 4689 (93.71 per sec.)  — That is a 37.36% decrease!

Note: on high-end systems with premium IO (think Fusion-IO, memory-only, high-end SSD with good caching throughput), the difference would be much smaller or negligible.

The reason?

A helpful comment from Shane Bester on a related bug report made me realize what was happening. Note the following in the MySQL Manual:

“Prior to MySQL 5.7.7, the default value of sync_binlog was 0, which configures no synchronizing to disk—in this case, the server relies on the operating system to flush the binary log’s contents from time to time as for any other file. MySQL 5.7.7 and later use a default value of 1, which is the safest choice, but as noted above can impact performance.” — https://dev.mysql.com/doc/refman/5.7/en/replication-options-binary-log.html#sysvar_sync_binlog

The culprit is thus the --sync_binlog=1 change made in 5.7.7 (in 5.6 it is 0 by default). While this may indeed be “the safest choice,” one has to wonder why Oracle chose to implement this default change in 5.7.7. After all, there are many other options that aid crash safety.
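
For those who knowingly accept the reduced crash safety, reverting to the 5.6 behavior (or picking a middle ground) is a one-line change; a my.cnf sketch:

[mysqld]
sync_binlog = 0
# or sync_binlog = 100 to sync once per 100 binary log commit groups

sync_binlog is also a dynamic variable, so it can be tested at runtime with SET GLOBAL before being persisted.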

A related blog post from the MySQL HA team states:

“Indeed, [with sync_binlog=1,] it increases the total number of fsyncs called, but since MySQL 5.6, the server groups transactions and fsync’s them together, which minimizes greatly a potential performance hit.” — http://mysqlhighavailability.com/replication-defaults-in-mysql-5-7-7/ (ref item #4)

This seems incorrect given our findings, unless perhaps it requires tuning some other option.

This raises some actions points/questions for Oracle’s team: why change this now? Was 5.6 never crash-safe in terms of binary logging? How about other options that aid crash safety? Is anything [before 5.7.7] really ACID compliant by default?

In 2009, my colleague Peter Zaitsev had already posted on performance matters in connection with sync_binlog issues. More than seven years later, the questions asked in his post may still be valid today:

“May be opening binlog with O_DSYNC flag if sync_binlog=1 instead of using fsync will help? Or may be binlog pre-allocation would be good solution.” — PZ

Testing the same setup again, but this time with sync_binlog=0 and then sync_binlog=1 configured identically on both servers, we see:

Results for sync_binlog=0:

5.6.30: transactions: 7472 (149.38 per sec.)
5.7.12: transactions: 6594 (131.86 per sec.)  — An 11.73% decrease

Results for sync_binlog=1:

5.6.30: transactions: 3854 (77.03 per sec.)
5.7.12: transactions: 4597 (91.89 per sec.)  — A 19.29% increase

Note: the increase here is to some extent negated by the fact that enabling sync_binlog overall still causes a significant (30% on 5.7 and 48% on 5.6) performance drop. Also interesting is that this could be the effect of “tuning the defaults” in 5.7, and it also makes one think about the possibility of further defaults tuning/optimization in this area.

Results for sync_binlog=100:

5.6.30: transactions: 7564 (151.12 per sec.)
5.7.12: transactions: 6515 (130.22 per sec.) — A 13.83% decrease

Thus, while 5.7.12 made some improvements when it comes to --sync_binlog=1, when --sync_binlog is turned off or is set to 100, we still see a ~11% decrease in performance. This is the same when not using binary logging at all, as a test with only --no-defaults (i.e., 100% vanilla out-of-the-box MySQL 5.6.30 versus MySQL 5.7.12) shows:

Results without binlogs enabled:

5.6.30: transactions: 7891 (157.77 per sec.)
5.7.12: transactions: 6963 (139.22 per sec.)  — An 11.76% decrease

This raises another question for Oracle’s team: why is there a ~11% decrease in performance for 5.7.12 versus 5.6.30 (both vanilla) even with just four threads?

Discussing this internally, we were interested to see whether the arbitrarily low number of four threads skewed the results and perhaps only showed a less realistic use case. However, when testing with more threads, the numbers became worse still:

Results with 100 threads:

5.6.30: transactions: 20216 (398.89 per sec.)
5.7.12: transactions: 11097 (218.43 per sec.) — A 45.24% decrease

Results with 150 threads:

5.6.30: transactions: 11852 (233.01 per sec.)
5.7.12: transactions: 6606 (129.80 per sec.) — A 44.29% decrease

The findings in this article were compiled from a group effort.

May
23
2016
--

Percona XtraBackup 2.4.3 is now available

Percona is glad to announce the GA release of Percona XtraBackup 2.4.3 on May 23rd, 2016. Downloads are available from our download site and from apt and yum repositories.

Percona XtraBackup enables MySQL backups without blocking user queries, making it ideal for companies with large data sets and mission-critical applications that cannot tolerate long periods of downtime. Offered free as an open source solution, Percona XtraBackup drives down backup costs while providing unique features for MySQL backups.

New Features:

  • Percona XtraBackup has implemented a new --reencrypt-for-server-id option. Using this option allows users to start the server instance with a different server_id from the one the encrypted backup was taken from, like a replication slave or a Galera node. When this option is used, xtrabackup will, as a prepare step, generate a new master key with an ID based on the new server_id, store it in the keyring file and re-encrypt the tablespace keys inside the tablespace headers.

Bugs Fixed:

  • Running DDL statements on Percona Server 5.7 during the backup process could in some cases lead to failure while preparing the backup. Bug fixed #1555626.
  • MySQL 5.7 can sometimes skip redo logging when creating an index. If such an ALTER TABLE is issued during the backup, the backup would be inconsistent. xtrabackup will now abort with an error message if such an ALTER TABLE has been done during the backup. Bug fixed #1582345.
  • .ibd files for remote tablespaces were not copied back to the original location pointed by the .isl files. Bug fixed #1555423.
  • When called with insufficient parameters, like specifying the empty --defaults-file option, Percona XtraBackup could crash. Bug fixed #1566228.
  • The documentation states that the default value for --ftwrl-wait-query-type is all; however, it was actually update. The default value has been changed to reflect the documentation. Bug fixed #1566315.
  • When the --keyring-file-data option was specified but no keyring file was found, xtrabackup would create an empty one instead of reporting an error. Bug fixed #1578607.
  • If ALTER INSTANCE ROTATE INNODB MASTER KEY was run at the same time that xtrabackup --backup was bootstrapping, it could catch a moment when the key was not yet written into the keyring file, and xtrabackup would overwrite the keyring with the old copy, so the new key would be lost. Bug fixed #1582601.
  • The output of the --slave-info option was missing an apostrophe. Bug fixed #1573371.

Release notes with all the bugfixes for Percona XtraBackup 2.4.3 are available in our online documentation. Bugs can be reported on the launchpad bug tracker.

Mar
17
2016
--

Percona Server 5.7 performance improvements

In this blog post, we’ll be discussing Percona Server 5.7 performance improvements.

Starting from the Percona Server 5.6 release, we’ve introduced several significant changes that help address performance problems for highly-concurrent I/O-bound workloads. Some of our research and improvements were re-implemented for MySQL 5.7 – one of the best MySQL releases. But even though MySQL 5.7 showed progress in various aspects of scalability and performance, we’ve found that it’s possible to push I/O bound workload limits even further.

Percona Server 5.7.11 currently has two major performance features in this area:

  • Multi-threaded LRU flusher. In a limited form, this feature exists in Percona Server 5.6. We split LRU flushing out of the existing page cleaner thread, which is now solely tasked with flushing the flush list. Along with several other important changes, this notably improved I/O-bound workload performance. MySQL 5.7 has also made a step forward by introducing a pool of page cleaner threads that should help improve parallelism in flushing. However, we believe that the current approach is not good enough – especially for LRU flushing. In one of our next Percona Server 5.7 performance improvements posts, we’re going to describe aspects of MT flushing, and why it’s especially important to have an independent MT LRU flusher.
  • Parallel doublewrite buffer. For ages, MySQL has had only one doublewrite buffer for flushing data pages. So even if you had several threads for flushing you couldn’t efficiently use them – doublewrite quickly became a bottleneck. We’ve changed that by attaching two doublewrite buffers to each buffer pool instance: one for each type of page flushing (LRU and flush list). This completely avoids any doublewrite contention, regardless of the flusher thread count. We’ve also moved the doublewrite buffer out of the system tablespace so you can now configure its location.

Now let’s review the results of a sysbench OLTP_RW, I/O-bound scenario. Below are the key settings that we used in our test:

  • dataset 100GB
  • innodb_buffer_pool_size=25GB
  • innodb_doublewrite=1
  • innodb_flush_log_at_trx_commit=1

While evaluating MySQL 5.7 RC we observed a performance drop in I/O-bound workloads, and it looked very similar to MySQL 5.6 behavior. The reason for the drop is the lack of free pages in the buffer pool. Page cleaner threads are unable to perform enough LRU flushing to keep up with the demand, and the query threads resort to performing single-page flushes. This results in increased contention between all of the flushing structures (especially the doublewrite buffer).

For ages (Vadim discussed this ten years ago!) InnoDB has had a universal workaround for most scalability issues: the innodb_thread_concurrency system variable. It allows you to limit the number of active threads within InnoDB and reduce shared resource contention. However, it comes with a trade-off in that the maximum possible performance is also limited.
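
innodb_thread_concurrency is dynamic, so the workaround can be applied at runtime; a sketch using the same value as in the test below:

mysql> SET GLOBAL innodb_thread_concurrency = 64;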

To understand the effect, we ran the test two times with two different InnoDB concurrency settings:

  • innodb_thread_concurrency=0: with this default value Percona Server 5.7 shows the best results, while MySQL 5.7 shows sharply decreasing performance with more than 64 concurrent clients.
  • innodb_thread_concurrency=64: limiting the number of threads inside InnoDB affects throughput for Percona Server slightly (with a small drop from the default setting), but for MySQL that setting change is a huge help. There were no drops in performance after 64 threads, and it’s able to maintain this performance level up to 4k threads (with some variance).

To understand the details better, let’s zoom into the test run with 512 threads:

The charts above show that contention significantly affects throughput with unrestricted concurrency, but affects latency even worse. Limiting concurrency helps to address the contention, but even with this setting Percona Server shows 15-25% better results.

Below you can see the contention situation for each of the above runs. The graphs show total accumulated waiting time across all threads per synchronization object (per second). For example, the absolute hottest object across all graphs is the doublewrite mutex in MySQL-5.7.11 (without thread concurrency limitation). It has about 17 seconds of wait time across 512 client threads for each second of run time.

 

MySQL server settings:

innodb_log_file_size=10G
innodb_doublewrite=1
innodb_flush_log_at_trx_commit=1
innodb_buffer_pool_instances=8
innodb_change_buffering=none
innodb_adaptive_hash_index=OFF
innodb_flush_method=O_DIRECT
innodb_flush_neighbors=0
innodb_read_io_threads=8
innodb_write_io_threads=8
innodb_lru_scan_depth=8192
innodb_io_capacity=15000
innodb_io_capacity_max=25000
loose-innodb-page-cleaners=4
table_open_cache_instances=64
table_open_cache=5000
loose-innodb-log_checksum-algorithm=crc32
loose-innodb-checksum-algorithm=strict_crc32
max_connections=50000
skip_name_resolve=ON
loose-performance_schema=ON
loose-performance-schema-instrument='wait/synch/%=ON',

Conclusion

If you are already testing 5.7, consider giving Percona Server a spin – especially if your workload is I/O bound. We’ve worked hard on Percona Server 5.7 performance improvements. In upcoming posts, we will delve into the technical details of our LRU flushing and doublewrite buffer changes.

Mar
15
2016
--

Percona Server 5.7.11-4 is now available

Percona is glad to announce the GA release of Percona Server 5.7.11-4 on March 15, 2016. Download the latest version from the Percona web site or from the Percona Software Repositories.

Based on MySQL 5.7.11, and including all the bug fixes in it, Percona Server 5.7.11-4 is the current GA release in the Percona Server 5.7 series. All of Percona’s software is open-source and free; all the details of the release can be found in the 5.7.11-4 milestone at Launchpad.

New Features:

Bugs Fixed:

  • If the pid-file option wasn’t specified with the full path, the Ubuntu/Debian sysvinit script wouldn’t notice whether the server was actually running, which would lead to a timeout or, in some cases, even a hang. Bug fixed #1549333.
  • The buffer pool could fail to remove dirty pages for a particular tablespace from the flush list, as requested by, for example, DROP TABLE or TRUNCATE TABLE commands. This could lead to a crash. Bug fixed #1552673.
  • The Audit Log Plugin worker thread could crash when a write call wrote fewer bytes than requested. Bug fixed #1552682 (upstream #80606).
  • The Percona Server 5.7 systemd script now takes the last option specified in my.cnf if the same option is specified multiple times. Previously, it would try to take all values, which would break the script and the server would fail to start. Bug fixed #1554976.
  • The mysqldumpslow script has been removed because it was not compatible with the Percona Server extended slow query log format. Please use pt-query-digest from Percona Toolkit instead. Bug fixed #856910.

Other bugs fixed: #1521120, #1549301 (upstream #80496), and #1554043 (upstream #80607).

Release notes for Percona Server 5.7.11-4 are available in the online documentation. Please report any bugs on the launchpad bug tracker.
