Percona Server for MySQL Highlights – binlog_space_limit

I think it is often confusing to compare upstream MySQL and Percona Server for MySQL, and some helpful information can be found in the introductory notes. But what does that really mean for an ordinary DBA, especially if none of the big, well-known extra features are important in a particular use case?

In this article, I would like to start a series of short blog posts highlighting small, often less known, but potentially useful features, available in Percona Server for MySQL. Let’s start with a relatively new feature.

The problem: limiting the disk space used by binary logs

Historically, dealing with binary logs and the disk space they use was quite challenging. The only thing that lets you control this is the expire_logs_days variable, which can be useless in some scenarios (binlogs growing faster than they can be purged) and does not solve all problems. Therefore, in order to avoid a full disk and server crashes, it is often necessary to create smart scripts that monitor the situation and call PURGE BINARY LOGS TO when necessary.
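Such a monitoring script might look roughly like the sketch below. Everything here is invented for illustration (the threshold, the helper name, the binlog file name), and the actual PURGE statement is only echoed, not executed:

```shell
#!/bin/sh
# Hypothetical sketch of such a "smart script" (all names and numbers
# are invented): decide whether to purge binary logs based on how full
# the binlog partition is, and only echo the PURGE we would run.

THRESHOLD=90   # purge once the partition is more than 90% full

should_purge() {
    used_pct=$1   # current usage of the binlog partition, in percent
    [ "$used_pct" -gt "$THRESHOLD" ]
}

# In real life used_pct would come from something like:
#   df --output=pcent /var/lib/mysql-binlog | tail -1 | tr -d ' %'
if should_purge 95; then
    echo "would run: PURGE BINARY LOGS TO 'mysql-bin.000123'"
else
    echo "disk usage OK, nothing to purge"
fi
```

A real version would also have to pick a safe binlog name to purge to, e.g. the oldest log still needed by all connected replicas.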

This isn’t very convenient, so a feature request appeared in September 2012 – https://bugs.mysql.com/bug.php?id=66733 – asking to take the already existing logic for relay logs and apply it to binary logs as well. It has not been addressed upstream yet.

We attempted to deal with it in Percona Server by introducing the max_binlog_files variable (Percona Server 5.5.27, October 2012). This was a good step forward, but still not as good as expected. Even though limiting the number of binary log files gives much better control over the disk space they use, it has one serious downside: it does not take into account situations where binary logs may be a lot smaller or a lot bigger than max_binlog_size specifies.

The most common such situations are when the server rotates to a new binary log early, due to a FLUSH LOGS statement or a restart. A couple of such FLUSH commands in a row may completely ruin point-in-time recovery by losing a large part of the binary log history. The opposite case is when a binary log grows bigger than the maximum size due to large transactions (which are never split).

Recently, a new variable in MySQL 8.0, binlog_expire_logs_seconds, replaced expire_logs_days, and a period of 30 days became the new default (previously it was unlimited). It is certainly better to have more precise control over binary log history than “days”; for example, we can set the rotation period to 2 days, 6 hours and 30 minutes, or whatever best matches the backup schedule. Again, good time-based control over binary logs may not be sufficient in all cases, and won’t help much when it is hard to predict how much data will be written or changed over time. Only space-based limits can actually save us from hitting the disk space problem.
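For instance, the “2 days, 6 hours and 30 minutes” example above translates into a binlog_expire_logs_seconds value like this (a quick shell sketch):

```shell
# The "2 days, 6 hours and 30 minutes" example expressed in seconds,
# ready to be used as a binlog_expire_logs_seconds value:
seconds=$(( 2 * 86400 + 6 * 3600 + 30 * 60 ))
echo "binlog_expire_logs_seconds = $seconds"   # 196200
```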

The expected solution is here

Finally, something better has arrived: the binlog_space_limit variable, introduced in Percona Server 5.7.23 in September 2018 (and ported to 8.0 as well). The idea is very simple: it does the same thing for binary logs that relay_log_space_limit does for relay logs. Finally, both sides of binary logging have comparable functionality when it comes to controlling disk space.

How to use it

When the binlog_space_limit variable is set to 0 (the default), the feature is turned off and no limit is set apart from the one imposed by expire_logs_days. I think it is a good idea to dedicate a separate disk partition to the binary logs (or binary and redo logs), big enough to keep as many of them as needed for a good backup and replication strategy, and to set the variable to around 95% of this partition’s size. Remember that even though max_binlog_size defines the maximum size of a single binary log, it is not a hard limit. This is because a single transaction is never split between multiple binlog files and has to be written to the same binary log as a whole.

Let’s say max_binlog_size is set to 1GB (the default and maximum), binlog_space_limit is set to 10GB, and a really big transaction happens to change 3GB of data. With binlog_space_limit set, in order to store that new 3GB in the binary log (when the total size of the logs is about to hit the limit), the server first removes as many old binary logs as needed – in this case, the three oldest ones. This way there is no risk of hitting a full disk, even if the free disk space is smaller than what the new binary log addition would need. Still, I would recommend reserving at least a little more than 1GB of free space on such a partition, just in case.
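The purge arithmetic described above can be sketched as follows. The file names and sizes are invented (ten 1GB logs already sitting exactly at the 10GB limit, with a 3GB transaction arriving), and the real server of course does this internally:

```shell
# Sketch of the purge arithmetic (file names and sizes invented):
# ten 1GB binlogs already sit exactly at a 10GB binlog_space_limit,
# and a 3GB transaction is about to be written.
LIMIT=$(( 10 * 1024 ))     # binlog_space_limit, in MB
INCOMING=$(( 3 * 1024 ))   # size of the arriving transaction, in MB

LOGS='binlog.000001 1024
binlog.000002 1024
binlog.000003 1024
binlog.000004 1024
binlog.000005 1024
binlog.000006 1024
binlog.000007 1024
binlog.000008 1024
binlog.000009 1024
binlog.000010 1024'

# Starting from the oldest log, keep purging until the incoming data fits.
to_purge=$(echo "$LOGS" | awk -v limit="$LIMIT" -v inc="$INCOMING" '
    { name[NR] = $1; size[NR] = $2; total += $2 }
    END {
        i = 1
        while (total + inc > limit && i <= NR) {
            print name[i]; total -= size[i]; i++
        }
    }')
echo "$to_purge"   # the three oldest logs, binlog.000001 .. binlog.000003
```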

Example settings for a 16GB disk partition, dedicated for binary logs only, may look like this:

max_binlog_size = 512M
binlog_space_limit = 15G

One thing that could still be improved about this functionality is making the variable dynamic. But to stay consistent with the one for relay logs, it would be best if upstream made that one dynamic first.

Best binlog rotation strategy?

There may still be cases where you would rather stay with a time-based limit. Let’s imagine a situation where full backups are made every week, and binlog_space_limit is set to hold the average total size of two weeks of writes, so all should be safe. However, some heavy maintenance task alters a large portion of the data, creating more binary logs in one day than are usually produced in two weeks. This may also break point-in-time recovery capability. So if possible, it may be better to give the disk partition an ample supply of space and specify an 8-day rotation via expire_logs_days or binlog_expire_logs_seconds instead.
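If the time-based route is taken, an 8-day rotation on MySQL 8.0 might look like the fragment below (8 * 86400 = 691200 seconds); treat it as an illustrative sketch, not a tuned recommendation:

```ini
# my.cnf sketch: time-based binlog rotation of 8 days
[mysqld]
binlog_expire_logs_seconds = 691200   # 8 * 86400
```

On pre-8.0 servers the equivalent would simply be expire_logs_days = 8.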

Do you like it?

I am very interested to find out whether this feature has already been noticed by the community, and how useful you think it is. Your comments are more than welcome!


MySQL Ripple: The First Impression of a MySQL Binlog Server

Just about a month ago, Pavel Ivanov released Ripple under the Apache-2.0 license. Ripple is a MySQL binlog server: software which receives binary logs from MySQL or MariaDB servers and delivers them to another MySQL or MariaDB server. Practically, this is an intermediary master which does not store any data, except the binary logs themselves, and does not apply events. This saves a lot of resources on the server, which acts only as a middle-man between the master and its actual slave(s).

An intermediary server that keeps only binary logs and does no other work is a prevalent use case: it removes IO (binlog reads) and network (binlog retrieval) load from the actual master and frees its resources for updates. The intermediary master distributes binary logs to the slaves connected to it. This way you can attach a larger number of slaves to such a server without affecting the application running updates.

Currently, users exploit the Blackhole storage engine to emulate similar behavior. But Blackhole is just a workaround: it still executes all the events in the binary logs, requires a valid MySQL installation, and has a lot of issues. Such a pain!

Therefore, a new product which can do the same job and is released under an open source license is something worth trying.

A simple test

For this blog, I did a simple test. First, I installed it as described in the README file. Instructions are pretty straightforward, and I successfully built the server on my Ubuntu 18.04.2 LTS laptop. The guidelines suggest installing a MariaDB client package, and I replaced the one I already had on my machine. Probably this was not needed, but since the tool claims to support both MySQL and MariaDB binary log formats, I preferred to install the MariaDB client.

There is no manual or usage instructions. However, the tool supports a help command, and it is, again, straightforward.

The server can be started with options:

$./bazel-bin/rippled -ripple_datadir=./data -ripple_master_address= -ripple_master_port=13001 -ripple_master_user=root -ripple_server_ports=15000


  • -ripple_datadir: datadir where Ripple stores binary logs
  • -ripple_master_address: master host
  • -ripple_master_port: master port
  • -ripple_master_user: replication user
  • -ripple_server_ports: comma-separated ports which Ripple will listen on

I did not find an option for securing binary log retrieval: a slave can connect to the Ripple server with any credentials. Keep this in mind when deploying Ripple in production.

Now, let’s run a simple test. I have two servers, both running on localhost: one on port 13001 (master) and another on port 13002 (slave). The command line which I used to start rippled, shown above, points to the master. Binary logs are stored in the data directory:

$ ls -l data/
total 14920
-rw-rw-r-- 1 sveta sveta 15251024 Mar 6 01:43 binlog.000000
-rw-rw-r-- 1 sveta sveta 71 Mar 6 00:50 binlog.index

I pointed the slave to the Ripple server with this command:

mysql> change master to master_host='',master_port=15000, master_user='ripple';
Query OK, 0 rows affected, 1 warning (0.02 sec)

Then started the slave.

On the master, I created the database sbtest and ran a sysbench test on a single table. After some time, I stopped the load and checked the content of the table on the master and the slave:

master> select count(*) from sbtest1;
+----------+
| count(*) |
+----------+
|    10000 |
+----------+
1 row in set (0.08 sec)

master> checksum table sbtest1;
+----------------+------------+
| Table          | Checksum   |
+----------------+------------+
| sbtest.sbtest1 | 4162333567 |
+----------------+------------+
1 row in set (0.11 sec)

slave> select count(*) from sbtest1;
+----------+
| count(*) |
+----------+
|    10000 |
+----------+
1 row in set (0.40 sec)

slave> checksum table sbtest1;
+----------------+------------+
| Table          | Checksum   |
+----------------+------------+
| sbtest.sbtest1 | 1797645970 |
+----------------+------------+
1 row in set (0.13 sec)

slave> checksum table sbtest1;
+----------------+------------+
| Table          | Checksum   |
+----------------+------------+
| sbtest.sbtest1 | 4162333567 |
+----------------+------------+
1 row in set (0.10 sec)

It took some time for the slave to catch up, but everything was applied successfully.

Ripple has nice verbose logging:

$ ./bazel-bin/rippled -ripple_datadir=./data -ripple_master_address= -ripple_master_port=13001 -ripple_master_user=root -ripple_server_ports=15000
WARNING: Logging before InitGoogleLogging() is written to STDERR
I0306 15:57:13.641451 27908 rippled.cc:48] InitPlugins
I0306 15:57:13.642007 27908 rippled.cc:60] Setup
I0306 15:57:13.642937 27908 binlog.cc:307] Starting binlog recovery
I0306 15:57:13.644090 27908 binlog.cc:350] Scanning binlog file: binlog.000000
I0306 15:57:13.872016 27908 binlog.cc:417] Binlog recovery complete
binlog file: binlog.000000, offset: 15251088, gtid: 6ddac507-3f90-11e9-8ee9-00163e000000:0-0-7192
I0306 15:57:13.872050 27908 rippled.cc:106] Recovered binlog
I0306 15:57:13.873811 27908 mysql_server_port_tcpip.cc:150] Listen on host: localhost, port: 15000
I0306 15:57:13.874282 27908 rippled.cc:62] Start
I0306 15:57:13.874511 27910 mysql_master_session.cc:181] Master session starting
I0306 15:57:13.882601 27910 mysql_client_connection.cc:148] connected to host:, port: 13001
I0306 15:57:13.895349 27910 mysql_master_session.cc:137] Connected to host:, port: 13001, server_id: 1, server_name:
W0306 15:57:13.898556 27910 mysql_master_session.cc:197] master does not support semi sync
I0306 15:57:13.898583 27910 mysql_master_session.cc:206] start replicating from '6ddac507-3f90-11e9-8ee9-00163e000000:0-0-7192'
I0306 15:57:13.899031 27910 mysql_master_session.cc:229] Master session entering main loop
I0306 15:57:13.899550 27910 binlog.cc:626] Update binlog position to end_pos: binlog.000000:15251152, gtid: 0-0-7192
I0306 15:57:13.899572 27910 binlog.cc:616] Skip writing event [ Previous_gtids len = 67 ]
I0306 15:57:13.899585 27910 binlog.cc:626] Update binlog position to end_pos: binlog.000000:15251152, gtid: 0-0-7192


It may be good to run more tests before using Ripple in production, and to explore its other options, but from a first look it seems to be a very nice and useful product.

Photo by Kishor on Unsplash


Webinar 6/27: MySQL Troubleshooting Best Practices: Monitoring the Production Database Without Killing Performance

Please join Percona’s Principal Support Escalation Specialist Sveta Smirnova as she presents Troubleshooting Best Practices: Monitoring the Production Database Without Killing Performance on Wednesday, June 27th at 11:00 AM PDT (UTC-7) / 2:00 PM EDT (UTC-4).


During the MySQL Troubleshooting webinar series, I covered many monitoring and logging tools such as:

  • General, slow, audit, binary, error log files
  • Performance Schema
  • Information Schema
  • System variables
  • Linux utilities
  • InnoDB monitors
  • PMM

However, I did not spend much time on the impact these instruments have on overall MySQL performance. And they do have an impact.

And this is the conflict many people face. MySQL Server users try these monitoring instruments, see that they slow down their installations, and turn them off. This is unfortunate: if the instrument that could help you resolve a problem is OFF, you won’t have the information necessary to understand when, how, and why the issue occurred. In the best case, you’ll re-enable instrumentation and wait for the next occurrence of the disaster. In the worst case, you’ll try various fixes without any real knowledge of whether they solve the problem.

This is why it is important to understand the impact monitoring tools have on your database, and therefore how to minimize it.

Understanding and controlling the impact of MySQL monitoring tools

In this webinar, I cover why certain monitoring tools affect performance, and how to minimize the impact without turning the instrument off. You will learn how to monitor safely and effectively.

Register Now


Sveta Smirnova

Principal Support Escalation Specialist

Sveta joined Percona in 2015. Her main professional interests are problem-solving, working with tricky issues and bugs, finding patterns that can quickly solve typical issues, and teaching others how to deal with MySQL issues, bugs, and gotchas effectively. Before joining Percona, Sveta worked as a Support Engineer in the MySQL Bugs Analysis Support Group at MySQL AB-Sun-Oracle. She is the author of the book “MySQL Troubleshooting” and of the JSON UDF functions for MySQL.

The post Webinar 6/27: MySQL Troubleshooting Best Practices: Monitoring the Production Database Without Killing Performance appeared first on Percona Database Performance Blog.


MySQL Bug 72804 Workaround: “BINLOG statement can no longer be used to apply query events”

In this blog post, we’ll look at a workaround for MySQL bug 72804.

Recently I worked on a ticket where a customer performed a point-in-time recovery (PITR) using a large set of binary logs. Normally we handle this by applying the last backup, then re-applying all binary logs created since that backup. In the middle of the procedure, their new server crashed. We identified the binary log position and tried to restart the PITR from there. However, the restore failed with the error “The BINLOG statement of type Table_map was not preceded by a format description BINLOG statement.” This is a known bug, reported as MySQL Bug #72804: “BINLOG statement can no longer be used to apply Query events.”

I created a small test to demonstrate a workaround that we implemented (and worked).

First, I ran a large import process that created several binary logs. I used a small value for max_binlog_size and tested using the database “employees” (a standard database used for testing). Then I dropped the database.

mysql> set sql_log_bin=0;
Query OK, 0 rows affected (0.33 sec)
mysql> drop database employees;
Query OK, 8 rows affected (1.25 sec)

To demonstrate the recovery process, I joined all the binary log files into one SQL file and started an import.

sveta@Thinkie:~/build/ps-5.7/mysql-test$ ../bin/mysqlbinlog var/mysqld.1/data/master.000001 var/mysqld.1/data/master.000002 var/mysqld.1/data/master.000003 var/mysqld.1/data/master.000004 var/mysqld.1/data/master.000005 > binlogs.sql
sveta@Thinkie:~/build/ps-5.7/mysql-test$ GENERATE_ERROR.sh binlogs.sql
sveta@Thinkie:~/build/ps-5.7/mysql-test$ mysql < binlogs.sql
ERROR 1064 (42000) at line 9020: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'inserting error

I intentionally generated a syntax error in the resulting file with the help of the GENERATE_ERROR.sh script (which just inserts a bogus SQL statement in a random row). The error message clearly showed where the import stopped: line 9020. I then created a file that cropped out the part that had already been imported (lines 1-9020) and tried to import this new file.

sveta@Thinkie:~/build/ps-5.7/mysql-test$ tail -n +9021 binlogs.sql >binlogs_rest.sql
sveta@Thinkie:~/build/ps-5.7/mysql-test$ mysql < binlogs_rest.sql
ERROR 1609 (HY000) at line 134: The BINLOG statement of type `Table_map` was not preceded by a format description BINLOG statement.

Again, the import failed with exactly the same error the customer saw. The reason for this error is that the BINLOG statement – which applies changes from the binary log – expects the format description event to be run in the same session as the binary log import, but before it. The format description event existed at the start of the original import that failed at line 9020; the later import (from line 9021 on) does not contain it.

Fortunately, this format event is the same for the same server version! We can simply take it from the beginning of the SQL log file (or from the original binary log file) and put it into the file created after the crash, which lacks lines 1-9020.

With MySQL versions 5.6 and 5.7, this event is located in the first 11 rows:

sveta@Thinkie:~/build/ps-5.7/mysql-test$ head -n 11 binlogs.sql | cat -n
     1	/*!50530 SET @@SESSION.PSEUDO_SLAVE_MODE=1*/;
     3	DELIMITER /*!*/;
     4	# at 4
     5	#170128 17:58:11 server id 1  end_log_pos 123 CRC32 0xccda074a 	Start: binlog v 4, server v 5.7.16-9-debug-log created 170128 17:58:11 at startup
     6	ROLLBACK/*!*/;
     7	BINLOG '
    10	AUoH2sw=
    11	'/*!*/;

The first six rows are meta information, and rows 7-11 are the format event itself. The only thing we need to copy into our resulting file is these 11 lines:

sveta@Thinkie:~/build/ps-5.7/mysql-test$ head -n 11 binlogs.sql > binlogs_rest_with_format.sql
sveta@Thinkie:~/build/ps-5.7/mysql-test$ cat binlogs_rest.sql >> binlogs_rest_with_format.sql
sveta@Thinkie:~/build/ps-5.7/mysql-test$ mysql < binlogs_rest_with_format.sql

After this, the import succeeded!
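The whole workaround can be rehearsed on synthetic files before touching a real dump. Everything below (file names, line counts, the crash point at line 20) is invented for illustration:

```shell
# Rehearsal of the workaround on a synthetic stand-in for a mysqlbinlog
# dump: 11 "format header" lines followed by 100 "statement" lines.
workdir=$(mktemp -d)
cd "$workdir"

seq -f 'header %g' 1 11 > binlogs.sql
seq -f 'statement %g' 1 100 >> binlogs.sql

# Suppose the failed import stopped at line 20: keep everything after it...
tail -n +21 binlogs.sql > binlogs_rest.sql

# ...and prepend the 11-line format header before re-importing.
head -n 11 binlogs.sql > binlogs_rest_with_format.sql
cat binlogs_rest.sql >> binlogs_rest_with_format.sql

wc -l < binlogs_rest_with_format.sql
```

The reconstructed file contains 102 lines: the 11-line header plus the 91 lines that had not yet been applied.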


MySQL Limitations Part 2: The Binary Log

This is the second in a series on what’s seriously limiting MySQL in certain circumstances (links: part 1). In the first part, I wrote about single-threaded replication. Upstream from the replicas is the primary, which enables replication by writing a so-called “binary log” of events that modify data in the server. The binary log is a real limitation in MySQL.

The binary log is necessary not only for replication, but for point-in-time recovery, too. Given a backup and the corresponding binary log position, you can replay the binary log and roll forward the state of your server to a desired point in time.

But enabling the binary log reduces MySQL’s performance dramatically. It is not the logging itself that’s the problem – writing the log is usually not much additional work. It’s ensuring consistency and durability that is expensive. Flushing the log to disk adds an fsync call for every transaction, and the server performs an XA transaction between InnoDB and the binary log. This adds more fsync calls, causes mutex contention, prevents group commit, and probably causes other problems that aren’t coming to mind right now.

The performance reduction can be an order of magnitude or more.
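The durability cost described above is governed by variables such as sync_binlog and innodb_flush_log_at_trx_commit; the fragment below sketches the trade-off (the values are illustrative, not a recommendation):

```ini
# my.cnf sketch: durability vs. throughput around the binary log
[mysqld]
log_bin                              # enable the binary log
sync_binlog = 1                      # fsync the binlog at every commit: durable, slow
# sync_binlog = 0                    # leave flushing to the OS: fast, but a crash
                                     # can lose recent transactions from the binlog
innodb_flush_log_at_trx_commit = 1   # fsync the InnoDB log at every commit
```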

What’s the solution? I’m not sure I can summarize it concisely. There is a lot of complexity, and honestly I don’t understand some of the server internals fully enough to have a 50-thousand-foot view of it all. The binary logging and replication code, and its interaction with InnoDB, is difficult to understand. Kristian Nielsen has an extensive series of posts on group commit alone.

I think that a full fix might require significant architectural changes to MySQL. This will be hard. Maybe Drizzle is going in a good direction — time will tell. All of the solutions that I can think of are too simplistic. For example, doing replication through the InnoDB transaction log would work fine if a) all the data were in InnoDB, and b) InnoDB’s data didn’t have to be synchronized with the .frm files (and Drizzle has gotten rid of the .frm files, hooray), and c) privileges and other changes to the non-InnoDB data in MySQL were handled manually.

It could work if you just made sure that you didn’t change privileges or schema, but that’s a description of a pretty limited, clunky replication system from the user’s point of view. Still, I have considered it. There would need to be a mechanism of transporting the log files, and InnoDB would have to be put into a state of constant “recovery,” and it would have to be modified to be available read-only in this state so that it could be used for read queries. This can be done, of course. It’s just a matter of how hard it is.

It’s worth noting that PBXT does replication through its transaction logs, so there’s even precedent for this among MySQL storage engines. And there is Galera’s multi-master synchronization technology to look at, too.

Entry posted by Baron Schwartz

