The MySQL Clone Wars: Plugin vs. Percona XtraBackup


Large replication topologies are quite common nowadays, and this kind of architecture often requires a quick method to rebuild a replica from another server.

The Clone Plugin, available since MySQL 8.0.17, is a great feature that allows cloning databases out of the box. It is easy to rebuild a replica or to add new nodes to a cluster using the plugin. Before the release of the plugin, the best open-source alternative was Percona XtraBackup for MySQL Databases.

In this blog post, we compare both alternatives for cloning purposes. If you need to perform backups, Percona XtraBackup is a better tool as it supports compression and incremental backups, among other features not provided by the plugin. The plugin supports compression only for network transmission, not for storage.

But one of the plugin’s strong points is simplicity. Once installed and configured, cloning a database is straightforward. Just issuing a command from the destination database is enough.

Percona XtraBackup, on the other hand, is a more complex tool. The cloning process involves several stages: backup, stream, write, and prepare. These stages can run in parallel: we can stream the backup to the new server using netcat and, at the same time, write it into the destination directory. The only sequential stage is the last one: prepare.

Test Characteristics

We used sysbench to create 200 tables of 124 MB each, for a total of 24 GB. Both source and replica virtual machines have 4 cores, 8 GB of RAM, and 60 GB of storage. We created the disks on the same datastore.

During the tests, we did not generate additional operations on the database. We measured only the clone process, reducing the benchmark complexity. Otherwise, we would have to take into consideration things like application response time, or the number of transactions executed. This is beyond the scope of this assessment.

We tested different combinations of clone and Percona XtraBackup operations. For XtraBackup, we tested 1 to 4 threads, with and without compression. In the case of compression, we allocated the same number of threads to compression and decompression. For the clone plugin, we tested auto (which lets the server decide how many threads will perform the clone) and 1 to 4 threads. We also tested with and without compression. Finally, we executed all the tests using three different network limits: 500 Mbps, 1000 Mbps, and 4000 Mbps. This makes a total of 54 tests, executed 12+1 times each.

All times are in seconds. In the graphs below, lower values are better.



Out of the required parameters to operate the clone plugin, the following were set up accordingly on the recipient server:

  • clone_max_concurrency=<maximum number of threads> Defines the maximum number of threads used for a remote cloning operation when autotune is enabled. Otherwise, this is the exact number of threads that remote cloning uses. The default is 16.
  • clone_autotune_concurrency If enabled, the clone operation uses up to clone_max_concurrency threads.
  • clone_enable_compression If enabled, the remote clone operation uses compression.
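For reference, a minimal cloning session looks like the following sketch (host names, user names, and passwords are placeholders):

```sql
-- Install the plugin on both donor and recipient:
INSTALL PLUGIN clone SONAME 'mysql_clone.so';

-- On the recipient: allow the donor and set the number of threads.
SET GLOBAL clone_valid_donor_list = 'source-host:3306';
SET GLOBAL clone_max_concurrency = 4;

-- Start the remote clone; the recipient restarts automatically when finished.
CLONE INSTANCE FROM 'clone_user'@'source-host':3306 IDENTIFIED BY 'clone_pass';
```

The user on the donor needs the BACKUP_ADMIN privilege, and the user executing the clone on the recipient needs CLONE_ADMIN.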

Percona XtraBackup

To stream the backup, we used the xbstream format and sent the data to the remote server using netcat. We applied the following parameters:

  • parallel=<number of threads> An xtrabackup and xbstream parameter that defines the number of threads used for backup and restore operations.
  • rebuild-threads The number of threads used for the rebuild (prepare) operation.
  • decompress-threads and compress-threads xtrabackup and xbstream parameters that define the number of threads used for compression operations.

Some people use additional parameters like innodb-read-io-threads, innodb-write-io-threads, or innodb-io-capacity, but these parameters only affect the behavior of InnoDB background threads. They have no impact during backup and restore operations.
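The stages described above can be sketched as follows (host names, ports, memory size, and thread counts are placeholders to adapt to your environment):

```shell
# On the destination server: listen, and unpack the stream in parallel.
nc -l -p 9999 | xbstream -x -C /var/lib/mysql --parallel=4

# On the source server: back up and stream in xbstream format over netcat.
xtrabackup --backup --stream=xbstream --parallel=4 --target-dir=./ | nc destination-host 9999

# On the destination, after the stream completes: the sequential prepare stage.
xtrabackup --prepare --use-memory=2G --target-dir=/var/lib/mysql
```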



No compression

For the lower-bandwidth tests, the number of threads used does not make a difference. Once we increase the bandwidth, we see the time cut in half when we move from one thread to two. Going beyond that value improves times only slightly; we probably reach the disk I/O limit.

Clone Plugin performance without compression.

The auto option is consistently the fastest one.


Compression

Compression is supposed to improve performance for lower-bandwidth connections, but we see that this is not the case here. Bandwidth has no impact on execution time, and compression makes the clone slower. Again, auto gives the best results, equivalent to 4 threads.
clone plugin with compression

Percona XtraBackup

No Compression

Without compression, we see again that the number of threads does not make any difference in the lower-bandwidth test. When we increase the bandwidth, the number of threads matters, but we quickly reach I/O limits.

Percona Xtrabackup stream without compression


Compression

When using compression, we see that the backup requires less time to complete in almost every case compared with the uncompressed option, even when bandwidth is not the limit.

Percona Xtrabackup stream with compression


We see that, when using compression, the clone plugin is the slower option, while Percona XtraBackup gives great results for all bandwidths. Without compression, the clone plugin is faster when using more than two threads, while XtraBackup is faster with fewer threads.

Xtrabackup vs. Clone plugin - results summary

Below is a chart comparing the worst and best results. As expected, the worst results correspond to single-thread executions.

The clone plugin is a great option for its simplicity. Percona XtraBackup is excellent for saving bandwidth and provides better results with fewer threads. With enough threads and bandwidth available, both solutions provide comparable results.


MySQL 8.0.22: SHOW PROCESSLIST Version 2 – Now Available From PERFORMANCE_SCHEMA


The “SHOW PROCESSLIST” command is well known and very useful for MySQL DBAs. It helps you understand ongoing thread activity and the current state of each thread. By default, the “SHOW PROCESSLIST” output is collected from the thread manager, which requires a global mutex. From MySQL 8.0.22, we have an alternative way to get the process details from the PERFORMANCE_SCHEMA, which does not need the global mutex.

Note: We also have the non-blocking sys schema views “processlist” and “x$processlist”, which provide more complete information than the SHOW PROCESSLIST statement, INFORMATION_SCHEMA.PROCESSLIST, and PERFORMANCE_SCHEMA.PROCESSLIST. But we can’t integrate these with the “SHOW PROCESSLIST” command.

In this blog, I am going to explain the complete details about the new processlist implementation using PERFORMANCE_SCHEMA.

“SHOW PROCESSLIST” Using Thread Manager (default)

  • This is the default method.
  • The default “show processlist” implementation iterates across active threads from within the thread manager while holding a global mutex.
  • It can negatively impact performance, particularly on busy systems.
  • The INFORMATION_SCHEMA.PROCESSLIST table is another source of process information. It also uses the thread manager to collect its metrics.
  • By default, “mysqladmin processlist” also uses the thread manager to get the details.

The following statements are equivalent:

SHOW FULL PROCESSLIST;
mysqladmin processlist --verbose

“SHOW PROCESSLIST” Using Performance Schema

  • Available from MySQL 8.0.22.
  • It collects the thread details from the PERFORMANCE_SCHEMA.PROCESSLIST table.
  • Global mutex is not needed.
  • Helps avoid the performance impact of running “SHOW PROCESSLIST”, particularly on busy systems.
  • The implementation also applies to “mysqladmin processlist”.

The following statements are equivalent:

SHOW FULL PROCESSLIST;
mysqladmin processlist --verbose


mysql> desc performance_schema.processlist;
| Field   | Type            | Null | Key | Default | Extra |
| ID      | bigint unsigned | NO   | PRI | NULL    |       |
| USER    | varchar(32)     | YES  |     | NULL    |       |
| HOST    | varchar(255)    | YES  |     | NULL    |       |
| DB      | varchar(64)     | YES  |     | NULL    |       |
| COMMAND | varchar(16)     | YES  |     | NULL    |       |
| TIME    | bigint          | YES  |     | NULL    |       |
| STATE   | varchar(64)     | YES  |     | NULL    |       |
| INFO    | longtext        | YES  |     | NULL    |       |
8 rows in set (0.00 sec)

mysql> desc information_schema.processlist;
| Field   | Type            | Null | Key | Default | Extra |
| ID      | bigint unsigned | NO   |     |         |       |
| USER    | varchar(32)     | NO   |     |         |       |
| HOST    | varchar(261)    | NO   |     |         |       |
| DB      | varchar(64)     | YES  |     |         |       |
| COMMAND | varchar(16)     | NO   |     |         |       |
| TIME    | int             | NO   |     |         |       |
| STATE   | varchar(64)     | YES  |     |         |       |
| INFO    | varchar(65535)  | YES  |     |         |       |
8 rows in set (0.00 sec)


  • Make sure the PERFORMANCE_SCHEMA is enabled at the server startup.
  • Make sure MySQL was configured and built with the thread instrumentations enabled.

MySQL provides a variable “performance_schema_show_processlist” to enable this feature. Once we enable the variable, the “SHOW PROCESSLIST” command will start to show the details from the “PERFORMANCE_SCHEMA.PROCESSLIST” table instead of the thread manager.

The variable has global scope and is dynamic, so there is no need to restart the MySQL server.

mysql> show global variables like 'performance_schema_show_processlist';
| Variable_name                       | Value |
| performance_schema_show_processlist | OFF   |
1 row in set (0.08 sec)

mysql> set global performance_schema_show_processlist='ON';
Query OK, 0 rows affected (0.00 sec)

mysql> \r
Connection id:    23
Current database: *** NONE ***

mysql> show global variables like 'performance_schema_show_processlist';
| Variable_name                       | Value |
| performance_schema_show_processlist | ON    |
1 row in set (0.00 sec)



mysql> show processlist\G
*************************** 1. row ***************************
     Id: 5
   User: event_scheduler
   Host: localhost
     db: NULL
Command: Daemon
   Time: 2461
  State: Waiting on empty queue
   Info: NULL
*************************** 2. row ***************************
     Id: 23
   User: root
   Host: localhost
     db: NULL
Command: Query
   Time: 0
  State: executing
   Info: show processlist
2 rows in set (0.00 sec)

You can also query the “performance_schema.processlist” table to get the thread information.

mysql> select * from performance_schema.processlist\G
*************************** 1. row ***************************
     ID: 5
   USER: event_scheduler
   HOST: localhost
     DB: NULL
COMMAND: Daemon
   TIME: 2448
  STATE: Waiting on empty queue
   INFO: NULL
*************************** 2. row ***************************
     ID: 23
   USER: root
   HOST: localhost
     DB: NULL
COMMAND: Query
   TIME: 0
  STATE: executing
   INFO: select * from performance_schema.processlist
2 rows in set (0.00 sec)

“mysqladmin processlist” output from “performance_schema”:

[root@mysql8 vagrant]# mysqladmin processlist
| Id | User            | Host      | db | Command | Time | State                  | Info             |
| 5  | event_scheduler | localhost |    | Daemon  | 2631 | Waiting on empty queue |                  |
| 24 | root            | localhost |    | Query   | 0    | executing              | show processlist |


  • To avoid having some threads ignored, leave the “performance_schema_max_thread_instances” and “performance_schema_max_thread_classes” system variables at their default value (-1, meaning the parameter is autosized at server startup).
  • To avoid having some STATE column values empty, leave the “performance_schema_max_stage_classes” system variable at its default (-1, meaning the parameter is autosized at server startup).
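To make the setting survive restarts, it can also be placed in the configuration file, for example:

```
[mysqld]
performance_schema = ON
performance_schema_show_processlist = ON
```

On MySQL 8, SET PERSIST performance_schema_show_processlist = 'ON'; achieves the same without editing the file.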

How to Store MySQL Audit Logs in MongoDB in a Maintenance-Free Setup


I was once helping one of our customers load MySQL audit logs into a MySQL database and analyze them. But I immediately thought: “Hey, this is not the most efficient solution! MySQL, or a typical RDBMS in general, was not really meant to store logs after all.”

So, I decided to explore an alternative – which seemed more sensible to me – and use MongoDB as the storage for logs, for three main reasons:

  • its schema-less nature fits the audit log well, where different types of events may use different fields
  • it speaks JSON natively, and the audit plugin can use JSON format
  • it has the capped collections feature, which avoids additional maintenance overhead

Just to mention, audit logging is available in MySQL Enterprise Edition, but a similar, yet free, solution is available in Percona Server for MySQL. In both cases, it works by installing the audit log plugin.

Ad Hoc Import

The simplest scenario is to just set the audit log format to JSON:

audit_log_format = JSON

And as soon as it collects some data, import the log file into a MongoDB collection via the mongoimport command, like this:

# mongoimport --username percona --password P3rc0n4 --host --port 27017 --db auditlogs --collection audit1 --file /var/lib/mysql/audit.log
2020-12-31T16:24:43.782+0000 connected to:
2020-12-31T16:24:44.316+0000 imported 25462 documents

mongo > db.audit1.countDocuments({})

Of course, this works, but I prefer an automated solution, so I looked at available options for live-streaming the logs.


The first thing that looked useful is the ability to send the audit log directly to syslog instead of a file. Knowing that both rsyslog and syslog-ng have MongoDB output modules, it felt like a very easy approach. So I installed the rsyslog-mongodb module package on my test Ubuntu VM running Percona Server for MySQL, and configured the audit log with:

audit_log_handler = syslog
audit_log_format = JSON

Rsyslog (version 8.2) example configuration:

# cat /etc/rsyslog.d/49-ship-syslog.conf
# (module load and action lines reconstructed; adjust server/port to your MongoDB)
module(load="ommongodb")
*.* action(type="ommongodb" server="localhost" serverport=27017
    db="auditlogs" collection="mysql_node1_log")

This worked, however, inserted documents looked like this:

mongo > db.mysql_node1_log.findOne().pretty()
"_id" : ObjectId("5fece941f17f487c7d1d158b"),
"msg" : " {\"audit_record\":{\"name\":\"Connect\",\"record\":\"7_1970-01-01T00:00:00\",\"timestamp\":\"2020-12-30T20:55:29Z\",\"connection_id\":\"9\",\"status\":0,\"user\":\"root\",\"priv_user\":\"root\",\"os_login\":\"root\",\"proxy_user\":\"\",\"host\":\"localhost\",\"ip\":\"\",\"db\":\"\"}}"

Basically, because syslog escapes the double-quote symbols, the whole audit record appears as a single string inside the MongoDB collection, instead of as a JSON object. No matter what I tried, like custom templates and property values in rsyslog, I could not disable the escaping. Therefore, although feeding MongoDB with audit logs works this way, it becomes pretty useless when it comes to analyzing the logs later. The same issue applies to syslog-ng and the syslog-ng-mod-mongodb module. And since MongoDB does not offer before-insert triggers, I could not easily “fix” the inserted data on the fly.

Fluentd To The Rescue!

This forced me to look for alternative solutions. One of them would be using a FIFO file: tail the audit log continuously to feed it, then read from it and insert the logs into MongoDB. I wanted a more robust way, though, and decided to try Fluentd instead. It was created as a versatile log collector, highly flexible, and prepared to work with many different applications out of the box; most importantly, it is an open source project and speaks JSON natively. Making it do the job I wanted turned out to be easier than I expected.

Here is what I did:

  • Installed the Fluentd package (I chose td-agent variant here for an even easier user experience)
  • Installed the MongoDB plugin for Fluentd (don’t use the usual ‘gem install’ here):
td-agent-gem install fluent-plugin-mongo

  • Configured audit log as a source and output directive for MongoDB:
# cat /etc/td-agent/td-agent.conf
<source>
  @type tail
  path /var/lib/mysql/audit.log
  pos_file /var/log/td-agent/audit.access_log.pos
  <parse>
    @type json
  </parse>
  tag mongo.audit.log
</source>
<match mongo.audit.log>
  @type mongo
  database auditlogs #(required)
  collection audit_log #(optional; default="untagged")
  capped
  capped_size 100m
  host #(optional; default="localhost")
  port 27017 #(optional; default=27017)
  user percona
  password P3rc0n4
  <buffer>
    flush_interval 1s
  </buffer>
</match>

  • Added the user used by Fluentd to mysql group to allow it to read from the audit log:
# id td-agent
uid=114(td-agent) gid=121(td-agent) groups=121(td-agent)
# usermod -a -G mysql td-agent
# id td-agent
uid=114(td-agent) gid=121(td-agent) groups=121(td-agent),120(mysql)

  • Configured the audit log plugin to write to a file in JSON format, with rotation:
audit_log_handler = file
audit_log_format = JSON
audit_log_file = audit.log
audit_log_rotate_on_size = 10M
audit_log_rotations = 3

  • Restarted both services to apply changes:
# systemctl restart mysql
# systemctl restart td-agent

  • Checked the Fluentd log to see if it reads the audit log as expected, also for when Percona Server for MySQL rotates it:
# tail -f /var/log/td-agent/td-agent.log
2020-12-31 02:41:39 +0000 [info]: adding match pattern="mongo.audit.log" type="mongo"
2020-12-31 02:41:40 +0000 [info]: #0 following tail of /var/lib/mysql/audit.log
2020-12-31 02:52:14 +0000 [info]: #0 detected rotation of /var/lib/mysql/audit.log; waiting 5 seconds
2020-12-31 02:52:14 +0000 [info]: #0 following tail of /var/lib/mysql/audit.log

  • Ran sysbench against the MySQL instance and verified that the new collection in MongoDB gets updated:
mongo > db.audit_log.countDocuments({})

mongo > db.audit_log.stats()
 "ns" : "auditlogs.audit_log",
 "size" : 104857293,
 "count" : 281245,
 "avgObjSize" : 372,
 "storageSize" : 26357760,
 "capped" : true,
 "max" : -1,
 "maxSize" : 104857600,

Yay, it works like a charm! Not only are the audit logs rotated automatically on Percona Server for MySQL, but the destination collection size cap on MongoDB works as well, so I am safe when it comes to disk space on both hosts!

Here, there is a little caveat: if for some reason you drop the destination collection manually on MongoDB, incoming inserts will re-create it without the capped setting! Therefore, either let the collection be created by Fluentd on its service startup, or create it manually with the capped setting, and don’t drop it later.
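If you do choose to create the collection manually, make it capped with the same size limit Fluentd would use (100m = 104857600 bytes, matching the maxSize seen in the stats output above):

```
mongo > db.createCollection("audit_log", { capped: true, size: 104857600 })
```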

Now, we can try some example aggregations to get some useful audit stats:

mongo > db.audit_log.aggregate([ { $group: { _id: {name: "$audit_record.name", command: "$audit_record.command_class"}, count: {$sum:1}}}, { $sort: {count:-1}} ])
{ "_id" : { "name" : "Execute", "command" : "error" }, "count" : 267086 }
{ "_id" : { "name" : "Query", "command" : "begin" }, "count" : 14054 }
{ "_id" : { "name" : "Close stmt", "command" : "error" }, "count" : 76 }
{ "_id" : { "name" : "Query", "command" : "show_variables" }, "count" : 7 }
{ "_id" : { "name" : "Query", "command" : "select" }, "count" : 6 }
{ "_id" : { "name" : "Quit" }, "count" : 5 }
{ "_id" : { "name" : "Query", "command" : "show_tables" }, "count" : 4 }
{ "_id" : { "name" : "Init DB", "command" : "error" }, "count" : 2 }
{ "_id" : { "name" : "Field List", "command" : "show_fields" }, "count" : 2 }
{ "_id" : { "name" : "Query", "command" : "show_databases" }, "count" : 2 }
{ "_id" : { "name" : "Connect" }, "count" : 1 }

mongo > db.audit_log.aggregate([ { $match: { "audit_record.status": {$gt: 0} } }, { $group: { _id: {command_class: "$audit_record.command_class", status: "$audit_record.status"}, count: {$sum:1}}}, { $sort: {count:-1}} ])
{ "_id" : { "command_class" : "error", "status" : 1049 }, "count" : 2 }
{ "_id" : { "command_class" : "show_tables", "status" : 1046 }, "count" : 2 }
{ "_id" : { "command_class" : "create_table", "status" : 1050 }, "count" : 2 }
{ "_id" : { "command_class" : "drop_table", "status" : 1051 }, "count" : 2 }
{ "_id" : { "command_class" : "drop_table", "status" : 1046 }, "count" : 2 }
{ "_id" : { "command_class" : "create_table", "status" : 1046 }, "count" : 1 }
{ "_id" : { "command_class" : "create_table", "status" : 1113 }, "count" : 1 }




Percona 2020 Recap: Great Content and Software Releases


The Percona team provided the community with some excellent content and several new releases in 2020. I wanted to highlight some of your favorites (based on popularity) in case you missed them.

First up is our most-read blog from last year, which ironically was published before 2020. Ananias Tsalouchidis’s blog on when you should use Aurora and when you should use RDS MySQL continued to attract readers all year long. People don’t always understand the key differences between the two, so having a guide is great and timely for many.

What about the most read blogs or watched videos published in 2020?

PostgreSQL Takes Our Most-Read Spot of 2020

The Percona blog is known for its great in-depth MySQL coverage, but experts in the MongoDB and PostgreSQL space have also written some quality content over the last few years. It is exciting to see that the most popular blog published last year was outside of MySQL: Ibrar Ahmed’s deep dive into handling null values in PostgreSQL.

Interested in the top six PostgreSQL reads from 2020? Here they are:

We also had some fantastic conference talks this year you may want to check out. Here are the most-watched PostgreSQL videos of 2020:

Awesome PostgreSQL talks and blogs from the community:

Our Percona University Online posted its first PostgreSQL training last year; if you are looking for a deeper understanding of indexes (and who isn’t), check out our training, Deep Dive Into PostgreSQL Indexes.

MySQL is Still as Popular as Ever

Even though PostgreSQL took this year’s top spot, not too far behind was a great blog series by our CEO Peter Zaitsev on solving MySQL bottlenecks. His three-part series, 18 things you can do to remove MySQL bottlenecks caused by high traffic, was not only highly read, but it also spawned one of the most-watched webinars of the year. Scalability and performance are critical and can mean life or death for any application. A vital read, and a great link to bookmark for when you have one of those odd performance issues you cannot seem to pin down!

Interested in the top five MySQL reads from 2020? Here they are:

Interested in watching some outstanding MySQL sessions? Check out some of the most-watched MySQL sessions of 2020:

Awesome MySQL talks and blogs from the community:

Our Percona University Online posted its first MySQL training; if you are looking at how to upgrade to MySQL 8, it is worth watching. Check out the training, How to Upgrade to MySQL 8.0.

The Staying Power of MongoDB is Undeniable

MongoDB’s growth in 2020 was undeniable, which is why it’s no surprise that another one of our top blogs was on MongoDB. Percona’s most-read tech blog on MongoDB published in 2020 was Vinicius Grippa’s must-read work outlining the best practices for running MongoDB. Whether you are new to MongoDB or a veteran, it is worth reading and double-checking to ensure you have MongoDB optimized.

Interested in the top five MongoDB reads from 2020? Here they are:

Interested in watching some MongoDB sessions? Check out some of the most-watched MongoDB sessions of 2020:

Awesome MongoDB talks and blogs from the community:

More Popular Blogs and Discussions

Sometimes topics cross databases and delve into general advice. Let’s look at some of the more popular talks and blogs that are not tied to a specific database.

If you like videos, you may want to check out these great Percona Live Sessions from last year:

Other Popular Blogs:

Finally, Some Great Percona Software Released This Year

Here is the list of interesting software changes and news on Percona software in 2020:

Percona Distributions for MongoDB and MySQL:

  • What are Percona distributions? We take the best components from the community and ensure they work together. This way, you know your backup, HA, monitoring, etc., will all work together seamlessly.

Percona XtraDB Cluster 8.0 (PXC) was released, with improved performance, scalability, and security. Long-sought-after features include:

  • Streaming replication to support larger transactions
  • More granular and improved logging and troubleshooting options
  • Multiple system tables to help you find out more about the state of the cluster.
  • Percona XtraDB Cluster 8.0 now automatically launches the upgrade as needed (even for minor releases), avoiding manual intervention and simplifying operation in the cloud.

Percona Distribution for PostgreSQL 13. Version 13 of PostgreSQL was a leap forward, and our distribution was updated to support all the additional functionality. Better indexing, better performance, and better security! Sign me up!

Percona Monitoring And Management (PMM) jumped forward from 2.2 to 2.13 adding some very cool features like:

  • Alert manager integration and integrated alerting
  • A brand new Query Analyzer with awesome features to allow you to find problem queries quicker and more efficiently
  • Enhanced metrics for AWS RDS monitoring
  • Added support for External Exporters so you can monitor 3rd party and custom services through the installed PMM-agent
  • New security threat tool allows for alerts and visibility into the most common security issues
  • Support for group replication
  • Better MongoDB and PostgreSQL monitoring
  • Better support for larger environments (Monitor More Stuff Faster)
  • Plus a ton of misc small enhancements!

Percona Kubernetes Operator for Percona XtraDB Cluster continued to evolve with several new features helping users build their own DIY DBaaS:

  • Auto-Tuning MySQL Parameters
  • Integration with Percona Monitoring and Management
  • Full data encryption at rest
  • Support for Percona XtraDB Cluster 8.0
  • Support for the latest versions of OpenShift and Amazon’s Elastic Container Service
  • Dual support for ProxySQL and HA Proxy
  • Automated minor upgrades
  • Clone backups to set up a new PXC cluster on a different Kubernetes cluster

Percona Kubernetes Operator for Percona Server for MongoDB added several features, including:

  • Support for Percona Server for MongoDB 4.4
  • Automated management of system users
  • Support for the latest versions of OpenShift and Amazon’s Elastic Container Service
  • Automated minor upgrades

While 2020 was far from the best year for many of us and we are glad it is behind us, it did generate some good content that we can use in 2021 and going forward to help us better manage and run our databases. Thanks for reading and happy database tuning!


MySQL Backup and Recovery Best Practices


In this blog, we will review the backup and restore strategies for MySQL, a cornerstone of any application. There are a few options, depending on your topology, MySQL version, etc., and based on that, there are some questions we need to ask ourselves to make sure we make the right choices.

How many backups do we need to keep, and what is the best retention policy for us?

This means the number of backups to safeguard, whether local or remote (external file server, cloud). The retention policy can be daily, weekly, or monthly, depending on the free space available.

What is the Recovery Time Objective?

The Recovery Time Objective (RTO) refers to the amount of time that may pass during a disruption before it exceeds the maximum allowable threshold specified in the Business Continuity Plan.

The key question related to RTO is, “How quickly must the data on this system be restored?”

What is the Recovery Point Objective?

The Recovery Point Objective (RPO) is the maximum period of time for which data may be lost after a disaster before the loss causes unacceptable consequences for business continuity.

The key question related to RPO is, “How much data can we lose?”

Different Types of Backups

There are two backup types: physical and logical.

  • Physical (Percona XtraBackup, RDS/LVM snapshots, MySQL Enterprise Backup); you can also use cp or rsync to copy the datadir, as long as MySQL is stopped.
  • Logical (mysqldump, mydumper, mysqlpump, MySQL Shell, the latter for MySQL 8 only)

It is also recommended to keep a copy of the binlog files. Why? Because this will help us recover up to the last transaction.

Why are backups needed?

Backups are needed in case of multiple problems:

  • Host Failure: We can get multiple problems from stalled or broken disks. Also, with cloud services, our DB instance can break and become inaccessible.
  • Corrupted Data: This can happen during a power outage: MySQL is not able to write correctly and close the file, and sometimes MySQL cannot start again because of the corrupted data and the crash recovery process cannot fix it.
  • Inconsistent Data: A human mistake, such as deleting or updating erroneous data on the primary or a replica node.
  • Data Center Failure: Power outage or internet provider issues.
  • Legislation/Regulation: Keeping backups helps provide consistent business value and customer satisfaction.

Now let me explain the different types of backups mentioned above. But before I continue, it’s important to configure a new, dedicated replica node for backup purposes (AKA a backup server): backups generate a high CPU load, and a dedicated node avoids affecting any other replica.

Logical Backup

This is a dump of the logical database structure (CREATE DATABASE, CREATE TABLE statements) and content (INSERT statements). It is recommended for smaller amounts of data. The disadvantage of this method is that it is slower (both backup and restore) compared with physical backups. Using mydumper, you can back up and restore a single database or a single table if needed, which is useful for copying some data to a different environment to run tests. Also, mydumper can take a consistent backup (as long as all the tables use the InnoDB engine) and provides accurate master and slave log positions.

The output is larger than for a physical backup, particularly when saved in text format, but it can be compressed on the fly depending on the software you are using: mydumper can compress its output directly, while mysqldump needs a pipe to redirect the output to gzip, for example.
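For example, a sketch of both approaches (the database name and paths are placeholders):

```shell
# mysqldump: pipe the output through gzip to compress on the fly
mysqldump --single-transaction mydb | gzip > /backups/mydb.sql.gz

# mydumper: compression and parallelism are built in
mydumper --database=mydb --compress --threads=4 --outputdir=/backups/mydb
```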

Logical backups are used to address data corruption or the need to restore a subset of tables.

Physical (Raw) Backup

In short, this consists of exact copies of database directories and files. This can be a copy of all or part of the MySQL datadir. This kind of backup is most often used to restore or create a new replica node easily and quickly, and it addresses host failure. It’s recommended to restore using the same MySQL version. I recommend Percona XtraBackup because it can include related files such as configuration (.cnf) files.

Snapshot Backups

Some file system implementations enable “snapshots” to be taken. These provide logical copies of the file system at a given point in time, without requiring a physical copy of the entire file system. MySQL itself does not provide the capability for taking file system snapshots, but it is available using third-party solutions such as LVM or ZFS.

The disadvantage is that physical backups sometimes do not compress much, because the data is usually in a binary format and sometimes the tables are already compressed.

Binary Log Backups

Binlog backups specifically address RPO. Binary log files contain records of each SQL query executed that made changes.

From MySQL 5.6 on, you can use mysqlbinlog to stream binary logs from a remote server. You can combine binlog backups with Percona XtraBackup or mydumper backup to allow restoration up to the end of the most-recently-backed-up binary log.
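A sketch of such continuous streaming (the host and user are placeholders; the starting binlog file must exist on the server):

```shell
mysqlbinlog --read-from-remote-server --host=source-host --user=repl_user --password \
    --raw --stop-never mysql-bin.000001
```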

Incremental / Differential Backups

An incremental backup is a backup of everything that has changed since the last backup (a binary log backup is a special case of an incremental backup). This is a very good option if the dataset size is huge, as you can take a full backup at the beginning of the week and run daily incremental backups. The backup size is also smaller than that of a full backup.

The main risks associated with incremental backups are:

– A single corrupt incremental backup may invalidate all the others

– Incremental backups typically negatively affect the RTO

A differential backup copies the changes since your last full backup. The advantage is that a lot of data does not change from one backup to the next, so the result can be significantly smaller backups. This saves disk space.

Percona XtraBackup supports both incremental and differential backups.
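As an illustration with xtrabackup (paths are placeholders and credentials are omitted), an incremental chain points --incremental-basedir at the previous backup, while a differential always points it back at the last full backup:

```shell
# Sunday: full backup
xtrabackup --backup --target-dir=/backups/full
# Monday: incremental based on the full backup
xtrabackup --backup --target-dir=/backups/inc1 --incremental-basedir=/backups/full
# Tuesday, incremental style: based on Monday's incremental
xtrabackup --backup --target-dir=/backups/inc2 --incremental-basedir=/backups/inc1
# Tuesday, differential style: based on the full backup again
xtrabackup --backup --target-dir=/backups/diff2 --incremental-basedir=/backups/full
```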

Offsite Storage

It’s highly recommended to copy all backups to another location, like the cloud or an external file server, so that in case of host failure or data center failure you have another copy.

Not all the backup files need to be uploaded to the cloud; sometimes the time you need to spend downloading them is longer than the time consumed by the recovery process itself.

A good approach is to keep 1-7 days locally on the backup server in case a fast recovery is needed, and this depends on your business regulations.


Backups contain sensitive data, so it’s highly recommended to encrypt them, especially for offsite storage. This adds more time when you need to restore a backup, but it keeps your data safe.

GPG is a good option for encrypting backups. Whether you use it or some other alternative, don’t forget to keep a copy of the keys/passphrase: if you lose it, your backups will be useless.
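For example, a symmetric GPG encryption pass over a backup archive might look like this (filenames are placeholders; a public-key workflow is equally possible):

```shell
# Encrypt (prompts for a passphrase; produces backup.tar.gz.gpg):
gpg --symmetric --cipher-algo AES256 backup.tar.gz
# Decrypt when restoring:
gpg --output backup.tar.gz --decrypt backup.tar.gz.gpg
```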

Restore Testing

Depending on your business, it’s highly recommended to test your backups at least once per month. This validates that your backups are not corrupted and provides critical metrics on recovery time. The process should be automated: fetch the full backup, restore it, and finally configure the server as a replica of the current primary or another replica. This also validates that the replication process runs without errors.

Many customers are using this methodology to refresh their QA/STG environment to have fresh data from production backups.

In addition to the above, it is recommended to document the restore process, whether manual or automated, keeping all the steps together so that in case of disaster you can follow it without wasting time.

Retention Requirements

Last but not least, it is very important to keep multiple copies of different backup types.

Our best recommendation is:

  • One or two physical backups locally on the backup server (as long as space allows it).
  • Seven daily and four weekly logical backups locally on the backup server.
  • 30 days of binlog backups locally on the backup server.
  • For offsite backups (like S3, Google Cloud, etc.), keep monthly backups for one year or more.

For local backups, keep in mind you will need a minimum of 2.5 times the current dataset size as free disk space to meet these retention policies. Don’t forget to encrypt all the backup types!
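As a quick sanity check, the 2.5x rule can be computed directly; here for a hypothetical 200 GB dataset:

```shell
dataset_gb=200
# 2.5x the dataset size, using integer arithmetic (x25, then /10):
needed_gb=$(( dataset_gb * 25 / 10 ))
echo "${needed_gb} GB of free space needed"   # prints: 500 GB of free space needed
```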

Legal or regulatory requirements may also dictate how long data must be archived.

Percona Can Help

Percona can help you choose, implement, and optimize the most appropriate MySQL backup and recovery solution for your MySQL ecosystem. If your current solution unexpectedly fails, we can facilitate your recovery with onsite, remote, or emergency consulting services. We can also help you take steps to prevent another occurrence. Every situation is unique and we will work with you to create the most effective solution for your business.

Contact Us


Webinar January 26: Using Percona Monitoring and Management to Identify and Troubleshoot Problematic MySQL Queries


Join us as Michael Coburn, Percona Product Manager, discusses two methods to identify and troubleshoot problematic MySQL queries using the RED Method and Percona Monitoring and Management (PMM) Query Analytics. He will also highlight specific Dashboards in PMM that visualize the rate, errors, and duration of MySQL events that may be impacting the stability and performance of your database instance.

Please join Michael Coburn, Product Manager, Percona, on Tuesday, January 26th, 2021 at 2:30 pm for his webinar “Using Percona Monitoring and Management to Identify and Troubleshoot Problematic MySQL Queries”.

Register for Webinar

If you can’t attend, sign up anyway and we’ll send you the slides and recording afterward.


MySQL Group Replication – How to Elect the New Primary Node


In this blog, I am going to explain the different ways of electing the PRIMARY node in MySQL Group Replication. Before MySQL 8.0.2, primary election was based on the member’s UUID, with the lowest UUID elected as the new primary in the event of a failover.

From MySQL 8.0.2: We can select the node to be promoted to PRIMARY using the server weight (group_replication_member_weight). The election takes place when the current primary node fails.

From MySQL 8.0.12: We can promote any node to PRIMARY using the function “group_replication_set_as_primary”. This can be done at any time, without any node failure.


I have installed a three-node Group Replication cluster. I am using Percona Server for MySQL 8.0.22.

mysql> select member_host,member_state,member_role,member_version from performance_schema.replication_group_members;
| member_host   | member_state | member_role | member_version |
| | ONLINE       | SECONDARY   | 8.0.22         |
| | ONLINE       | PRIMARY     | 8.0.22         |
| | ONLINE       | SECONDARY   | 8.0.22         |
3 rows in set (0.03 sec)

I am planning to take the current PRIMARY node “” down for OS patching. When I bring the current PRIMARY node down, I need to promote the server “” to PRIMARY. Let’s see how this can be achieved in the following ways.

  • Using server weight (group_replication_member_weight)
  • Using function “group_replication_set_as_primary”

Using server weight (group_replication_member_weight):

This approach is not a direct switchover: the new node will be promoted to primary only when the current primary member goes down. Currently, I have the same weight on all my nodes.

[root@innodb1 ~]# mysql -e "select @@hostname, @@group_replication_member_weight\G"
*************************** 1. row ***************************
@@group_replication_member_weight: 50

[root@innodb2 ~]#  mysql -e "select @@hostname, @@group_replication_member_weight\G"
*************************** 1. row ***************************
@@group_replication_member_weight: 50

[root@innodb3 ~]# mysql -e "select @@hostname, @@group_replication_member_weight\G"
*************************** 1. row ***************************
@@group_replication_member_weight: 50

I am going to increase the weight on the server “” so that it will be elected as a PRIMARY member when taking down the “”.

At “”,

mysql> set global group_replication_member_weight = 70;
Query OK, 0 rows affected (0.00 sec)

mysql> select @@group_replication_member_weight;
| @@group_replication_member_weight |
|                                70 |
1 row in set (0.00 sec)

Note: To set the weight, you don’t need to execute the STOP/START GROUP_REPLICATION.
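Since the variable is dynamic, SET GLOBAL is enough for the running instance; if you also want the weight to survive a restart, MySQL 8.0’s SET PERSIST can be used instead (the hostname below is a placeholder):

```shell
# Persist the weight to mysqld-auto.cnf as well as applying it at runtime:
mysql -h innodb3 -e "SET PERSIST group_replication_member_weight = 70;"
```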

The weight was increased to 70 on the server “”. Now, I am going to bring down the current PRIMARY node “”.

At “”,

mysql> stop group_replication;
Query OK, 0 rows affected (4.29 sec)

mysql> select member_host,member_state,member_role,member_version from performance_schema.replication_group_members;
| member_host   | member_state | member_role | member_version |
| | OFFLINE      |             |                |
1 row in set (0.01 sec)

The node has left the cluster.

At “”,

mysql> select member_host,member_state,member_role,member_version from performance_schema.replication_group_members;
| member_host   | member_state | member_role | member_version |
| | ONLINE       | PRIMARY     | 8.0.22         |
| | ONLINE       | SECONDARY   | 8.0.22         |
2 rows in set (0.04 sec)

You can see that “” was selected as a new PRIMARY node.

Using function “group_replication_set_as_primary”:

This method is very straightforward, and there is no need to take down the current PRIMARY node to switch the primary member.

| member_host   | member_state | member_role | member_version |
| | ONLINE       | SECONDARY   | 8.0.22         |
| | ONLINE       | PRIMARY     | 8.0.22         |
| | ONLINE       | SECONDARY   | 8.0.22         |
3 rows in set (0.00 sec)

Now, we need to execute the function “group_replication_set_as_primary” with the member UUID.

At “”,

mysql> show global variables like 'server_uu%';
| Variable_name | Value                                |
| server_uuid   | c5aed435-d58d-11ea-bb26-5254004d77d3 |
1 row in set (0.00 sec)

mysql> select group_replication_set_as_primary('c5aed435-d58d-11ea-bb26-5254004d77d3');
| group_replication_set_as_primary('c5aed435-d58d-11ea-bb26-5254004d77d3') |
| Primary server switched to: c5aed435-d58d-11ea-bb26-5254004d77d3         |
1 row in set (1.03 sec)

mysql> select member_host,member_state,member_role,member_version from performance_schema.replication_group_members;
| member_host   | member_state | member_role | member_version |
| | ONLINE       | PRIMARY     | 8.0.22         |
| | ONLINE       | SECONDARY   | 8.0.22         |
| | ONLINE       | SECONDARY   | 8.0.22         |
3 rows in set (0.00 sec)

“” was elected as a PRIMARY node.

Note: You can execute the function “group_replication_set_as_primary” on any server in the cluster.

This approach is very useful if you want to switch the PRIMARY role to a more powerful server, or for any other planned reason.


Full Read Consistency Within Percona Kubernetes Operator for Percona XtraDB Cluster


The aim of Percona Kubernetes Operator for Percona XtraDB Cluster is to be a special type of controller introduced to simplify complex deployments. The Operator extends the Kubernetes API with custom resources. The Operator solution uses Percona XtraDB Cluster (PXC) under the hood to provide a highly available, resilient, and scalable MySQL service in the Kubernetes space.

This solution comes with all the advantages/disadvantages provided by Kubernetes, plus some advantages of its own like the capacity to scale reads on the nodes that are not Primary.

Of course, there are some limitations, like the way PXC handles DDLs, which may impact the service. But there is always a cost to pay to get something; expecting to have it all for free is unreasonable.

In this context, we need to cover what full read consistency means in this solution and why it is important to understand the role it plays.

Stale Reads

When using Kubernetes we should talk about the service and not about the technology/product used to deliver such service. 

In our case, the Percona Operator is there to deliver a MySQL service. We should then see it as a whole, as a single object. To be clear, what we must consider is NOT the fact that we have a cluster behind the service, but that we have a service that, in order to be resilient and highly available, uses a cluster.

We should not care if a node/pod goes down unless the service is discontinued.

What we have as a plus in the Percona Operator solution is a certain level of READ scalability. This is achieved by optimizing the use of the non-PRIMARY nodes: instead of having them sit there applying only replicated data, the Percona Operator provides access to them to scale reads.

But… there is always a BUT, right?

Let us start with an image:


By design, apply and commit finalization in Galera (PXC) may have (and does have) a delay between nodes. This means that, if using the defaults, applications may get inconsistent reads when accessing the data from nodes other than the Primary.

The Operator provides access using two different solutions:

  • Using HAProxy (default)
  • Using ProxySQL



When using HAProxy you will have 2 entry points:

  • cluster1-haproxy, which will point to the Primary ONLY, for reads and writes. This is the default entry point for the applications to the MySQL database.
  • cluster1-haproxy-replicas, which will point to all three nodes and is supposed to be used for READS only. This is the PLUS you can use if your application has READ/WRITE separation.

Please note that at the moment there is nothing preventing an application from using cluster1-haproxy-replicas for writes as well, but that is dangerous and wrong: it will generate a lot of certification conflicts and BF aborts, given it will distribute writes all over the cluster, impacting performance as well (and not giving you any write scaling):


[marcotusa@instance-1 ~]$ for i in `seq 1 100`; do mysql -h cluster1-haproxy-replicas -e "insert into test.iamwritingto values(null,@@hostname)";done
| host           | count(host) |
| cluster1-pxc-1 |          34 |
| cluster1-pxc-2 |          33 |
| cluster1-pxc-0 |          33 |

When using ProxySQL, the entry point is a single one, but you may define query rules to automatically split the R/W requests coming from the application. This is the preferred method when an application has no way to separate reads from writes.

Here I have done a comparison of the two methods, HAProxy and ProxySQL.

Now, as mentioned above, by default PXC (as any Galera-based solution) comes with some relaxed consistency settings for performance purposes. This is normally fine in many standard cases, but if you use the Percona Operator and take advantage of the read-scaling plus through the second access point with HAProxy or query rules with ProxySQL, you should NOT have stale reads, given the service must provide consistent data, as if you were acting on a single node.

To achieve that, you can change the default and set the PXC parameter wsrep_sync_wait.

When changing the parameter wsrep_sync_wait as explained in the documentation, the node initiates a causality check, blocking incoming queries while it catches up with the cluster. 

Once all data on the node receiving the READ request is commit_finalized, the node performs the read.
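The parameter is dynamic; as a sketch (the pod name is taken from the cluster output shown later in this post), wsrep_sync_wait is a bitmask where 1 covers READ statements and 3 also covers UPDATE and DELETE. It would need to be applied on every node serving reads, or set in the cluster configuration:

```shell
# Enable causality checks for reads, updates, and deletes on one node:
mysql -h cluster1-pxc-0 -e "SET GLOBAL wsrep_sync_wait = 3;"
```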

But this has a performance impact, as said before.

What Is The Impact?

To test the performance impact I had used a cluster deployed in GKE, with these characteristics:

  • 3 Main nodes n2-standard-8 (8 vCPUs, 32 GB memory)
  • 1 App node n2-standard-8 (8 vCPUs, 32 GB memory)
  • PXC pods using:
    •  25GB of the 32 available 
    • 6 CPU of the 8 available
  • HAProxy:
    • 600m CPU
    • 1GB RAM
  • PMM agent
    • 500m CPU
    • 500 MB Ram

In the application node, I ran two sysbench instances, one in r/w mode, the other executing only reads. Finally, to test the stale reads, I used the stale read test from my test suite.

Given I was looking for results with a moderate load, I just used 68/96/128 threads per sysbench instance. 


Marco, did we have or not have stale reads? Yes, we did:

I had from 0 (with very light load) up to 37% stale reads with a MODERATE load, where moderate was the 128-thread sysbench run.

Setting wsrep_sync_wait=3, of course, gave me full consistency. But I had a performance loss:

As you can see, I had an average loss of 11% in case of READS:

While for writes the average loss was 16%. 


At this point, we need to stop and think about what is worth doing. If my application is READ-heavy and needs READ scaling, it is probably worth enabling full synchronicity, given that scaling on the additional nodes allows me to have 2x or more READs.

If instead my application is write-critical, losing ~16% performance is probably not acceptable.

Finally if my application is stale reads tolerant, I will just go with the defaults and get all the benefits without penalties.

Also keep in mind that Percona Kubernetes Operator for Percona XtraDB Cluster is designed to offer a MySQL service, so the state of a single node is not as critical as in a default PXC installation; PODs are by nature ephemeral objects, while the service is resilient.


Percona Kubernetes Operator for Percona XtraDB Cluster






Percona Kubernetes Operator for Percona XtraDB Cluster: HAProxy or ProxySQL?


Percona Kubernetes Operator for Percona XtraDB Cluster comes with two different proxies, HAProxy and ProxySQL. While the initial version was based on ProxySQL, in time Percona opted to make HAProxy the default proxy for the Operator, without removing ProxySQL.

While one of the main goals was to guarantee users 1:1 compatibility with vanilla MySQL in the way the Operator allows connections, other factors were also involved in the decision to keep two proxies. In this article, I will scratch the surface of why.

Operator Assumptions

When working with the Percona Operator, there are few things to keep in mind:

  • Each deployment has to be seen as a single MySQL service as if a single MySQL instance
  • The technology used to provide the service may change in time
  • Pod resiliency is not guaranteed, service resiliency is
  • Resources to be allocated are not automatically calculated and must be identified at the moment of the deployment
  • In production, you cannot set more than 5 or less than 3 nodes when using PXC

There are two very important points in the list above.

The first one is that what you get IS NOT a Percona XtraDB Cluster (PXC), but a MySQL service. The fact that Percona currently uses PXC to provide the service is incidental, and we may decide to change it at any time.

The other point is that the service is resilient while the pod is not. In short, you should expect to see pods stop working and be re-created. What should NOT happen is the service going down. Trying to debug each minor issue per node/pod is not what is expected when you use Kubernetes.

Given the above, review your expectations… and let us go ahead. 

The Plus in the Game (Read Scaling)

As said, what is offered with the Percona Operator is a MySQL service. Percona has added a proxy on top of the nodes/pods to help the service meet its resiliency expectations. There are two possible deployments:

  • HAProxy
  • ProxySQL

Both allow optimizing one aspect of the Operator, which is read scaling. In fact, what we were thinking was: given that we must use a (virtually synchronous) cluster, why not take advantage of that and allow reads to scale on the other nodes when available?

This approach helps everyone using the Operator to have the standard MySQL service, but with a plus.

But with it also come some possible issues, like READ/WRITE splitting and stale reads. See this article about stale reads for how to deal with them.

For R/W splitting, we instead have a totally different approach depending on which kind of proxy we implement.

If using HAProxy, we offer a second entry point that can be used for READ operation. That entrypoint will balance the load on all the nodes available. 

Please note that at the moment there is nothing preventing an application from using cluster1-haproxy-replicas for writes as well, but that is dangerous and wrong: it will generate a lot of certification conflicts and BF aborts, given it will distribute writes all over the cluster, impacting performance as well (and not giving you any write scaling). It is your responsibility to guarantee that only READS go through that entry point.

If instead ProxySQL is in use, it is possible to implement automatic R/W splitting. 
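A classic sketch of such query rules, configured through ProxySQL's admin interface (the hostgroup numbers, 10 for the writer and 20 for the readers, are assumptions of this example, as are the admin credentials):

```shell
mysql -h 127.0.0.1 -P 6032 -u admin -padmin <<'SQL'
-- Send SELECT ... FOR UPDATE to the writer, all other SELECTs to the readers;
-- everything unmatched falls through to the default (writer) hostgroup:
INSERT INTO mysql_query_rules (rule_id, active, match_digest, destination_hostgroup, apply)
VALUES (1, 1, '^SELECT.*FOR UPDATE', 10, 1),
       (2, 1, '^SELECT', 20, 1);
LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL QUERY RULES TO DISK;
SQL
```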

Global Difference and Comparison

At this point, it is useful to have a better understanding of the functional difference between the two proxies and what is the performance difference if any. 

As we know, HAProxy acts as a level 4 proxy when operating in TCP mode. It is also a forward proxy, which means each TCP connection from the client is established with the final target, and there is no interpretation of the data flow.

ProxySQL, on the other hand, is a level 7 proxy and a reverse proxy: the client establishes a connection to the proxy, which presents itself as the final backend. Data can be altered on the fly while in transit.

To be honest, it is more complicated than that, but allow me the simplification.

On top of that, there are additional functionalities that are present in one (ProxySQL) and not in the other. The point is whether they are relevant for use in this context or not. For a short list, see below (the source is a ProxySQL blog post, but the data was removed):

As you may have noticed, HAProxy lacks some of those functionalities, like R/W split, firewalling, and caching, which belong to the level 7 implementation of ProxySQL.

The Test Environment

To test the performance impact I had used a cluster deployed in GKE, with these characteristics:

  • 3 Main nodes n2-standard-8 (8 vCPUs, 32 GB memory)
  • 1 App node n2-standard-8 (8 vCPUs, 32 GB memory)
  • PXC pods using:
    •  25GB of the 32 available 
    • 6 CPU of the 8 available
  • HAProxy:
    • 600m CPU
    • 1GB RAM
  • PMM agent
    • 500m CPU
    • 500 MB Ram
  • Tests using sysbench as for (https://github.com/Tusamarco/sysbench), see in GitHub for command details.

What I have done is run several tests with two sysbench instances, one executing only reads, the other reads and writes.

In the case of ProxySQL, I had R/W splitting thanks to the Query rules, so both sysbench instances were pointing to the same address. While testing HAProxy I was using two entry points:

  • Cluster1-haproxy – for read and write
  • Cluster1-haproxy-replicas – for read only

Then I also compared what happens if all requests hit one node only. For that, I executed one sysbench in R/W mode against one entry point, with NO R/W split for ProxySQL.

Finally, the sysbench tests were executed with the --reconnect option to force the tests to establish new connections.

As usual, tests were executed multiple times, on different days of the week and moments of the day. Data reported is a consolidation of that, and images from Percona Monitoring and Management (PMM) are samples coming from the execution that was closest to the average values. 

Comparing Performance When Scaling Reads

These tests imply that one node mainly serves writes while the others serve reads. To not affect performance, and given I was not interested in maintaining full read consistency, the parameter wsrep_sync_wait was kept at its default (0).



A first observation shows how ProxySQL seems to keep a more stable level of requests served. The increasing load penalizes HAProxy, reducing the number of operations at 1024 threads.


Digging a bit more, we can see that HAProxy performs much better than ProxySQL for WRITE operations. The number of writes remains almost steady, with minimal fluctuations. ProxySQL, on the other hand, performs great when the write load is low, then performance drops by 50%.

For reads, we have the opposite. ProxySQL is able to scale in a very efficient way, distributing the load across the nodes and able to maintain the level of service despite the load increase. 

If we start to take a look at the latency distribution statistics (sysbench histogram information), we can see that:


In the case of low load and writes, both proxies stay on the left side of the graph with low values in ms. HAProxy is a bit more consistent, grouped around the 55ms value, while ProxySQL is a bit more sparse, spanning 190-293ms.

For reads we have similar behavior, with the large majority between 28-70ms. We get a different picture when the load increases:

ProxySQL has some occurrences where it performs better, but it spans a very large range, from ~2k ms to ~29k ms, while HAProxy is substantially grouped around 10-11k ms. As a result, in this context, HAProxy serves writes under heavy load better than ProxySQL.

Again, a different picture in case of reads.

Here ProxySQL still spans a wide range, ~76ms-1500ms, while HAProxy is more consistent but less efficient, grouping the majority of the service around 1200ms. This is consistent with the performance loss we saw for READs under high load with HAProxy.

Comparing When Using Only One Node

But let us now see what happens when using only one node, i.e. using the service as it should be, without the possible plus of read scaling.


The first thing I want to mention is a strange behavior that consistently happened (no matter which proxy was used) at 128 threads. I am investigating it, but I do not yet have a good answer for why the Operator had that significant drop in performance ONLY with 128 threads.

Aside from that, the results consistently showed HAProxy performing better in serving reads/writes. Keep in mind that HAProxy just establishes the connection point-to-point and does nothing else, while ProxySQL is designed to eventually act on the incoming stream of data.

This becomes even more evident when reviewing the latency distribution. In this case, no matter what load we have, HAProxy performs better:

As you can notice, HAProxy is less grouped than when we have two entry points, but it is still able to serve more efficiently than ProxySQL.


As usual, my advice is to use the right tool for the job, and do not force yourself into something stupid. And as clearly stated at the beginning, Percona Kubernetes Operator for Percona XtraDB Cluster is designed to provide a MySQL SERVICE, not a PXC cluster, and all the configuration and utilization should converge on that.

ProxySQL can help you IF you want to scale a bit more on READS using the possible plus. But this is not guaranteed to work as it does with a standard PXC deployment, and you need a very good understanding of Kubernetes and ProxySQL to avoid issues. With HAProxy you can scale reads as well, but you need to be sure you have R/W separation at the application level.

In any case, utilizing HAProxy for the service is the easier way to go. This is one of the reasons why Percona decided to shift to HAProxy. It is the solution that offers the proxy service more in line with the aim of the Kubernetes service concept. It is also the solution that remains closer to how a simple MySQL service should behave.

You need to set your expectations correctly to avoid being in trouble later.


Percona Kubernetes Operator for Percona XtraDB Cluster


Wondering How to Run Percona XtraDB Cluster on Kubernetes? Try Our Operator!

The Criticality of a Kubernetes Operator for Databases



Streaming MySQL Backups with Percona XtraBackup – Another Alternative

Today we are going to present another way to make a database copy between servers using our Percona XtraBackup utility. What is the difference from the many others existing on the web? Not much, but this one is interesting in terms of performance and availability.

We will combine the xbstream utility with the power of pigz and socat, to take advantage of multi-processing when multiple processors are available and, at the same time, reduce the use of network bandwidth in cases where this component is a bottleneck. So let’s explain each component:

socat: This stands for SOcket CAT. It is a utility for data transfer between two addresses.

What makes socat so versatile is the fact that an address can represent a network socket, any file descriptor, a Unix domain datagram or stream socket, TCP and UDP (over both IPv4 and IPv6), SOCKS 4/4a over IPv4/IPv6, SCTP, PTY, datagram, and stream sockets, named and unnamed pipes, raw IP sockets, OpenSSL, or on Linux even any arbitrary network device

pigz: This stands for parallel implementation of gzip, and it is a fully functional replacement for gzip that exploits multiple processors and multiple cores to the hilt when compressing data.

xbstream: the streaming format utility shipped with Percona XtraBackup, able (with parallelism) to process several files in parallel.

Packages required: pigz, socat, and, of course, Percona XtraBackup

Servers Involved in the Task

Source: the source database where the backup will be taken (MySQL 5.7 installed on CentOS 7.8)

Target: the destination where the backup will be sent (MySQL 5.7 installed on CentOS 7.8)


1. Install the required packages in the Source and Target nodes:

Source # yum install -y pigz socat
Target # yum install -y pigz socat

If you haven’t installed Percona XtraBackup (in both servers), please follow the instructions explained in https://www.percona.com/doc/percona-xtrabackup/2.4/index.html#installation

Make sure you have a user with proper privileges to do the backup on Source db:

| Grants for bkpuser@localhost                                              |
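If that user does not exist yet, a minimal set of privileges for a backup user looks like this (the username/password match the ones used later in this post; check the XtraBackup documentation for the full list for your version):

```shell
mysql -e "CREATE USER 'bkpuser'@'localhost' IDENTIFIED BY 'Bkpuser123!';"
# RELOAD, LOCK TABLES, PROCESS, and REPLICATION CLIENT cover taking the
# backup and recording binlog positions:
mysql -e "GRANT RELOAD, LOCK TABLES, PROCESS, REPLICATION CLIENT ON *.* TO 'bkpuser'@'localhost';"
```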

2. On Target, execute:

Stop current database service (if any):

Target # systemctl stop mysqld

Delete the datadir contents (assuming MySQL was installed with default settings), and make sure you are logged in to the Target server!!

Target # rm -rf /var/lib/mysql/*

Finally, we’re going to execute the command to receive the backup from the source (Source):

Target # socat -u TCP-LISTEN:4444,reuseaddr stdio | pigz -dc -p 4 - | xbstream -x --parallel=4 -C /var/lib/mysql

3. On Source, execute the command to send the backup to target (Target).

Source # xtrabackup --defaults-file=/etc/my.cnf --backup --user=bkpuser --password=Bkpuser123! --stream=xbstream --parallel 4 --no-timestamp  --target-dir=/tmp | pigz -k -1 -p4 - | socat -u stdio TCP:Target:4444

You will see an output like this:

xtrabackup: recognized server arguments: --datadir=/var/lib/mysql --server-id=1 --log_bin=/var/lib/mysql/mysql-bin --innodb_log_file_size=200M --innodb_log_files_in_group=2 --open_files_limit=65535 --parallel=4
xtrabackup: recognized client arguments: --backup=1 --user=bkpuser --password=* --stream=xbstream --target-dir=/tmp
200822 11:10:16  version_check Connecting to MySQL server with DSN 'dbi:mysql:;mysql_read_default_group=xtrabackup' as 'bkpuser'  (using password: YES).
200822 11:10:16  version_check Connected to MySQL server
200822 11:10:16  version_check Executing a version check against the server...
200822 11:10:16  version_check Done.
200822 11:10:16 Connecting to MySQL server host: localhost, user: bkpuser, password: set, port: not set, socket: not set
Using server version 5.7.30-log
xtrabackup version 2.4.20 based on MySQL server 5.7.26 Linux (x86_64) (revision id: c8b4056)
xtrabackup: uses posix_fadvise().
xtrabackup: cd to /var/lib/mysql
xtrabackup: open files limit requested 65535, set to 65535
xtrabackup: using the following InnoDB configuration:
xtrabackup:   innodb_data_home_dir = .
xtrabackup:   innodb_data_file_path = ibdata1:12M:autoextend
xtrabackup:   innodb_log_group_home_dir = ./
xtrabackup:   innodb_log_files_in_group = 2
xtrabackup:   innodb_log_file_size = 209715200
InnoDB: Number of pools: 1
200822 11:10:16 >> log scanned up to (6724690490)
xtrabackup: Generating a list of tablespaces
InnoDB: Allocated tablespace ID 2 for mysql/plugin, old maximum was 0
xtrabackup: Starting 4 threads for parallel data files transfer
200822 11:10:16 [01] Streaming ./ibdata1

4. Once finished with step 3, you will see an output like this on the Target node:

MySQL binlog position: filename 'mysql-bin.000091', position '102205647'
200822 11:10:21 [00] Streaming <STDOUT>
200822 11:10:21 [00]        ...done
200822 11:10:21 [00] Streaming <STDOUT>
200822 11:10:21 [00]        ...done
xtrabackup: Transaction log of lsn (4308505553) to (4308505562) was copied.
200822 11:10:21 completed OK!

Step 2 will have finished as well, so execute the following commands on the Target node:

Target # xtrabackup --prepare --use-memory=1G --target-dir=/var/lib/mysql/

From the documentation:

After you made a backup with the xtrabackup --backup option, you'll first need to prepare it in order to restore it. Data files are not point-in-time consistent until they've been prepared, because they were copied at different times as the program ran, and they might have been changed while this was happening. If you try to start InnoDB with these data files, it will detect corruption and crash itself to prevent you from running on damaged data. The xtrabackup --prepare step makes the files perfectly consistent at a single instant in time, so you can run InnoDB on them.

Target # chown -R mysql:mysql /var/lib/mysql/*
Target # systemctl start mysqld

And you have cloned a new database!

You can, of course, set up this new database as a replica with these additional steps executed in the Target node:

Look at the contents of the file xtrabackup_binlog_info; it will contain something like this:

Target # cat /var/lib/mysql/xtrabackup_binlog_info

mysql-bin.000091 102205647
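
The two whitespace-separated fields are the binlog file name and position. As a small illustrative helper (a sketch; the inline string stands in for reading the real file), you can extract them in the shell:

```shell
# Sketch: pull the replication coordinates out of xtrabackup_binlog_info.
# Normally you would read the real file:
#   binlog_info=$(cat /var/lib/mysql/xtrabackup_binlog_info)
binlog_info='mysql-bin.000091 102205647'
file=$(echo "$binlog_info" | awk '{print $1}')   # binlog file name
pos=$(echo "$binlog_info" | awk '{print $2}')    # binlog position
echo "MASTER_LOG_FILE='$file', MASTER_LOG_POS=$pos"
```

These are the values plugged into the CHANGE MASTER statement below.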

(We assume the following user and grants already exist on the Source database; if not, create them.)

mysql> show grants for replicator;
| Grants for replicator@%                            |
| GRANT REPLICATION SLAVE ON *.* TO 'replicator'@'%' |
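
If the user is missing, you can create it on the Source with something like the following (a sketch; the password matches the CHANGE MASTER example below, so substitute your own):

```shell
# Run on Source. Placeholder credentials: adjust to your environment.
mysql -u root -p -e "CREATE USER 'replicator'@'%' IDENTIFIED BY 'R3pl1c4t10n!';
GRANT REPLICATION SLAVE ON *.* TO 'replicator'@'%';"
```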

Connect to the database, and run:

Target # mysql -u root -p

mysql> change master to master_host='Source',master_port=3306,master_user='replicator',master_password='R3pl1c4t10n!',master_log_file='mysql-bin.000091',master_log_pos=102205647;
Query OK, 0 rows affected (0.00 sec)

mysql> start slave;
Query OK, 0 rows affected (0.00 sec)

mysql> pager egrep -i "Master_Host|Master_User|Master_Port|file|behind"

mysql> show slave status\G
                  Master_Host: Source
                  Master_User: replicator
                  Master_Port: 3306
              Master_Log_File: mysql-bin.000091
               Relay_Log_File: relay.000001
        Relay_Master_Log_File: mysql-bin.000091
        Seconds_Behind_Master: 0
             Master_Info_File: mysql.slave_master_info

1 row in set (0.00 sec)

And that’s all.  Happy streaming!
