Jul
07
2021
--

Migrating Ownership of Your Stored Routines, Views, and Triggers in MySQL

“It would be nice to have an option, that would allow to suppress the DEFINER statement in the CREATE VIEW statements generated by mysqldump. This would help when transferring data structures between databases with different security models.”

TLDR;

Use mysqlpump with the option --skip-definer instead of mysqldump.

The Story

This was requested as MySQL Bug #24680 on Nov 29, 2006. The feature request received broad community support. Even though we cannot see the number of people who voted for this request, the number of comments is impressive.

The request is very reasonable: mysqldump is widely used during application development, and it is a very common practice to migrate database structures between developers’ machines and production servers.

Imagine a situation where developer Sveta creates a database and adds a few objects with DEFINER clauses. We will use only one for this post, but in reality she could have dozens.

mysql> CREATE VIEW large_tables AS SELECT * FROM information_schema.tables WHERE DATA_LENGTH > 100000000;
Query OK, 0 rows affected (0,01 sec)

When you create a view, the default DEFINER is the user who created it:

mysql> SELECT DEFINER, TABLE_SCHEMA, TABLE_NAME FROM information_schema.views WHERE TABLE_NAME='large_tables';
+---------+--------------+--------------+
| DEFINER | TABLE_SCHEMA | TABLE_NAME   |
+---------+--------------+--------------+
| sveta@% | definers     | large_tables |
+---------+--------------+--------------+
1 row in set (0,01 sec)

And this causes issues when another user tries to import such a view into a different server:

mysql> CREATE USER production;
Query OK, 0 rows affected (0,01 sec)

mysql> GRANT ALL ON definers.* TO production@'%';
Query OK, 0 rows affected (0,01 sec)

mysql> GRANT SESSION_VARIABLES_ADMIN ON *.* TO production@'%';
Query OK, 0 rows affected (0,01 sec)

$ mysqldump -usveta definers | mysql -uproduction production
ERROR 1227 (42000) at line 61: Access denied; you need (at least one of) the SUPER or SET_USER_ID privilege(s) for this operation

Here is the content of line 61:

$ mysqldump -usveta definers | head -n 62 | tail
/*!50001 DROP VIEW IF EXISTS `large_tables`*/;
/*!50001 SET @saved_cs_client          = @@character_set_client */;
/*!50001 SET @saved_cs_results         = @@character_set_results */;
/*!50001 SET @saved_col_connection     = @@collation_connection */;
/*!50001 SET character_set_client      = utf8mb4 */;
/*!50001 SET character_set_results     = utf8mb4 */;
/*!50001 SET collation_connection      = utf8mb4_0900_ai_ci */;
/*!50001 CREATE ALGORITHM=UNDEFINED */
/*!50013 DEFINER=`sveta`@`%` SQL SECURITY DEFINER */

So this is a CREATE VIEW operation that failed during import.

Unfortunately, mysqldump still does not have an option to suppress the DEFINER clause. But since August 2015 and MySQL 5.7.8, we have had a solution that was, unfortunately, overlooked in favor of the famous mysqldump tool.

Version 5.7.8 and all later versions come with a new dump tool, mysqlpump, which has the option --skip-definer and allows migrating database objects without any issue:

$ mysqlpump -h127.0.0.1 -P3306 -usveta --skip-definer definers | mysql -h127.0.0.1 -P13000 -uproduction definers
Dump completed in 17

$ mysql -h127.0.0.1 -P13000 -uproduction definers -e "SHOW FULL TABLES"
+--------------------+------------+
| Tables_in_definers | Table_type |
+--------------------+------------+
| large_tables       | VIEW       |
+--------------------+------------+

$ mysql -h127.0.0.1 -P13000 -uproduction definers -e "SELECT DEFINER, TABLE_SCHEMA, TABLE_NAME FROM information_schema.views WHERE TABLE_NAME='large_tables';"
+--------------+--------------+--------------+
| DEFINER      | TABLE_SCHEMA | TABLE_NAME   |
+--------------+--------------+--------------+
| production@% | definers     | large_tables |
+--------------+--------------+--------------+

Note that mysqlpump automatically adds CREATE DATABASE to the dump and uses the full path to the database objects, e.g.:

CREATE ALGORITHM=UNDEFINED VIEW `definers`.`large_tables` AS select …

Therefore, this method cannot be used to migrate view, routine, or trigger definitions between different databases on the same server.

For more information about mysqlpump, and why you should switch to this tool from mysqldump, read this blog post: The mysqlpump Utility.

Apr
02
2021
--

Percona XtraBackup Point-In-Time Recovery for a Single Database

Recovering to a particular time in the past is called Point-In-Time Recovery (PITR). With PITR you can roll back an unwanted DELETE without a WHERE clause, or any other harmful command.

PITR with Percona XtraBackup is pretty straightforward and perfectly described in the user manual. You need to restore the data from the backup, then apply all binary logs created or updated after the backup was taken, skipping the harmful event(s).

However, if your data set is large, you may want to recover only the affected database or table. This is possible, but you need to be smart when filtering events from the binary log. In this post, I will show how to perform such a partial recovery using only Percona XtraBackup and the mysql command-line client and mysqlbinlog programs. There is an alternative approach that involves creating a fake source server, described in MySQL Point in Time Recovery the Right Way. You may consider it, especially if you need to apply changes to a single table.

For our example, we will create data first, then run DROP and DELETE commands on two different tables. Then we will roll back these commands.

First, let’s assume we have a server with two databases: test and sbtest. We are using GTIDs and the row-based binary log format. We also run the server with the option innodb_file_per_table=1, so all our InnoDB tables use individual tablespaces; otherwise, the individual restore method would not work.

mysql> show tables from sbtest;
+------------------+
| Tables_in_sbtest |
+------------------+
| sbtest1          |
| sbtest2          |
| sbtest3          |
| sbtest4          |
| sbtest5          |
| sbtest6          |
| sbtest7          |
| sbtest8          |
+------------------+
8 rows in set (0.00 sec)

mysql> show tables from test;
+----------------+
| Tables_in_test |
+----------------+
| bar            |
| baz            |
| foo            |
+----------------+
3 rows in set (0.00 sec)

We will experiment with tables foo and bar. We assume that at the time of our first backup, each of the tables contained five rows. Tables in the database sbtest also contain data, but it does not really matter for our experiment.

mysql> select count(*) from foo;
+----------+
| count(*) |
+----------+
|        5 |
+----------+
1 row in set (0.00 sec)

mysql> select count(*) from bar;
+----------+
| count(*) |
+----------+
|        5 |
+----------+
1 row in set (0.00 sec)

mysql> select count(*) from baz;
+----------+
| count(*) |
+----------+
|        0 |
+----------+
1 row in set (0.00 sec)

Since we want to restore individual tables, we need to make a preparation before taking a backup: storing the database structure. We will do it with the help of the mysqldump command. In this example, I store the structure per database to make partial PITR easier, but you are free to use the option --all-databases.

mysqldump --no-data --set-gtid-purged=OFF --triggers --routines --events test > test_structure.sql
mysqldump --no-data --set-gtid-purged=OFF --triggers --routines --events sbtest > sbtest_structure.sql

Then we are ready to take the backup.

xtrabackup --parallel=8 --target-dir=./full_backup --backup

I am using the option --parallel to speed up the backup process.

Now let’s do some testing. First, let’s update rows in the table foo.

mysql> update foo set f1=f1*2;
Query OK, 5 rows affected (0.01 sec)
Rows matched: 5 Changed: 5 Warnings: 0

mysql> select * from foo;
+----+------+
| id | f1   |
+----+------+
|  1 |    2 |
|  2 |    4 |
|  3 |    6 |
|  4 |    8 |
|  5 |   10 |
+----+------+
5 rows in set (0.00 sec)

And then drop it and delete all rows from the table bar.

mysql> drop table foo;
Query OK, 0 rows affected (0.02 sec)

mysql> delete from bar;
Query OK, 5 rows affected (0.01 sec)

Finally, let’s insert a few rows into the tables bar and baz.

mysql> insert into bar(f1) values(6),(7),(8),(9),(10);
Query OK, 5 rows affected (0.01 sec)
Records: 5 Duplicates: 0 Warnings: 0

mysql> insert into baz(f1) values(1),(2),(3),(4),(5);
Query OK, 5 rows affected (0.01 sec)
Records: 5 Duplicates: 0 Warnings: 0

Assume that the DROP TABLE and DELETE commands were an accident, and we want to restore the tables foo and bar to the state they were in before these unlucky statements.

First, we need to prepare the backup.

Since we are interested in restoring only tables in the database test, we need to prepare the backup with the special option --export, which exports tablespaces in a way that they can later be imported:

xtrabackup --prepare --export --target-dir=./full_backup

Now the directory for the database test contains not only table definition files (.frm, only before 8.0) and tablespace files (.ibd) but also configuration files (.cfg).

Since we want all changes that happened after the backup and before the problematic DROP TABLE and DELETE statements were applied, we need to identify which binary log and position were current at the time of the backup. We can find them in the xtrabackup_binlog_info file:

$ cat full_backup/xtrabackup_binlog_info
master-bin.000004 1601 0ec00eed-87f3-11eb-acd9-98af65266957:1-56

Now we are ready to perform the restore.

First, let’s restore the table foo from the backup. Restoring individual tablespaces requires the ALTER TABLE ... IMPORT TABLESPACE command. This command assumes that the table exists on the server. However, in our case it was dropped, and therefore we need to re-create it.

We will recreate the full database test from the file test_structure.sql.

Since we do not want these administrative tasks to be re-applied, I suggest disabling binary logging for the session which will recreate the database structure.

mysql> set sql_log_bin=0;
Query OK, 0 rows affected (0.00 sec)

mysql> source test_structure.sql
Query OK, 0 rows affected (0.00 sec)

Query OK, 0 rows affected (0.00 sec)
....

Once the tables are recreated, discard their tablespaces. I will show an example for the table foo; adjust the code for the rest of the tables.

mysql> alter table foo discard tablespace;
Query OK, 0 rows affected (0.01 sec)

Then, in another terminal, copy the tablespace and configuration files from the backup to the database directory:

cp full_backup/test/foo.{ibd,cfg} var/mysqld.1/data/test/

And, finally, import the tablespace:

mysql> alter table foo import tablespace;
Query OK, 0 rows affected (0.05 sec)

Repeat for the other tables in the database test.

Now you can re-enable binary logging.

You can do the same task in a script. For example:

for table in `mysql test --skip-column-names --silent -e "show tables"`
do
  mysql test -e "set sql_log_bin=0; alter table $table discard tablespace"
  cp full_backup/test/$table.{ibd,cfg} var/mysqld.1/data/test/
  mysql test -e "set sql_log_bin=0; alter table $table import tablespace"
done

Our tables are recovered but do not have the updates made after the backup.

mysql> select * from foo;
+----+------+
| id | f1   |
+----+------+
|  1 |    1 |
|  2 |    2 |
|  3 |    3 |
|  4 |    4 |
|  5 |    5 |
+----+------+
5 rows in set (0.00 sec)

mysql> select * from bar;
+----+------+
| id | f1   |
+----+------+
|  1 |    1 |
|  2 |    2 |
|  3 |    3 |
|  4 |    4 |
|  5 |    5 |
+----+------+
5 rows in set (0.00 sec)

mysql> select * from baz;
Empty set (0.00 sec)

Therefore, we need to restore data from the binary logs.

To do this, we first need to identify the GTIDs of the disaster events. We can dump all binary logs created or updated after the backup into a text file, and then search it for the DROP TABLE and DELETE statements so we can skip them.

First, let’s check which binary logs we have.

mysql> show binary logs;
+-------------------+-----------+
| Log_name          | File_size |
+-------------------+-----------+
| master-bin.000001 |   1527476 |
| master-bin.000002 |      3035 |
| master-bin.000003 |      1987 |
| master-bin.000004 |      2466 |
| master-bin.000005 |       784 |
+-------------------+-----------+
5 rows in set (0.00 sec)

So we need to parse them, starting from the log master-bin.000004 and position 1601:

mysqlbinlog --start-position=1601 -vvv --base64-output=decode-rows --database=test master-bin.000004 master-bin.000005 > binlog_test.sql

I used the option -vvv, which prints the SQL representation of row events so we can find the ones we want to skip, and --base64-output=decode-rows to avoid printing the raw row events. We will not use this file for the restore, only for finding the DROP TABLE and DELETE events.

Here they are, at positions 2007 and 2123, with GTIDs 0ec00eed-87f3-11eb-acd9-98af65266957:58 and 0ec00eed-87f3-11eb-acd9-98af65266957:59:

# at 2007
#210321 13:29:58 server id 1 end_log_pos 2123 CRC32 0xd1eb9854 Query thread_id=138 exec_time=0 error_code=0
use `test`/*!*/;
SET TIMESTAMP=1616322598/*!*/;
DROP TABLE `foo` /* generated by server */
/*!*/;
# at 2123
#210321 13:30:08 server id 1 end_log_pos 2188 CRC32 0xfc9b2088 GTID last_committed=7 sequence_number=8 rbr_only=yes original_committed_timestamp=0 immediate_commit_timestamp=0 transaction_length=0
/*!50718 SET TRANSACTION ISOLATION LEVEL READ COMMITTED*//*!*/;
# original_commit_timestamp=0 (1970-01-01 02:00:00.000000 EET)
# immediate_commit_timestamp=0 (1970-01-01 02:00:00.000000 EET)
/*!80001 SET @@session.original_commit_timestamp=0*//*!*/;
/*!80014 SET @@session.original_server_version=0*//*!*/;
/*!80014 SET @@session.immediate_server_version=0*//*!*/;
SET @@SESSION.GTID_NEXT= '0ec00eed-87f3-11eb-acd9-98af65266957:59'/*!*/;
# at 2188
#210321 13:30:08 server id 1 end_log_pos 2260 CRC32 0x1d525b11 Query thread_id=138 exec_time=0 error_code=0
SET TIMESTAMP=1616322608/*!*/;
BEGIN
/*!*/;
# at 2260
#210321 13:30:08 server id 1 end_log_pos 2307 CRC32 0xb57ecb73 Table_map: `test`.`bar` mapped to number 226
# at 2307
#210321 13:30:08 server id 1 end_log_pos 2387 CRC32 0x6770a7e2 Delete_rows: table id 226 flags: STMT_END_F
### DELETE FROM `test`.`bar`
### WHERE
### @1=1 /* INT meta=0 nullable=0 is_null=0 */
### @2=1 /* INT meta=0 nullable=1 is_null=0 */
### DELETE FROM `test`.`bar`
### WHERE
...

Note that the decoded row event contains a DELETE statement for each affected row.

We can also find which binary log this event belongs to by searching for the "Rotate to" event. In our case, “Rotate to master-bin.000005” happened after the found positions, so we only need the file master-bin.000004. In your case, you may need to skip events from previous log files too.

So, to restore the data, we need to run mysqlbinlog one more time, this time with different parameters:

mysqlbinlog  --start-position=1601 --exclude-gtids=0ec00eed-87f3-11eb-acd9-98af65266957:58-59 --database=test --skip-gtids=true master-bin.000004 master-bin.000005 > binlog_restore.sql

I removed the option -vvv because we are not going to examine this restore file, and --base64-output=decode-rows because we need the row events to be present in the resulting file. I also used the option --exclude-gtids=0ec00eed-87f3-11eb-acd9-98af65266957:58-59 to exclude the GTIDs that we do not want to re-apply. We also need to use --skip-gtids=true because otherwise the updates would be skipped, since such GTIDs already exist on the server.

Now binlog_restore.sql contains all updates to the database test made after the backup, except the harmful DROP TABLE and DELETE events that we excluded. Let’s restore it.

mysql test < binlog_restore.sql

The restore went successfully. Our tables have all the past updates.

mysql> select * from foo;
+----+------+
| id | f1   |
+----+------+
|  1 |    2 |
|  2 |    4 |
|  3 |    6 |
|  4 |    8 |
|  5 |   10 |
+----+------+
5 rows in set (0.01 sec)

mysql> select count(*) from bar;
+----------+
| count(*) |
+----------+
|       10 |
+----------+
1 row in set (0.00 sec)

mysql> select count(*) from baz;
+----------+
| count(*) |
+----------+
|        5 |
+----------+
1 row in set (0.00 sec)

Conclusion

You can save time during PITR if you use the per-database restore method. However, you need to take into account the following considerations:

  • mysqlbinlog does not support filtering per table; therefore, you either need to restore the full database or use the fake server method described in MySQL Point in Time Recovery the Right Way.
  • Per-database filters depend on the USE statement in the statement-based binary log format. Therefore, the option --database can only be considered safe with the row-based format.
  • If you do not use GTIDs, you can still use this method: combine the options --start-position and --stop-position to skip the event, as in the sketch below.
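
For illustration, here is a minimal sketch of the position-based variant, using the binary logs from the example above. The boundary positions 2007 and 2387 are illustrative; take the real start and end positions of the transactions you want to skip from the mysqlbinlog output:

# Apply changes from the backup position up to the first harmful event...
mysqlbinlog --start-position=1601 --stop-position=2007 --database=test master-bin.000004 > binlog_restore.sql
# ...then everything after the last harmful transaction, to the end of the logs.
mysqlbinlog --start-position=2387 --database=test master-bin.000004 master-bin.000005 >> binlog_restore.sql
mysql test < binlog_restore.sql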

Percona XtraBackup is a free, open source database backup solution for Percona Server for MySQL and MySQL.

Jan
12
2021
--

MySQL Backup and Recovery Best Practices

In this blog, we will review all the backup and restore strategies for MySQL, the cornerstones of any application. There are a few options, depending on your topology, MySQL versions, etc. Based on that, there are some questions we need to ask ourselves to make sure we make the right choices.

How many backups do we need to keep safe, and what is the best retention policy for us?

This means the number of backups to safeguard, whether local or remote (external fileserver, cloud). The retention policy can be daily, weekly, or monthly, depending on the free space available.

What is the Recovery Time Objective?

The Recovery Time Objective (RTO) refers to the amount of time that may pass during a disruption before it exceeds the maximum allowable threshold specified in the Business Continuity Plan.

The key question related to RTO is, “How quickly must the data on this system be restored?”

What is the Recovery Point Objective?

The Recovery Point Objective (RPO) is the duration of time and service level within which a business process must be restored after a disaster in order to avoid unacceptable consequences associated with a break in continuity.

The key question related to RPO is, “How much data can we lose?”

Different Types of Backups

There are two backup types: physical and logical.

  • Physical (Percona XtraBackup, RDS/LVM snapshots, MySQL Enterprise Backup); you can also use the cp or rsync command-line tools to copy the datadir, as long as MySQL is down/stopped.
  • Logical (mysqldump, mydumper, mysqlpump, and MySQL Shell, the latter only for MySQL 8.0)

It is also recommended to take a copy of the binlog files. Why? Because this will help us recover up to the last transaction.

Why are backups needed?

Backups are needed in case of multiple problems:

  • Host Failure: We can get multiple problems from stalled or broken disks. Also, with cloud services, our DB instance can break and become inaccessible.
  • Corrupted Data: This can happen during a power outage, when MySQL is not able to write correctly and close the file; sometimes MySQL cannot start again due to corrupted data, and the crash recovery process cannot fix it.
  • Inconsistent Data: A human mistake, such as a delete/update of erroneous data on the primary or a replica node.
  • DataCenter Failure: A power outage or internet provider issues.
  • Legislation/Regulation: Keeping backups may be required to provide consistent business value and customer satisfaction.

Now let me explain the different types of backups mentioned above. But before I continue, it’s important to configure a new, dedicated replica node for backup purposes (AKA a backup server), because backups generate a high CPU load and could cause issues on any other replica node.

Logical Backup

This is a dump of the logical database structure (CREATE DATABASE, CREATE TABLE statements) and content (INSERT statements). It is recommended for smaller amounts of data. The disadvantage of this method is that it is slower (for both backup and restore) compared with physical backups. Using mydumper you can back up and restore a single database or a single table if needed, which is useful for copying some data to a different environment to run tests. Also, mydumper can take a consistent backup (as long as all the tables use the InnoDB engine) and provides accurate master and slave log positions.

The output is larger than for a physical backup, particularly when saved in text format, but it can be compressed on the fly depending on the software you are using: mydumper can compress, while mysqldump needs a pipe to redirect the output to gzip, for example, as in the sketch below.
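
For illustration, a minimal sketch of on-the-fly compression and restore with mysqldump; the database and file names are hypothetical:

# Compress the dump as it is written, then restore by decompressing on the fly.
mysqldump --single-transaction --triggers --routines mydb | gzip > mydb.sql.gz
zcat mydb.sql.gz | mysql mydb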

Logical backups are used to address data corruption or the need to restore a subset of tables.

Physical (Raw) Backup

In short, this consists of exact copies of the database directories and files. It can be a copy of all, or a part of, the MySQL datadir. This kind of backup is most often used to restore or create a new replica node easily and quickly, and it addresses host failure. It’s recommended to restore using the same MySQL version. I recommend using Percona XtraBackup because it can include any related files, such as configuration (cnf) files.

Snapshot Backups

Some file system implementations enable “snapshots” to be taken. These provide logical copies of the file system at a given point in time, without requiring a physical copy of the entire file system. MySQL itself does not provide the capability for taking file system snapshots but it is available using third-party solutions such as LVM or ZFS.
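
For illustration, a minimal sketch of an LVM snapshot backup. The volume group, volume names, and mount points are hypothetical, and the FLUSH TABLES WITH READ LOCK must be held by a single session while the snapshot is created (here via the mysql client's \! shell escape):

# The lock must be held while the snapshot is taken, so the whole
# sequence runs through a single mysql client session.
mysql <<'EOF'
FLUSH TABLES WITH READ LOCK;
\! lvcreate --snapshot --size 10G --name mysql_snap /dev/vg0/mysql_data
UNLOCK TABLES;
EOF

# Mount the snapshot read-only, copy the data off-host, then drop it.
mount -o ro /dev/vg0/mysql_snap /mnt/mysql_snap
rsync -a /mnt/mysql_snap/ backup-server:/backups/mysql_snapshot/
umount /mnt/mysql_snap
lvremove -f /dev/vg0/mysql_snap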

The disadvantage is that sometimes physical backups do not compress much, because data is usually in a binary format and sometimes the table is already compressed.

Binary Log Backups

Binlog backups specifically address RPO. Binary log files contain records of each SQL statement that made changes to the data.

From MySQL 5.6 on, you can use mysqlbinlog to stream binary logs from a remote server. You can combine binlog backups with Percona XtraBackup or mydumper backup to allow restoration up to the end of the most-recently-backed-up binary log.
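
For illustration, a minimal sketch of continuous binlog streaming with mysqlbinlog; the host, user, and starting log file name are hypothetical:

# Stream binary logs from a remote server in raw (binary) format, starting
# from the given file and never stopping, so new events keep arriving.
mysqlbinlog --read-from-remote-server --host=db1.example.com --user=backup -p \
  --raw --stop-never master-bin.000001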

Incremental / Differential Backups

An incremental backup is a backup of everything that has changed since the last backup (a binary log backup is a special case of an incremental backup). This is a very good option if the dataset size is huge, as you can take a full backup at the beginning of the week and run incremental backups per day. Also, the backup size is smaller than the full backup.

The main risks associated with incremental backups are:

– A single corrupt incremental backup may invalidate all the others

– Incremental backups typically negatively affect the RTO

A differential backup copies the differences since your last backup. Because a lot of data does not change from one backup to the next, the result can be significantly smaller backups, which saves disk space.

Percona XtraBackup supports both incremental and differential backups.
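
For illustration, a minimal sketch of a weekly full backup plus daily incrementals with Percona XtraBackup; the directory names are hypothetical:

# Sunday: full backup.
xtrabackup --backup --target-dir=/backups/full
# Monday: copy only the pages changed since the full backup.
xtrabackup --backup --target-dir=/backups/inc1 --incremental-basedir=/backups/full
# Tuesday: copy only the pages changed since Monday's incremental.
xtrabackup --backup --target-dir=/backups/inc2 --incremental-basedir=/backups/inc1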

Offsite Storage

It’s highly recommended to copy the backups taken with any of these methods to another place, like the cloud or an external file server, so in case of host failure or data center failure, you have another copy.

Not all the backup files need to be uploaded to the cloud; sometimes the time spent downloading them is longer than the time consumed by the recovery process itself.

A good approach is to keep 1-7 days of backups locally on the backup server in case a fast recovery is needed; the exact window depends on your business regulations.
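
For illustration, a minimal sketch of an offsite copy using the AWS CLI; the bucket and file names are hypothetical:

# Archive the backup directory, then upload it to object storage.
tar -czf full_backup-$(date -I).tar.gz full_backup/
aws s3 cp full_backup-$(date -I).tar.gz s3://my-db-backups/$(hostname)/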

Encryption

Backups contain sensitive data, so it’s highly recommended to encrypt them, especially for offsite storage. This adds more time when you need to restore a backup, but it keeps your data safe.

GPG is a good option to encrypt backups, and if you use it or some other alternative, don’t forget to keep a copy of the keys/passphrase. If you lose it, your backups will be useless.
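
For illustration, a minimal sketch of symmetric GPG encryption; the file names are hypothetical:

# Encrypt the archive with a passphrase; this produces full_backup.tar.gz.gpg.
gpg --symmetric --cipher-algo AES256 full_backup.tar.gz
# Decrypt when a restore is needed.
gpg --output full_backup.tar.gz --decrypt full_backup.tar.gz.gpg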

Restore Testing

Depending on your business, it’s highly recommended to test your backups at least once per month. This validates that your backups are not corrupted and provides critical metrics on recovery time. The process should be automated: take the full backup, restore it, and finally configure the restored server as a replica of the current primary or another replica. This also validates that the replication process has no errors.
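
For illustration, a minimal sketch of the restore part of such an automated test with Percona XtraBackup; the paths and replication credentials are hypothetical, and GTID-based replication is assumed:

# Prepare and restore the latest full backup into an empty datadir.
xtrabackup --prepare --target-dir=/backups/latest_full
xtrabackup --copy-back --target-dir=/backups/latest_full --datadir=/var/lib/mysql
chown -R mysql:mysql /var/lib/mysql
systemctl start mysql
# Configure the restored server as a replica and check that replication runs cleanly.
mysql -e "CHANGE MASTER TO MASTER_HOST='primary.example.com', MASTER_USER='repl', MASTER_PASSWORD='***', MASTER_AUTO_POSITION=1"
mysql -e "START SLAVE"
mysql -e "SHOW SLAVE STATUS\G" | grep -E 'Slave_(IO|SQL)_Running'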

Many customers are using this methodology to refresh their QA/STG environment to have fresh data from production backups.

In addition to the above, it is recommended to create manual or automated restore documentation that keeps all the steps together, so in case of disaster you can follow it without wasting time.

Retention Requirements

Last but not least, it is very important to keep multiple copies of different backup types.

Our best recommendation is:

  • One or two physical backups locally on the backup server (as long as space allows it).
  • Seven daily and four weekly logical backups locally on the backup server.
  • 30 days of binlog backups locally on the backup server.
  • For offsite backups (like S3, Google Cloud, etc.), keep monthly backups for one year or more.

For local backups, keep in mind you will need a minimum of 2.5 times the current dataset size as free disk space to save/meet these retention policies. Don’t forget to encrypt all the backup types!

Legal or regulatory requirements may also dictate how long data must be archived.

Percona Can Help

Percona can help you choose, implement, and optimize the most appropriate MySQL backup and recovery solution for your MySQL ecosystem. If your current solution unexpectedly fails, we can facilitate your recovery with onsite, remote, or emergency consulting services. We can also help you take steps to prevent another occurrence. Every situation is unique and we will work with you to create the most effective solution for your business.

Contact Us

Nov
06
2020
--

Various Backup Compression Methods Using Mysqlpump

Mysqlpump is a client program that was released with MySQL 5.7.8 and is used to perform logical backups in a better way. Mysqlpump supports parallelism and can create compressed output. Pablo already wrote a blog about this utility (The mysqlpump Utility), and in this blog, I am going to explore the available compression techniques in the Mysqlpump utility.

Overview

Mysqlpump has three compression-related options.

--compress: Used to compress all the information sent between the client and the server.

--compression-algorithms: Added in MySQL 8.0.18; used to define the permitted compression algorithms for connections to the server (available options: zlib, zstd, uncompressed).

--compress-output: Used to define the compression algorithm for the backup file (available options: lz4, zlib).

Here, --compress-output is the option that defines the compression algorithm for the backup file. It supports two algorithms:

  • Lz4
  • Zlib

Lz4: LZ4 is a lossless data compression algorithm that is focused on compression and decompression speed.

Zlib: zlib is a software library used for data compression. zlib compressed data are typically written with a gzip or a zlib wrapper. 

Lab Setup

To experiment with both compression techniques, I have installed the MySQL (8.0.22) server in my local environment. I also created the table “percona_test.mp_test”, which is 11GB in size.

[root@localhost]# mysql -e "select @@version, @@version_comment\G"
*************************** 1. row ***************************
        @@version: 8.0.22
@@version_comment: MySQL Community Server - GPL

[root@localhost]# mysql -e "select count(*) from percona_test.mp_test\G"
*************************** 1. row ***************************
count(*): 70698024

[root@localhost percona_test]# ls -lrth
total 11G
-rw-r-----. 1 mysql mysql 11G Oct 23 11:20 mp_test.ibd

Now, I am going to experiment with both compression algorithms.

Compression with Lz4

I am going to take the backup (table: mp_test) using the lz4 compression algorithm.

[root@localhost]# time mysqlpump --set-gtid-purged=off --compress --compress-output=lz4 percona_test mp_test > percona_test.mp_test.lz4
Dump progress: 0/1 tables, 250/70131715 rows
Dump progress: 0/1 tables, 133000/70131715 rows
Dump progress: 0/1 tables, 278500/70131715 rows
...
...
Dump progress: 0/1 tables, 70624000/70131715 rows
Dump completed in 540824
real 9m0.857s

It took about 9 minutes to complete, and the file size is 1.1GB, which looks like 10x compression.

[root@dc1 percona_test]# ls -lrth | grep lz4

-rw-r--r--. 1 root  root  1.1G Oct 23 12:47 percona_test.mp_test.lz4

Compression with Zlib

Now, I am going to start the backup with “zlib” algorithm.

[root@dc1]# time mysqlpump --set-gtid-purged=off --compress --compress-output=zlib percona_test mp_test > percona_test.mp_test.zlib
Dump progress: 0/1 tables, 250/70131715 rows
Dump progress: 0/1 tables, 133250/70131715 rows
Dump progress: 0/1 tables, 280250/70131715 rows
Dump progress: 0/1 tables, 428750/70131715 rows
...
...
Dump progress: 0/1 tables, 70627000/70131715 rows
Dump completed in 546249
real 10m6.436s

It took about 10 minutes to complete the process, and the file size is the same 1.1GB (10x compression).

[root@dc1]# ls -lrth | grep -i zlib

-rw-r--r--. 1 root  root  1.1G Oct 23 13:06 percona_test.mp_test.zlib
How to Decompress the Backup

MySQL community provides two utilities to decompress the backups.

  • zlib_decompress ( for zlib compression files )
  • lz4_decompress ( for lz4 compression files )

lz4_decompress

[root@dc1]# time lz4_decompress percona_test.mp_test.lz4 percona_test.mp_test.sql
real 0m45.287s
user 0m1.114s
sys 0m6.568s
[root@dc1]# ls -lrth | grep percona_test.mp_test.sql
-rw-r--r--. 1 root  root  9.1G Oct 23 13:30 percona_test.mp_test.sql

lz4 took 45 seconds to decompress the backup file.

zlib_decompress

[root@dc1]# time zlib_decompress percona_test.mp_test.zlib percona_test.mp_test.sql
real 0m35.553s
user 0m6.642s
sys 0m7.105s
[root@dc1]# ls -lrth | grep percona_test.mp_test.sql
-rw-r--r--. 1 root  root  9.1G Oct 23 13:49 percona_test.mp_test.sql

zlib took 36 seconds to decompress the backup file.

This is the procedure for compressing and decompressing backups with Mysqlpump. It seems both algorithms provide about 10x compression, and there is not much difference in execution time either, though the difference may become significant with a larger dataset.

Aug
07
2019
--

Percona XtraBackup 8.0.7 Is Now Available

Percona is glad to announce the release of Percona XtraBackup 8.0.7 on August 7, 2019. You can download it from our download site and apt and yum repositories.

Percona XtraBackup enables MySQL backups without blocking user queries, making it ideal for companies with large data sets and mission-critical applications that cannot tolerate long periods of downtime. Offered free as an open source solution, it drives down backup costs while providing unique features for MySQL backups.

In release 8.0.7, Percona XtraBackup enables making backups of databases that contain an encrypted system tablespace. The encrypted mysql tablespace is now also supported.

Percona XtraBackup 8.0.7 implements support for the lz4 compression algorithm, so you can make compressed backups using lz4 (--compress=lz4) in addition to the default quicklz method, as sketched below.
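
For illustration, a minimal sketch of taking and decompressing an lz4-compressed backup; the target directory is hypothetical:

# Take a backup with each data file compressed using lz4.
xtrabackup --backup --compress=lz4 --target-dir=/backups/full_lz4
# Decompress the files in place before running --prepare.
xtrabackup --decompress --target-dir=/backups/full_lz4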

All Percona software is open-source and free.

New Features and Improvements

  • Added support for system tablespace encryption. More information in PXB-1649.
  • Implemented support for the lz4 compression algorithm. More information in PXB-1857.

Bugs Fixed

  • When the encrypted tablespaces feature was enabled, encrypted and compressed tables were not usable on the joiner node (Percona XtraDB Cluster) via SST (State Snapshot Transfer) with the xtrabackup-v2 method. Bug fixed PXB-1867.
  • xbcloud did not update date related fields of the HTTP header when retrying a request. Bug fixed PXB-1874.
  • xbcloud did not retry to send the request after receiving the HTTP 408 error (request timeout). Bug fixed PXB-1875.
  • xtrabackup did not accept decimal fractions as values of the innodb_max_dirty_pages_pct option. Bug fixed PXB-1807.
  • If the user tried to merge an already prepared incremental backup, a misleading error was produced without informing that incremental backups may not be used twice. Bug fixed PXB-1862.

Other bugs fixed: PXB-1493, PXB-1557, PXB-1887, PXB-1870, PXB-1879, PXB-1901.

Release notes with all the improvements for version 8.0.7 are available in our online documentation. Please report any bugs to the issue tracker.

Jul
10
2019
--

Percona XtraBackup 2.4.15 Is Now Available

Percona is glad to announce the release of Percona XtraBackup 2.4.15 on July 10, 2019. You can download it from our download site and apt and yum repositories.

Percona XtraBackup enables MySQL backups without blocking user queries, making it ideal for companies with large data sets and mission-critical applications that cannot tolerate long periods of downtime. Offered free as an open source solution, it drives down backup costs while providing unique features for MySQL backups.

All Percona software is open-source and free.

Bugs Fixed

  • When the encrypted tablespaces feature was enabled, encrypted and compressed
    tables were not usable on the joiner node (Percona XtraDB Cluster) via SST
    (State Snapshot Transfer) with the xtrabackup-v2 method. Bug fixed PXB-1867.
  • xbcloud did not update date related fields of the HTTP
    header when retrying a request. Bug fixed PXB-1874.
  • xbcloud did not retry to send the request after receiving the HTTP 408
    error (request timeout). Bug fixed PXB-1875.
  • If the user tried to merge an already prepared incremental backup, a
    misleading error was produced without informing that incremental backups may
    not be used twice. Bug fixed PXB-1862.
  • xbcloud could crash with the Swift storage when project options were
    not included. Bug fixed PXB-1844.
  • xtrabackup did not accept decimal fractions as values of the
    innodb_max_dirty_pages_pct option. Bug fixed PXB-1807.

Other bugs fixed:  PXB-1850, PXB-1879, PXB-1887, PXB-1888, PXB-1890.

Release notes with all the improvements for Percona XtraBackup 2.4.15 are available in our online documentation. Please report any bugs to the issue tracker.

May
09
2019
--

Percona XtraBackup 8.0.6 Is Now Available

Percona is glad to announce the release of Percona XtraBackup 8.0.6 on May 9, 2019. You can download it from our download site and apt and yum repositories.

Percona XtraBackup enables MySQL backups without blocking user queries, making it ideal for companies with large data sets and mission-critical applications that cannot tolerate long periods of downtime. Offered free as an open source solution, it drives down backup costs while providing unique features for MySQL backups.

In version 8.0.6, Percona XtraBackup introduces the support of the MyRocks storage engine with Percona Server for MySQL version 8.0.15-6 or higher.

Percona XtraBackup 8.0.6 enables saving backups to an Amazon S3, MinIO, and Google Cloud Storage (using interoperability mode) when using xbcloud. The following example demonstrates how to use an Amazon S3 storage to make a full backup:

$ xtrabackup --backup --stream=xbstream --extra-lsndir=/tmp --target-dir=/tmp | \
 xbcloud put --storage=s3 \
 --s3-endpoint='s3.amazonaws.com' \
 --s3-access-key='YOUR-ACCESSKEYID' \
 --s3-secret-key='YOUR-SECRETACCESSKEY' \
 --s3-bucket='mysql_backups' \
 --parallel=10 \
 $(date -I)-full_backup

All Percona software is open-source and free.

New Features

  • Amazon S3 is now supported in xbcloud. More information in PXB-1813.
  • The MyRocks storage engine is now supported with Percona XtraBackup. More information in PXB-1754.

Bugs Fixed

  • Percona XtraBackup could fail to restore the undo tablespace created during or before incremental backup. Bug fixed PXB-1780.
  • A backup could fail if log_bin_index was defined in my.cnf. Bug fixed PXB-1801.
  • When the row format was changed during the backup, xtrabackup could crash during the incremental prepare stage. Bug fixed PXB-1824.
  • During the prepare phase, PXB could freeze and never finish execution. Bug fixed PXB-1819.
  • Percona XtraBackup could crash during the prepare stage when making a backup of a host running MySQL Server v8.0.16. Bug fixed PXB-1839.

Other bugs fixed: PXB-1809, PXB-1810, PXB-1832, PXB-1837.

Release notes with all the improvements for version 8.0.6 are available in our online documentation. Please report any bugs to the issue tracker.

May
01
2019
--

Percona XtraBackup 2.4.14 Is Now Available

Percona is glad to announce the release of Percona XtraBackup 2.4.14 on May 1, 2019. You can download it from our download site and apt and yum repositories.

Percona XtraBackup enables MySQL backups without blocking user queries, making it ideal for companies with large data sets and mission-critical applications that cannot tolerate long periods of downtime. Offered free as an open source solution, it drives down backup costs while providing unique features for MySQL backups.

Percona XtraBackup 2.4.14 enables saving backups to an Amazon S3, MinIO, and Google Cloud Storage (using interoperability mode) when using xbcloud. The following example demonstrates how to use an Amazon S3 storage to make a full backup:

$ xtrabackup --backup --stream=xbstream --extra-lsndir=/tmp --target-dir=/tmp | \
 xbcloud put --storage=s3 \
 --s3-endpoint='s3.amazonaws.com' \
 --s3-access-key='YOUR-ACCESSKEYID' \
 --s3-secret-key='YOUR-SECRETACCESSKEY' \
 --s3-bucket='mysql_backups' \
 --parallel=10 \
 $(date -I)-full_backup

All Percona software is open-source and free.

New Features

  • Amazon S3 is now supported in xbcloud. More information in PXB-1813.

Bugs Fixed

  • When the row format was changed during the backup, xtrabackup could crash
    during the incremental prepare stage. Bug fixed PXB-1824.
  • If compressed InnoDB undo tablespaces were not removed beforehand, the
    incremental backup could crash at the prepare stage. Bug fixed PXB-1552.

Other bugs fixed:  PXB-1771, PXB-1809, PXB-1837.

Release notes with all the improvements for version 2.4.14 are available in our online documentation. Please report any bugs to the issue tracker.

Feb
26
2019
--

Percona XtraBackup Now Supports Dump of InnoDB Buffer Pool

InnoDB keeps hot data in memory in its buffer, the InnoDB Buffer Pool. For a long time, when a MySQL instance needed to bounce, this hot cached data was lost and the instance required a warm-up period to perform as well as it did before the service restart.

That is not the case anymore. Newer versions of MySQL/MariaDB allow users to save the state of this buffer by dumping tablespace IDs and page IDs to a file on disk that is loaded automatically on startup, bringing the newly started server's buffer pool back to the state it was in prior to the restart.

Details about the MySQL implementation can be found at https://dev.mysql.com/doc/refman/5.7/en/innodb-preload-buffer-pool.html

With that in mind, Percona XtraBackup version 2.4.13 can now instruct MySQL to dump the contents of the buffer pool while taking a backup. This means you can restore the backup on a new server and have MySQL perform just like the original instance in terms of InnoDB Buffer Pool data.

How it works

The buffer pool dump happens at the beginning of the backup if --dump-innodb-buffer-pool is set.

The user can choose to change the default innodb_buffer_pool_dump_pct. If --dump-innodb-buffer-pool-pct is set, it stores the current MySQL innodb_buffer_pool_dump_pct value, then changes it to the desired percentage. After the backup finishes, the original value is restored.

The actual file copy happens at the end of the backup.
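
For illustration, a minimal sketch of a backup that includes a buffer pool dump; the target directory is hypothetical:

# Trigger a dump of 100% of the buffer pool page list at the start of the
# backup; the resulting ib_buffer_pool file is copied at the end of it.
xtrabackup --backup --dump-innodb-buffer-pool --dump-innodb-buffer-pool-pct=100 --target-dir=/backups/full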

Percona XtraDB Cluster

A very good use case is PXC/Galera. When a node initiates SST, we would like the joiner to have a copy of the InnoDB Buffer Pool from the donor. We can configure PXC nodes to do that:

[xtrabackup]
dump-innodb-buffer-pool
dump-innodb-buffer-pool-pct=100

Here is an example of a PXC node that just received SST:

Before PXB-1548:

[root@marcelo-altmann-pxb-pxc-3 ~]# systemctl stop mysql && rm -rf /var/lib/mysql/* && systemctl start mysql && mysql -psekret -e "SHOW ENGINE INNODB STATUS\G" | grep 'Database pages'
mysql: [Warning] Using a password on the command line interface can be insecure.
Database pages 311

The joiner started with a cold buffer pool.

After adding dump-innodb-buffer-pool and dump-innodb-buffer-pool-pct=100 to my.cnf:

[root@marcelo-altmann-pxb-pxc-3 ~]# systemctl stop mysql && rm -rf /var/lib/mysql/* && systemctl start mysql && mysql -psekret -e "SHOW ENGINE INNODB STATUS\G" | grep 'Database pages'
mysql: [Warning] Using a password on the command line interface can be insecure.
Database pages 30970

The joiner started with a copy of the buffer pool from the donor, which will reduce the joiner's warm-up period.

Conclusion

The new version of Percona XtraBackup can help minimize the time a newly restored backup needs to perform like the source server.


Photo by Jametlene Reskp on Unsplash

Jan
18
2019
--

Percona XtraBackup 2.4.13 Is Now Available

Percona is glad to announce the release of Percona XtraBackup 2.4.13 on January 18, 2019. You can download it from our download site and apt and yum repositories.

Percona XtraBackup enables MySQL backups without blocking user queries, making it ideal for companies with large data sets and mission-critical applications that cannot tolerate long periods of downtime. Offered free as an open source solution, it drives down backup costs while providing unique features for MySQL backups.

New features and improvements:

  • PXB-1548: Percona XtraBackup enables updating the ib_buffer_pool file with the latest pages present in the buffer pool using the --dump-innodb-buffer-pool option. Thanks to Marcelo Altmann for the contribution.

Bugs fixed

  • xtrabackup did not delete missing tables from the partial backup which led to error messages logged by the server on startup. Bug fixed PXB-1536.
  • The --history option did not work when autocommit was disabled. Bug fixed PXB-1569.
  • xtrabackup could fail to backup encrypted tablespace when it was recently created or altered. Bug fixed PXB-1648.
  • When the --throttle option was used, the applied value was different from the one specified by the user (off by one error). Bug fixed PXB-1668.
  • It was not allowed for MTS (multi-threaded slaves) without GTID to be backed up with --safe-slave-backup. Bug fixed PXB-1672.
  • Percona XtraBackup could crash when the ALTER TABLE … TRUNCATE PARTITION command was run during a backup without locking DDL. Bug fixed PXB-1679.
  • xbcrypt could display an assertion failure and generated core if the required parameters are missing. Bug fixed PXB-1683.
  • Using --lock-ddl-per-table caused the server to scan all records of partitioned tables which could lead to the “out of memory error”. Bugs fixed PXB-1691 and PXB-1698.
  • xtrabackup --prepare could hang while performing insert buffer merge. Bug fixed PXB-1704.
  • Incremental backups did not update xtrabackup_binlog_info with --binlog-info=lockless. Bug fixed PXB-1711.

Other bugs fixed: PXB-1570, PXB-1609, PXB-1632.

Release notes with all the improvements for version 2.4.13 are available in our online documentation. Please report any bugs to the issue tracker.
