Jul
15
2024
--

Percona Server for MySQL and Percona XtraBackup Now Available for ARM64

We’re excited to announce that both Percona Server for MySQL and Percona XtraBackup now support the ARM64 architecture on Red Hat Enterprise Linux (RHEL) 8/9 and Oracle Linux (OL) 8/9. The packages with the aarch64.rpm extension can be found on Percona Software Downloads. The aarch64.rpm file extension indicates that the RPM package is specifically built for the ARM64 architecture and intended […]

Dec
07
2023
--

Backup Mastery: Running Percona XtraBackup in Docker Containers

Ensuring the security and resilience of your data hinges on having a robust backup strategy, and Percona XtraBackup (PXB), our open source backup solution for all versions of MySQL, is designed to make backups a seamless procedure without disrupting the performance of your server in a production environment.

When combined with the versatility of Docker containers, it becomes a dynamic duo, offering a scalable approach to data backup and recovery. Let’s take a look at how they work together.

Working with Percona Server for MySQL 8.1 and PXB 8.1 Docker images

Start a Percona Server for MySQL 8.1 instance in a Docker container

Percona Server for MySQL has an official Docker image hosted on Docker Hub. For additional details on how to run an instance in a Docker environment, refer to this section in Percona Documentation:

sudo docker run --name percona-server-8.1 -v mysql_data:/var/lib/mysql -v /var/run/mysqld:/var/run/mysqld -p 3306:3306 -e MYSQL_ROOT_HOST=% -e MYSQL_ROOT_PASSWORD=mysql -d percona/percona-server:8.1.0

sudo docker run – This is the command to run a Docker container.

--name percona-server-8.1 – Assigns the name “percona-server-8.1” to the Docker container.

-v mysql_data:/var/lib/mysql – Creates a Docker volume named “mysql_data” and mounts it to the “/var/lib/mysql” directory inside the container. This is typically used to store MySQL data persistently.

-v /var/run/mysqld:/var/run/mysqld – Mounts the host’s “/var/run/mysqld” directory to the container’s “/var/run/mysqld” directory. This can be useful for sharing the MySQL socket file for communication between processes.

-p 3306:3306 – Maps port 3306 on the host to port 3306 on the container. This is the default MySQL port, and it allows you to access the MySQL server running inside the container from the host machine.

-e MYSQL_ROOT_HOST=% – Sets the environment variable MYSQL_ROOT_HOST to ‘%’ (which means any host). This is often used to allow root connections from any host.

-e MYSQL_ROOT_PASSWORD=mysql – Sets the environment variable MYSQL_ROOT_PASSWORD to ‘mysql’. This is the password for the MySQL root user.

-d – Runs the container in the background (detached mode).

percona/percona-server:8.1.0 – Specifies the Docker image to use for creating the container. In this case, it is Percona Server for MySQL version 8.1.0.

Note: 

  • To work with Percona Server for MySQL 8.0 Docker images, replace the Docker image name with percona/percona-server:8.0 and Docker container name with percona-server-8.0.
  • To work with Percona Server for MySQL 5.7 Docker images, replace the Docker image name with percona/percona-server:5.7 and Docker container name with percona-server-5.7.
  • Percona XtraBackup 8.1 can only take backups of Percona Server for MySQL 8.1. Similarly, Percona XtraBackup 8.0 and Percona XtraBackup 2.4 can only take backups of Percona Server for MySQL 8.0 and 5.7, respectively.
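For example, based on the notes above, the equivalent command for Percona Server for MySQL 8.0 changes only the container name and image tag:

sudo docker run --name percona-server-8.0 -v mysql_data:/var/lib/mysql -v /var/run/mysqld:/var/run/mysqld -p 3306:3306 -e MYSQL_ROOT_HOST=% -e MYSQL_ROOT_PASSWORD=mysql -d percona/percona-server:8.0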

Add data to the database

Let’s add some data to the Percona Server database. Create a test database and add a table t1 inside with five rows.

sudo docker exec -it percona-server-8.1 mysql -uroot -pmysql -e "CREATE DATABASE IF NOT EXISTS test;" >/dev/null 2>&1
sudo docker exec -it percona-server-8.1 mysql -uroot -pmysql -e "CREATE TABLE test.t1(i INT);" >/dev/null 2>&1
sudo docker exec -it percona-server-8.1 mysql -uroot -pmysql -e "INSERT INTO test.t1 VALUES (1), (2), (3), (4), (5);" >/dev/null 2>&1

Note:

  • In the case of Percona Server for MySQL 8.0, replace the container name with percona-server-8.0.
  • In the case of Percona Server for MySQL 5.7, replace the container name with percona-server-5.7.

Run Percona XtraBackup 8.1 in a container, take a backup, and prepare

The Docker command runs Percona XtraBackup 8.1 within a container using the data volume of the Percona Server container (percona-server-8.1). It performs a MySQL backup and stores the data on the volume (pxb_backup_data). The container is removed (--rm) after execution, providing a clean and efficient solution for MySQL backup operations. In the case of Percona XtraBackup 8.0 or 2.4, replace the Docker image name in the below command with percona/percona-xtrabackup:8.0 or percona/percona-xtrabackup:2.4, respectively.

sudo docker run --volumes-from percona-server-8.1 -v pxb_backup_data:/backup_81 -it --rm --user root percona/percona-xtrabackup:8.1 /bin/bash -c "xtrabackup --backup --datadir=/var/lib/mysql/ --target-dir=/backup_81 --user=root --password=mysql ; xtrabackup --prepare --target-dir=/backup_81"
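If you want to sanity-check the backup before restoring, you can list the contents of the backup volume from a throwaway container (an optional check, not part of the required workflow):

sudo docker run -v pxb_backup_data:/backup_81 -it --rm --user root percona/percona-xtrabackup:8.1 /bin/bash -c "ls -l /backup_81"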

Stop the Percona Server container

Before attempting to restore the backup, make sure the Percona Server container is stopped.

sudo docker stop percona-server-8.1

Note:

  • In the case of Percona Server for MySQL 8.0, run sudo docker stop percona-server-8.0.
  • In the case of Percona Server for MySQL 5.7, run sudo docker stop percona-server-5.7.

Remove the MySQL data directory

This step ensures that the MySQL data directory is empty before you attempt the --copy-back operation. Remember to replace the Docker image and container names in case Percona Server for MySQL 8.0 or 5.7 is used.

sudo docker run --volumes-from percona-server-8.1 -v pxb_backup_data:/backup_81 -it --rm --user root percona/percona-xtrabackup:8.1 /bin/bash -c "rm -rf /var/lib/mysql/*"

Run Percona XtraBackup 8.1 in a container to restore the backup

The Docker command uses the data volume from a Percona Server for MySQL 8.1 container (percona-server-8.1) and runs Percona XtraBackup 8.1 within a separate container. The command executes the xtrabackup --copy-back operation, restoring MySQL data from the specified directory (/backup_81) to the MySQL data directory (/var/lib/mysql).

sudo docker run --volumes-from percona-server-8.1 -v pxb_backup_data:/backup_81 -it --rm --user root percona/percona-xtrabackup:8.1 /bin/bash -c "xtrabackup --copy-back --datadir=/var/lib/mysql/ --target-dir=/backup_81"

Note:

  • When Percona XtraBackup 8.0 is used, replace the Docker image name with percona/percona-xtrabackup:8.0 and the Percona Server container name with percona-server-8.0.
  • When Percona XtraBackup 2.4 is used, replace the Docker image name with percona/percona-xtrabackup:2.4 and the Percona Server container name with percona-server-5.7.

Start the Percona Server container to verify the restored data

When we stop and remove the original Percona Server container, the ownership and permissions of the files in the mounted volumes may change. A more secure and targeted approach is to identify the correct user and group IDs needed for the MySQL process and set the ownership accordingly.

sudo docker run --volumes-from percona-server-8.1 -v pxb_backup_data:/backup_81 -it --rm --user root percona/percona-xtrabackup:8.1 /bin/bash -c "chown -R mysql:mysql /var/lib/mysql/"

This sets the correct ownership for the MySQL data directory. Now, start the Percona Server instance inside the container.

sudo docker start percona-server-8.1

Once the server is started, fetch the total number of records in the test.t1 table to verify the correctness of the restored data.

sudo docker exec -it percona-server-8.1 mysql -uroot -pmysql -Bse 'SELECT * FROM test.t1;' | grep -v password
1
2
3
4
5

Summary

To sum up, Percona XtraBackup is an essential tool for data protection because it provides a dependable and effective backup for MySQL databases. Its easy integration with Docker containers increases its usefulness even more by offering a scalable and adaptable method for recovering and backing up data.

We encourage users to continue using Percona XtraBackup and hope that this blog is useful. Happy MySQLing!

Percona XtraBackup is a free, open source, complete online backup solution for all versions of Percona Server for MySQL and MySQL. It performs online non-blocking, tightly compressed, highly secure backups on transactional systems so that applications remain fully available during planned maintenance windows.

Download Percona XtraBackup

Nov
08
2023
--

Percona XtraBackup 8.0.34 Removes the Server Version Check


With the release of Percona XtraBackup 8.0.34-29, Percona XtraBackup (PXB) allows a backup on server version 8.0.35 and higher. We have removed the server version check because the MySQL 8.0 series has transitioned to bug fixes.

A feature change in the MySQL 8.0 series, such as the redo log record structure, could cause PXB to fail when taking a backup. To ensure a valid backup, use a PXB version equal to or higher than your source server version, and prepare and restore your backups regularly to confirm they are complete. Do not assume that, because you have no error messages, the backup ran successfully.
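A quick manual check is to compare the two versions yourself before starting a backup (connection options here are illustrative):

xtrabackup --version
mysql -uroot -p -e "SELECT VERSION();"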

Before the backup starts, PXB compares the source server version to the PXB version to prevent a failed or corrupted backup due to source server changes. Percona XtraBackup 8.0.22 added the --no-server-version-check option. This option skips the check so that the backup can continue, which can have one of the following consequences:

  • Backup failed
  • Backup corrupted
  • Backup successful

The check had the following scenarios:

  • The source server version and the Percona XtraBackup version are the same: the backup continues.
  • The source server version is less than the Percona XtraBackup version: the backup continues.
  • The source server version is greater than the Percona XtraBackup version: the backup stops and generates an error message.
  • The source server version is greater than the Percona XtraBackup version; with the --no-server-version-check option, the backup continues.

With the release of MySQL 8.0.34, the MySQL 8.0 series transitioned to bug fixes. We do not expect any changes that would break backward compatibility. Therefore, Percona XtraBackup 8.0.34 will back up and restore a server version higher than 8.0.34.

Sep
19
2023
--

Use Physical Backups With MySQL InnoDB Redo Log Archiving


In the world of data backup and security, physical backups play an extremely important role. Physical backup methods are faster than logical because they involve only file copying without conversion. This type of backup is suitable for large, important databases that need to be recovered quickly when problems occur.

Physical backups are backups that consist of raw copies of the directories and files that store database contents. In addition to databases, the backup can include related files, such as log or configuration files. Since backup speed and compactness are important for busy, important databases, Percona’s open source physical backup solution, Percona XtraBackup (PXB), takes all these aspects into account and benefits the MySQL world with its exceptional capabilities!

This blog post will walk you through how PXB uses MySQL’s InnoDB redo log archiving to manage the redo log files on the busiest systems and look at how a new feature released in PXB version 8.0.34-29 lets you set up the redo log archive directory while performing a backup.

InnoDB redo log archiving

Backup utilities that copy redo log records may sometimes fail to keep pace with redo log generation while a backup operation is in progress, resulting in lost redo log records due to overwritten records. This issue most often occurs when there is significant MySQL server activity during the backup operation, and the redo log file storage media operates faster than the backup storage media. The redo log archiving feature, introduced in MySQL 8.0.17, addresses this issue by sequentially writing redo log records to an archive file and the redo log files. Backup utilities can copy redo log records from the archive file as necessary, thereby avoiding the potential loss of data.

Enabling redo log archiving on the server requires setting a value for the innodb_redo_log_archive_dirs system variable. The value is specified as a semicolon-separated list of labeled redo log archive directories. The label:directory pair is separated by a colon (:). For example:

mysql> SET GLOBAL innodb_redo_log_archive_dirs='label1:directory_path1[;label2:directory_path2;…]';

First, we are going to demonstrate how we tested InnoDB redo log archiving on Percona Server for MySQL 8.0.33-25 (GPL), using MySQL Shell (mysqlsh) to enable it.

  1. Create new directories for storing redo log archives.
[root@ip-xxx-xx-xx-xx ~]# mkdir -p /var/lib/mysql-redo-archive/backup1
[root@ip-xxx-xx-xx-xx ~]# chown mysql. -R /var/lib/mysql-redo-archive
[root@ip-xxx-xx-xx-xx ~]# chmod -R 700 /var/lib/mysql-redo-archive/

  2. Log in to mysqlsh as the root user and check the variable innodb_redo_log_archive_dirs.
[root@ip-xxx-xx-xx-xx ~]# mysqlsh -uroot -p
MySQL Shell 8.0.33
Copyright (c) 2016, 2023, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
Other names may be trademarks of their respective owners.
Type 'help' or '?' for help; 'quit' to exit.
WARNING: Using a password on the command line interface can be insecure.
Creating a session to 'root@localhost'
Fetching schema names for auto-completion... Press ^C to stop.
Your MySQL connection id is 10 (X protocol)
Server version: 8.0.33-25 Percona Server (GPL), Release 25, Revision 60c9e2c5
No default schema selected; type use <schema> to set one.
MySQL localhost:33060+ ssl  SQL >
MySQL localhost:33060+ ssl JS > \sql
Switching to SQL mode... Commands end with ;
Fetching global names for auto-completion... Press ^C to stop.

MySQL localhost:33060+ ssl SQL > show global variables like 'innodb_redo_log_ar%';
+------------------------------+-------+
| Variable_name                | Value |
+------------------------------+-------+
| innodb_redo_log_archive_dirs |       |
+------------------------------+-------+
1 row in set (0.0031 sec)

  3. Set the value of the variable innodb_redo_log_archive_dirs to the directory we created for redo log archives, as below.
MySQL  localhost:33060+ ssl  SQL > set persist innodb_redo_log_archive_dirs='backup1:/var/lib/mysql-redo-archive/';
Query OK, 0 rows affected (0.0019 sec)
MySQL  localhost:33060+ ssl  SQL >  show global variables like 'innodb_redo_log_ar%';
+------------------------------+--------------------------------------+
| Variable_name                | Value                                |
+------------------------------+--------------------------------------+
| innodb_redo_log_archive_dirs | backup1:/var/lib/mysql-redo-archive/ |
+------------------------------+--------------------------------------+
1 row in set (0.0025 sec)

  4. Now it’s ready to work, but it’s not enabled. It will be enabled when the session that initializes the backup invokes innodb_redo_log_archive_start().
MySQL  localhost:33060+ ssl  SQL > select innodb_redo_log_archive_start('backup1', 'backup1');
+-----------------------------------------------------+
| innodb_redo_log_archive_start('backup1', 'backup1') |
+-----------------------------------------------------+
|                                                   0 |
+-----------------------------------------------------+
1 row in set (0.0229 sec)

How to check if the redo log archiving is active

MySQL  localhost:33060+ ssl  SQL > select * from performance_schema.file_instances where event_name like '%::redo_log_archive_file'\G
*************************** 1. row ***************************
FILE_NAME: /var/lib/mysql-redo-archive/backup1/archive.f255a32d-2fb4-11ee-889e-0242ac110005.000001.log
EVENT_NAME: wait/io/file/innodb/meb::redo_log_archive_file
OPEN_COUNT: 1
1 row in set (0.0015 sec)

The presence of this file alone is not enough to ensure that redo log archiving is active. We can also check whether the archiving thread is active using this query:

MySQL  localhost:33060+ ssl  SQL > select thread_id, name, type from performance_schema.threads  where name like '%redo_log_archive%';
+-----------+-----------------------------------------------------+------------+
| thread_id | name                                                | type       |
+-----------+-----------------------------------------------------+------------+
|        48 | thread/innodb/meb::redo_log_archive_consumer_thread | BACKGROUND |
+-----------+-----------------------------------------------------+------------+
1 row in set (0.0021 sec)

If a row is returned, it means that the redo log archiving is enabled and active.

If you want the setting to persist across service restarts, add it to my.cnf under [mysqld]:

[mysqld]
innodb_redo_log_archive_dirs = redo-archived1:/mysql/archived-redo1

How to stop redo log archiving

MySQL  localhost:33060+ ssl  SQL > select innodb_redo_log_archive_stop();
+--------------------------------+
| innodb_redo_log_archive_stop() |
+--------------------------------+
|                              0 |
+--------------------------------+
1 row in set (0.0009 sec)

[root@ip-xxx-xx-xx-xx ~]# ls -ltrh /var/lib/mysql-redo-archive/backup1/
total 24K
-r--r-----. 1 mysql mysql 24K Aug 21 11:22 archive.94f4ab58-3d1c-11ee-9e4f-0af1fd7c44c9.000001.log

Revert the changes

To revert and disable redo log archiving:

mysql> SET GLOBAL innodb_redo_log_archive_dirs='';

How does Percona XtraBackup deal with redo log files?

PXB considers the log sequence number and starts copying away the data files when a backup is performed. The backup operation on InnoDB is a non-blocking operation, so the ibd files are allowed to change while the backup process is running. This means that while PXB is copying the files, each file reflects the state of the database at a different point in time. PXB needs the redo log records for every change to the data files since it began execution.

PXB runs a background process that watches the transaction log files and copies changes from them. It needs to do this continually because InnoDB writes the redo logs to disk circularly, and they can be overwritten after a while. However, prior to PXB version 8.0.34-29, if InnoDB redo log archiving was not enabled and the redo logs were written faster than PXB could copy them, it was possible to receive the below message:

xtrabackup: error: log block numbers mismatch:
xtrabackup: error: expected log block no. X, but got no. Y from the log file.
xtrabackup: error: it looks like InnoDB log has wrapped around before xtrabackup could process all records due to either log copying being too slow, or log files being too small.
xtrabackup: Error: xtrabackup_copy_logfile() failed.

Before the release of the redo log archiving feature in MySQL 8.0.17, the potential remedies for the above problem were:

  • Increasing the size of the redo logs so they are not wrapped while PXB is working.
  • Addressing slow read speed, which is usually a sign of IO congestion.
  • Addressing slow write speed, usually a sign of IO or network congestion.
  • Taking the backup at off-peak times when system saturation is lower.

However, now, with the use of the redo log archiving feature introduced in MySQL 8.0.17, you can overcome the slowness of physical backups and can efficiently perform the backup even during peak hours.

Also, PXB version 8.0.34-29 has introduced a new option, --redo-log-arch-dir, that can be used to set the redo log archiving directory if it is not already set in MySQL.

Using MySQL redo log archiving for Percona XtraBackup

So now we have a redo log file archived in the folder we created in the demonstration above.

[root@ip-xxx-xx-xx-xx ~]# ls -ltrh /var/lib/mysql-redo-archive/backup1/
total 24K
-r--r-----. 1 mysql mysql 24K Aug 21 11:22 archive.94f4ab58-3d1c-11ee-9e4f-0af1fd7c44c9.000001.log

Test with Percona XtraBackup

  1. Run PXB as the owner of the mysqld process, typically the mysql OS user. We will be using version 8.0.34-29.
[root@ip-xxx-xx-xx-xx ~]# xtrabackup --version
2023-09-11T16:07:17.696628-00:00 0 [Note] [MY-011825] [Xtrabackup] recognized server arguments: --datadir=/var/lib/mysql
xtrabackup version 8.0.34-29 based on MySQL server 8.0.34 Linux (x86_64) (revision id: 5ba706ee)

This is required for PXB to use the InnoDB archiving feature. Otherwise, it will encounter a “Permission denied” error and will not use archiving, as shown below:

2023-08-21T13:27:59.293520-00:00 0 [Note] [MY-011825] [Xtrabackup] xtrabackup redo_log_arch_dir is set to backup1:/var/lib/mysql-redo-archive/
2023-08-21T13:27:59.295379-00:00 0 [ERROR] [MY-011825] [Xtrabackup] failed to fetch query result select innodb_redo_log_archive_start('backup1', '1692710879293') : Cannot create redo log archive file '/var/lib/mysql-redo-archive/1692710879293/archive.94f4ab58-3d1c-11ee-9e4f-0af1fd7c44c9.000001.log' (OS errno: 13 - Permission denied)
2023-08-21T13:27:59.295429-00:00 0 [Note] [MY-011825] [Xtrabackup] Redo Log Archiving is not used.

PXB will create a temporary directory for the archive file. Since redo log archiving is performed by the MySQL server process, the directory needs to be owned by the mysql OS user so the server can write to it.

1.1) Option 1: Log in as the mysql OS user:

shell> sudo -H -u mysql bash

Execute:

shell> xtrabackup --no-lock=1 --compress --parallel=4 --host=localhost --user=root --password='password_string' --backup=1 --target-dir=/Backup/21Aug 2> /tmp/b0-with-redo-archiving-as-mysql-os-user.log

1.2) Option 2: Execute directly as the root OS user on behalf of the mysql user.

You must add -c to the command, like:

shell> sudo -H -u mysql bash -c '<COMMAND>' 2> /tmp/backup.log

Execute:

shell> sudo -H -u mysql bash -c "xtrabackup --no-lock=1 --compress --parallel=4 --host=localhost --user=root --password='password_string' --backup=1 --target-dir=/Backup/21Aug" 2> /tmp/b0-with-redo-archiving-as-mysql-os-user.log

2) Verify archiving is used:

shell> less /tmp/b0-with-redo-archiving-as-mysql-os-user.log 
2023-08-22T13:36:02.773345-00:00 0 [Note] [MY-011825] [Xtrabackup] recognized server arguments: --datadir=/var/lib/mysql  
2023-08-22T13:36:02.773543-00:00 0 [Note] [MY-011825] [Xtrabackup] recognized client arguments: --no-lock=1 --compress --parallel=4 --host=localhost --user=root --password=* --backup=1 --target-dir=/Backup/22Aug  xtrabackup version 8.0.34-29 based on MySQL server 8.0.34 Linux (x86_64) (revision id: 5ba706ee) 230822 13:36:02  version_check Connecting to MySQL server with DSN 'dbi:mysql:;mysql_read_default_group=xtrabackup;host=localhost' as 'root'  (using password: YES). 230822 13:36:02  version_check Connected to MySQL server 230822 13:36:02  version_check Executing a version check against the server... 230822 13:36:02  version_check Done. 
2023-08-22T13:36:02.899397-00:00 0 [Note] [MY-011825] [Xtrabackup] Connecting to MySQL server host: localhost, user: root, password: set, port: not set, socket: not set 2023-08-22T13:36:02.906059-00:00 0 [Note] [MY-011825] [Xtrabackup] Using server version 8.0.34 
.... 
.... 
2023-08-22T13:36:02.980089-00:00 0 [Note] [MY-011825] [Xtrabackup] Connecting to MySQL server host: localhost, user: root, password: set, port: not set, socket: not set 
2023-08-22T13:36:02.986698-00:00 0 [Note] [MY-011825] [Xtrabackup] xtrabackup redo_log_arch_dir is set to backup1:/var/lib/mysql-redo-archive/ 
2023-08-22T13:36:03.015207-00:00 0 [Note] [MY-011825] [Xtrabackup] Waiting for archive file '/var/lib/mysql-redo-archive//1692711362986/archive.94f4ab58-3d1c-11ee-9e4f-0af1fd7c44c9.000001.log' 
2023-08-22T13:36:03.091572-00:00 1 [Note] [MY-011825] [Xtrabackup] >> log scanned up to (19630018) 
.... 
.... 
2023-08-22T13:36:04.569714-00:00 0 [Note] [MY-011825] [Xtrabackup] Done: Compressing file /Backup/22Aug/xtrabackup_info.zst 
2023-08-22T13:36:05.583075-00:00 0 [Note] [MY-011825] [Xtrabackup] Transaction log of lsn (19630018) to (19630028) was copied. 
2023-08-22T13:36:05.790293-00:00 0 [Note] [MY-011825] [Xtrabackup] completed OK!

How to set --redo-log-arch-dir=name in the PXB command

PXB 8.0.34-29 introduces a new option, --redo-log-arch-dir=name. This option sets the redo log archive directory if this directory is not already set on the server. Here is a short example demonstrating the functioning of the --redo-log-arch-dir parameter.

Suppose innodb_redo_log_archive_dirs is NULL, meaning the redo log archive directory is not initially set in MySQL, as shown below.

mysql>  select @@innodb_redo_log_archive_dirs;
+--------------------------------+
| @@innodb_redo_log_archive_dirs |
+--------------------------------+
| NULL                           |
+--------------------------------+
1 row in set (0.01 sec)

Create a redo log file directory and give the appropriate permissions

[root@ip-xxx-xx-xx-xx ~]# cd /var/lib/mysql-redo-archive/
[root@ip-xxx-xx-xx-xx ~]# mkdir redo_arch_backup

### Check mysql-redo-archive folder Permissions
[root@ip-xxx-xx-xx-xx ~]# ls -ltrh /var/lib/ |grep archive 
drwx------.  4 mysql  mysql     51 Sep 11 17:16 mysql-redo-archive

As you can see below, PXB is all set to use the new redo archive directory just by passing the archive directory path to the --redo-log-arch-dir parameter:

--redo-log-arch-dir=redo_arch_backup:/var/lib/mysql-redo-archive/

[root@ip-xxx-xx-xx-xx ~]# sudo -H -u mysql bash
bash-5.1$ xtrabackup --no-lock=1 --compress --parallel=4 --host=localhost --user=root --password='password_string' --backup=1 --target-dir=/Backup/11Sept23_bkp --redo-log-arch-dir=redo_arch_backup:/var/lib/mysql-redo-archive/ 2> /tmp/a1-with-redo-archiving-as-mysql-os-user.log

Verifying the PXB logs:

[root@ip-xxx-xx-xx-xx ~]# cat /tmp/a1-with-redo-archiving-as-mysql-os-user.log
2023-09-11T17:16:47.836951-00:00 0 [Note] [MY-011825] [Xtrabackup] recognized server arguments: --datadir=/var/lib/mysql
2023-09-11T17:16:47.837105-00:00 0 [Note] [MY-011825] [Xtrabackup] recognized client arguments: --no-lock=1 --compress --parallel=4 --host=localhost --user=root --password=* --backup=1 --target-dir=/Backup/11Sept23_bkp --redo_log_arch_dir=redo_arch_backup:/var/lib/mysql-redo-archive/
xtrabackup version 8.0.34-29 based on MySQL server 8.0.34 Linux (x86_64) (revision id: 5ba706ee)
230911 17:16:47  version_check Connecting to MySQL server with DSN 'dbi:mysql:;mysql_read_default_group=xtrabackup;host=localhost' as 'root'  (using password: YES).
230911 17:16:47  version_check Connected to MySQL server
230911 17:16:47  version_check Executing a version check against the server...
230911 17:16:47  version_check Done.
2023-09-11T17:16:47.956847-00:00 0 [Note] [MY-011825] [Xtrabackup] Connecting to MySQL server host: localhost, user: root, password: set, port: not set, socket: not set
....
....
2023-09-11T17:16:48.028506-00:00 0 [Note] [MY-011825] [Xtrabackup] Connecting to MySQL server host: localhost, user: root, password: set, port: not set, socket: not set
2023-09-11T17:16:48.034515-00:00 0 [Note] [MY-011825] [Xtrabackup] xtrabackup redo_log_arch_dir is set to redo_arch_backup:/var/lib/mysql-redo-archive/
2023-09-11T17:16:48.056504-00:00 0 [Note] [MY-011825] [Xtrabackup] Waiting for archive file '/var/lib/mysql-redo-archive//1694452608034/archive.94f4ab58-3d1c-11ee-9e4f-0af1fd7c44c9.000001.log'
2023-09-11T17:16:48.137941-00:00 1 [Note] [MY-011825] [Xtrabackup] >> log scanned up to (19816218)
2023-09-11T17:16:48.139886-00:00 0 [Note] [MY-012953] [InnoDB] Disabling background ibuf IO read threads.
2023-09-11T17:16:48.343117-00:00 0 [Note] [MY-011825] [Xtrabackup] Generating a list of tablespaces
....
....
2023-09-11T17:16:49.496740-00:00 0 [Note] [MY-011825] [Xtrabackup] Compressing /Backup/11Sept23_bkp/backup-my.cnf.zst
2023-09-11T17:16:49.496789-00:00 0 [Note] [MY-011825] [Xtrabackup] Done: Compressing file /Backup/11Sept23_bkp/backup-my.cnf.zst
2023-09-11T17:16:49.498483-00:00 0 [Note] [MY-011825] [Xtrabackup] Compressing /Backup/11Sept23_bkp/xtrabackup_info.zst
2023-09-11T17:16:49.498521-00:00 0 [Note] [MY-011825] [Xtrabackup] Done: Compressing file /Backup/11Sept23_bkp/xtrabackup_info.zst
2023-09-11T17:16:50.501818-00:00 0 [Note] [MY-011825] [Xtrabackup] Transaction log of lsn (19816218) to (19816228) was copied.
2023-09-11T17:16:50.708435-00:00 0 [Note] [MY-011825] [Xtrabackup] completed OK!

mysql>  select @@innodb_redo_log_archive_dirs;
+-----------------------------------------------+
| @@innodb_redo_log_archive_dirs                |
+-----------------------------------------------+
| redo_arch_backup:/var/lib/mysql-redo-archive/ |
+-----------------------------------------------+
1 row in set (0.00 sec)

This means that if innodb_redo_log_archive_dirs is not set in MySQL, PXB 8.0.34-29 will set it through the --redo-log-arch-dir option and start using the redo log archive when a backup is initiated. When it is not set explicitly through --redo-log-arch-dir, PXB, by default, will first check innodb_redo_log_archive_dirs. If this variable is set in MySQL, as demonstrated previously, PXB will use the redo log archiving directory assigned in the innodb_redo_log_archive_dirs variable while performing the backup.

Conversely, if the MySQL variable innodb_redo_log_archive_dirs is NULL and you did not explicitly specify --redo-log-arch-dir while taking the backup, then PXB won’t use the redo log archiving feature.

Conclusion

This feature is crucial for heavy workloads where the backup storage doesn’t have the same capabilities as the production storage and cannot keep up with the speed of the writes.

When enabled by the DBA, Percona XtraBackup will detect the archived redo log files and use them automatically. To verify that the redo log archiving process was started and is still active, you can check the performance_schema.threads table as illustrated above. To use this feature with Percona XtraBackup, you can also explicitly pass --redo-log-arch-dir=name.

Percona XtraBackup is a free, open source, complete online backup solution for all versions of Percona Server for MySQL and MySQL.

 

Try Percona XtraBackup today!

 

Related links:

https://docs.percona.com/percona-xtrabackup/8.0/release-notes/8.0/8.0.34-29.0.html

https://docs.percona.com/percona-xtrabackup/8.0/xtrabackup-option-reference.html#-redo-log-arch-dirname

https://dev.mysql.com/doc/refman/8.0/en/innodb-redo-log.html#innodb-redo-log-archiving

https://www.percona.com/blog/innodb-redo-log-archiving/

 

Aug
23
2023
--

The Deprecation of qpress/QuickLZ Compression Algorithm


To reduce the backup size, save storage space, and speed up the backup and restore process, you can compress a backup with Percona XtraBackup. The XtraBackup --compress option makes XtraBackup compress all output data, including the transaction log file and metadata files, with one of the supported compression algorithms. To decompress all files in a backup made with the --compress option, use the --decompress option.

Version changes

  • With Percona XtraBackup 8.0.34-29, qpress/QuickLZ is no longer supported for compress operations. The Zstandard (ZSTD) compression algorithm is moved to General Availability. With this version, ZSTD becomes the default compression algorithm for the --compress option. The alternative compression algorithm is LZ4.

    To compress files using the ZSTD compression algorithm, use the --compress option:

    xtrabackup --backup --compress --target-dir=/data/backup

    To compress files using the LZ4 compression algorithm, set the --compress option to LZ4:

    xtrabackup --backup --compress=lz4 --target-dir=/data/backup

    To decompress all files in a backup, use the --decompress option (a parallel variant is sketched after this list):

    xtrabackup --decompress --target-dir=/data/compressed/

    To decompress backups taken by older versions of Percona XtraBackup that used a QuickLZ compression algorithm, the --decompress option still supports qpress for backward compatibility.

  • Up to Percona XtraBackup 8.0.33-28, the --compress option uses the QuickLZ compression algorithm by default. When using --compress, the resulting files have the qpress (*.qp) archive format.

    To compress files using the QuickLZ compression algorithm, use the --compress option:

    xtrabackup --backup --compress --target-dir=/data/backup

    Every *.qp file produced by XtraBackup is a one-file qpress archive. You can extract the contents of these files with the --decompress option that supports the qpress file archiver.

  • Starting with Percona XtraBackup 8.0.31-24, the use of qpress/QuickLZ to compress backups is deprecated. Percona recommends using either LZ4 or ZSTD compression algorithms. 
  • Percona XtraBackup 8.0.30-23 adds the ZSTD compression algorithm as a tech preview. ZSTD is a fast lossless compression algorithm that targets real-time compression scenarios and better compression ratios.

    To compress files using the ZSTD compression algorithm, set the --compress option to zstd.

    xtrabackup --backup --compress=zstd --target-dir=/data/backup

    The --compress=zstd option produces *.zst files. You can extract the contents of these files with the --decompress option.

    Also, you can specify the ZSTD compression level with the --compress-zstd-level(=#) option as follows:

    xtrabackup --backup --compress --compress-zstd-level=1 --target-dir=/data/backup
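Decompression can also be parallelized. A minimal sketch, assuming a compressed backup already exists in the target directory:

xtrabackup --decompress --parallel=4 --target-dir=/data/backup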

 

Percona XtraBackup is a free, open source, complete online backup solution for all versions of Percona Server for MySQL and MySQL. It performs online non-blocking, tightly compressed, highly secure backups on transactional systems so that applications remain fully available during planned maintenance windows.

 

Download Percona XtraBackup

Jul
25
2023
--

20X Faster Backup Preparation With Percona XtraBackup 8.0.33-28!


In this blog post, we will describe the improvements in Percona XtraBackup 8.0.33-28 (PXB) that significantly reduce the time to prepare backups before the restore operation. This improvement also significantly reduces the time required for a new node to join Percona XtraDB Cluster (PXC).

Percona XtraDB Cluster uses Percona XtraBackup to perform SST (State Snapshot Transfer) from one node to another. When a new node joins the cluster, SST is performed to transfer the data from the DONOR to the JOINER. The JOINER uses PXB to stream the data directory from the DONOR and must prepare the backup before using it. It was observed that when the DONOR has a huge number of tablespaces (one million), XtraBackup on the JOINER side couldn’t complete preparing the data (xtrabackup --prepare).

Percona XtraBackup copies the InnoDB data files. The data is internally inconsistent because the server concurrently modifies the data files while they are being copied. Percona XtraBackup performs crash recovery on the files to make a consistent, usable database again. This is called the ‘prepare’ operation (xtrabackup --prepare).

The xtrabackup --prepare operation is done in two phases:

  1. Redo log apply
  2. Undo log apply

In the redo apply phase, the changes from redo log file modifications are applied to a page. This phase has no concept of a row or a transaction. The redo apply phase wouldn’t make the database consistent with respect to a transaction. The changes done by an uncommitted transaction can be flushed or written to the redo log by the server. XtraBackup still applies the modifications recorded in the redo log, and the redo log apply phase does not undo those changes. For that, we have to use undo logs.

In the undo log apply phase (AKA rollback phase), the changes required to undo the transaction are read from undo log pages. They are then applied (for example, writing old values back again) to the pages again and written to disk. After this phase, all uncommitted transactions during the backup are rolled back.

Undo log records are of two types: INSERT undo log records and UPDATE undo log records. DELETE MARK records are considered a subtype of the UPDATE undo log record.

The format is as shown below:

[Figure: Undo log record format]

When the Server writes these records, it doesn’t write the index/table information along with each record. It just writes a “table_id” as part of UNDO LOG records. The table_id is used to fetch the table schema. The table schema and key fields from the undo log record are used to create an index search tuple (key). This search tuple (key) is used to find the record to perform the undo operation.

So, given a table_id, how do you get the table schema/definition?

After the  “data dictionary” (DD) engine and DD cache are initialized on a server, the Storage Engines can ask for a table definition. For example, InnoDB asks for a table definition based on the table_id, also known as “se_private_id”.

Percona XtraBackup, unlike a server, doesn’t have access to the “data dictionary” (DD). Initializing a DD engine and the cache adds complexity and other server dependencies. XtraBackup does not simply behave like a server to access a table object.


Percona XtraBackup initializes the InnoDB engine and requires “InnoDB table object” aka dict_table_t, for all its purposes (rollback, export, etc.). XtraBackup relies on Serialized Dictionary Information (SDI). This is a JSON representation of the table. For InnoDB tablespaces, the information is stored within the tablespace. From 8.0, the IBD file is “self-describing”; for example, the table schema is available within an IBD file.


Let’s take a look at the example table.

CREATE TABLE test.t1(a INT PRIMARY KEY, b INT);

The CREATE TABLE statement creates a file called t1.ibd in the test directory, for example, mysql datadir/test/t1.ibd. So t1.ibd contains information about the table structure (columns, their types, how many indexes, columns in indexes, foreign keys, etc.) as SDI. Use a tool called “ibd2sdi” to extract SDI from an IBD file.

ibd2sdi data/test/t1.ibd > t1.sdi
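A heavily trimmed sketch of what the extracted SDI JSON looks like (illustrative only; the real output contains many more fields, and the exact layout varies by server version):

[
  "ibd2sdi",
  {
    "object": {
      "dd_object": {
        "name": "t1",
        "columns": [
          {"name": "a"},
          {"name": "b"}
        ]
      }
    }
  }
]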

As you can see, the table name is in the “dd_object:name” field, and the column information is stored in a “dd_object:columns” array.

Old design (until Percona XtraBackup 8.0.33-27):

XtraBackup reads SDI from *every* IBD and loads all tables from each IBD into the cache as non-evictable. Essentially, the LRU cache is disabled by loading the tables as non-evictable. Every table remains in memory until XtraBackup exits.

Problems with this approach:

  • Loading tables that are not required for rollback.
  • Unnecessary IO operations from reading SDI pages of tables.
  • Loading unnecessary tables increases the time required to --prepare.
  • Occupies memory and can lead to OOM.
  • Crashes the XtraBackup prepare operation if the backup directory contains a huge number of tables/IBD files.
  • A node joining the PXC cluster requires more memory and takes a long time to join the cluster.

Why did XtraBackup load tables as ‘non-evictable’? Can’t we just load them as evictable to solve the problem? Suppose a table is evicted and has to be loaded again: how will XtraBackup know which tablespace (IBD) contains the evicted table? It would have to scan every IBD again to find it.

New design (from Percona XtraBackup 8.0.33-28)

To load tables as evictable, a relationship between the table_id and the tablespace (space_id) that contains the table must be established. This is done by scanning the B-tree pages of the data dictionary tables mysql.indexes and mysql.index_partitions.

After this table_id→space_id relation is established, it is used during transaction rollback. In this new design, user tables are loaded only if there is a transaction rollback on them.
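Conceptually, the resulting mapping resembles what a running server exposes in INFORMATION_SCHEMA.INNODB_TABLES (shown only as an analogy; PXB builds its mapping offline by scanning the dictionary tables):

mysql> SELECT TABLE_ID, NAME, SPACE FROM INFORMATION_SCHEMA.INNODB_TABLES WHERE NAME LIKE 'test/%';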

The new design is as follows: tables from the cache are evicted when the cache size limit is reached or by the background master thread.

Benefits of the new design, xtrabackup --prepare:

  1. Uses less memory
  2. Uses less IO
  3. Faster prepare
  4. Completes successfully even with a huge number of tables.
  5. A node completes the SST process faster and joins the PXC cluster quickly.
  6. A node requires less memory to join the PXC cluster.

Benchmarks

[Figure: Percona XtraBackup benchmark results]

We also ran xtrabackup --prepare on backup directories of other sizes, such as 10K, 50K, 100K, and 250K tables, and observed performance improvements in each case.

Conclusion

As you can see, from Percona XtraBackup 8.0.33-28, xtrabackup --prepare is faster and more memory-efficient thanks to the dictionary cache. The improvement will depend on the number of tablespace files (IBDs) in the backup directory. The time taken for a new node to join the PXC cluster is also significantly reduced, as the SST process completes faster.

Percona XtraBackup is a 100% open source backup solution for all versions of Percona Server for MySQL and MySQL that performs online non-blocking, tightly compressed, highly secure full backups on transactional systems.

 

Try Percona XtraBackup today

Jul
25
2023
--

Faster Streaming Backups – Introducing Percona XtraBackup FIFO Parallel Stream


When it comes to backups, there are several options for saving backup files. You can choose to save them locally on the same server, stream them to different servers, or store them in object storage. Percona XtraBackup facilitates streaming through the use of an auxiliary tool called xbcloud.

STDOUT Datasink

This diagram displays the process of streaming a backup to object storage utilizing the current STDOUT datasink:


  • XtraBackup spawns multiple copy threads. Each one will be reading a chunk of data from a specific file.
  • Each copy thread will write the chunk of data to a pipe (aka STDOUT).
  • Xbcloud will have a read thread that is responsible for reading each chunk of data from STDIN. The chunk is uploaded to object storage via an async request, and a callback is added to an event handler list.
  • The event handler thread of xbcloud will be responsible for checking when the TCP socket has received the response from the object storage and executing the callback depending on the return (success or failure).

When testing the speed of XtraBackup streaming, we see a limitation of around 1.8 Gbps. This architecture works fine when uploading data over a WAN (we will mostly not have this speed over a WAN). However, streaming backups over the LAN has become more popular, and 10 Gbps or more is the standard setup.

After careful consideration of the above architecture, it became clear that even though xbcloud can very quickly read from STDIN, issue an asynchronous request to object storage, and delegate waiting for the response to the event handler worker thread, only one thread on the XtraBackup side can write to STDOUT at a time. This is enforced via a mutex to ensure a chunk of data is written from start to finish; otherwise, data from other copy threads would be appended to STDOUT before the first thread had completed its chunk.

FIFO Datasink

Percona XtraBackup 8.0.33-28 introduces a new datasink for true parallel streaming. This new datasink utilizes FIFO (named pipes) to achieve true parallelism to stream data from XtraBackup to xbcloud. Extending the previous diagram, the new data stream will look like below:


  • Multiple FIFO files will be created, which will be responsible for storing the data from XtraBackup to xbcloud.
  • Each XtraBackup copy thread will be bound to a FIFO file in a round-robin fashion.
  • Xbcloud will have multiple read threads to read from the FIFO streams.

Usage

We introduced three new parameters:

  • --fifo-streams=# – The number of FIFO files to use for the parallel datafile stream. Setting this parameter to 1 disables FIFO and sends the stream to STDOUT. The default is 1 (STDOUT) for backward compatibility.
  • --fifo-dir=path – The directory in which to create the named pipes.
  • --fifo-timeout=# – How many seconds to wait for the other end to open the stream for reading. The default is 60 seconds.

The current behavior of XtraBackup and xbcloud is unchanged: the default value of --fifo-streams is 1, so we still have a single-line command.
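As a hedged sketch of both modes, assuming an S3-compatible destination (the bucket name, backup name, and storage options are placeholders; adjust them for your environment):

# Default single stream via STDOUT (--fifo-streams=1)
xtrabackup --backup --stream=xbstream --target-dir=/tmp | xbcloud put --storage=s3 --s3-bucket=my-bucket full_backup

# Parallel FIFO stream: both ends must use the same FIFO settings
xtrabackup --backup --stream=xbstream --fifo-streams=8 --fifo-dir=/tmp/fifo --target-dir=/tmp &
xbcloud put --fifo-streams=8 --fifo-dir=/tmp/fifo --storage=s3 --s3-bucket=my-bucket full_backup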

Performance

To test the new FIFO datasink, we created 1TB of data in multiple tables. The link speed between the source server and the destination server (utilizing MinIO) is ~9.2 Gbps:

The test was performed without any load (no new redo log generated during the backup) to ensure both tests pushed the same 1TB of data between the two servers.

The results are shown in the below graph:

  • STDOUT – Utilized 239 MBps (1.8 Gbps) and took 01:25:24 to push 1TB of data.
  • FIFO (8 pipes) – Utilized 1.15 GBps (9.2 Gbps) and took only 00:16:01 to push the same 1TB of data.

Summary

If you utilize streaming inside your local network, you can definitely benefit from faster backups by streaming data in parallel using the new FIFO datasink.

Percona XtraBackup is a free, open source, complete online backup solution for all versions of Percona Server for MySQL and MySQL.

Learn more about Percona XtraBackup

Jun
27
2023
--

Backup and Restore Using MySQL Shell


MySQL Shell is an advanced client and code editor for MySQL. In addition to SQL functionality similar to the mysql client, MySQL Shell provides scripting capabilities for JavaScript and Python and includes APIs for working with MySQL. The X DevAPI enables you to work with both relational and document data, and MySQL Shell 8.0 is highly recommended for use with MySQL Server 8.0 and 5.7.

MySQL Shell includes utilities for working with MySQL. To access the utilities from within MySQL Shell, use the util global object, which is available in JavaScript and Python modes, but not SQL mode. These are the utilities to take and restore a backup; let’s see some basic commands.

  • util.dumpTables – Dump one or more tables from a single database
  • util.dumpSchemas – Dump one or more databases
  • util.dumpInstance – Dump the full instance
  • util.loadDump – Restore a dump

1. Single table dump

The below command takes a dump of the table sbtest1 from the sysbench database and stores the backup in the destination directory sysbench_dumps. The utility will create the directory if it does not exist. By default, compression and chunking are enabled; when chunking is enabled, the table dump will be split into multiple files based on size. The dialect:"csv" option determines the extension and format of the dump file; by default, the file would be created with the .tsv (tab-separated values) extension.

MySQL localhost JS > util.dumpTables("sysbench", ["sbtest1"], "sysbench_dumps", {"compression":"none", "threads":6, chunking:false, dialect:"csv"})
Acquiring global read lock
Global read lock acquired
Initializing - done
1 tables and 0 views will be dumped.
Gathering information - done
All transactions have been started
Locking instance for backup
Global read lock has been released
Writing global DDL files
Running data dump using 6 threads.
NOTE: Progress information uses estimated values and may not be accurate.
Writing schema metadata - done
Writing DDL - done
Writing table metadata - done
Starting data dump
101% (1000.00K rows / ~986.40K rows), 317.96K rows/s, 63.91 MB/s
Dump duration: 00:00:03s
Total duration: 00:00:04s
Schemas dumped: 1
Tables dumped: 1
Data size: 198.89 MB
Rows written: 1000000
Bytes written: 198.89 MB
Average throughput: 60.96 MB/s
MySQL localhost JS >

These are the files created by the above dump command.

[root@centos12 sysbench_dumps]# ls -lrth
total 190M
-rw-r-----. 1 root root 869 Jun 21 13:08 @.json
-rw-r-----. 1 root root 240 Jun 21 13:08 @.sql
-rw-r-----. 1 root root 240 Jun 21 13:08 @.post.sql
-rw-r-----. 1 root root 231 Jun 21 13:08 sysbench.json
-rw-r-----. 1 root root 638 Jun 21 13:08 sysbench@sbtest1.json
-rw-r-----. 1 root root 474 Jun 21 13:08 sysbench.sql
-rw-r-----. 1 root root 789 Jun 21 13:08 sysbench@sbtest1.sql
-rw-r-----. 1 root root 1.5K Jun 21 13:08 sysbench@sbtest1.csv.idx
-rw-r-----. 1 root root 190M Jun 21 13:08 sysbench@sbtest1.csv
-rw-r-----. 1 root root 233 Jun 21 13:08 @.done.json
[root@centos12 sysbench_dumps]# pwd
/home/vagrant/sysbench_dumps

  • @.json – Complete information about the dump options, server name, username used for the dump, binlog file and position, etc.
  • @.sql, @.post.sql – Server version and dump version details.
  • sysbench.json – Database and table details involved in the dump.
  • sysbench@sbtest1.json – Details about the table sbtest1, including column names, indexes, triggers, character set, and partitions.
  • sysbench.sql – CREATE statement for the database sysbench.
  • sysbench@sbtest1.sql – CREATE statement for the table sbtest1.
  • @.done.json – End time of the dump and dump file size.
  • sysbench@sbtest1.csv – Table dump file.

 

2. Backup only table structure

The option ddlOnly:true is used to dump only the table structures. The below command takes the table structure of sbtest1 from the sysbench database and stores it in the sysbench_dumps path.

MySQL localhost JS > util.dumpTables("sysbench", ["sbtest1"], "sysbench_dumps", {"compression":"none", "threads":6, chunking:false, ddlOnly:true})

3. Dump only table data

The option dataOnly:true takes a dump of only the data. The below command dumps the table data of sbtest1 from the sysbench database and stores it in the sysbench_dumps path.

MySQL localhost JS > util.dumpTables("sysbench", ["sbtest1"], "sysbench_dumps", {"compression":"none", "threads":6, chunking:false, dataOnly:true})

4. Dump only selected data

The “where”: {“databasename.tablename”: “condition”} option is used to take a dump of selected data. The below command takes a dump of table sbtest1 rows with id from 1 to 10.

MySQL localhost JS > util.dumpTables("sysbench", ["sbtest1"], "sysbench_dumps", {"compression":"none", "threads":6 ,chunking:false, dataOnly:true, "where" : {"sysbench.sbtest1": "id between 1 and 10"}})

It’s also possible to take a dump of multiple tables with their conditions in a single command.

Syntax:

"where" : {"databasename1.tablename1": "condition for databasename1.tablename1", "databasename2.tablename2": "condition for databasename2.tablename2"}

The below command takes a dump of table sbtest1 with id from 1 to 10 and a dump of sbtest2 with id from 100 to 110.

MySQL localhost JS > util.dumpTables("sysbench", ["sbtest1", "sbtest2"], "sysbench_dumps", {"compression":"none", "threads":6, chunking:false, dataOnly:true, "where" : {"sysbench.sbtest1": "id between 1 and 10", "sysbench.sbtest2": "id between 100 and 110"}})

5. Dump data from partitions

The option partitions is used to take a dump from selected partitions only.

Syntax:

"partitions" : {"db1.table1": ["list of partitions"],"db2.table1": ["list of partitions"]}

The below command takes a dump of only partitions p1 and p2 of the sbtest1 table and of partitions p4 and p5 of the sbtest2 table.

MySQL localhost JS > util.dumpTables("sysbench", ["sbtest1", "sbtest2"], "sysbench_dumps", {"compression":"none", "threads":6, chunking:false, dataOnly:true, "partitions" : {"sysbench.sbtest1": ["p1", "p2"],"sysbench.sbtest2": ["p4", "p5"]}})

6. Taking schema dumps

When taking a schema dump, by default, events, triggers, and routines are included. Those are stored in the database_name.sql file. The below command takes a dump of the percona and sakila databases.

MySQL localhost JS > util.dumpSchemas(["percona", "sakila"], "schema_dump", {"compression":"none", "threads":6, chunking:false})

The below command skips the events, routines, and triggers.

MySQL localhost JS > util.dumpSchemas(["percona", "sakila"], "schema_dump", {events:false, routines:false, triggers:false, "compression":"none", "threads":6, chunking:false})

We can also use these options to include and exclude events, routines, and triggers; a concrete example follows the syntax.

Syntax:

includeEvents   : [db1.include_event1,db2.include_event2...]
includeRoutines : [db1.include_procedure1,db2.include_function2...]
includeTriggers : [db1.include_trigger1,db2.include_trigger2...]

excludeEvents   : [db1.exclude_event1,db2.exclude_event2...]
excludeTriggers : [db1.exclude_trigger1,db2.exclude_trigger2...]
excludeRoutines : [db1.exclude_procedure1,db2.exclude_function2...]
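For instance, a minimal sketch of excluding a single trigger (the trigger name sakila.film.ins_film is illustrative; triggers are addressed as db.table.trigger):

MySQL localhost JS > util.dumpSchemas(["sakila"], "schema_dump", {excludeTriggers: ["sakila.film.ins_film"], "compression":"none", "threads":6, chunking:false})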

7. Taking specified tables from different databases

Sometimes, we may need to take selected tables from different schemas. We can achieve this using the option includeTables.

Syntax:

includeTables:["db1.table1", "db2.table2"....]

Below is the command to take a dump of the users table from the percona database and a dump of the actor table from the sakila database.

MySQL localhost JS > util.dumpSchemas(["percona", "sakila"], "schema_dump", {includeTables:["percona.users", "sakila.actor"], "compression":"none", "threads":6, chunking:false})

8. Instance dump

The command util.dumpInstance takes a dump of the complete instance and stores it in the /backup/instance_dump path. The system databases (mysql, sys, information_schema, performance_schema) are excluded, and by default, a dump of all the users of the instance is taken and stored in the file @.users.sql. This user dump file contains the CREATE and GRANT statements for all the users.

MySQL localhost JS > util.dumpInstance("/backup/instance_dump", {"compression":"none", "threads":6, chunking:false})

Some more options for the instance dump:

users: false                                - Skip users dump
excludeUsers : ['user1','user2']            - Exclude particular users
includeUsers : ['user1','user2']            - Include particular users
excludeSchemas : ["db1","db2"]              - Exclude particular schemas
includeSchemas : ["db1","db2"]              - Include particular schemas
excludeTables : ["db1.table1","db2.table2"] - Exclude particular tables
includeTables : ["db1.table1","db2.table2"] - Include particular tables

9. Restore the dump into a single database

The command util.loadDump is used to restore the dumps. The variable local_infile must be enabled to load the dumps.
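For example, to enable it before the restore:

MySQL localhost SQL > SET GLOBAL local_infile = ON;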

Syntax :

util.loadDump("/path/of/the/dump", {options})

The below command restores the dump into the database test_restore. When we need to restore into a different schema, we have to use the schema: “test_restore” option. Otherwise, it will be restored into the source schema where it was taken.

MySQL localhost SQL > create database test_restore;
Query OK, 1 row affected (0.3658 sec)
MySQL localhost SQL > js
Switching to JavaScript mode...
MySQL localhost JS > util.loadDump("/home/vagrant/schema_dump", {schema:"test_restore", progressFile: "progress.json", threads: 8, showProgress:true, resetProgress:true})

10. Restore the full instance dump and configure replication

Here, we just loaded the full instance dump from /home/vagrant/instance_dump path with eight parallel threads.

MySQL localhost JS > util.loadDump("/home/vagrant/instance_dump", {progressFile: "progress.json", threads: 8, showProgress:true, resetProgress:true})
Loading DDL and Data from '/home/vagrant/instance_dump' using 8 threads.
Opening dump...
Target is MySQL 8.0.33-25. Dump was produced from MySQL 8.0.32-24
NOTE: Load progress file detected for the instance but 'resetProgress' option was enabled. Load progress will be discarded and the whole dump will be reloaded.
Scanning metadata - done
Checking for pre-existing objects...
Executing common preamble SQL
Executing DDL - done
Executing view DDL - done
Starting data load
8 thds loading / 100% (19.18 MB / 19.18 MB), 541.36 KB/s, 6 / 23 tables and partitions done
Recreating indexes - done
Executing common postamble SQL
23 chunks (100.00K rows, 19.18 MB) for 11 tables in 2 schemas were loaded in 11 sec (avg throughput 2.53 MB/s)
0 warnings were reported during the load.
MySQL localhost JS >

I got the binlog file and position from the @.json file and configured replication.

[root@centos12 instance_dump]# cat @.json | grep -i binlog
"binlogFile": "centos12-bin.000006",
"binlogPosition": 760871466,
[root@centos12 instance_dump]#

MySQL localhost SQL > CHANGE REPLICATION SOURCE TO SOURCE_HOST="192.168.33.12", SOURCE_USER="bhuvan",SOURCE_PASSWORD="Bhuvan@123", SOURCE_LOG_FILE='centos12-bin.000006',SOURCE_LOG_POS=760871466;
Query OK, 0 rows affected, 2 warnings (0.1762 sec)

MySQL localhost SQL > START REPLICA;
Query OK, 0 rows affected (0.1156 sec)
MySQL localhost SQL > show replica status\G
*************************** 1. row ***************************
Replica_IO_State: Waiting for source to send event
Source_Host: 192.168.33.12
Source_User: bhuvan
Source_Port: 3306
Connect_Retry: 60
Source_Log_File: centos12-bin.000006
Read_Source_Log_Pos: 823234119
Relay_Log_File: centos11-relay-bin.000002
Relay_Log_Pos: 1152129
Relay_Source_Log_File: centos12-bin.000006
Replica_IO_Running: Yes
Replica_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Source_Log_Pos: 762023266
Relay_Log_Space: 62363195
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Source_SSL_Allowed: No
Source_SSL_CA_File:
Source_SSL_CA_Path:
Source_SSL_Cert:
Source_SSL_Cipher:
Source_SSL_Key:
Seconds_Behind_Source: 718
Source_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 0
Last_SQL_Error:
Replicate_Ignore_Server_Ids:
Source_Server_Id: 100
Source_UUID: f46a1600-045e-11ee-809f-0800271333ce
Source_Info_File: mysql.slave_master_info
SQL_Delay: 0
SQL_Remaining_Delay: NULL
Replica_SQL_Running_State: Waiting for dependent transaction to commit
Source_Retry_Count: 86400
Source_Bind:
Last_IO_Error_Timestamp:
Last_SQL_Error_Timestamp:
Source_SSL_Crl:
Source_SSL_Crlpath:
Retrieved_Gtid_Set:
Executed_Gtid_Set:
Auto_Position: 0
Replicate_Rewrite_DB:
Channel_Name:
Source_TLS_Version:
Source_public_key_path:
Get_Source_public_key: 0
Network_Namespace:
1 row in set (0.1470 sec)

I hope the above examples help you understand backup and restore using MySQL Shell. It has many advantages over native mysqldump. Personally, I miss the INSERT statements we are used to seeing in dump files; apart from that, it looks good. A logical backup is practical only when the dataset is small. When the dataset is large, a logical backup takes too long, and we have to go for a physical backup using Percona XtraBackup.

Percona XtraBackup is a free, open source, complete online backup solution for all versions of Percona Server for MySQL and MySQL. It performs online non-blocking, tightly compressed, highly secure backups on transactional systems so that applications remain fully available during planned maintenance windows.

 

Download Percona XtraBackup

Jun
23
2023
--

Is My Backup Stuck? A Quick Guide for Reviewing Percona XtraBackup Threads


In Percona Managed Services, one key aspect of managing the databases is configuring backups. Percona XtraBackup is one of the best tools for performing physical database backups.

It is a good practice to compress the backups to save on storage costs, and to encrypt them so they can't be used if the files are compromised (as long as you keep your encryption keys safe!).

Percona XtraBackup supports both compression and encryption of backups. When the data is very large (tens of terabytes or more), backups can take several hours or even days to complete. To speed up the backup process along with the compression and encryption, we can use multiple threads and specify the number of threads for each operation (copy, compression, encryption) in the xtrabackup command.

For example, we can use:

--parallel: specifies the number of threads for copying data.
--compress-threads: specifies the number of threads that compress data.
--encrypt-threads: specifies the number of threads that encrypt data.

Given that the usual workflow is:

copy -> compress -> encrypt

We should use more threads to copy than to compress or encrypt. I won't get into the details of how these should be configured, as that depends on the scenario, but generally, we could use about as many threads as there are CPUs on the server.

If you have multiple CPUs on the server, you can increase the number of threads for each operation.
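
As a rough sketch of a starting point (not a tuned recommendation; the key file path and target directory are placeholders), the thread counts could be derived from the CPU count:

CPUS=$(nproc)
# give the copy phase the most threads; split the rest between compression and encryption
xtrabackup --backup --target-dir=/backups/xtrabackup \
  --parallel=$((CPUS / 2)) \
  --compress --compress-threads=$((CPUS / 4)) \
  --encrypt=AES256 --encrypt-key-file=/etc/xb_keyfile --encrypt-threads=$((CPUS / 4))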

Note that multithreaded backups perform better on databases with many big tables than on databases with many small tables and only a few big or huge ones, because each table is processed by a single thread; the bottleneck remains the single thread processing those big tables, as in the example presented here.

OK, now we have configured xtrabackup with multiple threads; what should we do if the backup is still taking too long and the log doesn't seem to show any progress?

Reviewing the log can be a bit hard as there is a lot of information. But let’s try to understand it with the following example.

Consider the following xtrabackup command, run on April 20th:

xtrabackup --rsync --backup --target-dir=/backups/percona_backups/xtrabackup --slave-info --kill-long-queries-timeout=10 --kill-long-query-type=select --compress --encrypt=AES256 --encrypt-key-file=/etc/rdba/xb_keyfile --compress-threads=24 --parallel=36 --encrypt-threads=16 --lock-ddl

NOTE: The example server has 48 CPUs. Even though the sum of the configured threads (parallel 36 + compress 24 + encrypt 16) exceeds the actual CPU count, it doesn't mean all threads work at the same time: the copy threads pass work to the compress threads and then to the encrypt threads, so at any given moment, some threads may be idle.

We’re using parallelism with 36 threads to copy, 24 to compress, and 16 to encrypt.

The backup takes several hours to complete, so we need to review the status. For that, we check the latest entries from the log:

tailf /var/log/xtrabackup.log
2023-04-22T18:59:12.070291-06:00 1 [Note] [MY-011825] [Xtrabackup] >> log scanned up to (15453815760)
2023-04-22T18:59:13.070291-06:00 1 [Note] [MY-011825] [Xtrabackup] >> log scanned up to (15453815760)
2023-04-22T18:59:14.070291-06:00 1 [Note] [MY-011825] [Xtrabackup] >> log scanned up to (15453815760)
2023-04-22T18:59:15.070291-06:00 1 [Note] [MY-011825] [Xtrabackup] >> log scanned up to (15453815760)
2023-04-22T18:59:16.070291-06:00 1 [Note] [MY-011825] [Xtrabackup] >> log scanned up to (15453815760)
2023-04-22T18:59:17.070291-06:00 1 [Note] [MY-011825] [Xtrabackup] >> log scanned up to (15453815760)
2023-04-22T18:59:18.070291-06:00 1 [Note] [MY-011825] [Xtrabackup] >> log scanned up to (15453815760)
2023-04-22T18:59:19.070291-06:00 1 [Note] [MY-011825] [Xtrabackup] >> log scanned up to (15453815760)
2023-04-22T18:59:20.070291-06:00 1 [Note] [MY-011825] [Xtrabackup] >> log scanned up to (15453815760)
2023-04-22T18:59:21.070291-06:00 1 [Note] [MY-011825] [Xtrabackup] >> log scanned up to (15453815760)
2023-04-22T18:59:22.070291-06:00 1 [Note] [MY-011825] [Xtrabackup] >> log scanned up to (15453815760)
2023-04-22T18:59:23.070291-06:00 1 [Note] [MY-011825] [Xtrabackup] >> log scanned up to (15453815760)

Is it stuck? Is it progressing at all?

Let’s take a look. 

First, we connected to MySQL and found that replication is blocked by the backup (caused by the --lock-ddl flag), so there are no new writes to MySQL; that's why the log sequence number is not moving (437325895905570).

From the processlist:

| 2495984 | system user         |                 | percona | Connect | 240221 | Waiting for backup lock                                         | CREATE TABLE IF NOT EXISTS `percona`.`heartbeat` (ts varchar(26) NOT NULL,  | 240220951 |         0 |             0 |
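
That row can be located with a quick filter like this (a sketch; it assumes the default mysql client with a configured login):

mysql -e "SHOW FULL PROCESSLIST;" | grep -i "backup lock"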

The blocked thread above is the replication SQL thread.

Because of the above, it is expected that the log sequence number is not moving.

Then, we can filter out the "log scanned" lines to remove the clutter from the log:

grep -v "log scanned" /var/log/xtrabackup.log|less

We can see something like this:

2023-04-21T00:27:14.857218-00:00 03 [Note] [MY-011825] [Xtrabackup] Compressing and encrypting ./dbxxxx/tb1.ibd to /backups/percona_backups/xtrabackup/xxxx/tb1.ibd.qp.xbcrypt
2023-04-21T00:27:14.875316-00:00 24 [Note] [MY-011825] [Xtrabackup] Compressing and encrypting ./dbxxxx/tb2.ibd to /backups/percona_backups/xtrabackup/xxxx/tb2.ibd.qp.xbcrypt
2023-04-21T00:27:14.893224-00:00 05 [Note] [MY-011825] [Xtrabackup] Compressing and encrypting ./dbxxxx/tb3.ibd to /backups/percona_backups/xtrabackup/xxxx/tb3.ibd.qp.xbcrypt
2023-04-21T00:27:14.914653-00:00 12 [Note] [MY-011825] [Xtrabackup] Compressing and encrypting ./dbxxxx/tb4.ibd to /backups/percona_backups/xtrabackup/xxxx/tb4.ibd.qp.xbcrypt
2023-04-21T00:27:14.935646-00:00 27 [Note] [MY-011825] [Xtrabackup] Compressing and encrypting ./dbxxxx/tb5.ibd to /backups/percona_backups/xtrabackup/xxxx/tb5.ibd.qp.xbcrypt
2023-04-21T00:27:14.925667-00:00 31 [Note] [MY-011825] [Xtrabackup] Compressing and encrypting ./dbxxxx/tb6.ibd to /backups/percona_backups/xtrabackup/xxxx/tb6.ibd.qp.xbcrypt
2023-04-21T00:27:15.014653-00:00 14 [Note] [MY-011825] [Xtrabackup] Compressing and encrypting ./dbxxxx/tb7.ibd to /backups/percona_backups/xtrabackup/xxxx/tb7.ibd.qp.xbcrypt
2023-04-21T00:27:15.056675-00:00 33 [Note] [MY-011825] [Xtrabackup] Compressing and encrypting ./dbxxxx/tb8.ibd to /backups/percona_backups/xtrabackup/xxxx/tb8.ibd.qp.xbcrypt
2023-04-21T00:27:15.103853-00:00 20 [Note] [MY-011825] [Xtrabackup] Compressing and encrypting ./dbxxxx/tb9.ibd to /backups/percona_backups/xtrabackup/xxxx/tb9.ibd.qp.xbcrypt
2023-04-21T00:27:15.143466-00:00 36 [Note] [MY-011825] [Xtrabackup] Compressing and encrypting ./dbxxxx/tb10.ibd to /backups/percona_backups/xtrabackup/xxxx/tb10.ibd.qp.xbcrypt
2023-04-21T00:27:15.167853-00:00 35 [Note] [MY-011825] [Xtrabackup] Compressing and encrypting ./dbxxxx/tb11.ibd to /backups/percona_backups/xtrabackup/xxxx/tb11.ibd.qp.xbcrypt

In the output above, the thread number appears right after the date and time, and each line shows which table that thread is processing.

NOTE: In Percona XtraBackup 2.4, the thread number is printed between brackets [], for example:

230421 00:27:14 [04] Compressing and encrypting ./xxxx/tb12.ibd to /backups/percona_backups/xtrabackup/xxxx/tb12.ibd.qp.xbcrypt

In our example, we ran it with 36 threads, so we expect to see thread numbers up to 36.

After each thread finishes what it was doing, it prints the label “Done:” followed by the table it has just finished processing, as we can see here:

2023-04-21THH:MM:SS.MS-00:00 2 [Note] [MY-011825] [Xtrabackup] Done: Compressing and encrypting ./xxxx/tb14.ibd to /backups/percona_backups/xtrabackup/xxxx/tb14.ibd.qp.xbcrypt

NOTE: In XtraBackup 2.4, only the label "...done" was printed, without further information, as follows:

230421 00:27:15 [20]        ...done

So, if we search for the “Done:” text in the log and print the last ones, we can see the following:

grep "Done:" /var/log/xtrabackup.log|tail -50
2023-04-21THH:MM:SS.MS-00:00 1  [Note] [MY-011825] [Xtrabackup] Done: Compressing and encrypting ./xxxx/tbxx.ibd to /backups/percona_backups/xtrabackup/xxxx/tbxx.ibd.qp.xbcrypt
2023-04-21THH:MM:SS.MS-00:00 17 [Note] [MY-011825] [Xtrabackup] Done: Compressing and encrypting ./xxxx/tbxx.ibd to /backups/percona_backups/xtrabackup/xxxx/tbxx.ibd.qp.xbcrypt
2023-04-21THH:MM:SS.MS-00:00 25 [Note] [MY-011825] [Xtrabackup] Done: Compressing and encrypting ./xxxx/tbxx.ibd to /backups/percona_backups/xtrabackup/xxxx/tbxx.ibd.qp.xbcrypt
2023-04-21THH:MM:SS.MS-00:00 2  [Note] [MY-011825] [Xtrabackup] Done: Compressing and encrypting ./xxxx/tbxx.ibd to /backups/percona_backups/xtrabackup/xxxx/tbxx.ibd.qp.xbcrypt
2023-04-21THH:MM:SS.MS-00:00 11 [Note] [MY-011825] [Xtrabackup] Done: Compressing and encrypting ./xxxx/tbxx.ibd to /backups/percona_backups/xtrabackup/xxxx/tbxx.ibd.qp.xbcrypt
2023-04-21THH:MM:SS.MS-00:00 28 [Note] [MY-011825] [Xtrabackup] Done: Compressing and encrypting ./xxxx/tbxx.ibd to /backups/percona_backups/xtrabackup/xxxx/tbxx.ibd.qp.xbcrypt
2023-04-21THH:MM:SS.MS-00:00 8  [Note] [MY-011825] [Xtrabackup] Done: Compressing and encrypting ./xxxx/tbxx.ibd to /backups/percona_backups/xtrabackup/xxxx/tbxx.ibd.qp.xbcrypt
2023-04-21THH:MM:SS.MS-00:00 14 [Note] [MY-011825] [Xtrabackup] Done: Compressing and encrypting ./xxxx/tbxx.ibd to /backups/percona_backups/xtrabackup/xxxx/tbxx.ibd.qp.xbcrypt
2023-04-21THH:MM:SS.MS-00:00 10 [Note] [MY-011825] [Xtrabackup] Done: Compressing and encrypting ./xxxx/tbxx.ibd to /backups/percona_backups/xtrabackup/xxxx/tbxx.ibd.qp.xbcrypt
2023-04-21THH:MM:SS.MS-00:00 35 [Note] [MY-011825] [Xtrabackup] Done: Compressing and encrypting ./xxxx/tbxx.ibd to /backups/percona_backups/xtrabackup/xxxx/tbxx.ibd.qp.xbcrypt
2023-04-21THH:MM:SS.MS-00:00 15 [Note] [MY-011825] [Xtrabackup] Done: Compressing and encrypting ./xxxx/tbxx.ibd to /backups/percona_backups/xtrabackup/xxxx/tbxx.ibd.qp.xbcrypt
2023-04-21THH:MM:SS.MS-00:00 16 [Note] [MY-011825] [Xtrabackup] Done: Compressing and encrypting ./xxxx/tbxx.ibd to /backups/percona_backups/xtrabackup/xxxx/tbxx.ibd.qp.xbcrypt
2023-04-21THH:MM:SS.MS-00:00 27 [Note] [MY-011825] [Xtrabackup] Done: Compressing and encrypting ./xxxx/tbxx.ibd to /backups/percona_backups/xtrabackup/xxxx/tbxx.ibd.qp.xbcrypt
2023-04-21THH:MM:SS.MS-00:00 23 [Note] [MY-011825] [Xtrabackup] Done: Compressing and encrypting ./xxxx/tbxx.ibd to /backups/percona_backups/xtrabackup/xxxx/tbxx.ibd.qp.xbcrypt
2023-04-21THH:MM:SS.MS-00:00 21 [Note] [MY-011825] [Xtrabackup] Done: Compressing and encrypting ./xxxx/tbxx.ibd to /backups/percona_backups/xtrabackup/xxxx/tbxx.ibd.qp.xbcrypt
2023-04-21THH:MM:SS.MS-00:00 24 [Note] [MY-011825] [Xtrabackup] Done: Compressing and encrypting ./xxxx/tbxx.ibd to /backups/percona_backups/xtrabackup/xxxx/tbxx.ibd.qp.xbcrypt
2023-04-21THH:MM:SS.MS-00:00 33 [Note] [MY-011825] [Xtrabackup] Done: Compressing and encrypting ./xxxx/tbxx.ibd to /backups/percona_backups/xtrabackup/xxxx/tbxx.ibd.qp.xbcrypt
2023-04-21THH:MM:SS.MS-00:00 13 [Note] [MY-011825] [Xtrabackup] Done: Compressing and encrypting ./xxxx/tbxx.ibd to /backups/percona_backups/xtrabackup/xxxx/tbxx.ibd.qp.xbcrypt
2023-04-21THH:MM:SS.MS-00:00 6  [Note] [MY-011825] [Xtrabackup] Done: Compressing and encrypting ./xxxx/tbxx.ibd to /backups/percona_backups/xtrabackup/xxxx/tbxx.ibd.qp.xbcrypt
2023-04-21THH:MM:SS.MS-00:00 2  [Note] [MY-011825] [Xtrabackup] Done: Compressing and encrypting ./xxxx/tbxx.ibd to /backups/percona_backups/xtrabackup/xxxx/tbxx.ibd.qp.xbcrypt
2023-04-21THH:MM:SS.MS-00:00 18 [Note] [MY-011825] [Xtrabackup] Done: Compressing and encrypting ./xxxx/tbxx.ibd to /backups/percona_backups/xtrabackup/xxxx/tbxx.ibd.qp.xbcrypt
2023-04-21THH:MM:SS.MS-00:00 34 [Note] [MY-011825] [Xtrabackup] Done: Compressing and encrypting ./xxxx/tbxx.ibd to /backups/percona_backups/xtrabackup/xxxx/tbxx.ibd.qp.xbcrypt
2023-04-21THH:MM:SS.MS-00:00 7  [Note] [MY-011825] [Xtrabackup] Done: Compressing and encrypting ./xxxx/tbxx.ibd to /backups/percona_backups/xtrabackup/xxxx/tbxx.ibd.qp.xbcrypt
2023-04-21THH:MM:SS.MS-00:00 22 [Note] [MY-011825] [Xtrabackup] Done: Compressing and encrypting ./xxxx/tbxx.ibd to /backups/percona_backups/xtrabackup/xxxx/tbxx.ibd.qp.xbcrypt
2023-04-21THH:MM:SS.MS-00:00 3  [Note] [MY-011825] [Xtrabackup] Done: Compressing and encrypting ./xxxx/tbxx.ibd to /backups/percona_backups/xtrabackup/xxxx/tbxx.ibd.qp.xbcrypt
2023-04-21THH:MM:SS.MS-00:00 26 [Note] [MY-011825] [Xtrabackup] Done: Compressing and encrypting ./xxxx/tbxx.ibd to /backups/percona_backups/xtrabackup/xxxx/tbxx.ibd.qp.xbcrypt
2023-04-21THH:MM:SS.MS-00:00 29 [Note] [MY-011825] [Xtrabackup] Done: Compressing and encrypting ./xxxx/tbxx.ibd to /backups/percona_backups/xtrabackup/xxxx/tbxx.ibd.qp.xbcrypt
2023-04-21THH:MM:SS.MS-00:00 20 [Note] [MY-011825] [Xtrabackup] Done: Compressing and encrypting ./xxxx/tbxx.ibd to /backups/percona_backups/xtrabackup/xxxx/tbxx.ibd.qp.xbcrypt
2023-04-21THH:MM:SS.MS-00:00 31 [Note] [MY-011825] [Xtrabackup] Done: Compressing and encrypting ./xxxx/tbxx.ibd to /backups/percona_backups/xtrabackup/xxxx/tbxx.ibd.qp.xbcrypt
2023-04-21THH:MM:SS.MS-00:00 30 [Note] [MY-011825] [Xtrabackup] Done: Compressing and encrypting ./xxxx/tbxx.ibd to /backups/percona_backups/xtrabackup/xxxx/tbxx.ibd.qp.xbcrypt
2023-04-21THH:MM:SS.MS-00:00 9  [Note] [MY-011825] [Xtrabackup] Done: Compressing and encrypting ./xxxx/tbxx.ibd to /backups/percona_backups/xtrabackup/xxxx/tbxx.ibd.qp.xbcrypt
2023-04-21THH:MM:SS.MS-00:00 33 [Note] [MY-011825] [Xtrabackup] Done: Compressing and encrypting ./xxxx/tbxx.ibd to /backups/percona_backups/xtrabackup/xxxx/tbxx.ibd.qp.xbcrypt
2023-04-22THH:MM:SS.MS-00:00 19 [Note] [MY-011825] [Xtrabackup] Done: Compressing and encrypting ./xxxx/tbxx.ibd to /backups/percona_backups/xtrabackup/xxxx/tbxx.ibd.qp.xbcrypt
2023-04-22THH:MM:SS.MS-00:00 4  [Note] [MY-011825] [Xtrabackup] Done: Compressing and encrypting ./xxxx/tbxx.ibd to /backups/percona_backups/xtrabackup/xxxx/tbxx.ibd.qp.xbcrypt

NOTE: For XtraBackup 2.4, since the label is "...done" with no colon (see above), we can use this command instead:

grep "done" /var/log/xtrabackup.log|tail -50

We can see that there are recent entries (assuming that we are checking on Apr 22nd). So we can assume that the backup is still running.

From the above list of threads, it seems that only thread 12 is missing; thus, it may still be processing some work. So let’s search for what thread number 12 is doing:

grep " 12 " /var/log/xtrabackup.log |tail 
2023-04-20T09:25:59.935646-00:00 12 [Note] [MY-011825] [Xtrabackup] Compressing and encrypting ./xxxx/tb12x.ibd to /backups/percona_backups/xtrabackup/xxxx/tb12x.ibd.qp.xbcrypt
2023-04-20T09:30:12.234235-00:00 12 [Note] [MY-011825] [Xtrabackup] Done: Compressing and encrypting ./xxxx/tb12x.ibd to /backups/percona_backups/xtrabackup/xxxx/tb12x.ibd.qp.xbcrypt
2023-04-20T09:30:12.345435-00:00 12 [Note] [MY-011825] [Xtrabackup] Compressing and encrypting ./xxxx/tb13x.ibd to /backups/percona_backups/xtrabackup/xxxx/tb13x.ibd.qp.xbcrypt
2023-04-20T09:32:34.342422-00:00 12 [Note] [MY-011825] [Xtrabackup] Done: Compressing and encrypting ./xxxx/tb13x.ibd to /backups/percona_backups/xtrabackup/xxxx/tb13x.ibd.qp.xbcrypt
2023-04-20T09:32:34.463554-00:00 12 [Note] [MY-011825] [Xtrabackup] Compressing and encrypting ./xxxx/tb14x.ibd to /backups/percona_backups/xtrabackup/xxxx/tb14x.ibd.qp.xbcrypt
2023-04-20T09:32:34.676345-00:00 12 [Note] [MY-011825] [Xtrabackup] Done: Compressing and encrypting ./xxxx/tb14x.ibd to /backups/percona_backups/xtrabackup/xxxx/tb14x.ibd.qp.xbcrypt
2023-04-20T09:32:34.945535-00:00 12 [Note] [MY-011825] [Xtrabackup] Compressing and encrypting ./xxxx/tb15x.ibd to /backups/percona_backups/xtrabackup/xxxx/tb15x.ibd.qp.xbcrypt
2023-04-20T13:45:31.453535-00:00 12 [Note] [MY-011825] [Xtrabackup] Done: Compressing and encrypting ./xxxx/tb15x.ibd to /backups/percona_backups/xtrabackup/xxxx/tb15x.ibd.qp.xbcrypt
2023-04-20T13:45:31.756454-00:00 12 [Note] [MY-011825] [Xtrabackup] Compressing and encrypting ./xxxx/tbxxx.ibd to /backups/percona_backups/xtrabackup/xxxx/tbxxx.ibd.qp.xbcrypt

It has been compressing and encrypting a large table for two days now. So we check whether the file size keeps growing:

$ ll /backups/percona_backups/xtrabackup/xxxx/tbxxx.ibd.qp.xbcrypt
-rw-r----- 1 mysql mysql 5413837478400 abr  22 19:07 /backups/percona_backups/xtrabackup/xxxx/tbxxx.ibd.qp.xbcrypt
$ ll /backups/percona_backups/xtrabackup/xxxx/tbxxx.ibd.qp.xbcrypt
-rw-r----- 1 mysql mysql 5414047488000 abr  22 19:07 /backups/percona_backups/xtrabackup/xxxx/tbxxx.ibd.qp.xbcrypt
$ ll /backups/percona_backups/xtrabackup/xxxx/tbxxx.ibd.qp.xbcrypt
-rw-r----- 1 mysql mysql 5414413101588 abr  22 19:07 /backups/percona_backups/xtrabackup/xxxx/tbxxx.ibd.qp.xbcrypt
$ ll /backups/percona_backups/xtrabackup/xxxx/tbxxx.ibd.qp.xbcrypt
-rw-r----- 1 mysql mysql 5414656187760 abr  22 19:07 /backups/percona_backups/xtrabackup/xxxx/tbxxx.ibd.qp.xbcrypt

We can see that the size of the file keeps increasing, so the backup is still running; it is not stuck. Note that the table is 5TB in size and is being processed by a single thread, which is the reason for the long duration of the backup. Enabling InnoDB compression and encryption at rest could help avoid having to compress and encrypt the backups themselves.
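
As a side note, rather than re-running ls by hand, a watch loop can do the polling (a sketch; the path is the same placeholder used above):

watch -n 30 'ls -l /backups/percona_backups/xtrabackup/xxxx/tbxxx.ibd.qp.xbcrypt'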

Conclusion

We can use multiple threads to speed up the backup process with Percona XtraBackup, but even with multiple threads, the backups can take several hours or days to complete if the data is large.

It may be easy to assume that the backup is stuck if the log is not printing any different message for a few hours, but we need to make sure that it is running by reviewing the log.

Reviewing the log can be a bit overwhelming, as it contains a lot of information. In this blog, we covered how to review what the backup process is doing with basic Linux text-parsing commands such as grep and tail.

Percona XtraBackup is a free, open source, complete online backup solution for all versions of Percona Server for MySQL and MySQL. It performs online non-blocking, tightly compressed, highly secure backups on transactional systems so that applications remain fully available during planned maintenance windows.

Try Percona XtraBackup today

Jun
06
2023
--

Full vs. Incremental vs. Differential Backups: Comparing Backup Types


This blog was originally published in January 2012 and was updated in June 2023.

In today’s data-driven world, the protection and reliability of our valuable information is vital. The potential impact of data loss or corruption on individuals and organizations cannot be understated. Therefore, establishing a strong backup strategy is essential to ensure data security and minimize potential disruptions.

When it comes to backups, there are multiple types available, each with its own advantages and limitations. In this blog post, we will explore the realm of backups and conduct a comparative analysis of three backup types: Full, incremental, and differential backups.

Looking for a MongoDB backup solution?

Get started today with Percona Backup for MongoDB.

What is a full backup?

A full backup is a comprehensive data backup strategy that involves creating a complete, exact copy of all data and files in a database. Depending on an organization’s specific requirements, legal or otherwise, full backups are typically performed periodically, such as daily, weekly, or monthly.

Full backups are most suitable for critical systems or sensitive data that cannot afford any loss or downtime, allowing for a complete recovery of data in the event of data corruption, hacking, system failures, or natural disasters. In addition, they are used for restoring or moving data to a new system.

Full backups also provide a baseline for subsequent incremental or differential backups, which we will discuss a little later.

In order to implement effective full backups of your database, you must follow some best practices. This includes establishing a regular backup schedule, regular testing to ensure data integrity, storing backups securely and offsite/in multiple locations, and documenting backup procedures.

Advantages of full backups

Full backups provide several benefits in terms of safeguarding data and facilitating recovery processes, as they encompass thorough data replication and enable complete system restoration during critical situations.

  • Comprehensive Data Protection: A big advantage of full backups is that they capture every bit of information, including the operating system, applications, databases, and user files, ensuring that all critical data is replicated and protected against loss or corruption. In the event of system failures or catastrophic events, the ability to restore the entire system from a full backup ensures minimal data loss.
  • Simplicity and Ease of Restoration: Since full backups capture a complete copy of all data and files, it simplifies the recovery and restoration of a database. This allows for faster and more convenient restoration, minimizing downtime and allowing organizations to resume normal operations quickly.
  • Independent Backup Copies: The self-contained nature of full backups — capturing a complete copy of all data and files — allows each backup set to be accessed and restored independently. This provides flexibility in data recovery and allows organizations to retrieve specific versions or data sets as needed effectively.
  • Fast Recovery: Restoring a system from a full backup is generally faster when compared to other backup types, allowing for minimal downtime and fast resumption of operations.

Disadvantages of full backups

While full backups offer numerous advantages, it is important to consider their potential disadvantages in order to make informed decisions regarding backup strategies.

  • Increased Storage Requirements: One of the disadvantages of full backups is the increased storage requirements due to the fact that they capture a complete copy of all data and files. As the amount of data grows, organizations may need to allocate additional storage resources to accommodate the larger backup sets, leading to higher costs.
  • Longer Backup Time: Full backups have longer backup times due to the comprehensive nature of capturing all data and files, which can be particularly time-consuming when dealing with large datasets.
  • Network Bandwidth Utilization: This is another potential disadvantage when it comes to full backups. Since they capture all of the data and files, they require significant network resources to transfer backup sets over the network, resulting in increased network congestion and potential performance issues.
  • Frequent Backup Cycles: An aspect to consider with full backups is the frequency of necessary backup cycles. Full backups can be resource-intensive and time-consuming, and organizations may need to allocate more time and storage resources to accommodate regular full backups.

Learn how to restore a single InnoDB table from a full backup after accidentally dropping it in this blog.

What is an incremental backup?

Incremental backups capture and store only the changes made to the data since the last backup (full or incremental). This results in smaller backup sets, reducing storage requirements, backup time, and network usage compared to full backups. These backups are particularly suitable for environments where data changes frequently.

To implement incremental backups effectively, it is important to establish a baseline with a full backup, which serves as the initial reference for subsequent incremental backups. To ensure data integrity and recoverability, DBAs should test incremental backups regularly, verifying the integrity of the backup sets and performing test restores.

When deciding where to store incremental backups, consider offsite storage to protect against data loss or damage. Regular scheduling of incremental backups is also essential to ensure that all data changes are captured and that a reliable backup chain is maintained.

Advantages of incremental backups

In this section, we will explore the benefits of utilizing incremental backups for data protection, which make them a valuable database backup solution.

  • Reduced Backup Time: This is a big advantage of incremental backups. Since only the changes made since the last backup are captured, the backup process is far faster as compared to full backups.
  • Lower Storage Requirements: Incremental backups help lower storage needs by only storing the changes made to data since the last backup rather than saving the entire dataset. This optimizes storage space and allows for more frequent backups.
  • Faster Recovery: Faster data recovery is a key advantage of incremental backups. Because this process allows for selective restores of only specific changes since the last backup, it reduces the time organizations need for data recovery.
  • Bandwidth Optimization: Because a smaller amount of data needs to be transferred during an incremental backup, it can reduce network resources and congestion. For issues that may arise during peak network usage, this can prove to be valuable in keeping applications performing efficiently.

Disadvantages of incremental backups

Although incremental backups do provide several advantages over other backup strategies, it is essential to be aware of their potential drawbacks to make well-informed decisions that work for your requirements.

  • Longer Restore Time: The restore process for incremental backups may take longer because it can involve sequentially applying multiple incremental backups to restore data that spans several backup versions.
  • Increased Complexity: Implementing incremental backups can introduce increased complexity compared to other backup strategies, as DBAs always need to manage and maintain the backup chain. This requires careful tracking and organization to ensure data integrity and availability at all times.
  • Dependency on Full Backup: The restoration of data from an incremental backup strategy still relies on the availability of an initial full backup, as well as all subsequent incremental backups.
  • Longer Backup Chain: Incremental backups create a longer backup chain as each backup captures the changes since the last backup, including all incremental backups. Because each of these backup sets must be managed and stored, the length of the backup chain grows over time, requiring adequate storage capacity that may increase costs.

Want to learn more about incremental backups in MySQL using page tracking? Check out this blog!

What is a differential backup?

A differential backup backs up and keeps all the changes made since the last full backup was taken. Unlike the incremental backups discussed above, which capture only the changes since the most recent backup of any type, differential backups accumulate changes and therefore result in larger backup sets over time.

Differential backups are ideal when organizations must find a balance between backup size and restoration time. This strategy is optimal for quick data recovery, as the process involves applying the latest full backup followed by the most recent differential backup. It also simplifies data restoration compared to incremental backups, which require multiple backup sets to be used.

In order to optimally use differential backups, it’s vital to establish a regular backup schedule, test and verify data integrity constantly, store all backups securely and maintain up-to-date documentation of backup procedures.

Advantages of differential backups

Differential backups provide significant benefits in terms of restore efficiency and storage space optimization, making them a valuable choice for organizations looking for a backup strategy.

  • Efficient Restore Process: Differential backups offer an efficient restore process by requiring only the latest full backup and the most recent differential backup to restore the data. This results in a quicker data restoration than incremental backups, which involve multiple backup files or sets.
  • Storage Space Optimization: Differential backups optimize storage space usage by capturing and storing only the changes made since the last full backup, eliminating the need for multiple backup files to restore data. With one differential backup containing all changes since the full backup, storage requirements are reduced while still ensuring efficient data recovery.
  • Simplified Data Recovery: The data recovery process for differential backups is simplified and streamlined as it requires only the latest full and differential backup, making it more efficient than incremental backups.
  • Faster Backup Times: Unlike incremental backups, which require scanning and comparing the entire backup chain, differential backups involve capturing the cumulative changes in a single backup operation, minimizing backup time and resource utilization.

Disadvantages of differential backups

In this section, we examine the potential disadvantages and drawbacks of utilizing differential backups for data protection.

  • Increasing Backup Size Over Time: One drawback of differential backups is they often increase in size over time. Since each differential backup captures all changes made since the last full backup, the backup files gradually become larger than incremental backups.
  • Longer Restore Times: Differential backups can lead to longer restore times than full backups because restoring data involves applying the latest full backup and then the most recent differential backup.
  • Higher Storage Requirements: Because each differential backup accumulates all changes since the last full backup, the backup files grow larger over time and can require more storage.
  • Potential Data Redundancy: Differential backups may result in data redundancy, as each subsequent backup includes all changes since the last full backup, leading to the possibility of the same data being backed up several times.

Choosing between incremental vs. differential backups

When choosing between incremental and differential backups, you need to understand what you are trying to accomplish with your backup strategy and consider factors such as the size of your data, how long backups will take, the efficiency of your restore process, the storage requirements of your data sets, and the potential downtime needed to restore that data.

One backup strategy may be better when trying to minimize data size, and another may be better when your focus has to be on the ease and quickness of data recovery. With that in mind, let’s take a look at a few factors that should help you decide what strategy would work best for your needs and requirements.

Performance Considerations:

When considering the performance abilities of these two backup strategies, incremental backups are generally faster — but the restoration process can be slower, as previously discussed. Alternatively, differential backups generally take longer, but data restores are often faster. Consider which is more important to match your specific needs.

Storage Requirements:

When evaluating the storage needs of incremental vs. differential backups, keep these factors in mind. Incremental backups generally have reduced storage requirements since only the changes since the last backup are captured. Differential backups, on the other hand, grow in size as each backup captures all changes since the last full backup. Storage costs can rise quickly, so be sure to consider your options when making your choice.

Recovery Time:

Incremental backups tend to need more time to restore than differential backups, as they require sequentially applying multiple backup files to restore the data. Differential backups, in contrast, make for faster restores as they only require the latest full backup and differential backup for the restore.

Choosing the right backup type for your needs

Having the proper backup strategy in place for your organization is vital to minimize the potential business disruption of data loss, security breaches, or corruption. And with multiple types of backups available — full, incremental, and differential — it’s important to know how they work, their advantages and disadvantages, and their use cases.

And depending on how much data you need to back up, the storage requirements for that data, how long the data takes to back up, as well as the restore efficiency of each method, it’s important to choose the appropriate strategy for your specific needs.

Percona XtraBackup is a free, open source, and complete online backup solution for all versions of Percona Server for MySQL and MySQL. It performs online non-blocking, tightly compressed, highly secure backups on transactional systems so that applications remain fully available during planned maintenance windows. Percona XtraBackup is capable of taking incremental backups, and when you specify the base backup as the previous incremental or full backup, you can easily make differential backups.
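
To make the distinction concrete, here is a minimal sketch of the three backup types with xtrabackup (the target directories are hypothetical; --incremental-basedir is the option that sets the base backup):

# full backup: the baseline
xtrabackup --backup --target-dir=/backups/full
# incremental: the base is the most recent backup (full or incremental)
xtrabackup --backup --target-dir=/backups/inc1 --incremental-basedir=/backups/full
xtrabackup --backup --target-dir=/backups/inc2 --incremental-basedir=/backups/inc1
# differential: the base is always the last full backup
xtrabackup --backup --target-dir=/backups/diff1 --incremental-basedir=/backups/full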

Discover why Percona XtraBackup has been downloaded 3,500,000 times and counting.

 

Contact us to learn more!
