Dec
06
2022
--

Importance of Delay Replica in Data Recovery: Recover Deleted Records


What is a Delay Replica and how does it help?

MySQL Replication is useful, easy to set up, and used for many different purposes. For example:

  • split reads and writes
  • run data mining or reporting processes on them
  • disaster recovery

To learn more, check out How Does MySQL Replication Work?

It’s important to mention that a replication server is not a backup by itself. A mistake on the source, for example a DROP DATABASE or an UPDATE without a WHERE clause, will be replicated almost instantly to all replica servers. Simply having a replica is not going to be helpful here. How can we avoid that kind of mistake? By having a replica server that intentionally lags behind.

We can never fully avoid human error in database infrastructure operations, but rolling back to the last known good state from a delayed replica is the recommended approach in most database corruption scenarios.

Delayed replication can be used for several purposes:

  • To test how the system behaves when there is a lag. For example, in an application, a lag might be caused by a heavy load on the replica. However, it can be difficult to generate this load level. Delayed replication can simulate the lag without having to simulate the load. It can also be used to debug conditions related to a lagging replica.
  • To inspect what the database looked like in the past, without having to reload a backup. For example, by configuring a replica with a delay of one week, if you then need to see what the database looked like before the last few days’ worth of development, the delayed replica can be inspected.
  • In our case, since we have a six-hour replication delay, we can recover the unaffected state of the table from the delayed replica if a wrong DML statement is executed on the source (a configuration sketch follows this list).
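
For reference, below is a minimal sketch of how a delayed replica can be configured. The 21600-second value corresponds to the six-hour delay used in this post; on MySQL 8.0.23 and later, the equivalent statement is CHANGE REPLICATION SOURCE TO SOURCE_DELAY = 21600.

-- On the delayed replica (classic CHANGE MASTER syntax)
STOP SLAVE SQL_THREAD;
CHANGE MASTER TO MASTER_DELAY = 21600;   -- 6 hours, in seconds
START SLAVE SQL_THREAD;

-- Confirm the configured delay (SQL_Delay field)
SHOW SLAVE STATUS\G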

Testing how the delay replica can help us recover the deleted records

Let’s understand the situation first: Someone accidentally deleted data on a Percona XtraDB Cluster 8 (PXC) server. As per our architecture, we always configure a delayed replica. We have stopped replication on the delayed replica, and now we will restore the deleted records.

From the initial investigation, the application operations team informed us that the query below was executed.

delete from Schema_g2.per_sch order by id desc;

Almost 20,000 records were deleted. Let us do our prerequisites and an initial investigation based on the requirements we have.

In this article, PXC-8 is our source and pxc-backup is our delayed backup replica. Click through to learn about our product Percona XtraDB Cluster and our backup solution Percona XtraBackup.


  1. All records for Schema_g2.per_sch were deleted from PXC-8 by mistake.
  2. We have stopped the delayed replica pxc-backup to retrieve these records. All of the deleted records are still present there. Below is the record count at the point replication was stopped.
Pxc-backup > select count(*) from Schema_g2.per_sch;
+----------+
| count(*) |
+----------+
|    21762 |
+----------+

At this point, we have already stopped the backup replica. 
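
For reference, a minimal sketch of stopping the delayed replica as soon as the mistake is noticed (stopping only the SQL thread also works and lets the relay logs keep arriving):

Pxc-backup > STOP SLAVE;
Pxc-backup > SHOW SLAVE STATUS\G   -- confirm Slave_SQL_Running: No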

  3. After the deletion, below is the count of new records inserted on the PXC-8 source for this table.
PXC-8 (none)> select count(*) from Schema_g2.per_sch;
+----------+
| count(*) |
+----------+
|      215 |
+----------+

  4. The id column is AUTO_INCREMENT, so the newly inserted rows continue from the next values.
id int(11) NOT NULL AUTO_INCREMENT,

PXC-8 (none)> select * from Schema_g2.per_sch;
+--------+---------------+-----------+-------+
| id     | permission_id | school_id | grant |
+--------+---------------+-----------+-------+
| 178852 |           446 |     48887 |     0 |
| 178853 |           448 |     48887 |     0 |

...

  5. Records on the backup server:
Pxc-backup > select * from Schema_g2.per_sch limit 10;
+-----+---------------+-----------+-------+
| id  | permission_id | school_id | grant |
+-----+---------------+-----------+-------+
|   0 |             0 |         0 |     0 |
| 105 |           426 |         1 |     0 |
| 429 |            89 |        16 |     0 |
| 431 |            93 |        16 |     0 |

...

| 178629 |           194 |     35758 |     0 |
| 178630 |           195 |     35758 |     0 |
| 178631 |          8239 |     35758 |     0 |
+--------+---------------+-----------+-------+

  6. A few records between 178631 and 178852 are missing on the backup node because the replica was stopped before they replicated.
  7. Exact position just before the DELETE was executed on the source (the application team provided this; we verified that the event at that position is the UPDATE query that ran just before the DELETE):
START SLAVE UNTIL MASTER_LOG_FILE = 'PXC-8-bin.001611', MASTER_LOG_POS = 613996753;

From binlog we see below:

#210922 11:44:05 server id 8  end_log_pos 613996753 CRC32 0xee39f244    Query   thread_id=36995659      exec_time=0     error_code=0
SET TIMESTAMP=1632300245/*!*/;
BEGIN
/*!*/;
# at 613996753
#210922 11:44:05 server id 8  end_log_pos 613997049 CRC32 0x92aea136    Table_map: `Schema_g2`.`usr` mapped to number 109
# at 613997049
#210922 11:44:05 server id 8  end_log_pos 613997697 CRC32 0x1132b4ad    Update_rows: table id 109 flags: STMT_END_F
### UPDATE `Schema_g2`.`usr`

.......

#210922 11:44:04 server id 8  end_log_pos 613997872 CRC32 0xf41297bc    Query   thread_id=37823889      exec_time=0     error_code=0
SET TIMESTAMP=1632300244/*!*/;
BEGIN
/*!*/;
# at 613997872
#210922 11:44:04 server id 8  end_log_pos 613997943 CRC32 0xe341c435    Table_map: `Schema_g2`.`per_sch` mapped to number 664
# at 613997943
#210922 11:44:04 server id 8  end_log_pos 614006154 CRC32 0xd71595b1    Delete_rows: table id 664
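
For reference, row events like the ones above can be decoded from the binary log on the source with mysqlbinlog; a sketch using the file name from this example:

mysqlbinlog --base64-output=decode-rows -vv PXC-8-bin.001611 \
  | grep -i -B 8 "Delete_rows"     # locate the Delete_rows event and the position just before it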

 

Steps to restore

  1. Start replication on the backup server until just before the DELETE statement.
START SLAVE UNTIL MASTER_LOG_FILE = 'PXC-8-bin.001611', MASTER_LOG_POS = 613996753;

Verify the binlog position, confirm the replica has caught up to the UNTIL position above, and check the new record count on the backup server:

select count(*) from Schema_g2.per_sch;

Verify that the last id is < 178852 and that replication has stopped again.

Use the --where option (in the mysqldump in step 2) if we notice any rows on the backup server that duplicate the newly inserted rows on PXC-8:

--where="id < 178852"
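
A sketch of how the stop position and catch-up can be confirmed, using fields reported by SHOW SLAVE STATUS:

Pxc-backup > SHOW SLAVE STATUS\G
-- Expect Slave_SQL_Running: No, Relay_Master_Log_File: PXC-8-bin.001611,
-- and Exec_Master_Log_Pos: 613996753 (the UNTIL position above).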

  2. Take a backup from the backup server with the options below so that restoring it will not drop or recreate the table (which would wipe the newly added rows on the source).
mysqldump -h backup-server-ip --single-transaction --skip-add-drop-table --no-create-info Schema_g2 per_sch > per_sch_backup.sql

Verify that no DROP or CREATE statements are present in the backup file.
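
A quick way to check this is to count DROP/CREATE statements in the dump file produced above; the count should be zero:

grep -ciE "DROP TABLE|CREATE TABLE" per_sch_backup.sql   # expect 0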

  3. Restore the backup on the source PXC-8:
mysql -h Source-server-ip Schema_g2 < per_sch_backup.sql

Verify that it completes successfully and that the records are added back on the source:

mysql -h Source-server-ip -e "select count(*) from Schema_g2.per_sch"

Verify that records with id < 178852, as well as the newly inserted ones above it, now exist on the source.
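
A minimal verification sketch, with the boundary id taken from the example above:

PXC-8 (none)> select min(id), max(id), count(*) from Schema_g2.per_sch;
PXC-8 (none)> select count(*) from Schema_g2.per_sch where id < 178852;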

  4. Start replication on the backup server normally.
stop slave;

start slave;

Once the records are loaded on the source, they are replicated to the delayed replica and the other replicas, and the data is in sync again.

Conclusion

It is always recommended to have a delayed replica in your architecture so you can avoid, and quickly recover from, such data losses.

Mar
13
2019
--

Live MySQL Slave Rebuild with Percona Toolkit

MySQL slave data out of sync

Recently, we had an edge case where a MySQL slave went out-of-sync but couldn’t be rebuilt from scratch. The slave was acting as a master server to some applications, and data was being written to it. It was a design error, and this is not recommended, but it happened. So how do you synchronize the data in this circumstance? This blog post describes the steps taken to recover from this situation. The tools used to recover the slave were pt-slave-restart, pt-table-checksum, pt-table-sync, and mysqldiff.

Scenario

To illustrate this situation, a master x slave configuration was built, with sysbench running on the master server to simulate a general application workload. The environment was set up with Percona Server 5.7.24-26 and sysbench 1.0.16.

Below are the sysbench commands to prepare and simulate the workload:

# Create Data
sysbench --db-driver=mysql --mysql-user=root --mysql-password=msandbox \
  --mysql-socket=/tmp/mysql_sandbox45008.sock --mysql-db=test --range_size=100 \
  --table_size=5000 --tables=100 --threads=1 --events=0 --time=60 \
  --rand-type=uniform /usr/share/sysbench/oltp_read_only.lua prepare
# Simulate Workload
sysbench --db-driver=mysql --mysql-user=root --mysql-password=msandbox \
  --mysql-socket=/tmp/mysql_sandbox45008.sock --mysql-db=test --range_size=100 \
  --table_size=5000 --tables=100 --threads=10 --events=0 --time=6000 \
  --rand-type=uniform /usr/share/sysbench/oltp_read_write.lua --report-interval=1 run

With the environment set, the slave server was stopped, and some operations to desynchronize the slave were performed to reproduce the problem.

Fixing the issue

With the slave desynchronized, replication was restarted. Immediately, the error below appeared:

Last_IO_Errno: 1236
Last_IO_Error: Got fatal error 1236 from master when reading data from binary log: 'Could not find first log file name in binary log index file'

To recover the slave from this error, we had to point the slave to an existing binary log with a valid binary log position. To get a valid binary log position, the command shown below had to be executed on the master:

master [localhost] {msandbox} ((none)) > show master status\G
*************************** 1. row ***************************
File: mysql-bin.000007
Position: 218443612
Binlog_Do_DB:
Binlog_Ignore_DB:
Executed_Gtid_Set:
1 row in set (0.01 sec)

Then, a CHANGE MASTER command was run on the slave:

slave1 [localhost] {msandbox} (test) > change master to master_log_file='mysql-bin.000007', MASTER_LOG_POS=218443612;
Query OK, 0 rows affected (0.00 sec)
slave1 [localhost] {msandbox} (test) > start slave;
Query OK, 0 rows affected (0.00 sec)

Now the slave had a valid binary log file to read, but since it was inconsistent, it hit another error:

Last_SQL_Errno: 1032
               Last_SQL_Error: Could not execute Delete_rows event on table test.sbtest8; Can't find record in 'sbtest8', Error_code: 1032; handler error HA_ERR_KEY_NOT_FOUND; the event's master log mysql-bin.000005, end_log_pos 326822861

Working past the errors

Before fixing the inconsistencies, it was necessary to keep replication running and skip the errors. For this, the pt-slave-restart tool was used. The tool needs to be run on the slave server:

pt-slave-restart --user root --socket=/tmp/mysql_sandbox45008.sock --ask-pass

The tool skips errors and starts the replication threads. Below is an example of the output of the pt-slave-restart:

$ pt-slave-restart --user root --socket=/tmp/mysql_sandbox45009.sock --ask-pass
Enter password:
2019-02-22T14:18:01 S=/tmp/mysql_sandbox45009.sock,p=...,u=root mysql-relay.000007        1996 1146
2019-02-22T14:18:02 S=/tmp/mysql_sandbox45009.sock,p=...,u=root mysql-relay.000007        8698 1146
2019-02-22T14:18:02 S=/tmp/mysql_sandbox45009.sock,p=...,u=root mysql-relay.000007       38861 1146

Finding the inconsistencies

With the tool running in one terminal, the phase of checking for inconsistencies began. First things first, an object definition check was performed using the mysqldiff utility, which is part of MySQL Utilities. To execute the tool:

$ mysqldiff --server1=root:msandbox@localhost:48008 --server2=root:msandbox@localhost:48009 test:test --difftype=sql --changes-for=server2

And below are the differences found between the master and the slave:

1-) A table that doesn’t exist on the slave

# WARNING: Objects in server1.test but not in server2.test:
# TABLE: joinit

2-) A wrong table structure

# Comparing `test`.`sbtest98` to `test`.`sbtest98` [FAIL]
# Transformation for --changes-for=server2:
#
ALTER TABLE `test`.`sbtest98`
DROP INDEX k_98,
DROP COLUMN x,
ADD INDEX k_98 (k);

After applying the recommendations on the slave (creating the missing table and modifying the table structure), the object definitions were equal. The next step was to check data consistency. For this, pt-table-checksum was used to identify which tables were out-of-sync. This command was run on the master server:

$ pt-table-checksum -uroot -pmsandbox --socket=/tmp/mysql_sandbox48008.sock --replicate=percona.checksums --create-replicate-table --empty-replicate-table --no-check-binlog-format --recursion-method=hosts

And an output example:

01 master]$ pt-table-checksum --recursion-method dsn=D=percona,t=dsns --no-check-binlog-format --nocheck-replication-filter --host 127.0.0.1 --user root --port 48008 --password=msandbox
Checking if all tables can be checksummed ...
Starting checksum ...
  at /usr/bin/pt-table-checksum line 332.
Replica lag is 66 seconds on bm-support01.bm.int.percona.com.  Waiting.
Replica lag is 46 seconds on bm-support01.bm.int.percona.com.  Waiting.
Replica lag is 33 seconds on bm-support01.bm.int.percona.com.  Waiting.
           TS ERRORS  DIFFS     ROWS  DIFF_ROWS  CHUNKS SKIPPED    TIME TABLE
02-26T16:27:59      0      0     5000          0       1       0   0.037 test.sbtest1
02-26T16:27:59      0      0     5000          0       1       0   0.039 test.sbtest10
02-26T16:27:59      0      1     5000          0       1       0   0.033 test.sbtest100
02-26T16:27:59      0      1     5000          0       1       0   0.034 test.sbtest11
02-26T16:27:59      0      1     5000          0       1       0   0.040 test.sbtest12
02-26T16:27:59      0      1     5000          0       1       0   0.034 test.sbtest13

Fixing the data inconsistencies

Analyzing the DIFFS column, it is possible to identify which tables were compromised. With this information, the pt-table-sync tool was used to fix these inconsistencies. The tool synchronizes MySQL table data efficiently by executing changes on the master that are effectively no-ops there but, once replicated, correct the data on the slave. The tool needs to be run on the slave server. Below is an example of the tool running:

$ pt-table-sync --execute --sync-to-master h=localhost,u=root,p=msandbox,D=test,t=sbtest100,S=/tmp/mysql_sandbox48009.sock

It is possible to perform a dry-run of the tool before executing the changes to check what changes the tool will apply:

$ pt-table-sync --print --sync-to-master h=localhost,u=root,p=msandbox,D=test,t=sbtest100,S=/tmp/mysql_sandbox48009.sock
REPLACE INTO `test`.`sbtest100`(`id`, `k`, `c`, `pad`) VALUES ('1', '1654', '97484653464-60074971666-42998564849-40530823048-27591234964-93988623123-02188386693-94155746040-59705759910-14095637891', '15000678573-85832916990-95201670192-53956490549-57402857633') /*percona-toolkit src_db:test src_tbl:sbtest100 src_dsn:D=test,P=48008,S=/tmp/mysql_sandbox48009.sock,h=127.0.0.1,p=...,t=sbtest100,u=root dst_db:test dst_tbl:sbtest100 dst_dsn:D=test,S=/tmp/mysql_sandbox48009.sock,h=localhost,p=...,t=sbtest100,u=root lock:1 transaction:1 changing_src:1 replicate:0 bidirectional:0 pid:17806 user:vinicius.grippa host:bm-support01.bm.int.percona.com*/;
REPLACE INTO `test`.`sbtest100`(`id`, `k`, `c`, `pad`) VALUES ('2', '3007', '31679133794-00154186785-50053859647-19493043469-34585653717-64321870163-33743380797-12939513287-31354198555-82828841987', '30122503210-11153873086-87146161761-60299188705-59630949292') /*percona-toolkit src_db:test src_tbl:sbtest100 src_dsn:D=test,P=48008,S=/tmp/mysql_sandbox48009.sock,h=127.0.0.1,p=...,t=sbtest100,u=root dst_db:test dst_tbl:sbtest100 dst_dsn:D=test,S=/tmp/mysql_sandbox48009.sock,h=localhost,p=...,t=sbtest100,u=root lock:1 transaction:1 changing_src:1 replicate:0 bidirectional:0 pid:17806 user:vinicius.grippa host:bm-support01.bm.int.percona.com*/;

After executing pt-table-sync, we recommend running pt-table-checksum again and checking that the DIFFS column shows 0 for all tables.
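
Alternatively, a query along these lines can be run on the slave against the checksum table to list any tables that still differ (a sketch, assuming the default percona.checksums replicate table):

SELECT db, tbl, SUM(this_cnt) AS total_rows, COUNT(*) AS chunks
FROM percona.checksums
WHERE (master_cnt <> this_cnt
    OR master_crc <> this_crc
    OR ISNULL(master_crc) <> ISNULL(this_crc))
GROUP BY db, tbl;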

Conclusion

This blog post was intended to cover the issues that can happen on a slave when it goes out-of-sync, such as DDL changes, binary log purges, and stray DML operations. This process involves many steps, and it can take a long time to finish, especially on large databases. Note that it might take longer than the backup/restore process. However, in situations like the one described above, it might be the only solution to recover a slave.


Image based on Photo by Randy Fath on Unsplash

 

Oct
14
2014
--

Recover orphaned InnoDB partition tablespaces in MySQL

A few months back, Michael wrote about reconnecting orphaned *.ibd files using MySQL 5.6. I will show you the same procedure, this time for partitioned tables. An InnoDB partition is also a self-contained tablespace, so you can use the same method described in the previous post.

To begin with, I have an example table with a few orphaned partitions and we will reconnect each partition one by one to the original table.

mysql [localhost] {msandbox} (recovery) > SHOW CREATE TABLE t1\G
*************************** 1. row ***************************
Table: t1
Create Table: CREATE TABLE `t1` (
[...]
KEY `h_date` (`h_date`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1
/*!50100 PARTITION BY RANGE (year(h_date))
(PARTITION p0 VALUES LESS THAN (2006) ENGINE = InnoDB,
PARTITION p1 VALUES LESS THAN (2010) ENGINE = InnoDB,
PARTITION px VALUES LESS THAN MAXVALUE ENGINE = InnoDB) */
1 row in set (0.00 sec)
mysql [localhost] {msandbox} (recovery) > SELECT COUNT(*) FROM t1;
+----------+
| COUNT(*) |
+----------+
|        0 |
+----------+
1 row in set (0.00 sec)
-rw-rw----.  1 revin revin 176M Oct  8 08:41 t1#P#p0.ibd
-rw-rw----.  1 revin revin 612M Oct  8 08:41 t1#P#p1.ibd
-rw-rw----.  1 revin revin 932M Oct  8 08:42 t1#P#px.ibd

The first step is to create a dummy table and remove partitioning so that we can reattach individual partitions to this table.

mysql [localhost] {msandbox} (recovery) > CREATE TABLE t1_t LIKE t1;
Query OK, 0 rows affected (0.02 sec)
mysql [localhost] {msandbox} (recovery) > ALTER TABLE t1_t REMOVE PARTITIONING;
Query OK, 0 rows affected (0.01 sec)
Records: 0  Duplicates: 0  Warnings: 0
mysql [localhost] {msandbox} (recovery) > ALTER TABLE t1_t DISCARD TABLESPACE;
Query OK, 0 rows affected (0.05 sec)


Once the tablespace for our dummy table has been discarded, we copy one of the partitions to take the place of the dummy table’s tablespace. For example, we copy t1#P#p0.ibd as t1_t.ibd into the MySQL data directory, of course taking into account the permissions afterward (a shell sketch of this copy follows). The next step is to import the tablespace to the dummy table.
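
The copy itself can look like the sketch below; the source path is a hypothetical example, and the destination depends on your data directory layout:

# paths are hypothetical examples
cp '/path/to/orphaned/t1#P#p0.ibd' '/var/lib/mysql/recovery/t1_t.ibd'
chown mysql:mysql '/var/lib/mysql/recovery/t1_t.ibd'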

mysql [localhost] {msandbox} (recovery) > ALTER TABLE t1_t IMPORT TABLESPACE;
Query OK, 0 rows affected, 1 warning (7.34 sec)


And for the secret sauce, we will exchange our dummy table’s freshly imported tablespace with the target partition in our original table.

mysql [localhost] {msandbox} (recovery) > ALTER TABLE t1 EXCHANGE PARTITION px WITH TABLE t1_t;
Query OK, 0 rows affected (6.42 sec)
mysql [localhost] {msandbox} (recovery) > SELECT COUNT(*) FROM t1;
+----------+
| COUNT(*) |
+----------+
|  8523686 |
+----------+
1 row in set (2.50 sec)


You can do the same with subpartitions, too! Here’s my slightly different table with subpartitions, where I reconnect one of the orphaned tablespaces, t2#P#px#SP#pxsp1.ibd.

mysql [localhost] {msandbox} (recovery) > SHOW CREATE TABLE t2\G
*************************** 1. row ***************************
       Table: t2
Create Table: CREATE TABLE `t2` (
[...]
  KEY `h_date` (`h_date`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1
/*!50100 PARTITION BY RANGE (year(h_date))
SUBPARTITION BY HASH (u_id)
SUBPARTITIONS 2
(PARTITION p0 VALUES LESS THAN (2006) ENGINE = InnoDB,
 PARTITION p1 VALUES LESS THAN (2010) ENGINE = InnoDB,
 PARTITION px VALUES LESS THAN MAXVALUE ENGINE = InnoDB) */
1 row in set (0.00 sec)
mysql [localhost] {msandbox} (recovery) > SELECT COUNT(*) FROM t2;
+----------+
| COUNT(*) |
+----------+
|        0 |
+----------+
1 row in set (0.94 sec)
-rw-rw----.  1 revin revin  92M Oct  8 08:44 t2#P#p0#SP#p0sp0.ibd
-rw-rw----.  1 revin revin  92M Oct  8 08:44 t2#P#p0#SP#p0sp1.ibd
-rw-rw----.  1 revin revin 304M Oct  8 08:44 t2#P#p1#SP#p1sp0.ibd
-rw-rw----.  1 revin revin 316M Oct  8 08:44 t2#P#p1#SP#p1sp1.ibd
-rw-rw----.  1 revin revin 480M Oct  8 08:45 t2#P#px#SP#pxsp0.ibd
-rw-rw----.  1 revin revin 460M Oct  8 08:45 t2#P#px#SP#pxsp1.ibd
mysql [localhost] {msandbox} (recovery) > CREATE TABLE t2_t LIKE t2;
Query OK, 0 rows affected (0.02 sec)
mysql [localhost] {msandbox} (recovery) > ALTER TABLE t2_t REMOVE PARTITIONING;
Query OK, 0 rows affected (0.02 sec)
Records: 0  Duplicates: 0  Warnings: 0
mysql [localhost] {msandbox} (recovery) > ALTER TABLE t2_t DISCARD TABLESPACE;
Query OK, 0 rows affected (0.04 sec)


Once again, after copying t2#P#px#SP#pxsp1.ibd to replace t2_t.ibd in the MySQL data directory and setting the correct permissions, we can import it into our dummy table and exchange it into our original table. In this case, on a table with subpartitions, a tablespace is a combined unit of partition and subpartition; hence, in the file name above, P#px is our main partition and SP#pxsp1 the subpartition name. For the exchange command below, we will only need the latter portion.

mysql [localhost] {msandbox} (recovery) > ALTER TABLE t2_t IMPORT TABLESPACE;
Query OK, 0 rows affected, 1 warning (2.49 sec)
mysql [localhost] {msandbox} (recovery) > ALTER TABLE t2 EXCHANGE PARTITION pxsp1 WITH TABLE t2_t;
Query OK, 0 rows affected (3.11 sec)
mysql [localhost] {msandbox} (recovery) > SELECT COUNT(*) FROM t2;
+----------+
| COUNT(*) |
+----------+
|  4546036 |
+----------+
1 row in set (0.94 sec)


But wait, there’s more: do you know that in MySQL 5.7 you can take the full shortcut and directly import all partitions back to the original table? Sweet! A quick example below on MySQL 5.7.5: I created the same t2 table above, ran DISCARD TABLESPACE, copied the partition tablespaces from my test 5.6 instance, and ran IMPORT TABLESPACE. And done! :-)

mysql [localhost] {msandbox} (test) > SELECT COUNT(*) FROM t2;
+----------+
| COUNT(*) |
+----------+
|        0 |
+----------+
1 row in set (0.00 sec)
mysql [localhost] {msandbox} (test) > ALTER TABLE t2 DISCARD TABLESPACE;
Query OK, 0 rows affected (0.02 sec)
mysql [localhost] {msandbox} (test) > ! cp -v /sbx/msb/msb_5_6_210/data/test/t2#P#* /sbx/msb/msb_5_7_5/data/test/
`/sbx/msb/msb_5_6_210/data/test/t2#P#p0#SP#p0sp0.ibd' -> `/sbx/msb/msb_5_7_5/data/test/t2#P#p0#SP#p0sp0.ibd'
`/sbx/msb/msb_5_6_210/data/test/t2#P#p0#SP#p0sp1.ibd' -> `/sbx/msb/msb_5_7_5/data/test/t2#P#p0#SP#p0sp1.ibd'
`/sbx/msb/msb_5_6_210/data/test/t2#P#p1#SP#p1sp0.ibd' -> `/sbx/msb/msb_5_7_5/data/test/t2#P#p1#SP#p1sp0.ibd'
`/sbx/msb/msb_5_6_210/data/test/t2#P#p1#SP#p1sp1.ibd' -> `/sbx/msb/msb_5_7_5/data/test/t2#P#p1#SP#p1sp1.ibd'
`/sbx/msb/msb_5_6_210/data/test/t2#P#px#SP#pxsp0.ibd' -> `/sbx/msb/msb_5_7_5/data/test/t2#P#px#SP#pxsp0.ibd'
`/sbx/msb/msb_5_6_210/data/test/t2#P#px#SP#pxsp1.ibd' -> `/sbx/msb/msb_5_7_5/data/test/t2#P#px#SP#pxsp1.ibd'
mysql [localhost] {msandbox} (test) > ALTER TABLE t2 IMPORT TABLESPACE;
Query OK, 0 rows affected, 6 warnings (11.36 sec)
mysql [localhost] {msandbox} (test) > SHOW WARNINGS\G
*************************** 1. row ***************************
  Level: Warning
   Code: 1810
Message: InnoDB: IO Read error: (2, No such file or directory) Error opening './test/t2#P#p0#SP#p0sp0.cfg', will attempt to import without schema verification
[...]


 


Jun
17
2014
--

MySQL Backup Service from Percona

The Percona Backup Service managed service launched today. It ensures that properly configured backups run successfully as scheduled on customer-provided backup storage – whether on premise, in the cloud, or a hybrid. Backup issues or production data recovery are handled efficiently by Percona Managed Services technicians with deep knowledge of MySQL.

As we state in our white papers, “MySQL backup and recovery are a foundational piece of any infrastructure. A well-tested backup and recovery system can be the difference between a minor outage and the end of a business.” While MySQL backups are “foundational,” they still require constant management, and the most important use of a backup, namely recovery, is often complex.

The Percona MySQL Backup Service is available for companies using any variant of single-node or clustered MySQL — on premise or in the cloud. This includes MySQL, Percona Server, MariaDB, and Percona XtraDB Cluster.

Reliable MySQL Backup Services

Developed by MySQL experts with decades of operational experience, the Percona MySQL Backup Service is based on widely adopted open source database backup software solutions such as Percona XtraBackup. Monitoring software is installed to ensure MySQL backups run as expected and alert the Percona Managed Services team to any issues. Percona experts on duty 24x7x365 resolve problems before the availability of backups is impacted and can implement a full, partial, or point in time recovery to minimize downtime and data loss on production servers.

MySQL backup data sets are secure and compliant with regulatory requirements. 256 bit encryption meets or exceeds common security and compliance requirements. Internal procedures ensure that backups are managed with extreme care and are only stored safely on secure servers. Backups are available for on-demand recovery to comply with HIPAA, SOX, or PCI requirements.

The status of current and past backups is easily accessible through the Percona MySQL Backup Service customer portal. The portal includes instructions on how to use the backups to restore data for routine purposes such as restoration of development databases.

Efficient MySQL Data Recovery

Percona Backup Service technical experts respond within 30 minutes to ensure that production data is recovered quickly and as completely as possible. We can recover:

  • Data back to the last full capture image
  • Specific tables, which saves significant time when only a single table or set of tables needs to be recovered
  • Full data to a specific point in time, which ensures an application can be recovered to the same state as when the data was lost

Unlike database-as-a-service solutions, the Percona Backup Service can recover specific tables and full data to a specific point in time because we create additional backup points.

Cost Effective and Highly Flexible

Our MySQL backup-as-a-service solution costs less than managing backups in-house. Our 24x7x365 team serves multiple customers so the cost of the Percona MySQL Backup Service is lower than having someone on staff to manage backups. We also use a proprietary backup approach which significantly reduces the size of backup data sets, requiring much less storage space than conventional methods.

The Percona MySQL Backup Service accommodates any combination of MySQL server and backup server locations. We can work with MySQL server and backup servers on premise or in the cloud and can even configure the process to store one backup set on premise and another in the cloud for additional protection. This configuration flexibility means we can meet a wide range of data availability and budget needs.

MySQL Backup Service Features

The Percona Backup Service is designed so backups will run smoothly and reliably. The following features are included:

  • Customer portal for anytime access to current and past backup status as well as instructions on how to restore the MySQL backups for non-production purposes
  • Efficient data recovery for production issues with full, partial, and point in time recovery options
  • A high level of security with 256 bit encryption and backups only stored on the customer’s secure servers
  • Regulatory compliance with backups available for on-demand recovery to comply with HIPAA, SOX, and PCI requirements
  • Lower cost than managing MySQL backups in-house with 24x7x365 monitoring and issue resolution combined with a proprietary backup approach which significantly reduces the size of the backup data set versus conventional methods
  • Ability to accommodate any combination of MySQL server and backup server locations – on premise, in the cloud, or a hybrid
  • Flexible configuration options which enable the service to meet a wide range of data availability and budget requirements

Learn More

Learn more about our MySQL Backup Service solution as well as register for our upcoming “MySQL Backup and Recovery Best Practices” webinar on June 18, 2014. Contact us now to learn more about the Percona Backup Service and how we can ensure your backups are ready when you need them!

