Why You Should Consider an External Replica for Your Cloud Environment (2024)

These days, there is a strong focus on ensuring the availability and recoverability of production data. This can be challenging when using DBaaS solutions in the public cloud, for example AWS Aurora, and relying solely on a single cloud provider for database services can pose significant risks. […]
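As a rough illustration of what such an external replica involves: Aurora MySQL can feed a self-managed replica over ordinary binlog replication. The sketch below is not from the original post; the endpoint, user, password, and binlog coordinates are placeholder values, and binary logging must first be enabled in the Aurora cluster parameter group (e.g. binlog_format=ROW).

-- On the Aurora writer: create a replication user (placeholder names)
CREATE USER 'repl'@'%' IDENTIFIED BY 'repl_password';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';

-- On the external MySQL server, after loading a consistent dump of the data,
-- point it at the cluster endpoint and start replicating:
CHANGE MASTER TO
  MASTER_HOST='mycluster.cluster-abc123.us-east-1.rds.amazonaws.com',
  MASTER_USER='repl',
  MASTER_PASSWORD='repl_password',
  MASTER_LOG_FILE='mysql-bin-changelog.000002',  -- coordinates taken from the dump
  MASTER_LOG_POS=154;
START SLAVE;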
HYCU raises $87.5M to take on Rubrik and the rest in multi-cloud data backup and recovery (2021)
As companies become ever more reliant on digital infrastructure for everyday work, they become ever bigger targets for malicious hackers — both trends accelerated by the pandemic — and that is leading to an ever-greater need for IT and security departments to find ways of protecting data should it become compromised. Today, one of the companies that has emerged as a strong player in data backup and recovery is announcing its first major round of funding.
HYCU, which provides multi-cloud backup and recovery services for mid-market and enterprise customers, has raised $87.5 million, a Series A that the Boston-based startup will use to invest in building out its platform further, to bring its services into more markets, and to hire 100 more people.
HYCU’s premise and ambition, CEO and founder Simon Taylor said in an interview, is to provide backup and storage services that are as simple to use “as backing up in iCloud for consumers.”
“If you look at primary storage, it’s become very SaaS-ified, with no professional services required,” he continued. “But backup has stayed very legacy. It’s still mostly focused on one specific environment and can’t perform well when multi-cloud is being used.”
And HYCU’s name fits with that ethos. It is pronounced “haiku”, which Taylor told me refers not just to that Japanese poetic form that looks simple but hides a lot of meaning, but also “hybrid cloud uptime.”
The company is probably known best for its integration with Nutanix, but has over time expanded to serve enterprises building and operating IT and apps over VMware, Google Cloud, Azure and AWS. The company also has built a tool to help migrate data for enterprises, HYCU Protégé, which will also be expanded.
The funding is being led by Bain Capital Ventures, with participation also from Acrew Capital (which was also in the news last week as an investor in the $118 million round for Pie Insurance). The valuation is not being disclosed.
This is the first major outside funding that the company has announced since being founded in 2018, but in that time it has grown into a sizeable competitor against others like Rubrik, Veeam, Veritas and CommVault. The Rubrik comparison is interesting, given that it is also backed by Bain (which led a $261 million round in Rubrik in 2019). HYCU now has more than 2,000 customers in 75 countries. Taylor says that not taking funding while growing into what it has become meant that it was “listening and closer to the needs of our customers,” rather than spending more time paying attention to what investors say.
Now that it’s reached a certain scale, though, things appear to be shifting and there will probably be more money down the line. “This is just round one for us,” Taylor said.
He added that this funding came in the wake of a lot of inbound interest, not just from the usual range of VCs and private equity firms that are getting more involved in VC, but also, it turns out, from SPACs, which, as they grow in number, seem to be exploring what kinds and stages of companies they tap with their quick finance-and-go-public model.
And although HYCU hadn’t been proactively pitching investors for funding, it would have been on their radars. In fact, Bain is a major backer of Nutanix, putting some $750 million into the company last August. There is some strategic sense in supporting businesses that figure strongly in the infrastructure of your other portfolio companies.
There is another important reason for HYCU to raise capital beyond what its balance sheet could provide to fuel growth: HYCU’s would-be competition is itself going through a moment of investment and expansion. For example, Veeam, which was acquired by Insight last January for $5 billion, then proceeded to acquire Kasten to move into serving enterprises that use Kubernetes-native workloads across on-premises and cloud environments. And Rubrik last year acquired Igneous to bring management of unstructured data into its purview. And it’s not a given that, just because this is a sector seeing a lot of demand, it’s all smooth sailing. Igneous was on the rocks at the time of its deal, and Rubrik itself had a data leak in 2019, highlighting that even those who are expert in protecting data can run up against problems.
Taylor notes that ransomware indeed remains a very persistent problem for its customers — reflecting what others in the security world have observed — and its approach for now is to remain focused on how it delivers services in an agent-less environment. “We integrate into the platform,” he said. “That is incredibly important. It means that you can be up and running immediately, with no need for professional services to do the integrating, and we also make it a lot harder for criminals because of this.”
Longer term, though, it will keep its focus on backup and recovery, with no immediate plans to move into adjacent areas such as more security services or other tools. “We’re not trying to be a Veritas and own the entire business end-to-end,” Taylor said. “The goal is to make sure the IT department has visibility and the cloud journey is protected.”
Enrique Salem, a partner at Bain Capital Ventures and the former CEO of Symantec, is joining HYCU’s board with this round and sees the opportunity in the market for a product like HYCU’s.
“We are in the early days of a multi-decade shift to the public cloud, but existing on-premises backup vendors are poorly equipped to enable this transition, creating tremendous opportunity for a new category of cloud-native backup providers,” he said in a statement. “As one of the early players in multi-cloud backup as a service bringing true SaaS to both on-premises and cloud-native environments, HYCU is a clear leader in a space that will continue to create large multi-billion dollar companies.”
Stefan Cohen, a principal at Bain Capital Ventures, will also be joining the board.
Live MySQL Slave Rebuild with Percona Toolkit (2019)
Recently, we had an edge case where a MySQL slave went out-of-sync but couldn’t be rebuilt from scratch. The slave was acting as a master server to some applications, and data was being written to it. It was a design error, and this is not recommended, but it happened. So how do you synchronize the data in this circumstance? This blog post describes the steps taken to recover from this situation. The tools used to recover the slave were pt-slave-restart, pt-table-checksum, pt-table-sync and mysqldiff.
Scenario
To illustrate this situation, a master-slave configuration was built, with sysbench running on the master server to simulate a general application workload. The environment was set up with Percona Server 5.7.24-26 and sysbench 1.0.16.
Below are the sysbench commands to prepare and simulate the workload:
# Create Data
sysbench --db-driver=mysql --mysql-user=root --mysql-password=msandbox \
 --mysql-socket=/tmp/mysql_sandbox45008.sock --mysql-db=test --range_size=100 \
 --table_size=5000 --tables=100 --threads=1 --events=0 --time=60 \
 --rand-type=uniform /usr/share/sysbench/oltp_read_only.lua prepare

# Simulate Workload
sysbench --db-driver=mysql --mysql-user=root --mysql-password=msandbox \
 --mysql-socket=/tmp/mysql_sandbox45008.sock --mysql-db=test --range_size=100 \
 --table_size=5000 --tables=100 --threads=10 --events=0 --time=6000 \
 --rand-type=uniform /usr/share/sysbench/oltp_read_write.lua --report-interval=1 run
With the environment set up, the slave server was stopped, and some operations were performed on it to desynchronize the slave and reproduce the problem, along the lines of the sketch below.
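The original post doesn’t show those operations, but judging from the errors encountered later, a hypothetical reproduction could look like this (all statements are illustrative, run directly on the slave while replication is stopped, plus a binary log purge on the master):

-- On the slave, with replication stopped (hypothetical divergence):
SET SESSION sql_log_bin = 0;
DELETE FROM test.sbtest8 LIMIT 100;       -- DML divergence (leads to error 1032 below)
ALTER TABLE test.sbtest98 ADD COLUMN x INT,
  DROP INDEX k_98, ADD INDEX k_98 (k, x); -- DDL divergence (caught later by mysqldiff)
DROP TABLE test.joinit;                   -- table now missing on the slave

-- On the master, discard binlogs the slave still needs (leads to error 1236 below):
FLUSH BINARY LOGS;
PURGE BINARY LOGS TO 'mysql-bin.000007';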
Fixing the issue
With the slave desynchronized, a restart on the replication was executed. Immediately, the error below appeared:
Last_IO_Errno: 1236
Last_IO_Error: Got fatal error 1236 from master when reading data from binary log: 'Could not find first log file name in binary log index file'
To recover the slave from this error, we had to point the slave to an existing binary log with a valid binary log position. To get a valid binary log position, the command shown below had to be executed on the master:
master [localhost] {msandbox} ((none)) > show master status\G
*************************** 1. row ***************************
             File: mysql-bin.000007
         Position: 218443612
     Binlog_Do_DB:
 Binlog_Ignore_DB:
Executed_Gtid_Set:
1 row in set (0.01 sec)
Then, a CHANGE MASTER command was run on the slave:
slave1 [localhost] {msandbox} (test) > change master to master_log_file='mysql-bin.000007', MASTER_LOG_POS=218443612;
Query OK, 0 rows affected (0.00 sec)

slave1 [localhost] {msandbox} (test) > start slave;
Query OK, 0 rows affected (0.00 sec)
Now the slave had a valid binary log file to read, but since it was inconsistent, it hit another error:
Last_SQL_Errno: 1032
Last_SQL_Error: Could not execute Delete_rows event on table test.sbtest8; Can't find record in 'sbtest8', Error_code: 1032; handler error HA_ERR_KEY_NOT_FOUND; the event's master log mysql-bin.000005, end_log_pos 326822861
Working past the errors
Before fixing the inconsistencies, it was necessary to keep the replication running and to skip the errors. For this, the pt-slave-restart tool was used. The tool needs to be run on the slave server:
pt-slave-restart --user root --socket=/tmp/mysql_sandbox45008.sock --ask-pass
The tool skips errors and starts the replication threads. Below is an example of the output of pt-slave-restart:
$ pt-slave-restart --user root --socket=/tmp/mysql_sandbox45009.sock --ask-pass
Enter password:
2019-02-22T14:18:01 S=/tmp/mysql_sandbox45009.sock,p=...,u=root mysql-relay.000007  1996 1146
2019-02-22T14:18:02 S=/tmp/mysql_sandbox45009.sock,p=...,u=root mysql-relay.000007  8698 1146
2019-02-22T14:18:02 S=/tmp/mysql_sandbox45009.sock,p=...,u=root mysql-relay.000007 38861 1146
Finding the inconsistencies
With the tool running in one terminal, the phase of checking for inconsistencies began. First things first, an object definition check was performed using the mysqldiff utility, which is part of MySQL Utilities. To execute the tool:
$ mysqldiff --server1=root:msandbox@localhost:48008 --server2=root:msandbox@localhost:48009 test:test --difftype=sql --changes-for=server2
And below are the differences found between the master and the slave:
1-) A table that doesn’t exist
# WARNING: Objects in server1.test but not in server2.test:
#        TABLE: joinit
2-) A wrong table structure
# Comparing `test`.`sbtest98` to `test`.`sbtest98`    [FAIL]
# Transformation for --changes-for=server2:
ALTER TABLE `test`.`sbtest98` DROP INDEX k_98, DROP COLUMN x, ADD INDEX k_98 (k);
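Applying those recommendations on the slave is plain DDL; a minimal sketch (the joinit definition has to be copied from the master, where the table still exists):

-- On the master: get the missing table's definition
SHOW CREATE TABLE test.joinit\G

-- On the slave: recreate the table with that definition, then apply the
-- ALTER statement suggested by mysqldiff above
CREATE TABLE test.joinit ( ... );   -- paste the definition taken from the master
ALTER TABLE `test`.`sbtest98` DROP INDEX k_98, DROP COLUMN x, ADD INDEX k_98 (k);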
After performing the recommendations on the slave (creating the missing table and applying the table modification), the object definitions were equal. The next step was to check data consistency. For this, pt-table-checksum was used to identify which tables were out-of-sync. This command was run on the master server:
$ pt-table-checksum -uroot -pmsandbox --socket=/tmp/mysql_sandbox48008.sock --replicate=percona.checksums --create-replicate-table --empty-replicate-table --no-check-binlog-format --recursion-method=hosts
And an output example:
$ pt-table-checksum --recursion-method dsn=D=percona,t=dsns --no-check-binlog-format --nocheck-replication-filter --host 127.0.0.1 --user root --port 48008 --password=msandbox
Checking if all tables can be checksummed ...
Starting checksum ... at /usr/bin/pt-table-checksum line 332.
Replica lag is 66 seconds on bm-support01.bm.int.percona.com. Waiting.
Replica lag is 46 seconds on bm-support01.bm.int.percona.com. Waiting.
Replica lag is 33 seconds on bm-support01.bm.int.percona.com. Waiting.
            TS ERRORS  DIFFS  ROWS  DIFF_ROWS  CHUNKS  SKIPPED  TIME   TABLE
02-26T16:27:59      0      0  5000          0       1        0  0.037  test.sbtest1
02-26T16:27:59      0      0  5000          0       1        0  0.039  test.sbtest10
02-26T16:27:59      0      1  5000          0       1        0  0.033  test.sbtest100
02-26T16:27:59      0      1  5000          0       1        0  0.034  test.sbtest11
02-26T16:27:59      0      1  5000          0       1        0  0.040  test.sbtest12
02-26T16:27:59      0      1  5000          0       1        0  0.034  test.sbtest13
Fixing the data inconsistencies
Analyzing the DIFFS column, it is possible to identify which tables were compromised. With this information, the pt-table-sync tool was used to fix these inconsistencies. The tool synchronizes MySQL table data efficiently, performing no-op changes on the master so they can be replicated and applied on the slave. The tool needs to be run on the slave server. Below is an example of the tool running:
$ pt-table-sync --execute --sync-to-master h=localhost,u=root,p=msandbox,D=test,t=sbtest100,S=/tmp/mysql_sandbox48009.sock
It is possible to perform a dry-run of the tool before executing the changes to check what changes the tool will apply:
$ pt-table-sync --print --sync-to-master h=localhost,u=root,p=msandbox,D=test,t=sbtest100,S=/tmp/mysql_sandbox48009.sock
REPLACE INTO `test`.`sbtest100`(`id`, `k`, `c`, `pad`) VALUES ('1', '1654', '97484653464-60074971666-42998564849-40530823048-27591234964-93988623123-02188386693-94155746040-59705759910-14095637891', '15000678573-85832916990-95201670192-53956490549-57402857633')
 /*percona-toolkit src_db:test src_tbl:sbtest100 src_dsn:D=test,P=48008,S=/tmp/mysql_sandbox48009.sock,h=127.0.0.1,p=...,t=sbtest100,u=root dst_db:test dst_tbl:sbtest100 dst_dsn:D=test,S=/tmp/mysql_sandbox48009.sock,h=localhost,p=...,t=sbtest100,u=root lock:1 transaction:1 changing_src:1 replicate:0 bidirectional:0 pid:17806 user:vinicius.grippa host:bm-support01.bm.int.percona.com*/;
REPLACE INTO `test`.`sbtest100`(`id`, `k`, `c`, `pad`) VALUES ('2', '3007', '31679133794-00154186785-50053859647-19493043469-34585653717-64321870163-33743380797-12939513287-31354198555-82828841987', '30122503210-11153873086-87146161761-60299188705-59630949292')
 /*percona-toolkit src_db:test src_tbl:sbtest100 src_dsn:D=test,P=48008,S=/tmp/mysql_sandbox48009.sock,h=127.0.0.1,p=...,t=sbtest100,u=root dst_db:test dst_tbl:sbtest100 dst_dsn:D=test,S=/tmp/mysql_sandbox48009.sock,h=localhost,p=...,t=sbtest100,u=root lock:1 transaction:1 changing_src:1 replicate:0 bidirectional:0 pid:17806 user:vinicius.grippa host:bm-support01.bm.int.percona.com*/;
After executing pt-table-sync, we recommend that you run pt-table-checksum again and check that the DIFFS column shows a value of 0.
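The same check can also be done by querying the checksum table directly on the slave; the query below is the one suggested in the pt-table-checksum documentation (it should return no rows once everything is in sync):

SELECT db, tbl, SUM(this_cnt) AS total_rows, COUNT(*) AS chunks
FROM percona.checksums
WHERE (
 master_cnt <> this_cnt
 OR master_crc <> this_crc
 OR ISNULL(master_crc) <> ISNULL(this_crc))
GROUP BY db, tbl;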
Conclusion
This blog post covered the kinds of issues that can put a slave out-of-sync: DDL operations, binary log purges and DML operations. This process involves many steps and can take a long time to finish, especially on large databases. Note that it might take longer than a backup/restore process. However, in situations like the one described above, it might be the only way to recover a slave.
Webinar Thurs 6/14: MongoDB Backup and Recovery Field Guide (2018)
Please join Percona’s Sr. Technical Operations Architect, Tim Vaillancourt as he presents MongoDB Backup and Recovery Field Guide on Thursday, June 14, 2018, at 10:00 AM PDT (UTC-7) / 1:00 PM EDT (UTC-4).
This talk will cover backup and recovery solutions for MongoDB replica sets and clusters, focusing on online and low-impact solutions for production systems.
Tim Vaillancourt
Senior Technical Operations Architect
With experience operating infrastructures in industries such as government, online marketing/publishing, SaaS and gaming combined with experience tuning systems from the hard disk all the way up to the end-user, Tim has spent time in nearly every area of the modern IT stack with many lessons learned.
Tim is based in Amsterdam, NL and enjoys traveling, coding and music. Prior to Percona, Tim was the Lead MySQL DBA of Electronic Arts’ DICE studios, helping some of the largest games in the world (“Battlefield” series, “Mirror’s Edge” series, “Star Wars: Battlefront”) launch and operate smoothly while also leading the automation of MongoDB deployments for EA systems. Before the role of DBA at EA’s DICE studio, Tim served as a subject matter expert in NoSQL databases, queues and search on the Online Operations team at EA SPORTS.
Prior to moving to the gaming industry, Tim served as a Database/Systems Admin operating a large MySQL-based SaaS infrastructure at AbeBooks/Amazon Inc.
Data management startup Rubrik confirms $180M round at a $1.3B valuation (2017)
Rubrik, a startup that provides data backup and recovery services for enterprises across both cloud and on-premises environments, has closed a $180 million round of funding that values the company at $1.3 billion. The news confirms a report we ran earlier this week noting that the company was raising between $150 million and $200 million.
Using dbsake to recover table structure from .frm files and process mysqldump output (2015)
We work on data recoveries quite often. In many cases, we recover table structures from the .frm files because there was no backup available. There is already a great blog post by my colleague Miguel Ángel Nieto about how we can recover structures from .frm files using MySQL utilities.
This works pretty well, and we prefer to run mysqlfrm with the “--server” option to get all possible information from a .frm file. However, this option expects that MySQL is up and running so that mysqlfrm can spawn a new MySQL instance and run the structure recovery there.
Recently I came across a tool that makes this job easier. The tool is called dbsake: a collection of command-line tools that perform various DBA-related tasks for MySQL. In this blog, we will look at two very useful dbsake commands.
Installation is very easy and straightforward. The tool ships as an executable Python zip archive with all dependencies included.
# curl -s http://get.dbsake.net > dbsake
# chmod u+x dbsake
# ./dbsake --version
dbsake, version 2.1.0 9525896
Recovering table structures from MySQL .frm files with dbsake
To recover table structures using dbsake, you need to use the “dbsake frmdump” command, followed by the .frm file path. The frmdump command decodes the MySQL .frm file and provides a “CREATE TABLE” or “CREATE VIEW” statement in the output. The good thing is that it doesn’t require a running MySQL server instance, and it interprets the .frm file according to rules similar to those of the MySQL server.
Let’s see an example:
# ./dbsake frmdump /var/lib/mysql/sakila/staff.frm
--
-- Table structure for table `staff`
-- Created with MySQL Version 5.6.27
--
CREATE TABLE `staff` (
  `staff_id` tinyint(3) unsigned NOT NULL AUTO_INCREMENT,
  `first_name` varchar(45) NOT NULL,
  `last_name` varchar(45) NOT NULL,
  `address_id` smallint(5) unsigned NOT NULL,
  `picture` blob,
  `email` varchar(50) DEFAULT NULL,
  `store_id` tinyint(3) unsigned NOT NULL,
  `active` tinyint(1) NOT NULL DEFAULT '1',
  `username` varchar(16) NOT NULL,
  `password` varchar(40) CHARACTER SET utf8 COLLATE utf8_bin DEFAULT NULL,
  `last_update` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY (`staff_id`),
  KEY `idx_fk_store_id` (`store_id`),
  KEY `idx_fk_address_id` (`address_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
The result looks pretty good, and it has recovered the character set and collation information as well. We can also see the MySQL version (5.6.27) retrieved from the .frm file. It is important to mention that the command only decodes the information available in the .frm file, which means it cannot recover InnoDB foreign-key references or AUTO_INCREMENT values; these items are stored outside of the .frm file.
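If those are needed, they have to be re-added by hand from another source (an old dump, application knowledge). A hypothetical example for the staff table above, with an assumed counter value and a constraint based on the standard sakila schema:

ALTER TABLE `staff` AUTO_INCREMENT = 3;   -- value must come from another source
ALTER TABLE `staff`
  ADD CONSTRAINT `fk_staff_address` FOREIGN KEY (`address_id`)
  REFERENCES `address` (`address_id`);    -- assumed reference, per the sakila schema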
The frmdump command makes the recovery process easier and faster. We can easily script it and recover the structures of a large number of tables. For example, if we need to recover the structure of all tables in the world database, we can do the following:
mysql> create database world_recover;

# for tbl in `ls -1 /var/lib/mysql/world/*.frm`; do ./dbsake frmdump $tbl | mysql world_recover; done;

mysql> show tables from world_recover;
+-------------------------+
| Tables_in_world_recover |
+-------------------------+
| city                    |
| country                 |
| countrylanguage         |
+-------------------------+
3 rows in set (0.00 sec)
Filter and transform a mysqldump stream with dbsake
It’s a very common requirement to filter one or more tables from a mysqldump full database backup. The “dbsake sieve [options]” command helps us to filter or transform mysqldump output.
Let’s see how to extract a single table from a mysqldump file.
# mysqldump world > world.sql
# cat world.sql | ./dbsake sieve -t world.city > world.city.sql
Processed . Output: 1 database(s) 1 table(s) and 0 view(s)
The “-t” or “--table” option tells the command to only output the table matching the given pattern. It will also show the number of databases, tables or views processed in the output.
To extract multiple tables, you can pass “-t db.tbl” multiple times.
# cat world.sql | ./dbsake sieve -t world.city -t world.country > world.city_country.sql
Processed . Output: 1 database(s) 2 table(s) and 0 view(s)

# cat world.city_country.sql | grep -i 'create table'
CREATE TABLE `city` (
CREATE TABLE `country` (
The latest Percona Server added a new mysqldump option, “--innodb-optimize-keys”. It changes the way InnoDB tables are dumped so that secondary keys are created after the data is loaded, taking advantage of InnoDB fast index creation. This is a really great feature in that it helps us restore data more efficiently than the default incremental rebuild that mysqldump performs.
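For reference, a minimal sketch of using that option directly (assuming the Percona Server build of mysqldump is in the PATH):

# dump with secondary keys deferred until after the data load
mysqldump --innodb-optimize-keys world > world_optimized.sql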
Using the “dbsake sieve [options]” command, we can transform regular mysqldump output to take advantage of fast index creation. The “--defer-indexes” option rewrites the output of CREATE TABLE statements and arranges for secondary indexes to be created after the table data is loaded. Similarly, the “--defer-foreign-keys” option can be added to create foreign key constraints after loading the table data.
Let’s see an example:
# cat world.sql | ./dbsake sieve --defer-indexes --defer-foreign-keys -t world.city > world.city.sql
Processed . Output: 1 database(s) 1 table(s) and 0 view(s)
This means that world.city.sql will have the table structure with the primary key first, then the INSERT statements to load the data, and an additional ALTER TABLE statement to create the secondary keys when there is at least one secondary index to be added. Foreign keys will also be created along with the secondary indexes.
The original structure of table world.city:
CREATE TABLE `city` (
  `ID` int(11) NOT NULL AUTO_INCREMENT,
  `Name` char(35) NOT NULL DEFAULT '',
  `CountryCode` char(3) NOT NULL DEFAULT '',
  `District` char(20) NOT NULL DEFAULT '',
  `Population` int(11) NOT NULL DEFAULT '0',
  PRIMARY KEY (`ID`),
  KEY `CountryCode` (`CountryCode`),
  CONSTRAINT `city_ibfk_1` FOREIGN KEY (`CountryCode`) REFERENCES `country` (`Code`)
) ENGINE=InnoDB AUTO_INCREMENT=4080 DEFAULT CHARSET=latin1;
And the transformation done by dbsake:
...........
CREATE TABLE `city` (
  `ID` int(11) NOT NULL AUTO_INCREMENT,
  `Name` char(35) NOT NULL DEFAULT '',
  `CountryCode` char(3) NOT NULL DEFAULT '',
  `District` char(20) NOT NULL DEFAULT '',
  `Population` int(11) NOT NULL DEFAULT '0',
  PRIMARY KEY (`ID`)
) ENGINE=InnoDB AUTO_INCREMENT=4080 DEFAULT CHARSET=latin1;
...........
LOCK TABLES `city` WRITE;
...........
INSERT INTO `city` VALUES .....................
...........
UNLOCK TABLES;

--
-- InnoDB Fast Index Creation (generated by dbsake)
--
ALTER TABLE `city`
  ADD KEY `CountryCode` (`CountryCode`),
  ADD CONSTRAINT `city_ibfk_1` FOREIGN KEY (`CountryCode`) REFERENCES `country` (`Code`);
...........
For more sieve command options, please read the online manual.
Conclusion
The dbsake command makes it easier to recover table structures from .frm files. We can also filter and transform the mysqldump output easily without writing a complex awk or sed script. There are some more useful features of this tool that you can read about in the manual.
How to recover table structure from .frm files with MySQL Utilities (2014)
Table structures are stored in .frm files and in the InnoDB Data Dictionary. Sometimes, usually in data recovery issues, we need to recover those structures to be able to find the lost data or just to recreate the tables.
There are different ways to do it and we’ve already written about it in this blog. For example, we can use the data recovery tools to recover table structures from InnoDB Dictionary or from the .frm files using a MySQL Server. This blog post will be an update of that last one. I will show you how to easily recover the structure from a .frm file and in some cases even without using a MySQL server. This will make the process faster and easily scriptable.
MySQL Utilities and mysqlfrm
MySQL Utilities is a set of scripts released by Oracle that helps us do common DBA tasks in an easier way. It is written in Python and its only dependency is the Python Connector. From the large list of utilities we are going to use mysqlfrm, the tool that will help us recover the structure.
As usual, an example is worth a thousand words. Let’s recover some table structures:
This is the table we have:
CREATE TABLE `new_table` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `name` varchar(45) DEFAULT NULL,
  `age` tinyint(4) NOT NULL,
  PRIMARY KEY (`id`),
  KEY `name_idx` (`name`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1
So, let’s try to recover that info from the .frm file and let’s see what we can get:
$ mysqlfrm --diagnostic /usr/local/mysql/data/test/new_table.frm
# WARNING: Cannot generate character set or collation names without the --server option.
[...]
CREATE TABLE `test`.`new_table` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `name` varchar(45) DEFAULT NULL,
  `age` tinyint(4) NOT NULL,
  PRIMARY KEY `PRIMARY` (`id`),
  KEY `name_idx` (`name`)
) ENGINE=InnoDB;
Pretty good result.
It is important to mention that this tool has two different ways to do the recovery.
- The first one spawns a new MySQL instance and runs the structure recovery there, pretty similar to the approach PeterZ explained in his blog post. You would need to use --server or a --basedir directory along with --port. It will shut the spawned instance down after the recovery is complete.
- The second one, used with --diagnostic, reads the .frm file byte by byte to recover as much information as possible, without requiring a MySQL instance. Therefore, this method can be used to recover all possible information from damaged .frm files that even MySQL can’t read.
As we can see in the warning in the last example, not all information can be recovered with the second method. For example, character set and collation information can’t be recovered without the --server option (the first method). Let’s see how to use a spawned server to recover the .frm info:
$ mysqlfrm --server=root@127.0.0.1 --port 3307 ./new_table.frm
# Source on 127.0.0.1: ... connected.
# Starting the spawned server on port 3307 ... done.
# Reading .frm files
#
# Reading the new_table.frm file.
#
# CREATE statement for ./new_table.frm:
#
CREATE TABLE `new_table` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `name` varchar(45) DEFAULT NULL,
  `age` tinyint(4) NOT NULL,
  PRIMARY KEY (`id`),
  KEY `name_idx` (`name`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1
The tool connects to the MySQL server, gets all the info it needs (basedir and so on), and spawns a new instance on port 3307. It then uses that new instance to recover the info from the .frm file. Fast and easy.
It is worth mentioning that not all the information we might need is stored in those .frm files. There are some things we won’t be able to recover, for example foreign key constraints and AUTO_INCREMENT sequences.
Conclusion
MySQL Utilities is a very useful set of tools. In this particular case, mysqlfrm can be used to recover a large number of table structures from their .frm files, making the process fast and easy to script.
How to recover an orphaned .ibd file with MySQL 5.6 (2013)
A few years ago Yves Trudeau and Aleksandr Kuzminsky wrote posts about different ways of recovering orphaned .ibd files.
Today I want to show you how to do that in an easier and quicker way. In my example I’ll restore a “payment.ibd” file (the payment table) from the Sakila DB onto a server running MySQL 5.5 (but with help from MySQL 5.6 and a sandbox).
In my case the OS is CentOS. So I needed to:
install MySQL Sandbox (check the installation instructions)
download the latest Percona Server 5.6:
wget http://www.percona.com/redir/downloads/Percona-Server-5.6/LATEST/release-5.6.14-62.0/483/binary/linux/i686/Percona-Server-5.6.14-rel62.0-483.Linux.i686.tar.gz
create sandbox
make_sandbox Percona-Server-5.6.14-rel62.0-483.Linux.i686.tar.gz
test it
mysql -umsandbox -pmsandbox --host=127.0.0.1 --port=5614 -e "select @@version\G"
*************************** 1. row ***************************
@@version: 5.6.14-rel62.0
It Works!
check the datadir and whether the innodb_file_per_table option is enabled (this is a requirement)
mysql -umsandbox -pmsandbox --host=127.0.0.1 --port=5614 -e "show variables like 'datadir'"
+---------------+---------------------------------------+
| Variable_name | Value                                 |
+---------------+---------------------------------------+
| datadir       | /home/mixa/sandboxes/msb_5_6_14/data/ |
+---------------+---------------------------------------+
mysql -umsandbox -pmsandbox --host=127.0.0.1 --port=5614 -e "show variables like '%per_table'"
+-----------------------+-------+
| Variable_name         | Value |
+-----------------------+-------+
| innodb_file_per_table | ON    |
+-----------------------+-------+
If it’s not enabled, you’ll need to enable it:
mysql -umsandbox -pmsandbox --host=127.0.0.1 --port=5614 -e "SET GLOBAL innodb_file_per_table=1"
create empty payment table on 5.6 sandbox
mysql -umsandbox -pmsandbox --host=127.0.0.1 --port=5614 test < payment_table.sql
payment_table.sql is a file with the “SHOW CREATE TABLE” statement for the payment table. The table structure should be the same.
cat payment_table.sql
CREATE TABLE `payment` (
  `payment_id` smallint(5) unsigned NOT NULL AUTO_INCREMENT,
  `customer_id` smallint(5) unsigned NOT NULL,
  `staff_id` tinyint(3) unsigned NOT NULL,
  `rental_id` int(11) DEFAULT NULL,
  `amount` decimal(5,2) NOT NULL,
  `payment_date` datetime NOT NULL,
  `last_update` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY (`payment_id`),
  KEY `idx_fk_staff_id` (`staff_id`),
  KEY `idx_fk_customer_id` (`customer_id`),
  KEY `fk_payment_rental` (`rental_id`)
) ENGINE=InnoDB
stop sandbox
./sandboxes/msb_5_6_14/stop
replace .ibd file (in my case the correct copy of it is located in my homedir)
cp ~/payment.ibd ~/sandboxes/msb_5_6_14/data/test/ -f
make sure permissions are ok for .ibd file
sudo chmod 660 ~/sandboxes/msb_5_6_14/data/test/payment.ibd
sudo chown : ~/sandboxes/msb_5_6_14/data/test/payment.ibd
start sandbox
./sandboxes/msb_5_6_14/start
Now, if you try to select something from the table, you’ll get an error:
select from table
mysql -umsandbox -pmsandbox --host=127.0.0.1 --port=5614 -e "select count(*) from test.payment"
ERROR 2013 (HY000) at line 1: Lost connection to MySQL server during query

mysql -umsandbox -pmsandbox --host=127.0.0.1 --port=5614 -e "select count(*) from test.payment"
ERROR 1146 (42S02) at line 1: Table 'test.payment' doesn't exist
error log
2013-11-02 14:36:34 b7eff990 InnoDB: Error: table 'test/payment'
InnoDB: in InnoDB data dictionary has tablespace id 7,
InnoDB: but a tablespace with that id does not exist. There is
InnoDB: a tablespace of name test/payment and id 10, though. Have
InnoDB: you deleted or moved .ibd files?
...
2013-11-02 14:36:36 11640 [ERROR] InnoDB: Failed to find tablespace for table '"test"."payment"' in the cache. Attempting to load the tablespace with space id 7.
2013-11-02 14:36:36 11640 [ERROR] InnoDB: In file './test/payment.ibd', tablespace id and flags are 10 and 0, but in the InnoDB data dictionary they are 7 and 0. Have you moved InnoDB .ibd files around without using the commands DISCARD TABLESPACE and IMPORT TABLESPACE? Please refer to http://dev.mysql.com/doc/refman/5.6/en/innodb-troubleshooting-datadict.html for how to resolve the issue.
2013-11-02 14:36:36 a31a2b90 InnoDB: Operating system error number 2 in a file operation.
InnoDB: The error means the system cannot find the path specified.
2013-11-02 14:36:36 11640 [ERROR] InnoDB: Could not find a valid tablespace file for 'test/payment'. See http://dev.mysql.com/doc/refman/5.6/en/innodb-troubleshooting-datadict.html for how to resolve the issue.
2013-11-02 14:36:36 a31a2b90 InnoDB: cannot calculate statistics for table "test"."payment" because the .ibd file is missing. For help, please refer to http://dev.mysql.com/doc/refman/5.6/en/innodb-troubleshooting.html
How to fix it? In 5.6, tablespace management is much improved, so the only things needed are “ALTER TABLE .. DISCARD TABLESPACE” and “ALTER TABLE .. IMPORT TABLESPACE”.
Please also check the limitations: Tablespace Copying Limitations
Look at the example:
Discard tablespace
mysql -umsandbox -pmsandbox --host=127.0.0.1 --port=5614 -e "alter table test.payment discard tablespace; show warnings;"
+---------+------+--------------------------------------------------------+
| Level   | Code | Message                                                |
+---------+------+--------------------------------------------------------+
| Warning | 1812 | InnoDB: Tablespace is missing for table 'test/payment' |
| Warning | 1812 | InnoDB: Tablespace is missing for table 'payment'      |
+---------+------+--------------------------------------------------------+
Import tablespace
mysql -umsandbox -pmsandbox --host=127.0.0.1 --port=5614 -e "alter table test.payment import tablespace; show warnings"
+---------+------+----------------------------------------------------------------------------------------------------------------------------------------------+
| Level   | Code | Message                                                                                                                                      |
+---------+------+----------------------------------------------------------------------------------------------------------------------------------------------+
| Warning | 1810 | InnoDB: IO Read error: (2, No such file or directory) Error opening './test/payment.cfg', will attempt to import without schema verification |
+---------+------+----------------------------------------------------------------------------------------------------------------------------------------------+
That’s it: the data is recovered, and the payment table is accessible on the 5.6 sandbox.
Now check whether the data exists in the payment table on the sandbox:
mysql -umsandbox -pmsandbox --host=127.0.0.1 --port=5614 -e "select count(*) from test.payment"
+----------+
| count(*) |
+----------+
|    16049 |
+----------+

mysql -umsandbox -pmsandbox --host=127.0.0.1 --port=5614 -e "select * from test.payment limit 1\G"
*************************** 1. row ***************************
  payment_id: 1
 customer_id: 1
    staff_id: 1
   rental_id: 76
      amount: 2.99
payment_date: 0000-00-09 03:49:32
 last_update: 2028-02-08 12:32:35
Exists.
So dump it from the sandbox and restore it on 5.5:
dump from 5.6
mysqldump -umsandbox -pmsandbox --host=127.0.0.1 --port=5614 --add-drop-table test payment > ~/payment_dump.sql
restore to 5.5
mysql -u user -p < ~/payment_dump.sql
Check if data exists on 5.5
mysql -u root -e "select * from test.payment limit 3;"
+------------+-------------+----------+-----------+--------+---------------------+---------------------+
| payment_id | customer_id | staff_id | rental_id | amount | payment_date        | last_update         |
+------------+-------------+----------+-----------+--------+---------------------+---------------------+
|          1 |           1 |        1 |        76 |   2.99 | 0000-00-09 03:49:32 | 2028-02-08 12:32:35 |
|          2 |           1 |        1 |       573 |   0.99 | 0000-00-09 03:49:32 | 0000-00-00 00:00:00 |
|          3 |           1 |        1 |      1185 |   5.99 | 0000-00-09 03:49:37 | 0000-00-00 00:00:00 |
+------------+-------------+----------+-----------+--------+---------------------+---------------------+
While working on this case, I ran into a situation where dropping the payment table on 5.5 wasn’t possible, because the payment.ibd file there wasn’t correct, so the server crashed each time I tried to access the table. The workaround (see the sketch after this list) is:
– stop the server
– rm the .ibd file
– start the server
– drop the table as usual with the DROP TABLE command
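In shell terms, the workaround could look like the sketch below; the service name and datadir path are assumptions, not from the original post.

# stop the server (service name assumed)
sudo service mysql stop
# remove the broken .ibd file (datadir path assumed)
sudo rm /var/lib/mysql/test/payment.ibd
# start the server and drop the table as usual
sudo service mysql start
mysql -u root -e "DROP TABLE test.payment"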