Jul
07
2021
--

Migrating Ownership of Your Stored Routines, Views, and Triggers in MySQL

“It would be nice to have an option, that would allow to suppress the DEFINER statement in the CREATE VIEW statements generated by mysqldump. This would help when transferring data structures between databases with different security models.”

TLDR;

Use mysqlpump with the --skip-definer option instead of mysqldump.

The Story

This was requested as MySQL Bug #24680 on Nov 29, 2006. The feature request received broad community support. Even though we cannot see the number of people who voted for this request, the number of comments is impressive.

The request is very reasonable: mysqldump is widely used during application development, and it is a very common practice to migrate database structures between developers' machines and to production servers.

Imagine a situation where developer Sveta creates a database and adds a few objects with DEFINER clauses there. We will use only one for this post, but in reality she could have dozens.

mysql> CREATE VIEW large_tables AS SELECT * FROM information_schema.tables WHERE DATA_LENGTH > 100000000;
Query OK, 0 rows affected (0,01 sec)

When you create a view, the default DEFINER is the user who created it:

mysql> SELECT DEFINER, TABLE_SCHEMA, TABLE_NAME FROM information_schema.views WHERE TABLE_NAME='large_tables';
+---------+--------------+--------------+
| DEFINER | TABLE_SCHEMA | TABLE_NAME   |
+---------+--------------+--------------+
| sveta@% | definers     | large_tables |
+---------+--------------+--------------+
1 row in set (0,01 sec)

And this causes issues when another user tries to import such a view into a different server:

mysql> CREATE USER production;
Query OK, 0 rows affected (0,01 sec)

mysql> GRANT ALL ON definers.* TO production@'%';
Query OK, 0 rows affected (0,01 sec)

mysql> GRANT SESSION_VARIABLES_ADMIN ON *.* TO production@'%';
Query OK, 0 rows affected (0,01 sec)

$ mysqldump -usveta definers | mysql -uproduction production
ERROR 1227 (42000) at line 61: Access denied; you need (at least one of) the SUPER or SET_USER_ID privilege(s) for this operation

Here is the content of line 61:

$ mysqldump -usveta definers | head -n 62 | tail
/*!50001 DROP VIEW IF EXISTS `large_tables`*/;
/*!50001 SET @saved_cs_client          = @@character_set_client */;
/*!50001 SET @saved_cs_results         = @@character_set_results */;
/*!50001 SET @saved_col_connection     = @@collation_connection */;
/*!50001 SET character_set_client      = utf8mb4 */;
/*!50001 SET character_set_results     = utf8mb4 */;
/*!50001 SET collation_connection      = utf8mb4_0900_ai_ci */;
/*!50001 CREATE ALGORITHM=UNDEFINED */
/*!50013 DEFINER=`sveta`@`%` SQL SECURITY DEFINER */

So it is a CREATE VIEW operation that failed during import.

Unfortunately, mysqldump still does not have an option that handles definers during migration.

But since August 2015 and MySQL 5.7.8 we have had a solution that, unfortunately, was overlooked in favor of the famous mysqldump tool.
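In the meantime, a common community workaround (not a built-in mysqldump feature) is to strip the DEFINER clauses from the dump stream before loading it. A minimal sketch, assuming the usual backtick quoting that mysqldump emits; the imported objects then default to the importing user:

$ # Remove every DEFINER=`user`@`host` clause from the dump on the fly
$ mysqldump -usveta definers | sed 's/DEFINER=`[^`]*`@`[^`]*`//g' | mysql -uproduction production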

Version 5.7.8, and all versions created after it, comes with a new dump tool, mysqlpump, which has the --skip-definer option and allows you to migrate database objects without any issue:

$ mysqlpump -h127.0.0.1 -P3306 -usveta --skip-definer definers | mysql -h127.0.0.1 -P13000 -uproduction definers
Dump completed in 17

$ mysql -h127.0.0.1 -P13000 -uproduction definers -e "SHOW FULL TABLES"
+--------------------+------------+
| Tables_in_definers | Table_type |
+--------------------+------------+
| large_tables       | VIEW       |
+--------------------+------------+

$ mysql -h127.0.0.1 -P13000 -uproduction definers -e "SELECT DEFINER, TABLE_SCHEMA, TABLE_NAME FROM information_schema.views WHERE TABLE_NAME='large_tables';"
+--------------+--------------+--------------+
| DEFINER      | TABLE_SCHEMA | TABLE_NAME   |
+--------------+--------------+--------------+
| production@% | definers     | large_tables |
+--------------+--------------+--------------+

Note that mysqlpump automatically adds CREATE DATABASE statements to the dump and fully qualifies database object names. E.g.:

CREATE ALGORITHM=UNDEFINED VIEW `definers`.`large_tables` AS select …

Therefore this method cannot be used to migrate view, routine, or trigger definitions between different databases on the same server.
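If you do need to load the objects under a different schema name on the same server, one possible workaround (a sketch, not an official feature; it assumes the quoted schema name never occurs as ordinary text inside your data or object bodies) is to rewrite the schema name in the dump stream:

$ # Rewrite the quoted schema name before loading; definers_copy is a hypothetical target
$ mysqlpump -usveta --skip-definer definers | sed 's/`definers`/`definers_copy`/g' | mysql -uproduction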

For more information about mysqlpump, and why you should switch to this tool from mysqldump, read this blog post: The mysqlpump Utility.

May
12
2017
--

Percona Server for MySQL 5.6.36-82.0 is Now Available

Percona announces the release of Percona Server for MySQL 5.6.36-82.0 on May 12, 2017. Download the latest version from the Percona web site or the Percona Software Repositories. You can also run Docker containers from the images in the Docker Hub repository.

Based on MySQL 5.6.36, and including all the bug fixes in it, Percona Server for MySQL 5.6.36-82.0 is the current GA release in the Percona Server for MySQL 5.6 series. Percona Server for MySQL is open-source and free – this is the latest release of our enhanced, drop-in replacement for MySQL. Complete details of this release are available in the 5.6.36-82.0 milestone on Launchpad.

Bugs Fixed:
  • Deadlock could occur in I/O-bound workloads when the server was using several small buffer pool instances in combination with small redo log files and the variable innodb_empty_free_list_algorithm set to the backoff algorithm. Bug fixed #1651657.
  • Querying TABLE_STATISTICS in combination with a stored function could lead to a server crash. Bug fixed #1659992.
  • tokubackup_slave_info file was created for a master server after taking the backup with Percona TokuBackup. Bug fixed #135.
  • Fixed a memory leak in Percona TokuBackup. Bug fixed #1669005.
  • Compressed columns with dictionaries could not be added to a partitioned table by using ALTER TABLE. Bug fixed #1671492.
  • Fixed a memory leak that happened in case of failure to create a multi-threaded slave worker thread. Bug fixed #1675716.
  • The combination of using any audit API-using plugin, like Audit Log Plugin and Response Time Distribution, with multi-byte collation connection and PREPARE statement with a parse error could lead to a server crash. Bug fixed #1688698 (upstream #86209).
  • The fix for bug #1433432 in Percona Server 5.6.28-76.1 caused a performance regression due to suboptimal LRU manager thread flushing heuristics. Bug fixed #1631309.
  • Creating compressed columns with dictionaries in MyISAM tables by specifying partition engines did not result in an error. Bug fixed #1631954.
  • It was not possible to configure basedir as a symlink. Bug fixed #1639735.
  • Replication slave did not report Seconds_Behind_Master correctly when running in multi-threaded slave mode. Bug fixed #1654091 (upstream #84415).
  • DROP TEMPORARY TABLE would create a transaction in binary log on a read-only server. Bug fixed #1668602 (upstream #85258).
  • Creating a compression dictionary with innodb_fake_changes enabled could lead to a server crash. Bug fixed #1629257.

Other bugs fixed: #1660828 (upstream #84786), #1664519 (upstream #84940), #1674299, #1683456, #1670588 (upstream #84173), #1672389, #1674507, #1674867, #1675623, #1650294, #1659224, #1660565, #1662908, #1669002, #1671473, #1673800, #1674284, #1676441, #1676705, #1676847 (upstream #85671), #1677130 (upstream #85678), #1677162, #1678692, #1678792, #1680510 (upstream #85838), #1683993, #1684012, #1684078, #1684264, and #1674281.

Release notes for Percona Server for MySQL 5.6.36-82.0 are available in the online documentation. Please report any bugs on the launchpad bug tracker.

Apr
17
2017
--

The mysqlpump Utility

In this blog, we'll look at the mysqlpump utility.

mysqlpump is a utility that performs logical backups (which means backing up your data as SQL statements instead of a raw copy of data files). It was added in MySQL Server version 5.7.8, and can be used to dump a database or a set of databases to a file that can then be loaded on another SQL server (not necessarily a MySQL server).
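As a minimal illustration (the schema name mydb is hypothetical), dumping and restoring a single database looks much like it does with mysqldump:

$ # Dump one schema; mysqlpump adds the CREATE DATABASE statement itself
$ mysqlpump mydb > mydb.sql
$ mysql < mydb.sql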

Its usage is similar to mysqldump, but it includes a new set of features. Many of the options are the same, but it was written from scratch to avoid being limited to mysqldump compatibility.

The Main Features Include:

  • To make the dump process faster, it allows parallel processing of databases and objects within databases.
  • There are more options to customize your dumps and choose which databases and objects to dump (tables, stored programs, user accounts), using the --include-* and --exclude-* parameters.
  • User accounts can now be dumped as CREATE USER and GRANT statements, instead of inserting directly into the MySQL system database.
  • Information between the client and the server can be compressed using the --compress option. This feature is very useful for remote backups, as it saves bandwidth and transfer time. You can also compress the output file using --compress-output, which supports the ZLIB and LZ4 compression algorithms (see the sketch after this list).

  • It has an estimated progress indicator. This is really useful to check the current status of the dump process. You can see the total amount of rows dumped and the number of databases completed. It also reports an estimate of the total time to complete the dump.
  • Creation of secondary indexes for InnoDB tables happens after data load for shorter load times.
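As a sketch of the output-compression feature (the file and schema names are hypothetical), MySQL 5.7 also ships the lz4_decompress and zlib_decompress utilities for turning the compressed file back into plain SQL:

$ # Compress the dump with LZ4 as it is written
$ mysqlpump --compress-output=LZ4 mydb > mydb.sql.lz4
$ # Decompress it back into a plain SQL file
$ lz4_decompress mydb.sql.lz4 mydb.sql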

Exclude/Include:

This feature provides more control over customizing your dumps and filtering the data that you need. Using it, you can be more selective with the data you want to dump (databases, tables, triggers, events, routines, users) and save file size, processing time, and transfer time while copying/moving the file to another host.

Keep in mind that there are some options that are mutually exclusive: e.g., if you use the --all-databases option, the --exclude-databases parameter won't take effect. By default, mysqlpump will not dump the following databases unless you specify them using the --include-databases option: INFORMATION_SCHEMA, performance_schema, ndbinfo and sys.

Values for these options need to be given as a comma-separated list. Using a “%” as a value for any of the exclude/include options acts as a wildcard. For example, you can dump all databases starting with “t” and “p” by adding the option --include-databases=t%,p% to the command line.
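As a quick sketch combining both option families (the schema names are hypothetical): dump every database starting with “prod” except one archive schema:

$ # The wildcard include is narrowed by an explicit exclude
$ mysqlpump --include-databases=prod% --exclude-databases=prod_archive > prod.sql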

For users, routines, triggers and events, mysqlpump has --include-* and --exclude-* options with similar usage. Some specific notes:

  • Triggers are dumped by default, but you can also filter them using the --include-triggers/--exclude-triggers options
  • Routines and events are not dumped by default, and need to be specified in the command line with --routines and --events, or the corresponding --include and --exclude options (see the sketch after this list)
  • Keep in mind that if a stored procedure and a function have the same name, then include/exclude applies to both
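A short sketch of these filters (the schema and trigger names are hypothetical): dump one schema with its routines and events, while leaving out a single trigger:

$ # --exclude-triggers takes a comma-separated list of trigger names
$ mysqlpump --routines --events --exclude-triggers=audit_trigger mydb > mydb.sql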

Parallel Processing:

This feature allows you to process several databases, and tables within the databases, in parallel. By default, mysqlpump uses one processing queue with two threads. You can increase the number of threads for this default queue with --default-parallelism. Unless you create additional queues, all the databases and/or tables you elect to dump go through the default queue.

To create additional queues you can use the --parallel-schemas option, which takes two parameters: the number of threads for the queue and the subset of databases this queue processes. As an example, you could run:

mysqlpump --include-databases=a,b,c,d,e,f,g,h --default-parallelism=3 --parallel-schemas=4:a,b

so that schemas c, d, e, f, g and h are processed by the default queue (which uses three threads), while tables from schemas a and b are processed by a separate queue (that uses four threads). Database names should be given as a comma-separated list:

$ mysqlpump --parallel-schemas=4:example1,example2,example3 --parallel-schemas=3:example4,example5 > examples.sql
Dump progress: 0/1 tables, 250/261184 rows
Dump progress: 24/30 tables, 1204891/17893833 rows
Dump progress: 29/30 tables, 1755611/17893833 rows
Dump progress: 29/30 tables, 2309111/17893833 rows
...
Dump completed in 42424 milliseconds

User Accounts:

User accounts can be dumped using this tool. Here's a comparison of our Percona Toolkit tool pt-show-grants versus mysqlpump to check their differences.

By default, mysqlpump doesn't dump user account definitions (even while dumping the mysql system database). To include user accounts in the dump, you must specify the --users option.

Here's an example of how to use mysqlpump to get only user accounts dumped to a file:

$ mysqlpump --exclude-databases=% --exclude-triggers=% --users
-- Dump created by MySQL dump utility, version: 5.7.8-rc, linux-glibc2.5 (x86_64)
-- Dump start time: Thu Aug 27 17:10:10 2015
-- Server version: 5.7.8
SET @OLD_UNIQUE_CHECKS=@@UNIQUE_CHECKS, UNIQUE_CHECKS=0;
SET @OLD_FOREIGN_KEY_CHECKS=@@FOREIGN_KEY_CHECKS, FOREIGN_KEY_CHECKS=0;
SET @OLD_TIME_ZONE=@@TIME_ZONE;
SET TIME_ZONE='+00:00';
SET @OLD_CHARACTER_SET_CLIENT=@@CHARACTER_SET_CLIENT;
SET @OLD_CHARACTER_SET_RESULTS=@@CHARACTER_SET_RESULTS;
SET @OLD_COLLATION_CONNECTION=@@COLLATION_CONNECTION;
SET NAMES utf8;
CREATE USER 'msandbox'@'127.%' IDENTIFIED WITH 'mysql_native_password' AS '*6C387FC3893DBA1E3BA155E74754DA6682D04747' REQUIRE NONE PASSWORD EXPIRE DEFAULT ACCOUNT UNLOCK;
GRANT ALL PRIVILEGES ON *.* TO 'msandbox'@'127.%';
CREATE USER 'msandbox_ro'@'127.%' IDENTIFIED WITH 'mysql_native_password' AS '*6C387FC3893DBA1E3BA155E74754DA6682D04747' REQUIRE NONE PASSWORD EXPIRE DEFAULT ACCOUNT UNLOCK;
GRANT SELECT, EXECUTE ON *.* TO 'msandbox_ro'@'127.%';
CREATE USER 'msandbox_rw'@'127.%' IDENTIFIED WITH 'mysql_native_password' AS '*6C387FC3893DBA1E3BA155E74754DA6682D04747' REQUIRE NONE PASSWORD EXPIRE DEFAULT ACCOUNT UNLOCK;
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, INDEX, ALTER, SHOW DATABASES, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE ON *.* TO 'msandbox_rw'@'127.%';
CREATE USER 'rsandbox'@'127.%' IDENTIFIED WITH 'mysql_native_password' AS '*B07EB15A2E7BD9620DAE47B194D5B9DBA14377AD' REQUIRE NONE PASSWORD EXPIRE DEFAULT ACCOUNT UNLOCK;
GRANT REPLICATION SLAVE ON *.* TO 'rsandbox'@'127.%';
CREATE USER 'furrywall'@'localhost' IDENTIFIED WITH 'mysql_native_password' AS '*AB8D50A9E3B8D1F3ACE85C54736B5BF472B44539' REQUIRE NONE PASSWORD EXPIRE DEFAULT ACCOUNT LOCK;
GRANT USAGE ON *.* TO 'furrywall'@'localhost';
CREATE USER 'msandbox'@'localhost' IDENTIFIED WITH 'mysql_native_password' AS '*6C387FC3893DBA1E3BA155E74754DA6682D04747' REQUIRE NONE PASSWORD EXPIRE DEFAULT ACCOUNT UNLOCK;
GRANT ALL PRIVILEGES ON *.* TO 'msandbox'@'localhost';
CREATE USER 'msandbox_ro'@'localhost' IDENTIFIED WITH 'mysql_native_password' AS '*6C387FC3893DBA1E3BA155E74754DA6682D04747' REQUIRE NONE PASSWORD EXPIRE DEFAULT ACCOUNT UNLOCK;
GRANT SELECT, EXECUTE ON *.* TO 'msandbox_ro'@'localhost';
CREATE USER 'msandbox_rw'@'localhost' IDENTIFIED WITH 'mysql_native_password' AS '*6C387FC3893DBA1E3BA155E74754DA6682D04747' REQUIRE NONE PASSWORD EXPIRE DEFAULT ACCOUNT UNLOCK;
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, INDEX, ALTER, SHOW DATABASES, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE ON *.* TO 'msandbox_rw'@'localhost';
CREATE USER 'root'@'localhost' IDENTIFIED WITH 'mysql_native_password' AS '*6C387FC3893DBA1E3BA155E74754DA6682D04747' REQUIRE NONE PASSWORD EXPIRE DEFAULT ACCOUNT UNLOCK;
GRANT ALL PRIVILEGES ON *.* TO 'root'@'localhost' WITH GRANT OPTION;
CREATE USER 'testuser'@'localhost' IDENTIFIED WITH 'mysql_native_password' AS '*6E543F385210D9BD42A4FDB4BB23FD2C31C95462' REQUIRE NONE PASSWORD EXPIRE INTERVAL 30 DAY ACCOUNT UNLOCK;
GRANT USAGE ON *.* TO 'testuser'@'localhost';
SET TIME_ZONE=@OLD_TIME_ZONE;
SET CHARACTER_SET_CLIENT=@OLD_CHARACTER_SET_CLIENT;
SET CHARACTER_SET_RESULTS=@OLD_CHARACTER_SET_RESULTS;
SET COLLATION_CONNECTION=@OLD_COLLATION_CONNECTION;
SET FOREIGN_KEY_CHECKS=@OLD_FOREIGN_KEY_CHECKS;
SET UNIQUE_CHECKS=@OLD_UNIQUE_CHECKS;
-- Dump end time: Thu Aug 27 17:10:10 2015
Dump completed in 823 milliseconds

As you can see above, the tool makes sure the session uses known values for time zone and character sets. This won't affect users; it's part of the dump process to ensure correctness while restoring on the destination.

Comparing it with pt-show-grants from Percona Toolkit, we can see that mysqlpump dumps the CREATE USER information as well. The statements produced by mysqlpump are the right thing to run to recreate users (and should be the preferred method), especially because of the sql_mode NO_AUTO_CREATE_USERS: if enabled, it renders pt-show-grants useless.

Here's an example of pt-show-grants usage:

$ pt-show-grants --host 127.0.0.1 --port 5708 --user msandbox --ask-pass
Enter password:
-- Grants dumped by pt-show-grants
-- Dumped from server 127.0.0.1 via TCP/IP, MySQL 5.7.8-rc at 2015-08-27 17:06:52
-- Grants for 'furrywall'@'localhost'
GRANT USAGE ON *.* TO 'furrywall'@'localhost';
-- Grants for 'msandbox'@'127.%'
GRANT ALL PRIVILEGES ON *.* TO 'msandbox'@'127.%';
-- Grants for 'msandbox'@'localhost'
GRANT ALL PRIVILEGES ON *.* TO 'msandbox'@'localhost';
-- Grants for 'msandbox_ro'@'127.%'
GRANT EXECUTE, SELECT ON *.* TO 'msandbox_ro'@'127.%';
-- Grants for 'msandbox_ro'@'localhost'
GRANT EXECUTE, SELECT ON *.* TO 'msandbox_ro'@'localhost';
-- Grants for 'msandbox_rw'@'127.%'
GRANT ALTER, CREATE, CREATE TEMPORARY TABLES, DELETE, DROP, EXECUTE, INDEX, INSERT, LOCK TABLES, SELECT, SHOW DATABASES, UPDATE ON *.* TO 'msandbox_rw'@'127.%';
-- Grants for 'msandbox_rw'@'localhost'
GRANT ALTER, CREATE, CREATE TEMPORARY TABLES, DELETE, DROP, EXECUTE, INDEX, INSERT, LOCK TABLES, SELECT, SHOW DATABASES, UPDATE ON *.* TO 'msandbox_rw'@'localhost';
-- Grants for 'root'@'localhost'
GRANT ALL PRIVILEGES ON *.* TO 'root'@'localhost' WITH GRANT OPTION;
GRANT PROXY ON ''@'' TO 'root'@'localhost' WITH GRANT OPTION;
-- Grants for 'rsandbox'@'127.%'
GRANT REPLICATION SLAVE ON *.* TO 'rsandbox'@'127.%';
-- Grants for 'testuser'@'localhost'
GRANT USAGE ON *.* TO 'testuser'@'localhost';

Some Miscellaneous Notes:

  • One of the differences from mysqldump is that mysqlpump adds CREATE DATABASE statements to the dump by default, unless they are suppressed with the --no-create-db option.
    • There's an important, closely related difference in the dump process: mysqlpump includes the database name in each CREATE TABLE statement. This causes a problem when trying to use the tool to create a duplicate of a database on the same server.

Apr
05
2017
--

Percona Server for MySQL 5.7.17-13 is Now Available

Percona announces the GA release of Percona Server for MySQL 5.7.17-13 on April 5, 2017. Download the latest version from the Percona web site or the Percona Software Repositories. You can also run Docker containers from the images in the Docker Hub repository.

Based on MySQL 5.7.17, and including all the bug fixes in it, Percona Server for MySQL 5.7.17-13 is the current GA release in the Percona Server for MySQL 5.7 series. Percona provides completely open-source and free software. Find release details in the 5.7.17-13 milestone at Launchpad.

Bugs Fixed:
  • MyRocks storage engine detection implemented in mysqldump in Percona Server 5.7.17-12 was using the deprecated INFORMATION_SCHEMA.SESSION_VARIABLES table, causing mysqldump failures on servers running with the show_compatibility_56 variable set to OFF. Bug fixed #1676401.

The release notes for Percona Server for MySQL 5.7.17-13 are available in the online documentation. Please report any bugs on the launchpad bug tracker.

Mar
24
2017
--

Percona Server 5.6.35-81.0 is Now Available

Percona announces the release of Percona Server 5.6.35-81.0 on March 24, 2017. Download the latest version from the Percona web site or the Percona Software Repositories. You can also run Docker containers from the images in the Docker Hub repository.

Based on MySQL 5.6.35, and including all the bug fixes in it, Percona Server 5.6.35-81.0 is the current GA release in the Percona Server 5.6 series. Percona Server is open-source and free – this is the latest release of our enhanced, drop-in replacement for MySQL. Complete details of this release are available in the 5.6.35-81.0 milestone on Launchpad.

New Features:
  • Percona Server has implemented a new mysqldump option, --order-by-primary-desc. This feature tells mysqldump to take the backup in descending primary key order (PRIMARY KEY DESC), which can be useful if the storage engine uses a reverse-ordered column for the primary key.
Bugs Fixed:
  • When innodb_ft_result_cache_limit was exceeded by internal memory allocated by InnoDB during the FT scan, not all memory was released, which could lead to a server assertion. Bug fixed #1634932 (upstream #83648).
  • Log tracking initialization did not find the last valid bitmap data correctly, potentially resulting in needless redo log retracking or a hole in the tracked LSN range. Bug fixed #1658055.
  • If the Audit Log Plugin was unable to create the file pointed to by audit_log_file, the server would crash during startup. Bug fixed #1666496.
  • A DROP TEMPORARY TABLE ... for a table created by a CREATE TEMPORARY TABLE ... SELECT ... would get logged in the binary log on a disconnect with mixed mode replication. Bug fixed #1671013.
  • TokuDB did not use an index even if its cardinality was good. Bug fixed #1671152.
  • Row-based replication events were not reflected in the Rows_updated fields in the User Statistics INFORMATION_SCHEMA tables. Bug fixed #995624.
  • A long-running binary log commit would block SHOW STATUS, which in turn could block a number of other operations such as client connects and disconnects. Bug fixed #1646100.
  • It was impossible to use column compression dictionaries with partitioned InnoDB tables. Bug fixed #1653104.
  • Diagnostics for OpenSSL errors have been improved. Bug fixed #1660339 (upstream #75311).
  • When the DuplicateWeedout strategy was used for joins, its use was not reported in the query plan info output extension for the slow query log. Bug fixed #1592694.

Other bugs fixed: #1650321, #1650322, #1654501, #1663251, #1666213, #1652912, #1659548, #1663452, #1670834, #1672871, #1626545, #1644174, #1658006, #1658021, #1659218, #1659746, #1660239, #1660243, #1660255, #1660348, #1662163 upstream (#81467), #1664219, #1664473, #1671076, and #1671123.

Release notes for Percona Server 5.6.35-81.0 are available in the online documentation. Please report any bugs on the launchpad bug tracker.

Dec
16
2015
--

Using dbsake to recover table structure from .frm files and process mysqldump output

We work on data recoveries quite often. In many cases, we recover table structures from the .frm files because there was no backup available. There is already a great blog post by my colleague Miguel Ángel Nieto about how we can recover structures from .frm files using MySQL utilities.

This works pretty well and we prefer to run mysqlfrm with the “--server” option to get all possible information from a .frm file. However, this option expects MySQL to be up and running, so that mysqlfrm can spawn a new MySQL instance and run the structure recovery there.

Recently I came across a tool that makes this job easier. The tool is called dbsake; it's a collection of command-line tools that perform various DBA-related tasks for MySQL. In this blog, we will look at two very useful dbsake commands.

Installation is very easy and straightforward. It ships as an executable Python zip archive with all dependencies included.

# curl -s http://get.dbsake.net > dbsake
# chmod u+x dbsake
# ./dbsake --version
dbsake, version 2.1.0 9525896

Recovering table structures from MySQL .frm files with dbsake

To recover table structures using dbsake, you need to use the “dbsake frmdump” command, followed by the .frm file path. The frmdump command decodes the MySQL .frm file and provides a “CREATE TABLE” or “CREATE VIEW” statement in the output. The good thing is that it doesn't require a running MySQL server instance, and it interprets the .frm file according to rules similar to the MySQL server's.

Let’s see an example:

# ./dbsake frmdump /var/lib/mysql/sakila/staff.frm
--
-- Table structure for table `staff`
-- Created with MySQL Version 5.6.27
--
CREATE TABLE `staff` (
`staff_id` tinyint(3) unsigned NOT NULL AUTO_INCREMENT,
`first_name` varchar(45) NOT NULL,
`last_name` varchar(45) NOT NULL,
`address_id` smallint(5) unsigned NOT NULL,
`picture` blob,
`email` varchar(50) DEFAULT NULL,
`store_id` tinyint(3) unsigned NOT NULL,
`active` tinyint(1) NOT NULL DEFAULT '1',
`username` varchar(16) NOT NULL,
`password` varchar(40) CHARACTER SET utf8 COLLATE utf8_bin DEFAULT NULL,
`last_update` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
PRIMARY KEY (`staff_id`),
KEY `idx_fk_store_id` (`store_id`),
KEY `idx_fk_address_id` (`address_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

The result looks pretty good, and has recovered the character set and collation information as well. We can also see the MySQL version (5.6.27) retrieved from the .frm file. It is important to mention that the command only decodes the information available in .frm file, which means that it cannot recover InnoDB foreign-key references and AUTO_INCREMENT values. These items are stored outside of the .frm file.

The frmdump command makes the recovery process easier and faster. We can easily script this and recover the structure of a large number of tables. For example, if we need to recover the structure of all tables from the world database, we can do the following:

mysql> create database world_recover;
# for tbl in `ls -1 /var/lib/mysql/world/*.frm`; do ./dbsake frmdump $tbl | mysql world_recover; done;
mysql> show tables from world_recover;
+-------------------------+
| Tables_in_world_recover |
+-------------------------+
| city                    |
| country                 |
| countrylanguage         |
+-------------------------+
3 rows in set (0.00 sec)

Filter and transform a mysqldump stream with dbsake

It’s a very common requirement to filter one or more tables from a mysqldump full database backup. The “dbsake sieve [options]” command helps us to filter or transform mysqldump output.

Let’s see how to extract a single table from a mysqldump file.

# mysqldump world > world.sql
# cat world.sql | ./dbsake sieve -t world.city > world.city.sql
Processed . Output: 1 database(s) 1 table(s) and 0 view(s)

The “-t” or “--table” option tells the command to only output the tables matching the given pattern. It will also show the number of databases, tables and views processed in the output.

To extract multiple tables, you can pass “-t db.tbl” multiple times.

# cat world.sql | ./dbsake sieve -t world.city -t world.country > world.city_country.sql
Processed . Output: 1 database(s) 2 table(s) and 0 view(s)
# cat world.city_country.sql | grep -i 'create table'
CREATE TABLE `city` (
CREATE TABLE `country` (

The latest Percona Server added the new “--innodb-optimize-keys” option to mysqldump. It changes the way InnoDB tables are dumped, so that secondary keys are created after loading the data, thus taking advantage of InnoDB fast index creation. This is a really great feature in that it helps us to restore data more efficiently than the default incremental rebuild that mysqldump performs.

Using the “dbsake sieve [options]” command, we can transform the regular mysqldump output to take advantage of fast index creation. The “--defer-indexes” option rewrites the output of CREATE TABLE statements, and arranges for secondary indexes to be created after the table data is loaded. Similarly, the “--defer-foreign-keys” option can be added to add foreign key constraints after loading table data.

Let’s see an example:

# cat world.sql | ./dbsake sieve --defer-indexes --defer-foreign-keys -t world.city  > world.city.sql
Processed . Output: 1 database(s) 1 table(s) and 0 view(s)

This means that world.city.sql will have the table structure with the primary key first, then the INSERT statements to load the data, and an additional ALTER TABLE statement that creates the secondary keys when there is at least one secondary index to add. Foreign keys will also be created along with the secondary indexes.

The original structure of table world.city:

CREATE TABLE `city` (
  `ID` int(11) NOT NULL AUTO_INCREMENT,
  `Name` char(35) NOT NULL DEFAULT '',
  `CountryCode` char(3) NOT NULL DEFAULT '',
  `District` char(20) NOT NULL DEFAULT '',
  `Population` int(11) NOT NULL DEFAULT '0',
  PRIMARY KEY (`ID`),
  KEY `CountryCode` (`CountryCode`),
  CONSTRAINT `city_ibfk_1` FOREIGN KEY (`CountryCode`) REFERENCES `country` (`Code`)
) ENGINE=InnoDB AUTO_INCREMENT=4080 DEFAULT CHARSET=latin1;

And the transformation done by dbsake:

...........
CREATE TABLE `city` (
  `ID` int(11) NOT NULL AUTO_INCREMENT,
  `Name` char(35) NOT NULL DEFAULT '',
  `CountryCode` char(3) NOT NULL DEFAULT '',
  `District` char(20) NOT NULL DEFAULT '',
  `Population` int(11) NOT NULL DEFAULT '0',
  PRIMARY KEY (`ID`)
) ENGINE=InnoDB AUTO_INCREMENT=4080 DEFAULT CHARSET=latin1;
...........
LOCK TABLES `city` WRITE;
...........
INSERT INTO `city` VALUES .....................
...........
UNLOCK TABLES;
--
-- InnoDB Fast Index Creation (generated by dbsake)
--
ALTER TABLE `city`
  ADD KEY `CountryCode` (`CountryCode`),
  ADD CONSTRAINT `city_ibfk_1` FOREIGN KEY (`CountryCode`) REFERENCES `country` (`Code`);
...........

For more sieve command options, please read the online manual.

Conclusion

The dbsake command makes it easier to recover table structures from .frm files. We can also filter and transform the mysqldump output easily without writing a complex awk or sed script. There are some more useful features of this tool that you can read about in the manual.


Jul
10
2015
--

How to create a rock-solid MySQL database backup & recovery strategy

Have you ever wondered what could happen if your MySQL database goes down?

Although it’s evident such a crash will cause downtime – and surely some business impact in terms of revenue – can you do something to reduce this impact?

The simple answer is “yes” by doing regular backups (of course) but are you 100% sure that your current backup strategy will really come through when an outage occurs? And how much precious time will pass (and how much revenue will be lost) before you get your business back online?

I usually think of backups as the step after HA fails. Let’s say we’re in M<>M replication and something occurs that kills the db but the HA can’t save the day. Let’s pretend that the UPS fails and those servers are completely out. You can’t failover; you have to restore data. Backups are a key piece of “Business Continuity.” Also factor in the frequent need to restore data that’s been altered by mistake. No ‘WHERE’ clause or DROP TABLE in prod instead of DEV. These instances are where backups are invaluable.

Let's take some time to discuss the possible backup strategies with MySQL… how to make backups efficiently, and also examine the different tools that are available. We'll cover these topics and more during my July 15 webinar: “Creating a Best-in-Class Backup and Recovery System for Your MySQL Environment” starting at 10 a.m. Pacific time.

On a related note, did you know that most online backups are possible with mysqldump and you can save some space on backups by using simple Linux tools? I’ll also cover this so be sure to join me next Wednesday. Oh, and it’s a free webinar, too!
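As a simple illustration of that space saving (the schema name and options here are generic, not from the webinar itself), a dump can be compressed on the fly with standard Linux tools:

$ # Online dump of InnoDB tables, compressed as it is written
$ mysqldump --single-transaction mydb | gzip > mydb.sql.gz
$ # Restore path: decompress and replay
$ gunzip < mydb.sql.gz | mysql mydb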

Stay tuned!


Mar
15
2015
--

Deep dive into MySQL’s innochecksum tool

One of our Percona Support customers recently reported that Percona XtraBackup failed with a page corruption error on an InnoDB table. The customer thought it was a problem or bug in the Percona XtraBackup tool. After investigation we found that an InnoDB page was actually corrupted, and the Percona XtraBackup tool caught the error as expected, hence the backup job failed.

I thought this would be an interesting topic and worthy of a blog post. In this article I will describe the innochecksum tool, when and how to use it, and the possible fixes if an InnoDB table suffers from page corruption.

The innochecksum tool is an offline tool that prints checksums for InnoDB files. This tool reads an InnoDB tablespace file, calculates the checksum for each page and reports mismatches, if any. A checksum mismatch is an indication of corrupt pages. Being an offline tool, innochecksum can't be used on a tablespace file that a MySQL server is currently using, hence you need to shut down the server prior to running the innochecksum tool. If you try to run the innochecksum tool on a running MySQL server, there is a chance that innochecksum crashes or reports a bad checksum for a good page, resulting in false positives. When you run innochecksum on a tablespace file that is opened by mysqld, there is also a chance that pages are dirty and not yet checksummed by the InnoDB storage engine itself.

The point: don’t run innochecksum against a running server.

InnoDB corruption can be caused by many factors (e.g. power loss, faulty hardware, bugs). The InnoDB storage engine validates the checksum calculated for each page read from a tablespace on disk against the checksum stored in the page. If InnoDB finds a page checksum mismatch, it will force down the MySQL server.

Let me show you a page corruption error identified by Percona XtraBackup during a backup run, which caused the backup to fail.

[01] xtrabackup: Database page corruption detected at page 25413, retrying...
[01] xtrabackup: Database page corruption detected at page 25413, retrying...
[01] xtrabackup: Database page corruption detected at page 25413, retrying...

First, we need to identify if the tablespace is really corrupted for that particular table. I do that with the help of the innochecksum utility as shown below. As I mentioned earlier, make sure to shut down MySQL before using the innochecksum tool.

$ innochecksum -p 25413 /path/to/datadir/database_name/table_name.ibd

I passed the -p (page) flag to innochecksum to only check the specific page that was reported corrupt by Percona XtraBackup. Without passing any option, the innochecksum tool will check the entire tablespace for corruption, which requires additional server downtime. The innochecksum tool also supports the -d (debug) option to print the checksum for each page and the -v (verbose) parameter to print a progress indicator. You can find more details in the manual. If the tool reports page corruption, then the database table is really corrupted, as below.
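For instance, checking the whole tablespace with the debug and verbose flags described above would look like this (the file path is the same placeholder as before):

$ innochecksum -d -v /path/to/datadir/database_name/table_name.ibd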

page 25413 invalid (fails log sequence number check)

In order to fix this issue, the first thing you should try is to mysqldump the corrupted table. If mysqldump succeeds, then the problem exists in the secondary indexes for that tablespace. This is because the mysqldump utility doesn't touch indexes, as indexes are created after all rows are inserted.

If mysqldump succeeds, then the problem is associated with the indexes. I would suggest the following options to fix the corruption; a short sketch of the first two follows the list.

— Execute OPTIMIZE TABLE on that table, which rebuilds the indexes. The table will be locked during the operation prior to MySQL 5.6.17; since MySQL 5.6.17, OPTIMIZE TABLE is an online operation.
— Rebuild the table with the pt-online-schema-change tool from Percona Toolkit. This will give the same result as OPTIMIZE TABLE in a non-blocking way, as pt-online-schema-change is an online schema change tool.
— Drop all secondary indexes and then recreate them. The table will be locked during that operation for writes only. Again, you can use the pt-online-schema-change tool for this purpose without sacrificing read/write ability on the table during the drop and create indexes operation.
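A minimal sketch of the first two options, with hypothetical database/table names:

mysql> OPTIMIZE TABLE mydb.mytable;

$ # Online rebuild: a no-op ALTER makes pt-online-schema-change copy the table
$ pt-online-schema-change --alter "ENGINE=InnoDB" D=mydb,t=mytable --execute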

Finally, I would suggest re-running the innochecksum tool to verify the table's integrity again, as this will make sure there is no more page corruption. In this case we found that the table was actually corrupted; fixing the table corruption through a backup/reload of the table fixed the problem, and Percona XtraBackup ran fine during the next run.

It is possible that mysqldump crashes a MySQL server for a corrupted table. Fortunately, Percona Server contains the innodb_corrupt_table_action variable, which you can enable. The variable is dynamic, which means enabling it doesn't require a MySQL server restart. Prior to Percona Server 5.6, innodb_corrupt_table_action was known as innodb_pass_corrupt_table. Once you enable this option, you can try mysqldump again. If you are using Oracle MySQL, then I would suggest trying innodb_force_recovery in case mysqldump fails to dump the table contents.
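A sketch of that retry (the schema/table names are hypothetical; “warn” is one of the values the Percona Server documentation lists for this variable):

mysql> SET GLOBAL innodb_corrupt_table_action = warn;

$ mysqldump mydb mytable > mytable.sql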

As a side note, if your backup completes without any errors while performing it with Percona XtraBackup, this means your InnoDB tables don't have any page checksum mismatches or corruption. Percona XtraBackup can validate page checksums, and in case of errors it logs the error and exits, as I mentioned above.

There is also a modified version of innochecksum, made available by Facebook's Mark Callaghan, which can be found in this bug report and provides extra stats on tablespace undo blocks. There is another tool, made by Google's Jeremy Cole, known as the InnoDB Ruby Tool, for examining the internals of InnoDB.

LIMITATIONS:

  • Innochecksum is an offline InnoDB checksum tool. This means you must stop the MySQL server; otherwise it produces an “Unable to lock file” error (since MySQL 5.7).
  • Old versions of innochecksum only support files up to 2GB in size. However, since MySQL 5.6, innochecksum supports files greater than 2GB in size.
  • The Percona Server variable innodb_corrupt_table_action is supported on tables existing in their own tablespace (i.e. innodb_file_per_table).
  • If you are using compressed tables (ROW_FORMAT=COMPRESSED), then you must use innochecksum from MySQL 5.7.2 or greater, as earlier versions of innochecksum don't support compressed tables. Check this bug for details.

New Features for the innochecksum tool from MySQL 5.7:

  • As I mentioned above, since MySQL 5.7 innochecksum supports file sizes greater than 2GB.
  • Since MySQL 5.7 you can log the output with the --log option.
  • The --page-type-summary option was added for page type summaries.
  • MySQL 5.7 also includes another nice option, --page-type-dump, which dumps the details of each page to standard output (STDOUT) or standard error (STDERR).
  • Since MySQL 5.7 innochecksum can be executed on multiple user-defined system tablespace files.
  • Since MySQL 5.7 innochecksum can be executed on multiple system tablespace files.

You can read more about this is in the MySQL 5.7 manual page of innochecksum.

Conclusion:
In this post, we identified InnoDB page corruption using the logs generated by Percona XtraBackup and fixed it by using the mysqldump tool. But again, unfortunately, Percona XtraBackup will not always fail in the same way when it finds corruption. So in some cases, it's not easy to tell whether Percona XtraBackup has failed due to a bad checksum or a bug of its own. But in most cases, page corruption is the culprit if Percona XtraBackup fails to complete.

To summarize, I would say that Percona XtraBackup is a good way of verifying whether or not InnoDB pages are corrupted – and you can also verify the same thing via the mysqldump utility.


Jun
10
2013
--

Migrating between MySQL schemas with Percona Xtrabackup

Recently, I was working with a client that asked about using Percona Xtrabackup to take a snapshot of a particular MySQL schema and then reload it with a different schema name on the same server. It caught me off guard because I'd never really thought about it; typically, I've used Xtrabackup simply to clone a server (for replication or migration) or to migrate an existing schema to a new server. However, given the import/export functionality of Xtrabackup combined with Percona Server (Exporting and Importing Tables), it did seem possible.

Further discussion with the client clarified the use case: clients make a mistake and need to compare their old data to their current data on the current live server. Mysqldump works well for this, but can be quite slow on larger schemas.

One of the downsides to mysqldump is the need to scan the full tables and, in turn, load that data into (and pollute) the buffer pool. Note that this can be somewhat mitigated using innodb_old_blocks_time, but that is outside the scope of this post. Similarly, reloading the data will be a very IO-intense operation (redo logs, binlogs, etc.), further polluting the buffer pool. Enter Xtrabackup…

As a quick refresher, Xtrabackup works by copying the dirty tablespace files while streaming the redo logs to ensure that all transactions are also captured. The --apply-log phase simply utilizes the built-in crash recovery and applies the redo logs to the dirty tablespaces, and voila, you have a consistent binary backup at a point in time.

When running Percona Server, you can utilize the --export flag during the --apply-log phase and then re-import those files to a running server. However, you need to have the existing table structures in place. Fortunately, this can be done easily using mysqldump --no-data.

Now, enough with the theory, here is the procedure I used:

  1. Locate an existing snapshot that contains the schema you are interested in (/tmp/snapshot/2013-06-03_11-30/orig)
  2. Get the table structures:  mysqldump --no-data orig > /tmp/orig.schema.sql
  3. Create the new target database:  mysqladmin create orig_old
  4. Load the schema into the target database:  mysql orig_old < /tmp/orig.schema.sql
  5. Ensure innodb_import_table_from_xtrabackup = 1  (dynamic variable)
  6. Prepare the backup using the --export flag:  innobackupex --apply-log --export /tmp/snapshot/2013-06-03_11-30
  7. For each table, run:  ALTER TABLE tblname DISCARD TABLESPACE
  8. Copy the .exp and .ibd files from the snapshot to the new instance:  cp /tmp/snapshot/2013-06-03_11-30/orig/*.{exp,ibd} /var/lib/mysql/orig_old
  9. Make sure that the files are owned by mysql: chown mysql:mysql /var/lib/mysql/orig_old/*
  10. For each table, run: ALTER TABLE tblname IMPORT TABLESPACE

Now, you have an old version of the schema running side by side with the current version on the same server. This will allow you to compare and restore values (potentially corrupted via user error or other issues) with SQL rather than needing to import from a remote server. This allows for more targeted restores and easier comparison, and lets remote users compare on a live system without needing to grant access to another “backup” server.

As noted by one of my colleagues (thanks Bill Karwin!), the cumbersome part of this process is the DISCARD/IMPORT TABLESPACE step as that is done manually for each table (currently a blueprint in innobackupex).  He also included this helpful script to generate all of those statements for steps 7 and 10 in two scripts:

mysql -N -B <<'EOF' > discard-ddl.sql
SELECT CONCAT('ALTER TABLE `', table_name, '` DISCARD TABLESPACE;') AS _ddl
FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA='orig' AND ENGINE='InnoDB';
EOF
mysql -N -B <<'EOF' > import-ddl.sql
SELECT CONCAT('ALTER TABLE `', table_name, '` IMPORT TABLESPACE;') AS _ddl
FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA='orig' AND ENGINE='InnoDB';
EOF
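A small usage note, following the naming used in this post: the generated statements don't qualify the schema, so run them with the target database as the default:

$ # Step 7: discard the placeholder tablespaces in the new schema
$ mysql orig_old < discard-ddl.sql
$ # ...copy the .exp/.ibd files and fix ownership (steps 8 and 9)...
$ # Step 10: import the snapshot tablespaces
$ mysql orig_old < import-ddl.sql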

For full disclosure, here are the versions I was using on my test VM:

  • Percona Server 5.5.30
  • Xtrabackup 2.0.6

Some other use cases that come to mind for this technique would be:

  • Targeted schema migration (to a new schema name) using --include="^orig[.]" (i.e. functional partitioning)
    • Combine with --replicate-rewrite-db
  • Daily backups from production to staging with different db names (i.e. orig_prod restored to orig_staging)
  • I’m sure there are others as well…


Apr
19
2013
--

MySQL alternative Percona Server 5.1.68-14.6 now available


Percona is glad to announce the release of Percona Server 5.1.68-14.6 on April 19, 2013 (downloads are available here and from the Percona Software Repositories). Based on MySQL 5.1.68, including all the bug fixes in it, Percona Server 5.1.68-14.6, a MySQL alternative, is now the current stable release in the 5.1 series. All of Percona's software is open source and free; all the details of the release can be found in the 5.1.68-14.6 milestone at Launchpad.

Bugs Fixed:

  • Fixed yum dependencies that were causing conflicts in CentOS 6.3 during installation. Bugs fixed #1031427 and #1051874.
  • When mysqldump was used with the --innodb-optimize-keys option, it produced invalid SQL for cases when there was an explicitly named foreign key constraint which implied an implicit secondary index with the same name. Fixed by detecting such cases and omitting the corresponding secondary keys from the deferred key creation optimization. Bug fixed #1081016.
  • When mysqldump was used with the --innodb-optimize-keys and --no-data options, all secondary key definitions would be lost. Bug fixed #989253.
  • Percona Server was built with YaSSL which could cause some of the programs that use it to crash. Fixed by building packages with OpenSSL support rather than the bundled YaSSL library. Bug fixed #1104977.
  • Fix for bug #1070856 introduced a regression in Percona Server 5.1.66-14.2 which could cause a server to hang when binary log is enabled. Bug fixed #1162085.
  • Percona Server would re-create the test database when using rpm on server upgrade, even if the database was previously removed. Bug fixed #710799.
  • Debian packages included the old version of innotop. Fixed by removing innotop and its InnoDBParser Perl package from source and Debian installation. Bug fixed #1032139.
  • Percona Server was missing help texts in the MySQL client because the help tables were missing. Bug fixed #1041981.

Other bugs fixed: #1154962, #1154959, #1154957, #1154954, #1144059, and #1050536.

Bugs can be reported on the launchpad bug tracker. Release notes for Percona Server 5.1.68-14.6 are available in our online documentation.

