Sep 13, 2018

Percona Toolkit 3.0.12 Is Now Available

Percona announces the release of Percona Toolkit 3.0.12 on September 13, 2018.

Percona Toolkit is a collection of advanced open source command-line tools, developed and used by the Percona technical staff, that are engineered to perform a variety of MySQL®, MongoDB® and system tasks that are too difficult or complex to perform manually. With over 1,000,000 downloads, Percona Toolkit supports Percona Server for MySQL, MySQL®, MariaDB®, Percona Server for MongoDB and MongoDB.

Percona Toolkit, like all Percona software, is free and open source. You can download packages from the website or install from official repositories.

This release includes the following changes:

Fixed bugs:

  • PT-1611: pt-archiver failed to output UTF-8 characters.
  • PT-1603: pt-table-sync incorrectly calculated chunk boundaries in case of unsorted ENUM fields in indexes.
  • PT-1574: pt-online-schema-change failed on tables with a nullable unique key and a row with NULL values.
  • PT-1572: ENUM fields usage in keys was improved, resulting in higher speed for expressions with sorted ENUM items.
  • PT-1422: pt-mysql-summary could hang when NULL values appear in the processlist Time column.

Documentation change:

  • PT-1321: The required MySQL privileges were detailed in the pt-online-schema-change documentation.

Help us improve our software quality by reporting any bugs you encounter using our bug tracking system.

The post Percona Toolkit 3.0.12 Is Now Available appeared first on Percona Database Performance Blog.

Sep 13, 2018

Analyzing Amazon Aurora Slow Logs with pt-query-digest

In this blog post we discuss how you can analyze slow query logs from Amazon Aurora for MySQL (referred to as Amazon Aurora in the rest of this post). The tools and techniques explained here apply to the other MySQL-compatible services available under Amazon Aurora; however, we’ll focus specifically on analyzing slow logs from Amazon Aurora version 2 (MySQL 5.7 compatible) using pt-query-digest. We believe there is a bug in Aurora where it logs really big numbers for query execution and lock times for otherwise really fast queries.

So, the main steps we need are:

  1. Enable slow query logging on your Amazon Aurora DB parameter group, apply the change when appropriate.
  2. Download the slow log(s) that cover the time period you want to investigate, and optionally concatenate them.
  3. Run pt-query-digest on the downloaded logs and check the results.

Enable slow query logging

For our testing, we decided to capture all the SELECT queries hitting our Amazon Aurora instance, mainly because we had a sysbench OLTP read-only workload that wouldn’t otherwise produce many slow queries. An easy way to do this is to enable slow query logging and set long_query_time to 0. To achieve that, we created a new DB parameter group and applied it to our test Aurora instance with the following three parameters set as below:

slow_query_log=1
long_query_time=0
min_examined_row_limit=0

Once you have the above configuration applied to Amazon RDS, you will be able to see slow query logs being created in the Amazon RDS console.

Download the log file

You can download the log file of your choice using the Amazon RDS console, or you can use the following AWS CLI command to achieve the same:

$ aws rds download-db-log-file-portion --db-instance-identifier perconasupport  --starting-token 0 --output text --log-file-name slowquery/mysql-slowquery.log.2018-09-03.09 > mysql-slowquery.log.2018-09-03.09

Depending on the size of the chosen log file, the above command will take some time to complete the download.

Run pt-query-digest on the log file

Once the file has been downloaded, you can analyze it using the following pt-query-digest command.

$ pt-query-digest --group-by fingerprint --order-by Query_time:sum mysql-slowquery.log.2018-09-03.09

On our Aurora test slow log file, the initial results didn’t look right so we had to apply a workaround. Here is the header of the initial results from pt-query-digest:

# 456.2s user time, 2.5s system time, 43.80M rss, 141.48M vsz
# Current date: Tue Sep 4 15:54:21 2018
# Hostname: aahmed-GL503VD
# Files: mysql-slowquery.log.2018-09-03.09
# Overall: 5.13M total, 60 unique, 1.43k QPS, 507.43Gx concurrency _______
# Time range: 2018-09-03T08:00:04 to 2018-09-03T09:00:03
# Attribute total min max avg 95% stddev median
# ============ ======= ======= ======= ======= ======= ======= =======
# Exec time 1826227663297288s 1us 18446744073710s 355917782s 761us 80127878922s 93us
# Lock time 1401952549601936s 0 18446744073710s 273229812s 44us 70205933577s 23us
# Rows sent 94.71M 0 100 19.35 97.36 37.62 0.99
# Rows examine 216.26M 0 300 44.19 299.03 84.74 0.99
# Query size 196.24M 5 1.24k 40.08 72.65 18.90 36.69
# Profile
# Rank Query ID Response time Calls R/Call
# ==== ====================== =========================== ======= ========
# 1 0xE81D0B3DB4FB31BC5... 1346612317380813.0000 73.7% 3194111 421592210.5966 18... SELECT sbtest?
# 2 0x9934EF6887CC7A638... 147573952589685.0625 8.1% 319381 462062403.8051 18... SELECT sbtest?
# 3 0x8D589AFA4DFAEEED8... 110680464442264.1094 6.1% 319411 346514254.1812 18... BEGIN
# 4 0xFF7C69F51BBD3A736... 92233720368565.1875 5.1% 319388 288782673.0139 18... SELECT sbtest?
# 5 0xFFFCA4D67EA0A7888... 73786976294861.9844 4.0% 321238 229695665.8143 18... COMMIT
# MISC 0xMISC 55340232221335.8281 3.0% 657509 84166501.4796 0.0 <43 ITEMS>

What’s wrong with the above results is that the total query Exec time and Lock time are very large numbers. Digging deeper into the logs revealed a problem with the slow logs themselves that had very large numbers for Query time & Lock time for some queries. For instance in our case, of 5.13 million queries in the log file, only 111 had the anomaly. Even so, it was enough to skew the results.

# Time: 2018-09-03T08:41:47.363522Z
--
SELECT c FROM sbtest1 WHERE id=24278;
# Time: 2018-09-03T08:41:49.363224Z
# User@Host: perconasupport[perconasupport] @ [172.30.2.111] Id: 20869
# Query_time: 18446744073709.550781 Lock_time: 18446744073709.550781 Rows_sent: 1 Rows_examined: 1
SET timestamp=1535964109;
SELECT c FROM sbtest2 WHERE id=989322;
# Time: 2018-09-03T08:41:49.363296Z
--
BEGIN;
# Time: 2018-09-03T08:41:53.362947Z
# User@Host: perconasupport[perconasupport] @ [172.30.2.111] Id: 20873
# Query_time: 18446744073709.550781 Lock_time: 18446744073709.550781 Rows_sent: 1 Rows_examined: 1
SET timestamp=1535964113;
SELECT c FROM sbtest1 WHERE id=246889;
# Time: 2018-09-03T08:41:53.363003Z

Incorrect logging

The above two queries are, in fact, really fast, but for some reason the execution and lock times are wrongly logged in the slow query log. Since the number of such query log records is statistically negligible compared to the total number of queries, we decided to ask pt-query-digest to ignore them using the command line parameter --attribute-value-limit. The default value of this parameter is 0. We decided to increase it to 2^32 (4294967296), making pt-query-digest ignore the large numbers from the slow query log. So, the pt-query-digest command became:

$ pt-query-digest --group-by fingerprint --order-by Query_time:sum --attribute-value-limit=4294967296 mysql-slowquery.log.2018-09-03.09

This caused the 111 queries with the bad log times to be ignored, and the results looked good. In our case, the ignored queries were bad variants of queries for which good versions existed. You can tell because the number of unique queries remained the same after the bad variants were ignored. However, this may not always hold true, and you should expect to lose some fidelity, especially if you are analyzing a smaller slow log.

# 441s user time, 450ms system time, 38.19M rss, 111.76M vsz
# Current date: Tue Sep 4 16:23:33 2018
# Hostname: aahmed-GL503VD
# Files: mysql-slowquery.log.2018-09-03.09
# Overall: 5.13M total, 60 unique, 1.43k QPS, 0.30x concurrency __________
# Time range: 2018-09-03T08:00:04 to 2018-09-03T09:00:03
# Attribute total min max avg 95% stddev median
# ============ ======= ======= ======= ======= ======= ======= =======
# Exec time 1096s 1us 198ms 213us 761us 431us 93us
# Lock time 180s 0 103ms 34us 44us 161us 23us
# Rows sent 94.71M 0 100 19.35 97.36 37.62 0.99
# Rows examine 216.26M 0 300 44.19 299.03 84.74 0.99
# Query size 196.24M 5 1.24k 40.08 72.65 18.90 36.69
# Profile
# Rank Query ID Response time Calls R/Call V/M Ite
# ==== =========================== ============== ======= ====== ===== ===
# 1 0xE81D0B3DB4FB31BC558CAE... 400.1469 36.5% 3194111 0.0001 0.00 SELECT sbtest?
# 2 0xF0C5AE75A52E847D737F39... 161.4065 14.7% 319453 0.0005 0.00 SELECT sbtest?
# 3 0xFFFCA4D67EA0A788813031... 155.8740 14.2% 321238 0.0005 0.00 COMMIT
# 4 0x8D589AFA4DFAEEED85FFF5... 107.9827 9.9% 319411 0.0003 0.00 BEGIN
# 5 0x9934EF6887CC7A6384D1DE... 94.1002 8.6% 319381 0.0003 0.00 SELECT sbtest?
# 6 0xFF7C69F51BBD3A736EEB1B... 79.9279 7.3% 319388 0.0003 0.00 SELECT sbtest?
# 7 0xA729E7889F57828D3821AE... 75.3969 6.9% 319398 0.0002 0.00 SELECT sbtest?
# MISC 0xMISC 21.1212 1.9% 18658 0.0011 0.0 <41 ITEMS>
# Query 1: 1.27k QPS, 0.16x concurrency, ID 0xE81D0B3DB4FB31BC558CAEF5F387E929 at byte 358647353
# Scores: V/M = 0.00
# Time range: 2018-09-03T08:00:04 to 2018-09-03T08:42:00
# Attribute pct total min max avg 95% stddev median
# ============ === ======= ======= ======= ======= ======= ======= =======
# Count 62 3194111
# Exec time 36 400s 10us 198ms 125us 332us 300us 80us
# Lock time 74 134s 0 26ms 42us 49us 154us 27us
# Rows sent 3 3.01M 0 1 0.99 0.99 0.11 0.99
# Rows examine 1 3.01M 0 1 0.99 0.99 0.11 0.99
# Query size 57 112.37M 32 38 36.89 36.69 0.53 36.69
# String:
# Databases perconasupport
# Hosts 172.30.2.111
# Users perconasupport
# Query_time distribution
# 1us
# 10us ################################################################
# 100us ##############
# 1ms #
# 10ms #
# 100ms #
# 1s

That number looks familiar

The really big number 18446744073709.550781 seemed to ring a bell. A quick web search revealed that it could be a regression of an old bug in MySQL’s code. The following bugs were found to have the same value being reported for query exec time & query lock time.

  1. https://bugs.mysql.com/bug.php?id=59757
  2. https://bugs.mysql.com/bug.php?id=63524
  3. https://bugs.mysql.com/bug.php?id=35396
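The suspicion checks out arithmetically: the value is the maximum unsigned 64-bit integer interpreted as microseconds and printed in seconds, the classic signature of a tiny negative duration wrapping around an unsigned counter. A quick check:

```python
# The anomalous value equals the largest unsigned 64-bit integer taken as
# microseconds and printed in seconds: the signature of a small negative
# duration stored in an unsigned 64-bit counter.
anomaly_seconds = 18446744073709.550781
u64_max_as_seconds = (2**64 - 1) / 1_000_000   # ~18446744073709.55
assert abs(anomaly_seconds - u64_max_as_seconds) < 0.01
```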

Once slow logs were enabled, we used the following sysbench command to generate the workload for the Amazon Aurora instance. You might like to try it yourself. Please note that this used sysbench version 1.0.14.
$ sysbench --db-driver=mysql --mysql-user=perconasupport --mysql-host=perconasupport-1234567.cgmobiazycdv.eu-west-1.rds.amazonaws.com --mysql-password=XXXXXXX  --mysql-db=perconasupport --range_size=100 --table_size=1000000 --tables=2 --threads=6 --events=0 --time=600 --rand-type=uniform /usr/share/sysbench/oltp_read_only.lua run

If you are an Amazon Aurora user, have you found any problems analyzing slow query logs? You are welcome to use the comments section, below, to let me know.

Percona Toolkit

pt-query-digest is part of Percona Toolkit, a collection of advanced open source command-line tools, developed and used by the Percona technical staff. Percona Toolkit is open source and free to download and use.


Aug 22, 2018

How to Compile Percona Server for MySQL 5.7 in Raspberry Pi 3

In this post I’ll give you the steps to compile Percona Server for MySQL 5.7.22 on a Raspberry Pi 3. Why? Because in general this little computer is cheap, has low power consumption, and is great to use as a test machine for developers.

By default, Raspbian OS includes very few versions of MySQL to install:

$ apt-cache search mysql | grep server
...
mariadb-server-10.0 - MariaDB database server binaries
mariadb-server-10.1 - MariaDB database server binaries
mariadb-server-core-10.0 - MariaDB database core server files
mariadb-server-core-10.1 - MariaDB database core server files
mysql-server - MySQL database server binaries and system database setup [transitional] (5.5)
...

If you want to install MySQL or MariaDB on an ARM architecture using official pre-built binaries, you are limited to those distributions and versions.

Some time ago, Roel Van de Paar wrote the post “Percona Server on the Raspberry Pi: Your own MySQL Database Server for Under $80”, using Fedora ARM as the OS on one of the first versions of the Raspberry Pi boards.

Now we will use the latest version of the Raspberry Pi 3 with Raspbian OS. In my case, the OS version is:

$ cat /etc/issue
Raspbian GNU/Linux 9 \n \l

The Installation of Percona Server for MySQL on Raspberry Pi 3

Let’s start. We will need many dev packages and cmake to compile the source code. Here is the command line to update or install all these packages:

apt-get install screen cmake debhelper autotools-dev autoconf automake libtool bison flex libaio-dev libncurses-dev libz-dev zlib1g-dev libgcrypt11-dev libboost-dev build-essential bzr wget mysql-client

Now we need to download the Percona Server for MySQL 5.7.22 source code and then we can proceed to compile.

Before starting to compile the source code, we will need to extend the swap memory. This is necessary to avoid encountering memory problems at compilation time.

$ dd if=/dev/zero of=/swapfile1GB bs=1M count=1024
$ chmod 0600 /swapfile1GB
$ mkswap /swapfile1GB
$ swapon /swapfile1GB

Now we can check the memory and confirm that the swap is active:

$ free -m
              total        used        free      shared  buff/cache   available
Mem:            927         176          92           2         658         683
Swap:          1123          26        1097

I recommend using a screen session to compile the source code, because it takes a long time.

$ cd /root
$ screen -SL compile_percona_server
$ wget https://www.percona.com/downloads/Percona-Server-LATEST/Percona-Server-5.7.22-22/source/tarball/percona-server-5.7.22-22.tar.gz
$ tar xzf percona-server-5.7.22-22.tar.gz
$ cd percona-server-5.7.22-22
$ cmake  -DDOWNLOAD_BOOST=ON -DWITH_BOOST=$HOME/my_boost .
$ make
$ make install

After it has compiled and installed successfully, it’s time to create our datadir directory for this Percona Server version, and the mysql user. Feel free to use other directory names

$ mkdir /var/lib/mysql
$ useradd mysql -d /var/lib/mysql
$ chown mysql:mysql /var/lib/mysql

Now it’s time to create a minimal my.cnf config file to start MySQL. You can use the example below:

$ vim /etc/my.cnf
[mysql]
socket=/var/lib/mysql/mysql.sock
[mysqld]
datadir = /var/lib/mysql
server_id = 2
binlog-format = row
log_bin = /var/lib/mysql/binlog
innodb_buffer_pool_size = 128M
socket=/var/lib/mysql/mysql.sock
symbolic-links=0
[mysqld_safe]
log-error=/var/lib/mysql/mysqld.log
pid-file=/var/lib/mysql/mysqld.pid

Then we need to initialize the initial databases/schemas and the ibdata and ib_logfile files with the following command:

$ /usr/local/mysql/bin/mysqld --initialize-insecure --user=mysql --basedir=/usr/local/mysql --datadir=/var/lib/mysql

Now, finally, it’s time to start MySQL:

$ /usr/local/mysql/bin/mysqld_safe --defaults-file=/etc/my.cnf --user=mysql &

We can check whether MySQL started by taking a look at the mysqld.log file:

$ cat /var/lib/mysql/mysqld.log
...
2018-08-13T16:44:55.067352Z 0 [Note] Server hostname (bind-address): '*'; port: 3306
2018-08-13T16:44:55.067576Z 0 [Note] IPv6 is available.
2018-08-13T16:44:55.067680Z 0 [Note]   - '::' resolves to '::';
2018-08-13T16:44:55.067939Z 0 [Note] Server socket created on IP: '::'.
2018-08-13T16:44:55.258576Z 0 [Note] Event Scheduler: Loaded 0 events
2018-08-13T16:44:55.259525Z 0 [Note] /usr/local/mysql/bin/mysqld: ready for connections.
Version: '5.7.22-22-log'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  Source distribution

In our example, it started OK.

Remember that MySQL server was installed and started using an alternative path. Now it’s time to connect and check that everything is running well.

$ /usr/local/mysql/bin/mysql -uroot
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 4
Server version: 5.7.22-22-log Source distribution
Copyright (c) 2009-2018 Percona LLC and/or its affiliates
Copyright (c) 2000, 2018, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
| sys |
+--------------------+
4 rows in set (0.00 sec)

To check the version that’s running, you can use the following command:

mysql> SHOW VARIABLES LIKE "%version%";
+-------------------------+---------------------+
| Variable_name | Value |
+-------------------------+---------------------+
| innodb_version | 5.7.22-22 |
| protocol_version | 10 |
| slave_type_conversions | |
| tls_version | TLSv1,TLSv1.1 |
| version | 5.7.22-22-log |
| version_comment | Source distribution |
| version_compile_machine | armv7l |
| version_compile_os | Linux |
| version_suffix | -log |
+-------------------------+---------------------+
9 rows in set (0.02 sec)

Improving Performance

Keep in mind that if you configure the datadir directory on the same microSD card that runs the OS, MySQL will run slowly: even creating a new table can take a few seconds. So I recommend using a separate USB SSD disk and moving the datadir directory to it. That’s more practical, and the performance is much better.

I hope you enjoyed this guide on how to use a tiny server to install Percona Server for MySQL.
If you want to test other versions, please go ahead: the steps will be very similar to these.



Jul 6, 2018

Percona Toolkit 3.0.11 Is Now Available

Percona announces the release of Percona Toolkit 3.0.11 on July 6, 2018.


This release includes the following changes:

New Features:

  • PT-1571: Improved hostname recognition in pt-secure-collect
  • PT-1569: Disabled --alter-foreign-keys-method=drop_swap in pt-online-schema-change
  • PT-242: (pt-stalk) Include SHOW SLAVE STATUS on MySQL 5.7 (Thanks Marcelo Altmann)

Fixed bugs:

  • PT-1570: pt-archiver fails to detect columns with the word *GENERATED* as part of the comment
  • PT-1563: pt-show-grants fails for MySQL 5.6, producing an error which reports that an unknown column account_locked has been detected.
  • PT-1551: pt-table-checksum fails on MySQL 8.0.11
  • PT-241: (pt-stalk) Slave queries don’t run on MySQL 5.7 because the FQDN was missing (Thanks Marcelo Altmann)

Breaking changes:

Starting with this version, query checksums in pt-query-digest use the full MD5 as a CHAR(32) field instead of storing just the least significant bytes of the checksum as a BIGINT field. The reason for this change is that storing only the least significant bytes as a BIGINT produced inconsistent results in MySQL 8 compared to MySQL 5.6+.
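The difference between the two storage schemes can be sketched in a few lines of Python (the query fingerprint here is invented for illustration):

```python
import hashlib

# A query fingerprint, as pt-query-digest might produce (invented example).
fingerprint = "select c from sbtest? where id=?"

full_md5 = hashlib.md5(fingerprint.encode()).hexdigest()

# Old scheme: keep only the 16 least significant hex digits (8 bytes)
# of the MD5 and store them as a BIGINT.
as_bigint = int(full_md5[16:], 16)

# New scheme: store the complete 32-character digest as CHAR(32).
assert len(full_md5) == 32
assert as_bigint < 2**64                     # fits in an unsigned BIGINT
assert full_md5.endswith("%016x" % as_bigint)  # only half the digest survives
```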

pt-online-schema-change in MySQL 8:

Due to a bug in MySQL 8.0+, it is not possible to use the drop_swap method to rebuild constraints, because renaming a table results in losing the foreign keys. You must specify a different method explicitly.

Help us improve our software quality by reporting any bugs you encounter using our bug tracking system.


May 22, 2018

Percona Toolkit 3.0.10 Is Now Available

Percona announces the release of Percona Toolkit 3.0.10 on May 22, 2018.


This release includes the following changes:

New Features:

  • PT-131: pt-table-checksum disables the QRT plugin
    The Query Response Time Plugin provides a tool for analyzing information by counting and displaying the number of queries according to the length of time they took to execute. This feature adds a new flag, --disable-qrt-plugin, that leverages Percona Server for MySQL’s new ability to disable the QRT plugin at the session level. The advantage of enabling this Toolkit feature is that the QRT metrics are not impacted by the work that pt-table-checksum performs. This means that QRT metrics report only the work your application is generating on MySQL, not clouded by the activities of pt-table-checksum.

  • PT-118: pt-table-checksum reports the number of rows of difference between master and slave
    We’re adding support for pt-table-checksum to identify the number of row differences between master and slave. Previously you were able to see only the count of chunks that differed between hosts. This is helpful for situations where you believe you can tolerate some measure of row count drift between hosts, but want to be precise in understanding what that row count difference actually is.

Improvements:

  • PT-1546: Improved support for MySQL 8 roles
  • PT-1543: The encrypted table status query causes high load over multiple minutes
    Users reported that listing encrypted table status can be very slow. We’ve gated this functionality behind the new --list-encrypted-tables flag, which defaults to disabled.
  • PT-1536: Added info about encrypted tablespaces in pt-mysql-summary
    We’ve improved pt-mysql-summary to include information about encrypted tablespaces. This information is also available by using --list-encrypted-tables.

Bug Fixes:

  • PT-1556: pt-table-checksum 3.0.9 does not change binlog_format to statement any more.

pt-show-grants has several known issues when working with MySQL 8 and roles, which Percona aims to address in subsequent Percona Toolkit releases: PT-1560, PT-1559, and PT-1558.

Help us improve our software quality by reporting any bugs you encounter using our bug tracking system.


Apr 20, 2018

Percona Toolkit 3.0.9 Is Now Available

Percona announces the release of Percona Toolkit 3.0.9 on April 20, 2018.


This release includes the following changes:

New Tools:

  • PT-1501: pt-secure-collect – new tool to collect and sanitize pt-tools outputs

New Features:

  • PT-1530: Add support for encryption status to pt-mysql-summary
  • PT-1526: Add ndb status to pt-mysql-summary (Thanks Fernando Ipar)
  • PT-1525: Add support for MySQL 8 roles into pt-mysql-summary
  • PT-1509: Make pt-table-sync only set binlog_format when necessary (Thanks Moritz Lenz)
  • PT-1508: Add --read-only-interval and --fail-successive-errors flags to pt-heartbeat (Thanks Shlomi Noach)
  • PT-243: Add --max-hostname-length and --max-line-length flags to pt-query-digest

Bug Fixes:

  • PT-1527: Fixed pt-table-checksum ignores --nocheck-binlog-format

Improvements:

  • PT-1507: pt-summary does not reliably read in the transparent huge pages setting (Thanks Nick Veenhof)
  • PT-1488: pt-show-grants support for MySQL 8.0

Help us improve our software quality by reporting any bugs you encounter using our bug tracking system.


Apr 6, 2018

How to Handle pt-table-checksum Errors

In this blog post, we’ll look at how to approach pt-table-checksum errors.

pt-table-checksum is one of the most popular tools in Percona Toolkit, and it is widely used to identify data differences between masters and slaves. As Percona Support Engineers, we therefore often get questions about the errors and warnings pt-table-checksum produces. Below we address the most common issues raised, and how to mitigate the related warnings or errors.

Unable to detect slaves

Cannot connect to h=127.0.0.1,p=...,u=percona
Diffs cannot be detected because no slaves were found. Please read the --recursion-method documentation for information.

It’s possible that the tool cannot connect to the slaves because the information found on the master is not specific enough. By default, it looks for slaves based on the replica threads visible in the master’s processlist. This could be the problem if, for example, the slave’s MySQL runs on a different TCP port, the hostname does not resolve correctly, both the master and slave are on the same host, or this is Galera-based replication. In these cases, there is the --recursion-method option to try different discovery methods: ‘hosts’ or ‘cluster’. And if all of them fail, you can specify each slave’s details manually using the ‘dsn’ method.

An example using this option for the cluster looks like this:

# pt-table-checksum --user=root --password=*** --databases="db1" --recursion-method=cluster 192.168.88.82
Checking if all tables can be checksummed ...
Starting checksum ...
Not checking replica lag on pxc02 because it is a cluster node.
Not checking replica lag on pxc03 because it is a cluster node.
TS ERRORS DIFFS ROWS CHUNKS SKIPPED TIME TABLE
03-03T00:24:13 0 0 12 1 0 0.033 db1.t1
03-03T00:24:13 0 0 4 1 0 0.031 db1.t2

and when a DSN is needed (like for mysqlsandbox instances), we have to add the slave(s) details to the table, similar to below:

master [localhost] {msandbox} ((none)) > create table percona.dsns (id int(11) NOT NULL AUTO_INCREMENT,parent_id int(11) DEFAULT NULL,dsn varchar(255) NOT NULL,PRIMARY KEY (id));
Query OK, 0 rows affected (0.08 sec)
master [localhost] {msandbox} ((none)) > insert into percona.dsns values (null,null,"h=localhost,S=/tmp/mysql_sandbox20997.sock");
Query OK, 1 row affected (0.03 sec)

$ pt-table-checksum --databases="test" --tables="s1"  --recursion-method=dsn=localhost,D=percona,t=dsns u=root,p=msandbox,h=localhost,S=/tmp/mysql_sandbox20996.sock
Checking if all tables can be checksummed ...
Starting checksum ...
TS ERRORS DIFFS ROWS CHUNKS SKIPPED TIME TABLE
03-19T14:16:05 0 1 0 1 0 0.344 test.s1

ROW format on slave

Replica slave1.myorg.com has binlog_format ROW which could cause pt-table-checksum to break replication. Please read "Replicas using row-based replication" in the LIMITATIONS section of the tool's documentation. If you understand the risks, specify --no-check-binlog-format to disable this check.

The problem is that second- and next-level replicas (in a chain replication topology) will not calculate the diffs as expected. So this message warns that the slave is using binlog_format=ROW, as the tool needs STATEMENT format to calculate the diffs separately on the slave and master. This is done by replicating the command (e.g., INSERT INTO percona.checksum SELECT CRC32 …. WHERE … ) as the original statement, not as a row copy of CRC values already computed on the master. That is possible because the tool sets binlog_format=STATEMENT in its session. This session setting does not propagate further into the slave’s own binary log, though. This is not a problem when all the slaves replicate directly from the master, and in such cases we can ignore that message and use the --no-check-binlog-format option.
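The reason the checksum statement must re-execute on each replica can be illustrated with a toy version of the per-chunk checksum (a conceptual Python sketch: zlib.crc32 stands in for MySQL’s CRC32(), and the row data is invented):

```python
import zlib

def chunk_checksum(rows):
    """Toy per-chunk checksum: CRC32 accumulated over the concatenated
    column values of every row in the chunk."""
    crc = 0
    for row in rows:
        crc = zlib.crc32("#".join(str(c) for c in row).encode(), crc)
    return crc

master_rows = [(1, "alice"), (2, "bob")]
replica_rows = [(1, "alice"), (2, "b0b")]   # one drifted row

# Replicated as a STATEMENT, the checksum is recomputed over the
# replica's own rows, so the drift shows up as a different value.
assert chunk_checksum(master_rows) != chunk_checksum(replica_rows)
# Replicated as a ROW image, the replica would merely store the
# master's value and the drift would go undetected.
```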

By the way, the warning message is misleading regarding the breaking-replication claim, hence the bug we reported.

Unable to switch session binlog_format to STATEMENT

# pt-table-checksum --user=root --password=cmon --databases="db1" --recursion-method=cluster 192.168.88.82
03-02T23:54:50 Failed to /*!50108 SET @@binlog_format := 'STATEMENT'*/: DBD::mysql::db do failed: Percona-XtraDB-Cluster prohibits setting binlog_format to STATEMENT or MIXED with pxc_strict_mode = ENFORCING or MASTER [for Statement "/*!50108 SET @@binlog_format := 'STATEMENT'*/"] at /bin/pt-table-checksum line 10064.
This tool requires binlog_format=STATEMENT, but the current binlog_format is set to ROW and an error occurred while attempting to change it. If running MySQL 5.1.29 or newer, setting binlog_format requires the SUPER privilege. You will need to manually set binlog_format to 'STATEMENT' before running this tool.

or:

$ pt-table-checksum -h przemek-aurora57.xxx.rds.amazonaws.com -u przemek -p xxx --databases="test"
02-19T12:51:01 Failed to /*!50108 SET @@binlog_format := 'STATEMENT'*/: DBD::mysql::db do failed: Access denied; you need (at least one of) the SUPER privilege(s) for this operation for Statement "/*!50108 SET @@binlog_format := 'STATEMENT'*/" at /usr/bin/pt-table-checksum line 10023.
This tool requires binlog_format=STATEMENT, but the current binlog_format is set to ROW and an error occurred while attempting to change it. If running MySQL 5.1.29 or newer, setting binlog_format requires the SUPER privilege. You will need to manually set binlog_format to 'STATEMENT' before running this tool.

This can be an issue if STATEMENT mode is unsupported in the MySQL variant or special edition of it (Amazon RDS, for example), or when switching is prohibited either by lack of the SUPER privilege (a limitation of Amazon Aurora) or by the Percona XtraDB Cluster strict mode safety precaution, as seen in the example above. To work around it in Percona XtraDB Cluster, temporarily relaxing the strict mode (be careful, as this may be dangerous) will work:

pxc01 > set global pxc_strict_mode="permissive";
Query OK, 0 rows affected (0.00 sec)

For Aurora though (only in case asynchronous replication is used between Aurora clusters or from Aurora to non-Aurora MySQL), you will have to change the binlog_format globally to STATEMENT using the option groups.

Too large chunk size or no good index

Cannot checksum table db_name.table_name: There is no good index and the table is oversized. at /usr/bin/pt-table-checksum line 6662.

or

Skipping table because on the master it would be checksummed in one chunk but on these replicas it has too many rows:
xxxxx rows on db_name.table_name
The current chunk size limit is xxxxx rows (chunk size=xxxx * chunk size limit=5).

Instead of examining each table with a single big query, pt-table-checksum splits tables into chunks to ensure that the checksum is non-intrusive and doesn’t cause too much replication lag or load on the server. To create these chunks, it needs an index of some sort (preferably a primary key or unique index). If there is no index and the table contains a suitably small number of rows, the tool tries to checksum the table in a single chunk.

Skipping the table, as in the second message example, is a common issue with pt-table-checksum and can be caused by different/outdated table statistics on the master or slave side. To alleviate this issue, make sure all your tables contain a primary or unique key. pt-table-checksum requires that to divide a table into chunks effectively. We also suggest that you make sure these messages are not related to real differences in this table (maybe a row count is significantly different). Also, executing pt-table-checksum with PTDEBUG is a good idea as it captures a lot of debugging info and it provides better insight into what is causing the issue.

There can be some random skipping of tables across many tool runs, likely caused by a combination of two factors. One of them is innodb_stats_on_metadata. Turn it off, at least while the checksum is running, so that InnoDB index statistics don’t change so often. Keep in mind it’s a dynamic variable, which means you can change it without a MySQL server restart. On the other hand, if constantly changing statistics for a table are the problem (even with innodb_stats_on_metadata=0, statistics change with each significant amount of writes), you may want to disable automatic statistics updates for the duration of the checksum. Check the innodb_stats_auto_update option in Percona Server for MySQL for details.

pt-table-checksum uses an EXPLAIN query to determine the number of rows in the chunk, so ever-changing table statistics are the most likely reason for skipped tables. This is how pt-table-checksum decides whether to skip a chunk. It avoids the scenario where a table has few rows on the master but many on a replica, and would be checksummed in a single large query, causing a very long delay in replication. This is also affected by --chunk-size-limit, which defaults to 2. Try setting a higher --chunk-size-limit or --chunk-time so that pt-table-checksum allows larger chunks, but do it during off-peak periods. Of course, allowing too big a chunk makes the server suffer from heavy selects, and slave lag may also become a problem, although --chunk-time adjusts the chunk size dynamically so that the checksum query executes in a defined amount of time.

For tables that can’t be chunked and must be checksummed in a single run, the chunk size should be sufficiently large, and sometimes it is not. That’s where --chunk-size-limit comes into play: it is a multiplier for --chunk-size and allows larger chunks. To make sure your server is not heavily loaded, you can set a threshold at which pt-table-checksum pauses itself, using its --max-load parameter; this way, --chunk-time and --chunk-size-limit won’t noticeably impact your server. We suggest starting with the default --chunk-size-limit and increasing it gradually until the run succeeds. Higher values of --chunk-size-limit give a better chance of a successful run, but there’s no way to guarantee success because the number of rows processed is only an estimate. It’s worth mentioning that you can also try running ANALYZE TABLE on the skipped tables before running pt-table-checksum to make sure statistics are up to date. This may or may not help, since statistics are estimates and may still be inaccurate.
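The single-chunk skip decision described above can be sketched numerically. This is a simplified model, not the tool’s actual code; the row threshold follows the error message quoted earlier ("chunk size=xxxx * chunk size limit=5"), and the default values here are illustrative:

```python
# Simplified model of pt-table-checksum's single-chunk skip decision:
# a table with no good index is checksummed in one chunk only if its
# estimated row count stays under chunk_size * chunk_size_limit.

def would_skip(estimated_rows, chunk_size=1000, chunk_size_limit=5.0):
    """Return True if the table would be skipped rather than
    checksummed in a single oversized chunk."""
    single_chunk_limit = chunk_size * chunk_size_limit
    return estimated_rows > single_chunk_limit

# A 4,000-row estimate fits under the 5,000-row limit...
print(would_skip(4000))                         # False: one chunk is fine
# ...but a replica reporting 12,000 rows triggers a skip.
print(would_skip(12000))                        # True: table skipped
# Raising --chunk-size-limit allows a larger single chunk.
print(would_skip(12000, chunk_size_limit=15))   # False
```

This also illustrates why differing statistics on the master and a replica matter: the same table can pass the check on one server and fail it on another.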

Also, scripting retries of skipped chunks can be a good approach. You can redirect the pt-table-checksum output to a log file and parse that log to find out which tables need to be re-tried separately. You can do many re-tries for a single table if necessary, and the checksum result for a particular table in the checksums table gets overwritten without affecting other results.
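One way to script those retries is to parse the log for the problem tables and re-run the tool per table. The sketch below assumes the message formats quoted in this post and the standard pt-table-checksum --tables and --replicate options; adjust the patterns to whatever your log actually contains:

```python
import re

# Message formats taken from the pt-table-checksum output quoted
# earlier in this post.
SKIP_PATTERNS = [
    re.compile(r"Skipping table (\S+?\.\S+?) because"),
    re.compile(r"Error checksumming table (\S+?\.\S+?):"),
    re.compile(r"rows on (\S+?\.\S+)$"),
]

def tables_to_retry(log_text):
    """Collect db.table names reported as skipped or failed."""
    tables = []
    for line in log_text.splitlines():
        for pat in SKIP_PATTERNS:
            m = pat.search(line)
            if m and m.group(1) not in tables:
                tables.append(m.group(1))
    return tables

log = """02-04T03:14:07 Skipping table test.dummy because it has problems on these replicas:
12000 rows on shop.orders
"""
for t in tables_to_retry(log):
    # Re-run the checksum for each problem table separately; the
    # result row in the checksums table is simply overwritten.
    print(f"pt-table-checksum --replicate=percona.checksums --tables {t}")
```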

All the problems described above will not take place when a table has a primary key on auto_increment int column.

Suboptimal query plan

Skipping chunk 1 of db_name.table_name because MySQL used only 3 bytes of the PRIMARY index instead of 9. See the --[no]check-plan documentation for more information.

The tool uses several heuristics to determine whether an execution plan is good or bad. The first is whether EXPLAIN reports that MySQL intends to use the desired index to access the rows. If MySQL chooses a different index, the tool considers the query unsafe. The tool also checks how much of the index MySQL reports that it uses for the query. The EXPLAIN output shows this in the key_len column. The tool remembers the largest key_len seen, and skips chunks where MySQL reports that it uses a smaller prefix of the index. Running these heuristics for every chunk stretches the overall checksum time and can add a little extra load to the server, but you can always monitor the checksum progress during execution and cancel pt-table-checksum at any moment if necessary. By default, --check-plan is on, and in general it’s good to keep it enabled. Further, it’s best to run pt-table-checksum during low database traffic times.
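The key_len heuristic can be illustrated with a small sketch. This is a simplified model of the behavior described above, not the tool’s actual implementation:

```python
# Simplified model of the --check-plan key_len heuristic: remember the
# largest key_len reported by EXPLAIN, and flag chunks where MySQL
# uses a smaller prefix of the index.

class PlanChecker:
    def __init__(self):
        self.max_key_len = 0

    def chunk_ok(self, key_len):
        """Return False if this chunk should be skipped because MySQL
        used a shorter index prefix than previously observed."""
        if key_len < self.max_key_len:
            return False
        self.max_key_len = key_len
        return True

checker = PlanChecker()
print(checker.chunk_ok(9))   # True: establishes the baseline key_len
print(checker.chunk_ok(9))   # True: same index prefix length
print(checker.chunk_ok(3))   # False: "used only 3 bytes ... instead of 9"
```

The last call mirrors the "Skipping chunk" message shown above, where MySQL used only 3 bytes of a 9-byte PRIMARY index prefix.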

To deal with the above error, disable the feature with --no-check-plan when you get a “Skipping chunk” error. The only drawback of using it is leaving the door open for possible (costly) table scans.

Missing or filtered tables on the slave

Error checksumming table test.dummy: Error getting row count estimate of table test.dummy on replica centos1.bm.int.percona.com: DBD::mysql::db  selectrow_hashref failed: Table 'test.dummy' doesn't exist [for Statement "EXPLAIN SELECT * FROM `test`.`dummy` WHERE 1=1"] at pt-table-checksum line 6607.

The above error makes it clear that the table test.dummy exists on the master but is missing on the slave server. This usually happens with replication filters: pt-table-checksum checksummed test.dummy on the master but failed to checksum it on the replica. This can be easily reproduced as in the example below:

mysql> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 10.0.3.164
                  Master_User: repl
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: master-bin.000001
          Read_Master_Log_Pos: 704
               Relay_Log_File: centos1-relay-bin.000002
                Relay_Log_Pos: 684
        Relay_Master_Log_File: master-bin.000001
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
              Replicate_Do_DB:
          Replicate_Ignore_DB: test
[root@slave1]# perl pt-table-checksum --empty-replicate-table --no-check-replication-filters --replicate=percona.checksums --ignore-databases mysql h=localhost,u=checksum_user,p=checksum_password
02-04T03:14:07 Skipping table test.dummy because it has problems on these replicas:
Table test.dummy does not exist on replica slave1
This can break replication.  If you understand the risks, specify --no-check-slave-tables to disable this check.
02-04T03:14:07 Error checksumming table test.dummy: Error getting row count estimate of table test.dummy on replica slave1: DBD::mysql::db selectrow_hashref failed:
Table 'test.dummy' doesn't exist [for Statement "EXPLAIN SELECT * FROM `test`.`dummy` WHERE 1=1"] at pt-table-checksum line 6607.

As the example above shows, the ‘test’ database is excluded via the Replicate_Ignore_DB replication filter, which means any updates to that database are not replicated to the slave.

Waiting to check replicas for differences:   0% 00:00 remain
Waiting to check replicas for differences:   0% 00:00 remain
.
Waiting to check replicas for differences:   0% 00:00 remain
Waiting to check replicas for differences:   0% 00:00 remain
.

That is actually not an error; it means that pt-table-checksum is waiting on the replicas to run the checksum queries. We have had customers report that the tool runs forever and never comes out of “Waiting to check replicas for differences”. We noticed this problem occurs when database tables exist on replicas but are ignored by replication filters. pt-table-checksum checksums each chunk with an INSERT/REPLACE…SELECT query, and those queries from the master never reach the replicas because the tables in question are blocked by replication filters. So the tool waits forever to check the checksum result on the replicas, which will never happen. To remedy this issue, use the --ignore-databases or --ignore-tables option to exclude the filtered tables from the checksum process.
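The ignore list can be derived directly from the replica’s filter settings. The sketch below assumes you fetch the Replicate_Ignore_DB and Replicate_Ignore_Table values from SHOW SLAVE STATUS yourself (e.g. via a MySQL client) and only shows how to turn them into pt-table-checksum options:

```python
# Build pt-table-checksum ignore options from the replication filter
# values shown by SHOW SLAVE STATUS on a replica. Both fields are
# comma-separated lists; an empty value means no filter is set.

def checksum_ignore_args(replicate_ignore_db="", replicate_ignore_table=""):
    args = []
    dbs = [d.strip() for d in replicate_ignore_db.split(",") if d.strip()]
    tables = [t.strip() for t in replicate_ignore_table.split(",") if t.strip()]
    if dbs:
        args.append("--ignore-databases=" + ",".join(dbs))
    if tables:
        args.append("--ignore-tables=" + ",".join(tables))
    return args

# The replica in the example above filters out the `test` database:
print(checksum_ignore_args(replicate_ignore_db="test"))
# ['--ignore-databases=test']
```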

Replication filters can bring unexpected issues as the last two warnings/errors demonstrated.

Conclusion

pt-table-checksum is a robust tool that validates data consistency between the master and slaves in a replication environment. However, in some scenarios the task can be quite challenging. Fortunately, there are options to deal with these obstacles. Some, however, involve not only using specific tool options but also properly (re-)designing your schema. A proper primary key not only lets the tool work much faster and more cheaply, but sometimes lets it work at all.

The post How to Handle pt-table-checksum Errors appeared first on Percona Database Performance Blog.

Mar
16
2018
--

Percona Toolkit 3.0.8 Is Now Available

Percona announces the release of Percona Toolkit 3.0.8 on March 16, 2018.

Percona Toolkit is a collection of advanced open source command-line tools, developed and used by the Percona technical staff, that are engineered to perform a variety of MySQL®, MongoDB® and system tasks that are too difficult or complex to perform manually. With over 1,000,000 downloads, Percona Toolkit supports Percona Server for MySQL, MySQL, MariaDB®, Percona Server for MongoDB and MongoDB.

Percona Toolkit, like all Percona software, is free and open source. You can download packages from the website or install from official repositories.

This release includes the following changes:

New Features:

  • PT-1500: Added the --output=secure-slowlog option to pt-query-digest to replace queries in the output by their fingerprints. This provides the ability to sanitize a slow log.

Bug Fixes:

  • PT-1492:  pt-kill in version 3.0.7 ignores the value of the --busy-time option
  • PT-1503: The post-install script fails on VM due to improper UUID file detection

Help us improve our software quality by reporting any bugs you encounter using our bug tracking system.

Mar
13
2018
--

Webinar Thursday, March 15, 2018: Basic External MySQL Troubleshooting Tools

Please join Percona’s Principal Support Engineer, Sveta Smirnova, as she presents Basic External MySQL Troubleshooting Tools on March 15, 2018 at 10:00 am PDT (UTC-7) / 1:00 pm EDT (UTC-4).

In my troubleshooting webinar series, I normally like to discuss built-in instruments available via the SQL interface. While they are effective and help to understand what is going on, external tools are also designed to make life of a database administrator easier.

In this webinar, I will discuss the external tools, toolkits and graphical instruments most valued by Support teams and customers. I will show the main advantages of these tools, and provide examples on how to effectively use them.

I will cover Percona Toolkit, MySQL Utilities, MySQL Sandbox, Percona Monitoring and Management (PMM) and a few other instruments.

Register for the webinar now.

Sveta Smirnova, Principal Technical Services Engineer

Sveta joined Percona in 2015. Her main professional interests are problem-solving, working with tricky issues and bugs, finding patterns that can quickly solve typical issues, and teaching others how to deal with MySQL issues, bugs and gotchas effectively. Before joining Percona, Sveta worked as a Support Engineer in the MySQL Bugs Analysis Support Group at MySQL AB-Sun-Oracle. She is the author of the book “MySQL Troubleshooting” and of JSON UDF functions for MySQL.

Mar
08
2018
--

Migrating MySQL Users to Amazon RDS

In this blog post, we’ll look at what is needed when migrating MySQL users to Amazon RDS. We’ll discuss how we can transform MySQL user grants and make them compatible with Amazon RDS.

In order to deliver a managed service experience, Amazon RDS does not provide shell access to the underlying operating system. It also restricts access to certain procedures that require advanced privileges.

Every MySQL instance has some users with ALL PRIVILEGES, and you can’t directly migrate these users to Amazon RDS because it does not support the following privileges for regular users:

  • SUPER – Enable use of other administrative operations such as CHANGE MASTER TO, KILL, PURGE BINARY LOGS, SET GLOBAL, and mysqladmin debug command. Level: Global.
  • SHUTDOWN – Enable use of mysqladmin shutdown. Level: Global.
  • FILE – Enable the user to cause the server to read or write files. Level: Global.
  • CREATE TABLESPACE – Enable tablespaces and log file groups to be created, altered, or dropped. Level: Global.
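Given that list, a small helper can flag which of a user’s global privileges need transforming before migration. This is a sketch based only on the four privileges listed above; it is not an exhaustive RDS compatibility check:

```python
# Privileges Amazon RDS does not allow for regular MySQL users,
# per the list above.
RDS_FORBIDDEN = {"SUPER", "SHUTDOWN", "FILE", "CREATE TABLESPACE"}

def forbidden_privileges(privileges):
    """Return the subset of a user's global privileges that must be
    revoked before the user can be created on RDS. ALL PRIVILEGES
    implicitly includes every forbidden privilege."""
    privs = {p.strip().upper() for p in privileges}
    if "ALL PRIVILEGES" in privs:
        return set(RDS_FORBIDDEN)
    return privs & RDS_FORBIDDEN

print(sorted(forbidden_privileges(["ALL PRIVILEGES"])))
# ['CREATE TABLESPACE', 'FILE', 'SHUTDOWN', 'SUPER']
print(sorted(forbidden_privileges(["SELECT", "INSERT", "FILE"])))
# ['FILE']
```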

The RDS parameter groups manage changes to the MySQL configuration (dynamic and non-dynamic variables). Amazon RDS also provides stored procedures to perform various administrative tasks that require SUPER privileges.

For example, we’ve got this user in MySQL instance running on Amazon EC2.

db01 (none)> show grants for percona@'%';
+-----------------------------------------------------------------------------------------------------------------------------------+
| Grants for percona@%                                                                                                              |
+-----------------------------------------------------------------------------------------------------------------------------------+
| GRANT ALL PRIVILEGES ON *.* TO 'percona'@'%' IDENTIFIED BY PASSWORD '*497030855D20D6B22E65436D0DFC75AA347B32F0' WITH GRANT OPTION |
+-----------------------------------------------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)

If we try to run the same grants in RDS, it will fail.

[RDS] (none)> GRANT ALL PRIVILEGES ON *.* TO 'percona'@'%' IDENTIFIED BY PASSWORD '*497030855D20D6B22E65436D0DFC75AA347B32F0' WITH GRANT OPTION;
ERROR 1045 (28000): Access denied for user 'admin'@'%' (using password: YES)

We’ll follow these steps for migrating users to RDS.

  1. Identify users with privileges that aren’t supported by RDS.
  2. Export their grants using pt-show-grants.
  3. Import grants in a separate clean MySQL instance running the same version.
  4. Remove the forbidden privileges using the REVOKE statement.
  5. Export grants again using pt-show-grants and load them to RDS.

Identify users having privileges that aren’t supported by RDS

First, we’ll find the users with privileges that aren’t supported by Amazon RDS. I’ve excluded the localhost users because there is no direct shell access in RDS and you shouldn’t migrate these users.

db01 (none)> select concat("'",user,"'@'",host,"'") as 'user',
CONCAT("REVOKE SUPER, SHUTDOWN, FILE, CREATE TABLESPACE ON *.* FROM '",user,"'@'",host,"';") as 'query' from mysql.user
where host not in  ('localhost','127.0.0.1')
and (Super_Priv='Y' OR Shutdown_priv='Y' OR File_priv='Y' OR Create_tablespace_priv='Y');
+---------------+----------------------------------------------------------------------------+
| user          | query                                                                      |
+---------------+----------------------------------------------------------------------------+
| 'appuser'@'%' | REVOKE SUPER, SHUTDOWN, FILE, CREATE TABLESPACE ON *.* FROM 'appuser'@'%'; |
| 'percona'@'%' | REVOKE SUPER, SHUTDOWN, FILE, CREATE TABLESPACE ON *.* FROM 'percona'@'%'; |
+---------------+----------------------------------------------------------------------------+
2 rows in set (0.00 sec)

We’ve got two users with incompatible grants. Let’s transform their grants to make them compatible with RDS. We’ll use the query in the second column of the output later in this process.

Export grants using pt-show-grants

The next step is exporting these two users’ grants using pt-show-grants:

[root@db01 ~]# pt-show-grants --only='appuser'@'%','percona'@'%'
-- Grants dumped by pt-show-grants
-- Dumped from server Localhost via UNIX socket, MySQL 5.6.38-83.0 at 2018-02-24 10:02:21
-- Grants for 'appuser'@'%'
GRANT FILE ON *.* TO 'appuser'@'%' IDENTIFIED BY PASSWORD '*46BDE570B30DFEDC739A339B0AFA17DB62C54213';
GRANT ALTER, CREATE, CREATE TEMPORARY TABLES, DELETE, DROP, EXECUTE, INSERT, LOCK TABLES, SELECT, TRIGGER, UPDATE ON `sakila`.* TO 'appuser'@'%';
-- Grants for 'percona'@'%'
GRANT ALL PRIVILEGES ON *.* TO 'percona'@'%' IDENTIFIED BY PASSWORD '*497030855D20D6B22E65436D0DFC75AA347B32F0' WITH GRANT OPTION;

As we can see from above output, both users have at least one privilege that isn’t supported by RDS. Now, all we need to do is to import these users into a separate clean MySQL instance running the same version, and REVOKE the privileges that aren’t supported by RDS.

Import users in a separate MySQL instance running the same version

I’m going to import the grants into a separate VM where I’ve just installed Percona Server for MySQL 5.6. Let’s call this instance db02:

[root@db02 ~]# pt-show-grants --host=db01 --only='appuser'@'%','percona'@'%' --user=percona --ask-pass | mysql
Enter password:

Remove the forbidden privileges using the REVOKE statement

In this step, we will use the REVOKE statements generated in the first step to remove the privileges that aren’t supported by Amazon RDS:

db02 (none)> REVOKE SUPER, SHUTDOWN, FILE, CREATE TABLESPACE ON *.* FROM 'appuser'@'%';
Query OK, 0 rows affected (0.00 sec)
db02 (none)> REVOKE SUPER, SHUTDOWN, FILE, CREATE TABLESPACE ON *.* FROM 'percona'@'%';
Query OK, 0 rows affected (0.00 sec)

Export grants again using pt-show-grants and load them to RDS

At this point, db02 has the grants that are compatible with RDS. Let’s take a look at them:

[root@db02 ~]# pt-show-grants --only='appuser'@'%','percona'@'%'
-- Grants dumped by pt-show-grants
-- Dumped from server Localhost via UNIX socket, MySQL 5.6.39-83.1 at 2018-02-24 10:10:38
-- Grants for 'appuser'@'%'
GRANT USAGE ON *.* TO 'appuser'@'%' IDENTIFIED BY PASSWORD '*46BDE570B30DFEDC739A339B0AFA17DB62C54213';
GRANT ALTER, CREATE, CREATE TEMPORARY TABLES, DELETE, DROP, EXECUTE, INSERT, LOCK TABLES, SELECT, TRIGGER, UPDATE ON `sakila`.* TO 'appuser'@'%';
-- Grants for 'percona'@'%'
GRANT ALTER, ALTER ROUTINE, CREATE, CREATE ROUTINE, CREATE TEMPORARY TABLES, CREATE USER, CREATE VIEW, DELETE, DROP, EVENT, EXECUTE, INDEX, INSERT, LOCK TABLES, PROCESS, REFERENCES, RELOAD, REPLICATION CLIENT, REPLICATION SLAVE, SELECT, SHOW DATABASES, SHOW VIEW, TRIGGER, UPDATE ON *.* TO 'percona'@'%' IDENTIFIED BY PASSWORD '*497030855D20D6B22E65436D0DFC75AA347B32F0' WITH GRANT OPTION;

These grants look good and can now be safely migrated to RDS. Let’s do it:

[RDS] mysql> GRANT USAGE ON *.* TO 'appuser'@'%' IDENTIFIED BY PASSWORD '*46BDE570B30DFEDC739A339B0AFA17DB62C54213';
Query OK, 0 rows affected (0.32 sec)
[RDS] mysql> GRANT ALTER, CREATE, CREATE TEMPORARY TABLES, DELETE, DROP, EXECUTE, INSERT, LOCK TABLES, SELECT, TRIGGER, UPDATE ON `sakila`.* TO 'appuser'@'%';
Query OK, 0 rows affected (0.31 sec)
[RDS] mysql> GRANT ALTER, ALTER ROUTINE, CREATE, CREATE ROUTINE, CREATE TEMPORARY TABLES, CREATE USER, CREATE VIEW, DELETE, DROP, EVENT, EXECUTE, INDEX, INSERT, LOCK TABLES, PROCESS, REFERENCES, RELOAD, REPLICATION CLIENT, REPLICATION SLAVE, SELECT, SHOW DATABASES, SHOW VIEW, TRIGGER, UPDATE ON *.* TO 'percona'@'%' IDENTIFIED BY PASSWORD '*497030855D20D6B22E65436D0DFC75AA347B32F0' WITH GRANT OPTION;
Query OK, 0 rows affected (0.34 sec)

We have successfully migrated these users to Amazon RDS; a direct migration would have failed.

What about rest of the users that don’t have SUPER/SHUTDOWN/FILE/CREATE TABLESPACE privileges? Well, it’s easy. We can migrate them directly using pt-show-grants. They don’t need any transformation before migration.

List them using the following query:

db01 (none)> select concat("'",user,"'@'",host,"'") as 'user' from mysql.user where host not in  ('localhost','127.0.0.1') and (Super_Priv<>'Y' AND Shutdown_priv<>'Y' AND File_priv<>'Y' AND Create_tablespace_priv<>'Y');
+-----------------------+
| user                  |
+-----------------------+
| 'readonly'@'%'        |
| 'repl'@'192.168.56.5' |
+-----------------------+
2 rows in set (0.01 sec)

Export them using pt-show-grants and load them into RDS:

[root@db01 ~]# pt-show-grants --only='readonly'@'%','repl'@'192.168.56.5' | mysql --host=<rds.endpoint> --user=percona -p
Enter password:

Conclusion

Amazon RDS is a great platform for hosting your MySQL databases. When migrating MySQL users to Amazon RDS, some grants might fail because of having privileges that aren’t supported by RDS. Using pt-show-grants from Percona Toolkit and a separate clean MySQL instance, we can easily transform grants and migrate MySQL users to Amazon RDS without any hassle.
