May
22
2018
--

Percona Toolkit 3.0.10 Is Now Available

Percona announces the release of Percona Toolkit 3.0.10 on May 22, 2018.

Percona Toolkit is a collection of advanced open source command-line tools, developed and used by the Percona technical staff, that are engineered to perform a variety of MySQL®, MongoDB® and system tasks that are too difficult or complex to perform manually. With over 1,000,000 downloads, Percona Toolkit supports Percona Server for MySQL, MySQL®, MariaDB®, Percona Server for MongoDB and MongoDB.

Percona Toolkit, like all Percona software, is free and open source. You can download packages from the website or install from official repositories.

This release includes the following changes:

New Features:

  • PT-131: pt-table-checksum disables the QRT plugin
    The Query Response Time plugin provides a tool for analyzing information by counting and displaying the number of queries according to the length of time they took to execute. This feature adds a new --disable-qrt-plugin flag that leverages Percona Server for MySQL’s ability to disable the QRT plugin at the session level. The advantage of enabling this Toolkit feature is that the QRT metrics are not impacted by the work pt-table-checksum performs: they report only the work your application generates on MySQL, and are not clouded by the activity of pt-table-checksum (see the sketch after this list).

  • PT-118: pt-table-checksum reports the number of rows of difference between master and slave
    We’re adding support for pt-table-checksum to identify the number of row differences between master and slave. Previously you were able to see only the count of chunks that differed between hosts. This is helpful for situations where you believe you can tolerate some measure of row count drift between hosts, but want to be precise in understanding what that row count difference actually is.
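
For reference, a minimal sketch of how the new QRT flag might be invoked; the host, credentials and database name here are illustrative, and the flag only takes effect against a Percona Server for MySQL build that supports disabling the QRT plugin per session:

pt-table-checksum --disable-qrt-plugin --databases="db1" h=master.example.com,u=checksum_user,p=***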

Improvements:

  • PT-1546: Improved support for MySQL 8 roles
  • PT-1543: The encrypted table status query causes high load over multiple minutes
    Users reported that listing encrypted table status can be very slow. We’ve made this functionality optional via --list-encrypted-tables, which is disabled by default.
  • PT-1536: Added info about encrypted tablespaces in pt-mysql-summary
    We’ve improved pt-mysql-summary to include information about encrypted tablespaces. This information is available by using the --list-encrypted-tables option (see the example after this list).
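
A minimal sketch of how the new option might be invoked; the connection options are illustrative:

pt-mysql-summary --list-encrypted-tables --user=root --password=***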

Bug Fixes:

  • PT-1556: pt-table-checksum 3.0.9 does not change binlog_format to statement any more.

pt-show-grants has several known issues when working with MySQL 8 and roles, which Percona aims to address in subsequent Percona Toolkit releases: PT-1560, PT-1559, and PT-1558.

Help us improve our software quality by reporting any bugs you encounter using our bug tracking system.

The post Percona Toolkit 3.0.10 Is Now Available appeared first on Percona Database Performance Blog.

Apr
20
2018
--

Percona Toolkit 3.0.9 Is Now Available

Percona announces the release of Percona Toolkit 3.0.9 on April 20, 2018.

Percona Toolkit is a collection of advanced open source command-line tools, developed and used by the Percona technical staff, that are engineered to perform a variety of MySQL®, MongoDB® and system tasks that are too difficult or complex to perform manually. With over 1,000,000 downloads, Percona Toolkit supports Percona Server for MySQL, MySQL, MariaDB®, Percona Server for MongoDB and MongoDB.

Percona Toolkit, like all Percona software, is free and open source. You can download packages from the website or install from official repositories.

This release includes the following changes:

New Tools:

  • PT-1501: pt-secure-collect – new tool to collect and sanitize pt-tools outputs

New Features:

  • PT-1530: Add support for encryption status to pt-mysql-summary
  • PT-1526: Add ndb status to pt-mysql-summary (Thanks Fernando Ipar)
  • PT-1525: Add support for MySQL 8 roles into pt-mysql-summary
  • PT-1509: Make pt-table-sync only set binlog_format when necessary (Thanks Moritz Lenz)
  • PT-1508: Add --read-only-interval and --fail-successive-errors flags to pt-heartbeat (Thanks Shlomi Noach); see the sketch after this list
  • PT-243: Add --max-hostname-length and --max-line-length flags to pt-query-digest
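
A hedged sketch of how the new pt-heartbeat flags from PT-1508 might be combined; the connection details and values are illustrative, so consult the tool’s documentation for the exact semantics:

pt-heartbeat --update --database=percona --table=heartbeat --read-only-interval=10 --fail-successive-errors=5 h=master.example.com,u=heartbeat_user,p=***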

Bug Fixes:

  • PT-1527: Fixed pt-table-checksum ignores --nocheck-binlog-format

Improvements:

  • PT-1507: pt-summary does not reliably read in the transparent huge pages setting (Thanks Nick Veenhof)
  • PT-1488: pt-show-grants support for MySQL 8.0

Help us improve our software quality by reporting any bugs you encounter using our bug tracking system.

The post Percona Toolkit 3.0.9 Is Now Available appeared first on Percona Database Performance Blog.

Apr
06
2018
--

How to Handle pt-table-checksum Errors

In this blog post, we’ll look at how to approach pt-table-checksum errors.

pt-table-checksum is one of the most popular tools in Percona Toolkit, and it is widely used to identify data differences between masters and slaves. As Percona Support Engineers, we therefore often get questions from customers about the errors and warnings the tool produces. Below are the most common issues raised with pt-table-checksum, along with guidance on how to mitigate the related warnings or errors.

Unable to detect slaves

Cannot connect to h=127.0.0.1,p=...,u=percona
Diffs cannot be detected because no slaves were found. Please read the --recursion-method documentation for information.

It’s possible that the tool cannot connect to the slaves because the information found on the master is not specific enough. By default, it looks for slaves based on the replica threads visible in the master’s processlist. This can be a problem if, for example, the slave’s MySQL runs on a different TCP port, the hostname does not resolve correctly, both the master and slave are on the same host, or this is Galera-based replication. In these cases, the --recursion-method option lets you try different discovery methods: ‘hosts’ or ‘cluster’. And if all of them fail, you can specify each slave’s details manually using the ‘dsn’ method.

An example using this option for the cluster looks like this:

# pt-table-checksum --user=root --password=*** --databases="db1" --recursion-method=cluster 192.168.88.82
Checking if all tables can be checksummed ...
Starting checksum ...
Not checking replica lag on pxc02 because it is a cluster node.
Not checking replica lag on pxc03 because it is a cluster node.
TS ERRORS DIFFS ROWS CHUNKS SKIPPED TIME TABLE
03-03T00:24:13 0 0 12 1 0 0.033 db1.t1
03-03T00:24:13 0 0 4 1 0 0.031 db1.t2

and when a DSN is needed (like for mysqlsandbox instances), we have to add the slave(s) details to the table, similar to below:

master [localhost] {msandbox} ((none)) > create table percona.dsns (id int(11) NOT NULL AUTO_INCREMENT,parent_id int(11) DEFAULT NULL,dsn varchar(255) NOT NULL,PRIMARY KEY (id));
Query OK, 0 rows affected (0.08 sec)
master [localhost] {msandbox} ((none)) > insert into percona.dsns values (null,null,"h=localhost,S=/tmp/mysql_sandbox20997.sock");
Query OK, 1 row affected (0.03 sec)

$ pt-table-checksum --databases="test" --tables="s1"  --recursion-method=dsn=localhost,D=percona,t=dsns u=root,p=msandbox,h=localhost,S=/tmp/mysql_sandbox20996.sock
Checking if all tables can be checksummed ...
Starting checksum ...
TS ERRORS DIFFS ROWS CHUNKS SKIPPED TIME TABLE
03-19T14:16:05 0 1 0 1 0 0.344 test.s1

ROW format on slave

Replica slave1.myorg.com has binlog_format ROW which could cause pt-table-checksum to break replication. Please read "Replicas using row-based replication" in the LIMITATIONS section of the tool's documentation. If you understand the risks, specify --no-check-binlog-format to disable this check.

The problem is that second- and next-level replicas (in a chained replication topology) will not calculate the diffs as expected. So this message warns that the slave is using binlog_format=ROW, as the tool needs STATEMENT format to calculate the diffs separately on the slave and master. This is done by replicating the command (e.g., INSERT INTO percona.checksum SELECT CRC32 …. WHERE … ) as the original statement, not as a row copy of CRC values already computed on the master. That is possible because the tool sets binlog_format=STATEMENT in its session. This session setting does not propagate further into the slave’s own binary log, though. This is not a problem when all the slaves replicate directly from the master, and in such cases we can ignore the message and use the --no-check-binlog-format option.

By the way, the warning message is misleading in its claim about breaking replication, hence the bug report.
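
A minimal sketch of such a run, assuming all slaves replicate directly from the master (the host, user and database are taken from the earlier example and are illustrative):

# pt-table-checksum --user=root --password=*** --databases="db1" --no-check-binlog-format 192.168.88.82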

Unable to switch session binlog_format to STATEMENT

# pt-table-checksum --user=root --password=cmon --databases="db1" --recursion-method=cluster 192.168.88.82
03-02T23:54:50 Failed to /*!50108 SET @@binlog_format := 'STATEMENT'*/: DBD::mysql::db do failed: Percona-XtraDB-Cluster prohibits setting binlog_format to STATEMENT or MIXED with pxc_strict_mode = ENFORCING or MASTER [for Statement "/*!50108 SET @@binlog_format := 'STATEMENT'*/"] at /bin/pt-table-checksum line 10064.
This tool requires binlog_format=STATEMENT, but the current binlog_format is set to ROW and an error occurred while attempting to change it. If running MySQL 5.1.29 or newer, setting binlog_format requires the SUPER privilege. You will need to manually set binlog_format to 'STATEMENT' before running this tool.

or:

$ pt-table-checksum -h przemek-aurora57.xxx.rds.amazonaws.com -u przemek -p xxx --databases="test"
02-19T12:51:01 Failed to /*!50108 SET @@binlog_format := 'STATEMENT'*/: DBD::mysql::db do failed: Access denied; you need (at least one of) the SUPER privilege(s) for this operation for Statement "/*!50108 SET @@binlog_format := 'STATEMENT'*/" at /usr/bin/pt-table-checksum line 10023.
This tool requires binlog_format=STATEMENT, but the current binlog_format is set to ROW and an error occurred while attempting to change it. If running MySQL 5.1.29 or newer, setting binlog_format requires the SUPER privilege. You will need to manually set binlog_format to 'STATEMENT' before running this tool.

This can be an issue if STATEMENT mode is unsupported in the MySQL variant or special edition of it (Amazon RDS, for example), or when switching is prohibited either by lack of the SUPER privilege (a limitation of Amazon Aurora) or by the Percona XtraDB Cluster Strict Mode safety precaution, as seen in the example above. To work around it in Percona XtraDB Cluster, temporarily relaxing the strict mode (be careful, as this may be dangerous) will work:

pxc01 > set global pxc_strict_mode="permissive";
Query OK, 0 rows affected (0.00 sec)

For Aurora though (only in case asynchronous replication is used between Aurora clusters or from Aurora to non-Aurora MySQL), you will have to change the binlog_format globally to STATEMENT using the option groups.

Too large chunk size or no good index

Cannot checksum table db_name.table_name: There is no good index and the table is oversized. at /usr/bin/pt-table-checksum line 6662.

or

Skipping table because on the master it would be checksummed in one chunk but on these replicas it has too many rows:
xxxxx rows on db_name.table_name
The current chunk size limit is xxxxx rows (chunk size=xxxx * chunk size limit=5).

Instead of examining each table with a single big query, the pt-table-checksum splits tables into chunks to ensure that the checksum is non-intrusive and doesn’t cause too much replication lag or load on the server. To create these chunks, it needs an index of some sort (preferably a primary key or unique index). If there is no index, and the table contains a suitably small number of rows, the tool tries to checksum the table in a single chunk.

Skipping the table, as in the second message example, is a common issue with pt-table-checksum and can be caused by different/outdated table statistics on the master or slave side. To alleviate this issue, make sure all your tables contain a primary or unique key. pt-table-checksum requires that to divide a table into chunks effectively. We also suggest that you make sure these messages are not related to real differences in this table (maybe a row count is significantly different). Also, executing pt-table-checksum with PTDEBUG is a good idea as it captures a lot of debugging info and it provides better insight into what is causing the issue.
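
For example, a debug capture might be collected like this, redirecting the verbose PTDEBUG output to a file (the host and database names are illustrative):

PTDEBUG=1 pt-table-checksum --databases="db1" h=master.example.com 2> /tmp/pt-table-checksum-debug.log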

There can be some random skipping of tables across many tool runs, probably because of a mix of two variables. One of them is innodb_stats_on_metadata. Turn it off, at least while the checksum is running, so that InnoDB index stats won’t change so often. Remember that it’s a dynamic variable, which means you can change it without a MySQL server restart. On the other hand, if constantly changing statistics for a table are the problem (even with innodb_stats_on_metadata=0, statistics change with each significant amount of writes), you may want to disable automatic statistics updates for the duration of the checksum. Check the innodb_stats_auto_update option in Percona Server for MySQL for details.
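
For instance, to turn it off for the duration of the run (remember to restore the previous value afterwards if it was enabled):

mysql> SET GLOBAL innodb_stats_on_metadata = 0;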

pt-table-checksum uses an EXPLAIN query to determine the number of rows in the chunk, so ever-changing table statistics are the most likely reason for skipped tables. This is how pt-table-checksum decides whether to skip a chunk or not. It avoids the scenario where a table has fewer rows on the master but many on a replica, and would be checksummed in a single large query, causing a very long delay in replication. This is also affected by --chunk-size-limit, which defaults to 2. Try setting a higher --chunk-size-limit or --chunk-time so that pt-table-checksum allows larger chunks, but do it during off-peak periods. Of course, allowing too big a chunk makes the server suffer from heavy selects, and slave lag may also be a problem, while --chunk-time adjusts the chunk size dynamically so that the checksum query executes in a defined amount of time.

For tables that can’t be chunked and must be checksummed in a single run, the chunk size should be sufficiently large, and sometimes it is not enough. That’s where --chunk-size-limit comes into play: it is a multiplier for --chunk-size and allows larger chunks. To make sure your server is not heavily loaded, you can set a threshold at which pt-table-checksum pauses itself. This can be done with the --max-load parameter, so that --chunk-time and --chunk-size-limit won’t noticeably impact your server. We suggest starting with the default --chunk-size-limit and increasing it gradually until the run succeeds. High values of --chunk-size-limit give a better chance of a successful run, but there is no way to tell if it will always succeed, because the number of rows processed is only an estimate. It’s worth mentioning that you can also try running ANALYZE TABLE on the skipped tables before running pt-table-checksum to make sure statistics are up to date. This may or may not help, as the statistics are estimates and might still be inaccurate.
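
A sketch of a run that combines these options; the values and host are illustrative, not recommendations:

# pt-table-checksum --chunk-time=1 --chunk-size-limit=4 --max-load="Threads_running=50" --databases="db1" h=master.example.com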

Also, scripting retries of skipped chunks can be a good approach. You can redirect the pt-table-checksum output to a log file and parse that log to find out which tables need to be re-tried separately. You can do many re-tries for a single table if necessary, and the checksum result for a particular table in the checksums table gets overwritten without affecting other results.
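
A rough sketch of that retry approach, with hypothetical host, database and table names; the grep pattern simply looks for the “Skipping table” messages shown earlier, and the row for a re-tried table in the checksums table is overwritten on the next run:

$ pt-table-checksum --databases="db1" h=master.example.com 2>&1 | tee /tmp/checksum.log
$ grep -i "skipping table" /tmp/checksum.log
$ pt-table-checksum --databases="db1" --tables="t1" h=master.example.com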

All the problems described above will not take place when a table has a primary key on auto_increment int column.

Suboptimal query plan

Skipping chunk 1 of db_name.table_name because MySQL used only 3 bytes of the PRIMARY index instead of 9. See the --[no]check-plan documentation for more information.

The tool uses several heuristics to determine whether an execution plan is good or bad. The first is whether EXPLAIN reports that MySQL intends to use the desired index to access the rows. If MySQL chooses a different index, the tool considers the query unsafe. The tool also checks how much of the index MySQL reports that it will use for the query. The EXPLAIN output shows this in the key_len column. The tool remembers the largest key_len seen, and skips chunks where MySQL reports that it uses a smaller prefix of the index. However, this stretches the overall time to run the checksum, as the tool runs these heuristics for each chunk to decide whether the execution path is good or bad. By default, --check-plan is on. It can bring a little additional load to the server, but if that’s the case you can always monitor the checksum progress during execution and cancel pt-table-checksum at any moment if necessary. In general, it’s good to keep it enabled. Further, it’s best to run pt-table-checksum during a period of low database traffic.

To deal with the above error, disable the feature by using --no-check-plan when you get a “Skipping chunk” error. The only drawback of using it is leaving the door open for possible (costly) table scans.
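
For example, with illustrative connection details:

# pt-table-checksum --no-check-plan --databases="db_name" --tables="table_name" h=master.example.com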

Missing or filtered tables on the slave

Error checksumming table test.dummy: Error getting row count estimate of table test.dummy on replica centos1.bm.int.percona.com: DBD::mysql::db  selectrow_hashref failed: Table 'test.dummy' doesn't exist [for Statement "EXPLAIN SELECT * FROM `test`.`dummy` WHERE 1=1"] at pt-table-checksum line 6607.

The above error makes it clear that the table test.dummy exists on the master but is missing on the slave server. This usually occurs with replication filters: pt-table-checksum failed because test.dummy was checksummed on the master but could not be checksummed on the replica. This can be easily reproduced, as in the example below:

mysql> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 10.0.3.164
                  Master_User: repl
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: master-bin.000001
          Read_Master_Log_Pos: 704
               Relay_Log_File: centos1-relay-bin.000002
                Relay_Log_Pos: 684
        Relay_Master_Log_File: master-bin.000001
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
              Replicate_Do_DB:
          Replicate_Ignore_DB: test
[root@slave1]# perl pt-table-checksum --empty-replicate-table --no-check-replication-filters --replicate=percona.checksums --ignore-databases mysql h=localhost,u=checksum_user,p=checksum_password
02-04T03:14:07 Skipping table test.dummy because it has problems on these replicas:
Table test.dummy does not exist on replica slave1
This can break replication.  If you understand the risks, specify --no-check-slave-tables to disable this check.
02-04T03:14:07 Error checksumming table test.dummy: Error getting row count estimate of table test.dummy on replica slave1: DBD::mysql::db selectrow_hashref failed:
Table 'test.dummy' doesn't exist [for Statement "EXPLAIN SELECT * FROM `test`.`dummy` WHERE 1=1"] at pt-table-checksum line 6607.

As per the above example, the ‘test’ database is excluded from replication via the Replicate_Ignore_DB replication filter, which means any updates to that database never reach the slave.

Waiting to check replicas for differences:   0% 00:00 remain
Waiting to check replicas for differences:   0% 00:00 remain
.
Waiting to check replicas for differences:   0% 00:00 remain
Waiting to check replicas for differences:   0% 00:00 remain
.

That is actually not an error; it means that pt-table-checksum is waiting on the replicas to run the checksum queries. We have had customers report that the tool runs forever and never comes back from “Waiting to check replicas for differences”. We noticed this problem occurs when database tables exist on the replicas but are ignored by replication filters. pt-table-checksum checksums each chunk with an INSERT/REPLACE…SELECT query, and those queries from the master never reach the replicas, because the tables in question are blocked by the replication filters. So the tool waits forever to check the checksum result on the replicas, which will never happen. To remedy this issue, use the --ignore-databases or --ignore-tables option to exclude the filtered tables from the checksum process.
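
Continuing the example above, where the test database is filtered on the slave, a sketch of a run that excludes it might be (connection details as in the earlier example):

# pt-table-checksum --ignore-databases=mysql,test --replicate=percona.checksums h=localhost,u=checksum_user,p=checksum_password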

Replication filters can bring unexpected issues as the last two warnings/errors demonstrated.

Conclusion

pt-table-checksum is a robust tool that validates data between masters and slaves in a replication environment. However, in some scenarios the task can be quite challenging. Fortunately, there are options to deal with these obstacles. Some, however, involve not only using specific options for the tool, but also properly (re-)designing your schema. A proper primary key may not only allow the tool to work much faster and at a lower cost, but sometimes to work at all.

The post How to Handle pt-table-checksum Errors appeared first on Percona Database Performance Blog.

Mar
16
2018
--

Percona Toolkit 3.0.8 Is Now Available

Percona announces the release of Percona Toolkit 3.0.8 on March 16, 2018.

Percona Toolkit is a collection of advanced open source command-line tools, developed and used by the Percona technical staff, that are engineered to perform a variety of MySQL®, MongoDB® and system tasks that are too difficult or complex to perform manually. With over 1,000,000 downloads, Percona Toolkit supports Percona Server for MySQL, MySQL, MariaDB®, Percona Server for MongoDB and MongoDB.

Percona Toolkit, like all Percona software, is free and open source. You can download packages from the website or install from official repositories.

This release includes the following changes:

New Features:

  • PT-1500: Added the --output=secure-slowlog option to pt-query-digest to replace queries in the output with their fingerprints. This provides the ability to sanitize a slow log (see the sketch below).
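
A minimal sketch of how the new output mode might be used to produce a sanitized copy of a slow log; the file paths are illustrative:

pt-query-digest --output=secure-slowlog /var/lib/mysql/slow.log > sanitized-slow.log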

Bug Fixes:

  • PT-1492:  pt-kill in version 3.0.7 ignores the value of the --busy-time option
  • PT-1503: The post-install script fails on VM due to improper UUID file detection

Help us improve our software quality by reporting any bugs you encounter using our bug tracking system.

Mar
13
2018
--

Webinar Thursday, March 15, 2018: Basic External MySQL Troubleshooting Tools

Please join Percona’s Principal Support Engineer, Sveta Smirnova, as she presents Basic External MySQL Troubleshooting Tools on March 15, 2018 at 10:00 am PDT (UTC-7) / 1:00 pm EDT (UTC-4).

In my troubleshooting webinar series, I normally like to discuss built-in instruments available via the SQL interface. While they are effective and help us understand what is going on, external tools are also designed to make the life of a database administrator easier.

In this webinar, I will discuss the external tools, toolkits and graphical instruments most valued by Support teams and customers. I will show the main advantages of these tools, and provide examples on how to effectively use them.

I will cover Percona Toolkit, MySQL Utilities, MySQL Sandbox, Percona Monitoring and Management (PMM) and a few other instruments.

Register for the webinar now.

Sveta Smirnova, Principal Technical Services Engineer

Sveta joined Percona in 2015. Her main professional interests are problem-solving, working with tricky issues and bugs, finding patterns that can quickly solve typical issues, and teaching others how to deal with MySQL issues, bugs and gotchas effectively. Before joining Percona, Sveta worked as a Support Engineer in the MySQL Bugs Analysis Support Group at MySQL AB (later Sun and Oracle). She is the author of the book “MySQL Troubleshooting” and of the JSON UDF functions for MySQL.

Mar
08
2018
--

Migrating MySQL Users to Amazon RDS

In this blog post, we’ll look at what is needed when migrating MySQL users to Amazon RDS. We’ll discuss how we can transform MySQL user grants and make them compatible with Amazon RDS.

In order to deliver a managed service experience, Amazon RDS does not provide shell access to the underlying operating system. It also restricts access to certain procedures that require advanced privileges.

Every MySQL instance has some users with ALL PRIVILEGES, and you can’t directly migrate these users to Amazon RDS because it does not support the following privileges for regular users.

  • SUPER – Enable use of other administrative operations such as CHANGE MASTER TO, KILL, PURGE BINARY LOGS, SET GLOBAL, and mysqladmin debug command. Level: Global.
  • SHUTDOWN – Enable use of mysqladmin shutdown. Level: Global.
  • FILE – Enable the user to cause the server to read or write files. Level: Global.
  • CREATE TABLESPACE – Enable tablespaces and log file groups to be created, altered, or dropped. Level: Global.

The RDS parameter groups manage changes to the MySQL configuration (dynamic and non-dynamic variables). Amazon RDS also provides stored procedures to perform various administrative tasks that require SUPER privileges.
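
For instance, an operation that would normally need SUPER, such as killing another session, is exposed on RDS through a stored procedure; a sketch with a hypothetical thread id:

[RDS] mysql> CALL mysql.rds_kill(1234);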

For example, we’ve got this user in a MySQL instance running on Amazon EC2:

db01 (none)> show grants for percona@'%';
+-----------------------------------------------------------------------------------------------------------------------------------+
| Grants for percona@%                                                                                                              |
+-----------------------------------------------------------------------------------------------------------------------------------+
| GRANT ALL PRIVILEGES ON *.* TO 'percona'@'%' IDENTIFIED BY PASSWORD '*497030855D20D6B22E65436D0DFC75AA347B32F0' WITH GRANT OPTION |
+-----------------------------------------------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)

If we try to run the same grants in RDS, it will fail.

[RDS] (none)> GRANT ALL PRIVILEGES ON *.* TO 'percona'@'%' IDENTIFIED BY PASSWORD '*497030855D20D6B22E65436D0DFC75AA347B32F0' WITH GRANT OPTION;
ERROR 1045 (28000): Access denied for user 'admin'@'%' (using password: YES)

We’ll follow these steps for migrating users to RDS.

  1. Identify users with privileges that aren’t supported by RDS.
  2. Export their grants using pt-show-grants.
  3. Import grants in a separate clean MySQL instance running the same version.
  4. Remove the forbidden privileges using the REVOKE statement.
  5. Export grants again using pt-show-grants and load them to RDS.

Identify users having privileges that aren’t supported by RDS

First, we’ll find the users with privileges that aren’t supported by Amazon RDS. I’ve excluded the localhost users because there is no direct shell access in RDS and you shouldn’t migrate these users.

db01 (none)> select concat("'",user,"'@'",host,"'") as 'user',
CONCAT("REVOKE SUPER, SHUTDOWN, FILE, CREATE TABLESPACE ON *.* FROM '",user,"'@'",host,"';") as 'query' from mysql.user
where host not in  ('localhost','127.0.0.1')
and (Super_Priv='Y' OR Shutdown_priv='Y' OR File_priv='Y' OR Create_tablespace_priv='Y');
+---------------+----------------------------------------------------------------------------+
| user          | query                                                                      |
+---------------+----------------------------------------------------------------------------+
| 'appuser'@'%' | REVOKE SUPER, SHUTDOWN, FILE, CREATE TABLESPACE ON *.* FROM 'appuser'@'%'; |
| 'percona'@'%' | REVOKE SUPER, SHUTDOWN, FILE, CREATE TABLESPACE ON *.* FROM 'percona'@'%'; |
+---------------+----------------------------------------------------------------------------+
2 rows in set (0.00 sec)

We have two users with incompatible grants. Let’s transform their grants to make them compatible with RDS. We’ll use the query in the second column of the output later in this process.

Export grants using pt-show-grants

The next step is exporting these two users’ grants using pt-show-grants:

[root@db01 ~]# pt-show-grants --only='appuser'@'%','percona'@'%'
-- Grants dumped by pt-show-grants
-- Dumped from server Localhost via UNIX socket, MySQL 5.6.38-83.0 at 2018-02-24 10:02:21
-- Grants for 'appuser'@'%'
GRANT FILE ON *.* TO 'appuser'@'%' IDENTIFIED BY PASSWORD '*46BDE570B30DFEDC739A339B0AFA17DB62C54213';
GRANT ALTER, CREATE, CREATE TEMPORARY TABLES, DELETE, DROP, EXECUTE, INSERT, LOCK TABLES, SELECT, TRIGGER, UPDATE ON `sakila`.* TO 'appuser'@'%';
-- Grants for 'percona'@'%'
GRANT ALL PRIVILEGES ON *.* TO 'percona'@'%' IDENTIFIED BY PASSWORD '*497030855D20D6B22E65436D0DFC75AA347B32F0' WITH GRANT OPTION;

As we can see from the above output, both users have at least one privilege that isn’t supported by RDS. Now, all we need to do is import these users into a separate clean MySQL instance running the same version, and REVOKE the privileges that aren’t supported by RDS.

Import users in a separate MySQL instance running the same version

I’m going to import the grants into a separate VM where I’ve just installed Percona Server for MySQL 5.6. Let’s call this instance db02:

[root@db02 ~]# pt-show-grants --host=db01 --only='appuser'@'%','percona'@'%' --user=percona --ask-pass | mysql
Enter password:

Remove the forbidden privileges using the REVOKE statement

In this step, we will use the REVOKE statements from Step 1 to remove the privileges that aren’t supported by Amazon RDS:

db02 (none)> REVOKE SUPER, SHUTDOWN, FILE, CREATE TABLESPACE ON *.* FROM 'appuser'@'%';
Query OK, 0 rows affected (0.00 sec)
db02 (none)> REVOKE SUPER, SHUTDOWN, FILE, CREATE TABLESPACE ON *.* FROM 'percona'@'%';
Query OK, 0 rows affected (0.00 sec)

Export grants again using pt-show-grants and load them to RDS

At this point, db02 has the grants that are compatible with RDS. Let’s take a look at them:

[root@db02 ~]# pt-show-grants --only='appuser'@'%','percona'@'%'
-- Grants dumped by pt-show-grants
-- Dumped from server Localhost via UNIX socket, MySQL 5.6.39-83.1 at 2018-02-24 10:10:38
-- Grants for 'appuser'@'%'
GRANT USAGE ON *.* TO 'appuser'@'%' IDENTIFIED BY PASSWORD '*46BDE570B30DFEDC739A339B0AFA17DB62C54213';
GRANT ALTER, CREATE, CREATE TEMPORARY TABLES, DELETE, DROP, EXECUTE, INSERT, LOCK TABLES, SELECT, TRIGGER, UPDATE ON `sakila`.* TO 'appuser'@'%';
-- Grants for 'percona'@'%'
GRANT ALTER, ALTER ROUTINE, CREATE, CREATE ROUTINE, CREATE TEMPORARY TABLES, CREATE USER, CREATE VIEW, DELETE, DROP, EVENT, EXECUTE, INDEX, INSERT, LOCK TABLES, PROCESS, REFERENCES, RELOAD, REPLICATION CLIENT, REPLICATION SLAVE, SELECT, SHOW DATABASES, SHOW VIEW, TRIGGER, UPDATE ON *.* TO 'percona'@'%' IDENTIFIED BY PASSWORD '*497030855D20D6B22E65436D0DFC75AA347B32F0' WITH GRANT OPTION;

These grants look good; they can be safely migrated to RDS now. Let’s do it:

[RDS] mysql> GRANT USAGE ON *.* TO 'appuser'@'%' IDENTIFIED BY PASSWORD '*46BDE570B30DFEDC739A339B0AFA17DB62C54213';
Query OK, 0 rows affected (0.32 sec)
[RDS] mysql> GRANT ALTER, CREATE, CREATE TEMPORARY TABLES, DELETE, DROP, EXECUTE, INSERT, LOCK TABLES, SELECT, TRIGGER, UPDATE ON `sakila`.* TO 'appuser'@'%';
Query OK, 0 rows affected (0.31 sec)
[RDS] mysql> GRANT ALTER, ALTER ROUTINE, CREATE, CREATE ROUTINE, CREATE TEMPORARY TABLES, CREATE USER, CREATE VIEW, DELETE, DROP, EVENT, EXECUTE, INDEX, INSERT, LOCK TABLES, PROCESS, REFERENCES, RELOAD, REPLICATION CLIENT, REPLICATION SLAVE, SELECT, SHOW DATABASES, SHOW VIEW, TRIGGER, UPDATE ON *.* TO 'percona'@'%' IDENTIFIED BY PASSWORD '*497030855D20D6B22E65436D0DFC75AA347B32F0' WITH GRANT OPTION;
Query OK, 0 rows affected (0.34 sec)

We have successfully migrated to Amazon RDS users whose grants would have failed in a direct migration.

What about the rest of the users, those that don’t have the SUPER/SHUTDOWN/FILE/CREATE TABLESPACE privileges? Well, it’s easy. We can migrate them directly using pt-show-grants. They don’t need any transformation before migration.

List them using the following query:

db01 (none)> select concat("'",user,"'@'",host,"'") as 'user' from mysql.user where host not in  ('localhost','127.0.0.1') and (Super_Priv<>'Y' AND Shutdown_priv<>'Y' AND File_priv<>'Y' AND Create_tablespace_priv<>'Y');
+-----------------------+
| user                  |
+-----------------------+
| 'readonly'@'%'        |
| 'repl'@'192.168.56.5' |
+-----------------------+
2 rows in set (0.01 sec)

Export them using pt-show-grants and load them to RDS:

[root@db01 ~]# pt-show-grants --only='readonly'@'%','repl'@'192.168.56.5' | mysql --host=<rds.endpoint> --user=percona -p
Enter password:

Conclusion

Amazon RDS is a great platform for hosting your MySQL databases. When migrating MySQL users to Amazon RDS, some grants might fail because they include privileges that aren’t supported by RDS. Using pt-show-grants from Percona Toolkit and a separate clean MySQL instance, we can easily transform grants and migrate MySQL users to Amazon RDS without any hassle.

Nov
21
2017
--

Percona Toolkit 3.0.5 is Now Available

Percona announces the release of Percona Toolkit 3.0.5 on November 21, 2017.

Percona Toolkit is a collection of advanced command-line tools that perform a variety of MySQL and MongoDB server and system tasks too difficult or complex for DBAs to perform manually. Percona Toolkit, like all Percona software, is free and open source.

You can download Percona Toolkit packages from the web site or install from official repositories.

This release includes the following changes:

New Features:

  • PT-216: The pt-mongodb-query-digest supports MongoDB versions lower than 3.2; incorrect output was fixed.
  • PT-182: The pt-summary, pt-mysql-summary, pt-mongodb-summary commands provide output in the JSON format.
  • pt-mysql-summary shows the output of the SHOW SLAVE HOSTS command.
  • pt-table-sync supports replication channels (requires MySQL version 5.7.6 or higher)
  • PMM-1590: MongoDB Profiler for Percona Monitoring and Management and Percona Toolkit has been improved.

Bug fixes:

  • pt-mext would fail if the Rsa_public_key variable was empty.
  • PT-212: pt-mongodb-query-digest --version produced incorrect values.
  • PT-202: pt-online-schema-change incorrectly processed virtual columns.
  • PT-200: pt-online-schema-change command reported an error when the name of an index contained UNIQUE as the prefix or suffix.
  • pt-table-checksum did not detect differences on a system with the ROW based replication active.
  • PT-196: pt-online-schema-change --max-load paused if a status variable was passed 0 as the value.
  • PT-193: pt-table-checksum reported a misleading error if a column comment contained an apostrophe. For more information, see #1708749.
  • PT-187: In some cases, pt-table-checksum did not report that the same table contained different values on the master and slave.
  • PT-186: pt-online-schema-change --alter could fail if field names contained upper case characters. For more information, see #1705998.
  • PT-183: In some cases pt-mongodb-query-digest could not connect to a database using authentication.
  • PT-167: In some cases, pt-kill could ignore the value of the --busy-time parameter. For more information, see #1016272.
  • PT-161: When run with --skip-check-slave-lag, pt-table-checksum could fail in some cases.
Sep
18
2017
--

Webinar Tuesday, September 19, 2017: A Percona Support Engineer Walkthrough for pt-stalk

Join Percona’s Principal Support Engineer, Marcos Albe, as he presents A Percona Support Engineer Walkthrough for pt-stalk on Tuesday, September 19, 2017, at 10:00 am PDT / 1:00 pm EDT (UTC-7).

As a support engineer, I get dozens of pt-stalk captures from our customers containing samples of iostat, vmstat, top, ps, SHOW ENGINE INNODB STATUS, SHOW PROCESSLIST and a multitude of other diagnostics outputs.

These are the tools of the trade for performance and troubleshooting, and we must learn to digest these outputs in an effective and systematic way. This allows us to provide high-quality service to a large volume of customers.

In this presentation, I will share the knowledge we’ve gained working with this data, and how to apply it to your database environment. We will learn how to set up pt-stalk, capture data, write plugins to trigger collection and to capture custom data, look at our systematic approach, and learn what data to read first and how to unwind the tangled threads of pt-stalk.

By the end of this presentation, you will have expert knowledge on how to capture diagnostic metrics at the right time, and a generic approach to digest the captured data. This allows you to diagnose and solve many of the problems common to MySQL setups.

Register for the webinar here.

Marcos Albe, Principal Technical Services Engineer

Marcos Albe has been doing web development for over ten years, providing solutions for various media and technology companies of different sizes. He is now a member of the Percona Support Team. Born and raised in the city of Montevideo, Uruguay, he became passionate about computers at the age of 11, when he got a 25Mhz i386-SX. Ten years later, he became one of the pioneers in telecommuting in Uruguay while leading the IT efforts for the second largest newspaper in the country.

Sep
05
2017
--

Webinar Wednesday, September 6, 2017: Percona Roadmap and Software News Update – Q3 2017

Come and listen to Percona CEO Peter Zaitsev on Wednesday, September 6, 2017 at 10am PT / 1pm ET (UTC-7) discuss the Percona roadmap, as well as what’s new in Percona open source software.

During this webinar Peter will talk about newly released features in Percona software, show a few quick demos and share with you highlights from the Percona open source software roadmap. This discussion will cover Percona Server for MySQL and MongoDB, Percona XtraBackup, Percona Toolkit, Percona XtraDB Cluster and Percona Monitoring and Management.

Peter will also talk about new developments in Percona commercial services and finish with a Q&A.

Register before seats fill up for this exciting webinar: Wednesday, September 6, 2017 at 10am PT / 1pm ET (UTC-7).

Peter Zaitsev, Percona CEO and Co-Founder

Peter Zaitsev co-founded Percona and assumed the role of CEO in 2006. As one of the foremost experts on MySQL strategy and optimization, Peter leveraged both his technical vision and entrepreneurial skills to grow Percona from a two-person shop to one of the most respected open source companies in the business. With over 140 professionals in 30+ countries, Peter’s venture now serves over 3000 customers – including the “who’s who” of internet giants, large enterprises and many exciting startups. Percona was named to the Inc. 5000 in 2013, 2014, 2015 and 2016. Peter was an early employee at MySQL AB, eventually leading the company’s High Performance Group. A serial entrepreneur, Peter co-founded his first startup while attending Moscow State University, where he majored in Computer Science. Peter is a co-author of High Performance MySQL: Optimization, Backups, and Replication, one of the most popular books on MySQL performance. Peter frequently speaks as an expert lecturer at MySQL and related conferences, and regularly posts on the Percona Database Performance Blog. Fortune and DZone have both tapped Peter as a contributor, and his recent ebook Practical MySQL Performance Optimization is one of percona.com’s most popular downloads.
Aug
02
2017
--

Percona Toolkit 3.0.4 is Now Available

Percona announces the release of Percona Toolkit 3.0.4 on August 2, 2017.

Percona Toolkit is a collection of advanced command-line tools that perform a variety of MySQL and MongoDB server and system tasks too difficult or complex for DBAs to perform manually. Percona Toolkit, like all Percona software, is free and open source.

You can download Percona Toolkit packages from the web site or install from official repositories.

This release includes the following changes:

New Features

  • PT-90: Added collection of information about prepared statements by pt-stalk when Performance Schema is enabled. For more information, see #1642750.
  • PT-91: Added the --preserve-triggers option for pt-online-schema-change to support AFTER triggers (see the sketch after this list).
  • PT-138: Added --output-format option for pt-mongodb-summary to choose between JSON format and the default plain text.
  • PT-141: Added the --output-format=csv parameter for pt-archiver to archive rows in CSV format.
  • PT-142: Added the --only-same-schema-fks option for pt-online-schema-change to check foreign keys only on tables with the same schema as the original table. This should speed up the tool’s execution, but keep in mind that if you have foreign keys referencing tables in other schemas, they won’t be detected. For more information, see #1690122.
  • PT-153: Added the --check-unique-key-change option for pt-online-schema-change to abort if the specified statement for --alter is trying to add a unique index. This is supposed to avoid adding duplicate keys that might lead to silently losing data.
  • PT-173: Added the --truncate-replicate-table option for pt-table-checksum to ensure stale data is removed.
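
A minimal sketch of how the new --preserve-triggers option might be used; the database, table, and column here are hypothetical:

pt-online-schema-change --alter "ADD COLUMN notes VARCHAR(255)" --preserve-triggers --execute D=sakila,t=film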

Bug fixes

  • PT-136: Fixed pt-table-checksum to support tables that have columns with different collations or charsets. For more information, see #1674266.
  • PT-143: Fixed primary key handling by pt-archiver. For more information, see #1691630.
  • PT-144: Limited constraint name in the new table when running pt-online-schema-change. For more information, see #1491674.
  • PT-146: Fixed the --no-check-binlog-format option for pt-table-checksum to work as expected.
  • PT-148: Fixed the use of uninitialized value in printf() for pt-online-schema-change. For more information, see #1693614.
  • PT-151: Fixed pt-table-sync to prevent field type point to be taken as decimal.
  • PT-154: Reverted PT-116 to remove the --use-insert-ignore option from pt-online-schema-change.
  • PT-161: Fixed the --skip-check-slave-lag feature for pt-table-checksum to safely check for undefined values.
  • PT-178: Fixed regression in --check-slave-lag option for pt-online-schema-change.
  • PT-180: Fixed regression in --skip-check-slave-lag option for pt-online-schema-change.
  • PT-181: Fixed syntax error in pt-online-schema-change.

Other Improvements

  • PT-162: Updated list of tables ignored by pt-table-checksum.

You can find release details in the release notes. Report bugs in Toolkit’s launchpad bug tracker.
