Q&A on Webinar “Using PMM to Identify and Troubleshoot Problematic MySQL Queries”

Problematic MySQL Queries

Hi, and thanks to all who attended my webinar on Tuesday, January 26th, titled Using PMM to Identify & Troubleshoot Problematic MySQL Queries!

As we do after all our webinars, we have compiled the list of questions that were answered verbally, along with those that were posed but went unanswered when we ran out of time during the broadcast. Before we get to the questions, I wanted to make sure to include a link to the RED Method for MySQL Queries by Peter Zaitsev, Percona’s CEO:


RED Method for MySQL Queries

Hi Michael, you suggested that table create and update times should be ignored. Surely these values come from information_schema.tables? Does that not reflect what I would see if I do ls -l in datadir?

Yes, I did make this suggestion, but after further research I ought to qualify my response. TL;DR: you will only see useful information in the CREATE_TIME field.

As per the MySQL Manual for SHOW TABLE STATUS, which defines the fields CREATE_TIME, UPDATE_TIME, and CHECK_TIME, only CREATE_TIME for InnoDB tables accurately reports when the table was originally created. You will see either NULL or a recent-ish timestamp value for UPDATE_TIME, but this cannot be trusted: features such as InnoDB Change Buffering skew the value, so the timestamp reflects when the delayed write to the ibd file occurred, not necessarily when the SQL write happened. Further, if your table is stored in the system tablespace (like the example below), you will continue to see NULL for UPDATE_TIME.

mysql> select * from information_schema.tables where table_name = 't1'\G
*************************** 1. row ***************************
CREATE_TIME: 2021-01-28 19:26:27
TABLE_COLLATION: utf8mb4_0900_ai_ci
1 row in set (0.00 sec)

To your point about ls -l on the datadir, or the stat command: you cannot rely on this information for any level of accuracy. Since ls -l reports the same timestamp as the Modify field of stat, we’ll use stat to show the behaviour once you create the table, and what it reports after you restart mysqld against the same datadir. So let’s see this in action via an example.

Before restarting, you’ll notice that the Access time matches what Percona Server for MySQL reports for CREATE_TIME:

$ stat /var/lib/mysql/michael/t1.ibd
File: /var/lib/mysql/michael/t1.ibd
Size: 114688       Blocks: 160        IO Block: 4096   regular file
Device: fd01h/64769d    Inode: 30016418    Links: 1
Access: (0640/-rw-r-----)  Uid: ( 1001/   mysql)   Gid: ( 1001/   mysql)
Access: 2021-01-28 19:26:27.571903770 +0000
Modify: 2021-01-28 19:28:07.488597476 +0000
Change: 2021-01-28 19:28:07.488597476 +0000
Birth: -

However, after you restart mysqld you will no longer be able to tell the create time, as MySQL will have updated the Access time on disk, and the values no longer have much relevance to the access patterns on the table.

$ stat /var/lib/mysql/michael/t1.ibd
File: /var/lib/mysql/michael/t1.ibd
Size: 114688       Blocks: 160        IO Block: 4096   regular file
Device: fd01h/64769d    Inode: 30016418    Links: 1
Access: (0640/-rw-r-----)  Uid: ( 1001/   mysql)   Gid: ( 1001/   mysql)
Access: 2021-01-28 19:30:08.557438038 +0000
Modify: 2021-01-28 19:28:07.488597476 +0000
Change: 2021-01-28 19:28:07.488597476 +0000
Birth: -

Can I use Percona Monitoring and Management (PMM) with an external bare-metal ClickHouse server?

PMM leverages an instance of ClickHouse inside the Docker container (or your AMI, or your OVF destination) for storage of MySQL query data. At this time we ship PMM as an appliance, so we don’t provide instructions on how to connect Query Analytics to an external instance of ClickHouse.

If the question is “can I monitor ClickHouse database metrics using PMM,” the answer is yes, absolutely! In fact, PMM will work with any of the Prometheus exporters; the way to enable this is via the feature we call External Services. Take a look at our documentation for the correct syntax. External Services gets you the metrics, while Grafana (which PMM uses to provide the visuals) already ships with a native ClickHouse datasource that you can use to run SQL queries against ClickHouse from within PMM. Simply define the datasource and you’re done!

Are all PMM2 features compatible with MySQL 8?

The latest release of PMM 2.14 (January 28th, 2021) supports MySQL 8 and Percona Server for MySQL 8.  PMM now supports not only traditional asynchronous replication but also MySQL InnoDB Group Replication, and of course Percona’s own Percona XtraDB Cluster (PXC) write-set replication (aka wsrep via Galera).  When using Query Analytics with Percona Server for MySQL or PXC, you’ll also benefit from the Extended Slow Log Format, which provides for a very detailed view of activity at the InnoDB storage engine level:

PMM Query Analytics Detail screen

I added several databases to PMM; however, QAN shows only a few of them, not all. What could be the issue? How do I approach Percona support about such things?

There could be a few things going on here that you’ll want to review from Percona’s Documentation:

  1. Do you have a user provisioned with the appropriate access permissions in MySQL?
  2. If sourcing from PERFORMANCE_SCHEMA, is P_S actually enabled and properly configured?
  3. Are long_query_time and other slow log settings properly configured to write events?

Slow Log Configuration

These are the recommended settings for the slow log on Percona Server for MySQL. I prefer the slow log over P_S because you get InnoDB storage engine information along with other extended query properties (which are not available in upstream MySQL, in RDS, or via PERFORMANCE_SCHEMA):
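The configuration slide from the webinar is not reproduced here. As a reference, a slow-log configuration along these lines is what the PMM documentation suggests for Percona Server for MySQL (the values are illustrative; tune long_query_time and the rate limit for your workload):

[mysqld]
slow_query_log                    = ON
log_output                        = FILE
long_query_time                   = 0
log_slow_rate_limit               = 100
log_slow_rate_type                = query
log_slow_verbosity                = full
log_slow_admin_statements         = ON
log_slow_slave_statements         = ON
slow_query_log_always_write_time  = 1
slow_query_log_use_global_control = all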


User Permissions

You’ll want to use these permissions for the PMM user:
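The permissions slide is not reproduced here. A minimal grant in line with what the PMM documentation recommends looks like the following (the user name, host, and password are placeholders; BACKUP_ADMIN applies to MySQL 8.0 and is omitted on 5.7):

mysql> CREATE USER 'pmm'@'localhost' IDENTIFIED BY 'pass' WITH MAX_USER_CONNECTIONS 10;
mysql> GRANT SELECT, PROCESS, REPLICATION CLIENT, RELOAD, BACKUP_ADMIN ON *.* TO 'pmm'@'localhost';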



Using PERFORMANCE_SCHEMA is less detailed but comes with the benefit that you’re writing to and reading from an in-memory-only object, so you’re saving disk IOPS. Further, if you’re in AWS or another DBaaS, you generally don’t get raw access to the on-disk slow log, so PERFORMANCE_SCHEMA can be your only option.


The latest versions of Percona Server for MySQL and Community MySQL ship with PERFORMANCE_SCHEMA enabled by default, but sometimes users disable it. If you find it is disabled, a restart of mysqld is required to enable it.

You want to make sure your my.cnf includes:
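A sketch of the relevant my.cnf lines; the first enables P_S itself, and the remaining options (standard P_S startup option syntax, worth verifying against the MySQL manual for your version) persist the statement instruments and consumers that Query Analytics needs:

[mysqld]
performance_schema = ON
performance-schema-instrument = 'statement/%=ON'
performance-schema-consumer-statements-digest = ON
performance-schema-consumer-events-statements-history = ON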


You can check via a running MySQL instance by executing:

mysql> show global variables like 'performance_schema';
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| performance_schema | ON    |
+--------------------+-------+
1 row in set (0.04 sec)


You’ll need to make sure you enable the following consumers so that mysqld writes events to the relevant P_S tables:

mysql> select * from performance_schema.setup_consumers WHERE ENABLED='YES';
+----------------------------------+---------+
| NAME                             | ENABLED |
+----------------------------------+---------+
| events_statements_current        | YES     |
| events_statements_history        | YES     |
| global_instrumentation           | YES     |
| thread_instrumentation           | YES     |
| statements_digest                | YES     |
+----------------------------------+---------+
5 rows in set (0.00 sec)
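If any of these consumers show ENABLED='NO', you can switch them on at runtime. Note that this change does not survive a restart; persist it with the corresponding performance-schema-consumer-* options in my.cnf:

mysql> UPDATE performance_schema.setup_consumers
          SET ENABLED = 'YES'
        WHERE NAME IN ('events_statements_current',
                       'events_statements_history',
                       'statements_digest');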

Are there any specific limitations when using PMM for monitoring AWS Aurora?

The most significant limitation is that you cannot access the Slow Log and thus must configure for PERFORMANCE_SCHEMA as the query datasource.  See the previous section on how to configure PERFORMANCE_SCHEMA as needed for PMM Query Analytics.

One great feature of PMM is our native support for AWS Aurora. We have a specific dashboard for those Aurora-only features:

PMM MySQL Amazon Aurora Details dashboard

Thanks for attending!

If you attended (or watched the video), please share any takeaways or further questions you may have via the comments! And let me know if you enjoyed my jokes!


Google Cloud Platform: MySQL at Scale with Reliable HA Webinar Q&A

MySQL at Scale with Reliable HA

Earlier in November, we had a chance to present the “Google Cloud Platform: MySQL at Scale with Reliable HA.” We discussed different approaches to hosting MySQL in Google Cloud Platform with the available options’ pros and cons. This webinar was recorded and can be viewed here at any time. We had several great questions, which we would like to address and elaborate on the answers given during the webinar.


Q: What is your view on Cloud SQL High Availability in Google Cloud?

A: Google Cloud SQL provides High Availability through regional instances. If your Cloud SQL database is regional, it means that there’s a standby instance in another zone within the same region. Both instances (primary and standby) are kept synced through synchronous replication on the persistent disk level. Thanks to this approach, in case of an unexpected failover, no data is lost. The biggest disadvantage of this approach is that you have to pay for standby resources even though you can’t use the standby instance for any traffic, which means you double your costs with no performance benefits. Failover typically takes more than 30 seconds.

To sum up, High Availability in Google Cloud SQL is reliable but can be expensive, and the failover time may not be fast enough for critical applications.


Q: How would one migrate from Google Cloud SQL to AWS RDS?

A: If you can afford downtime, the easiest way to migrate is to stop the write workload on the Cloud SQL instance, take a logical backup (mysqldump or mydumper), restore it on AWS RDS, and then move the entire workload to AWS RDS. In most cases, though, downtime is not acceptable, and the situation is more complex when you want to migrate with no (or minimal) downtime.

To avoid downtime, you need to establish replication between your Cloud SQL (source) and RDS instances (replica). Cloud SQL can be used as a source instance for external replicas, as described in this documentation. You can take a logical backup from running a Cloud SQL instance (e.g., using mydumper), restore it to RDS and establish the replication between Cloud SQL and RDS. Using an external source for RDS is described here. It’s typically a good idea to use a VPN connection between both cloud regions to ensure your connection is secure and the database is not exposed to the public internet. Once replication is established, the steps are as follows:

  • Stop write traffic on the Google Cloud SQL instance
  • Wait for the replication to catch up (sync all binlogs)
  • Make the RDS instance writable and stop the Cloud SQL -> RDS replication
  • Move write traffic to the RDS instance
  • Decommission your Cloud SQL instance
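As a rough sketch, the catch-up and cutover on the RDS side use standard replication status checks plus the RDS stored procedures (procedure names per AWS documentation; verify against your RDS engine version):

-- On the RDS replica: wait until it has applied everything
mysql> SHOW SLAVE STATUS\G        -- Seconds_Behind_Master should reach 0

-- Then stop and remove the external replication configuration
mysql> CALL mysql.rds_stop_replication;
mysql> CALL mysql.rds_reset_external_master;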

AWS DMS service can also be used as an intermediary in this operation.


Q: Is replication possible cross-cloud, e.g., Google Cloud SQL to AWS RDS, AWS RDS to Google Cloud SQL? If GCP is down, will RDS act as a primary and vice versa?

A: In general, replication between clouds is possible (see the previous question). Both Google Cloud SQL and AWS RDS can act as source and replica, including external instances as part of your replication topology. The high-availability solutions, though, are in both cases specific to the cloud provider’s implementation, and they can’t cooperate, so it’s not possible to fail over automatically from RDS to GCP and vice versa. For such setups, we would recommend a custom installation on Google Compute Engine and AWS EC2 with Percona Managed Database Services, if you don’t want to manage such a complex setup on your own.

Q: How did you calculate IOPS and throughput for the storage options?

A: We did not calculate the presented values in any way. Those are taken directly from Google Cloud Platform Documentation.

Q: How does GCP achieve synchronous replication?

A: Synchronous replication is possible only between the source and respective standby instance; it’s impossible to have synchronous replication between the primary and your read replicas. Each instance has its own persistent disk. Those disks are kept in sync – so replication happens on the storage layer, not the database layer. There are no implementation details about how it works available.

Q: Could you explain how to keep the primary instance available and writable during the maintenance window?

A: It’s not possible to guarantee the primary instance’s availability. Remember that even if you choose a maintenance window during which you can accept downtime, it may or may not be honored (it’s just a preference). Critical maintenance events can happen at any point in time and may not finish within the assigned window. If that’s not acceptable for your application, we recommend designing a highly available solution instead, e.g., with Percona XtraDB Cluster on Google Compute Engine instances. Such a solution won’t have these maintenance window problems.


Converting MongoDB to Percona Server for MongoDB Webinar Q&A


We had great attendance, questions, and feedback from our “Converting MongoDB to Percona Server for MongoDB” webinar, which was recorded and can be viewed here. You can view another Q&A from a previous webinar on the same topic here. Without further ado, here are your questions and their responses.


Q: If migrating from MongoDB Enterprise Edition to Percona Server for MongoDB, can implementations that use LDAP or Kerberos be migrated using the replica set takeover method to avoid excessive downtime?

A: The intended design when using these two features is that Percona Server for MongoDB (PSMDB) remains a true drop-in replacement, meaning no configuration changes are necessary. This should be tested before go-live to ensure success.

The above is valid for the Audit Plugin as well, a feature included in the MongoDB enterprise version that is free in PSMDB.


Q: Does the replica set takeover method also work for sharded implementations of MongoDB that use configdb replica sets?

A: Yes, it does, although there are a few more steps to consider as you do not want chunks migrating while you are upgrading. 

To convert a sharded cluster to Percona Server for MongoDB, you would:

  1. Disable the balancer (sh.stopBalancer())
  2. Convert the config servers (replica set takeover method)
  3. Upgrade the shards (replica set takeover method)
  4. Upgrade the mongos instances
  5. Re-enable the balancer (sh.startBalancer())
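In the mongo shell, steps 1 and 5 look like the following, with a check that no chunk migration is in flight before you start converting (these are the standard sharding shell helpers):

mongos> sh.stopBalancer()
mongos> sh.getBalancerState()     // should now report false
mongos> sh.isBalancerRunning()    // wait until no balancer round is in progress
// ... convert config servers, shards, and mongos instances ...
mongos> sh.startBalancer()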


Q: Is it necessary to make any change on the driver/application side?

A: No. The drivers have the same compatibility for PSMDB.


Q: Does Percona Monitoring and Management (PMM) support alerting?

A: Yes, PMM supports alerting. This blog post discusses the new and upcoming PMM native alerting, this documentation shows how to configure Prometheus AlertManager integration, and this documentation shows how to utilize Grafana Alerts. 


Q: When is best to migrate from MySQL to MongoDB, and in what scenario would MongoDB be the best replacement?

A: This is a complicated question without a simple answer. It depends on the application workload, internal business directives, internal technology directives, and in-house expertise availability. In general, we recommend choosing the right tool for the right job. In that sense, MySQL was built for structured, relational, and transactional workloads, while MongoDB was built for an unstructured, JSON document model without a lot of transactions linking collections. While you technically can cross-pollinate both models between MySQL and MongoDB, we do not recommend doing so without good reason. This is a perfect scenario in which to engage Percona Consulting for true expert input into the decision-making process.


Q & A on Webinar “Converting MongoDB to Percona Server for MongoDB”

Converting MongoDB to Percona Server for MongoDB

We had great attendance, questions, and feedback from our “Converting MongoDB to Percona Server for MongoDB” webinar, which was recorded and can be viewed here. Without further ado, here are your questions and their responses.

Q: What is the recommended path to go from MongoDB Community 3.4 (no Percona) to Percona for MongoDB 4.2?

We would recommend upgrading the Community version from 3.4 -> 3.6 -> 4.0 -> 4.2, with testing and validation at each major version milestone. Once you are on Community 4.2, you can upgrade to Percona Server for MongoDB 4.2. Because PSMDB is a true drop-in replacement, you can really do this at any point, but I would do it separately from a major version upgrade.

Q: How does migration work when the replica set is configured to use “requireSSL”?

Migration works exactly the same as the steps described in the webinar. SSL connections are part of MongoDB Community, so they function identically in Percona Server for MongoDB.

Q: What about migrating encrypted instances to Percona? I guess we need to do initial sync from scratch.

Yes, that is correct. Because encryption affects the way data is stored, full syncs are required. Nodes can be added to an existing enterprise replica set for a close to zero downtime migration.

Q: Do you have any benchmark details with encryption version of Percona?

We have not published any benchmarks for this. However, by default, Percona Server for MongoDB uses the AES256-CBC cipher mode. If you want to use AES256-GCM instead, use the encryptionCipherMode parameter to change it. In general, the CBC and GCM cipher modes work differently: compared to each other, CBC is faster and GCM is safer. There is some interesting discussion and a benchmark here: https://kazoo.ga/quick-benchmark-cbc-vs-gcm/.
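For reference, data-at-rest encryption and the cipher mode are selected when starting mongod. A sketch of the relevant Percona Server for MongoDB startup options (the key file path is a placeholder; option names per the PSMDB documentation):

mongod --enableEncryption \
       --encryptionKeyFile /path/to/keyfile \
       --encryptionCipherMode AES256-GCM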

Q: Is there any difference in storage of the data between MongoDB vs Percona?

No. They both utilize the same storage engines, with WiredTiger as the default. If you’re interested in exploring MMAP, I suggest the blog post MongoDB Engines: MMAPV1 Vs WiredTiger.

Q: Can you migrate from Enterprise Advanced to Percona?

Yes. As discussed in the earlier question about LDAP and Kerberos, Percona Server for MongoDB is designed as a true drop-in replacement, and the replica set takeover method applies to Enterprise Advanced deployments as well.

Q: Are MongoDB dumps compatible?

Yes. This is another migration option if the database can tolerate downtime. The methods in our webinar are designed to limit downtime as much as possible.

Q: How does PMM integrate with the mongo replica set? How do you configure backups? Is it in PMM or a separate process?

Yes, PMM has dashboards and metrics for MongoDB replica sets. Backups are managed outside of Percona Monitoring and Management at this time, so they would be a separate process. Our backup tools are designed to run on the MongoDB nodes themselves, so no additional middleware is required. PMM should be installed on a separate server, as it can potentially be storing, ingesting, and displaying a significant amount of data depending on the size of your MongoDB deployment. View a free demo of Percona Monitoring and Management.

Q: Will MongoDB drivers that work with MongoDB Community also work with Percona Server for MongoDB?

Yes, MongoDB drivers are compatible with Percona Server for MongoDB. Percona designs its software to be fully compatible with upstream. From an application or connectivity perspective, they are identical.

Q: What would be the situation for a major binary upgrade, 3.4 Percona binary to 4.2 Percona binary?

We would recommend upgrading from 3.4 -> 3.6 -> 4.0 -> 4.2, with testing and validation between each major version upgrade. Be sure to research compatibility changes, test, validate, and have tested rollback plans.

Q: Which tools can we use for backup in Percona Server for MongoDB? The same as MongoDB Community Edition, or different ones?

Percona Server for MongoDB supports all backup methods supported by the Community edition, for example, mongodump. We also have our own backup tool, Percona Backup for MongoDB, which supports cluster-wide consistent backups of sharded clusters and replica sets.
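As a quick illustration of the Percona Backup for MongoDB CLI (commands per the PBM documentation; the config file path and backup name are placeholders):

$ pbm config --file /etc/pbm_config.yaml   # point PBM at your remote storage
$ pbm backup                               # take a consistent backup
$ pbm list                                 # list available backups
$ pbm restore 2020-12-14T14:26:20Z         # restore by backup name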

Thanks to everyone who attended. If you have any feedback or ideas on future MongoDB workshops, please let us know.

Learn more about the history of Oracle, the growth of MongoDB, and what really qualifies software as open source. If you are a DBA, or an executive looking to adopt or renew with MongoDB, this is a must-read!

Download “Is MongoDB the New Oracle?”


Webinar 10/16: What’s New in Percona Monitoring and Management 2?

Percona Monitoring and Management 2

How can you ensure you are properly managing and optimizing the performance of your database environment?

Join Percona’s Product Manager Michael Coburn as he presents “What’s New in Percona Monitoring and Management 2?” and walks you through a practical demonstration. This will take place on Wednesday, October 16, 2019, at 11:00 AM EDT.

Register Now

Percona Monitoring and Management (PMM) is a free, open source platform that supports MySQL, MariaDB, MongoDB, and PostgreSQL environments, providing detailed time-based analysis of your data. PMM allows you to embrace multiple database options and can be used on-premises and in the cloud.

Our recent major upgrade to PMM2 gives you far greater Query Analytics performance and usability and enables you to monitor much larger environments. Key features of PMM2 include:

•    New query performance and usability improvements.
•    New query analytics for PostgreSQL.
•    New ability to tag queries.
•    New administrative API.
•    New service-level dashboards.
•    Enhanced security protocols to ensure your data is safe.

Michael Coburn, Product Manager at Percona, will provide an overview of these new features and a working demonstration of PMM2. You will also have the opportunity to ask questions in the chat window.

If you can’t attend, sign up anyway; we’ll send you the slides and recording afterward.


Webinar 9/24: Preparing for Disaster Recovery on MySQL and PostgreSQL

Webinar Preparing for Disaster recovery on MySQL and PostgreSQL

Please join Percona Senior Support Engineer Carlos Tuttle as he presents his talk “Preparing for Disaster recovery on MySQL and PostgreSQL” on Tuesday, September 24th, 2019 at 11:00 AM PDT (UTC-7).

Register Now

Think for a moment what would happen if your company lost its current production data center: are you ready for that 3 AM call? Is everything in place for bringing your company back online? Is it automated? What would be your first step? Being ready is crucial, and making sure your readiness plan is in tip-top shape is even more important. This session is all about that!

Join us to discover best practices on HA solutions, backups, failover, and how to apply them in production to be 100% confident your Disaster Recovery implementation will allow you to be up-and-running again in no time.

If you can’t attend, sign up anyway; we’ll send you the slides and recording afterward.


Webinar 9/19: Introducing Java Profiling via Flame Graphs

Webinar Introducing Java Profiling via Flame Graphs

Please join Percona Senior Support Engineer Agustin Gallego as he presents his talk “Introducing Java Profiling via Flame Graphs” on Thursday, September 19th, 2019 at 10:00 AM PDT (UTC-7) / 1:00 PM EDT (UTC-4).

Register Now

In this talk, you’ll be introduced to the basic concepts of profiling Java code using perf_events (aka perf). You’ll see how to collect stack traces and how to use Flame Graphs to provide a dynamic visual display for them. This, in turn, allows you to see if your code has room for improvement – and where it could be improved – in an easy and scalable way.

It doesn’t matter if you are coding a one-hundred-line or a one-hundred-thousand-line application; the Flame Graph visualizer will make it easy to spot CPU hogs! After covering some basic how-to steps and recipes, I will take a deeper dive into using perf and Flame Graphs to demonstrate how to make the most of these tools.

If you can’t attend, sign up anyway; we’ll send you the slides and recording afterward.


Webinar 9/17: Percona XtraBackup vs Mariabackup vs MySQL Enterprise Backup

Webinar Percona XtraBackup vs Mariabackup vs MySQL Enterprise Backup

Please join Percona Senior Support Engineer Juan Pablo Arruti as he presents his talk “Percona XtraBackup vs Mariabackup vs MySQL Enterprise Backup” on Tuesday, September 17th, 2019 at 10:00 AM PDT (UTC-7) / 1:00 PM EDT (UTC-4).

Register Now

There are a few ways to take a backup. Some of the most-used tools are Percona XtraBackup, Mariabackup, and MySQL Enterprise Backup.

In this talk, the audience will get an in-depth overview of:

– Differences between the tools
– Comparison of features
– Which tool works on which MySQL/MariaDB flavor
– Supported Storage Engines
– Limitations

If you can’t attend, sign up anyway; we’ll send you the slides and recording afterward.


Webinar 8/29: MongoDB Sharded Cluster & HA – How to Design your Topology

MongoDB Sharded Cluster & HA

Please join Percona Support Engineer Vinodh Krishnaswamy as he presents his talk “MongoDB Sharded Cluster & HA – How to Design your Topology” on Thursday, August 29th, 2019, at 6:00 AM PDT (UTC-7).

Register Now

MongoDB Sharded Cluster is well established for handling huge amounts of data across geographically separate data centers. Sharding the data evenly across the shards, or pinning it to specific regions or data centers based on application requests, is important. But sometimes a sharded cluster is not well designed, which leads to data being distributed improperly across shards, so the application fails to serve requests within the desired turnaround time. HA design within a sharded cluster is equally important to avoid downtime. In this talk, we will see how to design your Sharded Cluster topology and HA to make the most out of it.

If you can’t attend, sign up anyway; we’ll send you the slides and recording afterward.


Webinar 8/28: Best Practices for Migrating to Open Source Databases

Best Practices for Migrating to Open Source Databases

Please join Percona CEO Peter Zaitsev as he presents “Best Practices for Migrating to Open Source Databases” on Wednesday, August 28th, 2019 at 11:00 AM PDT (UTC-7).

Register Now

This is a high-level webinar that covers the history of enterprise open source database use. It addresses both the advantages companies see in using open source database technologies as well as the fears and reservations they might have.

In this webinar, we will look at how to address such concerns to help get a migration commitment. We’ll cover picking the right project, selecting the right organizational team to manage the migration, and developing the right path to maximize migration success (from a technical and organizational standpoint).

If you can’t attend, sign up anyway; we’ll send you the slides and recording afterward.
