Oct
25
2017
--

JSON Output of the pmm-admin list Command

In this blog post, we’ll look at Percona Monitoring and Management’s pmm-admin list command.

The pmm-admin list command shows all monitoring services you have added using the pmm-admin add command. Starting with version 1.4.0, Percona Monitoring and Management (PMM) also lists external monitoring services when you run pmm-admin list: that is, services that monitor backends PMM does not support out of the box (such as PostgreSQL databases).

In the output, the external monitoring services appear at the bottom:

The tabular output of the pmm-admin list command

JSON Output for Automatic Verification

But there is another feature of pmm-admin list: if you run the command with the --json parameter, it produces a JSON document as output. Because JSON follows strict syntax rules, this option makes the list of monitoring services easy for programs to inspect. JSON has become a de facto standard for exchanging data between tools, and the JSON output of the pmm-admin list command can be consumed by configuration management tools such as Ansible or Chef.

The output is captured as keys and values. General information about the computer where this pmm-client is installed appears as top-level elements:

  • Version
  • ServerAddress
  • ServerSecurity
  • ClientName
  • ClientAddress
  • ClientBindAddress
  • Platform

You can quickly determine whether there are any errors in built-in monitoring services by inspecting the Err top-level element in the JSON output. Similarly, the ExternalErr element reports errors in external services:

The JSON parsing friendly version produced by the pmm-admin list command
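
If you need to act on this output from a script, here is a minimal sketch using jq (the Err, ExternalErr and Services keys are the ones described in this post; the Name attribute inside each service document is an assumption based on the NAME column of the tabular output):

#!/bin/sh
# Capture the machine-readable list of monitoring services.
json=$(pmm-admin list --json)

# Abort if the built-in or external monitoring services report errors.
err=$(echo "$json" | jq -r '.Err // empty')
ext_err=$(echo "$json" | jq -r '.ExternalErr // empty')
if [ -n "$err" ] || [ -n "$ext_err" ]; then
    echo "monitoring errors detected: $err $ext_err" >&2
    exit 1
fi

# Print the name of every enabled monitoring service
# ("Name" is an assumed attribute; adjust it to your PMM version).
echo "$json" | jq -r '.Services[]?.Name'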

Representing Monitoring Services

Two elements contain lists as their values. The Services top-level element contains a list of documents that represent enabled monitoring services. The ExternalServices element contains a list of documents that represent enabled external monitoring services. Each attribute in the Services and ExternalServices elements provides the same information as a column in the tabular output.

I hope this brief post provides some valuable information regarding the new Percona Monitoring and Management 1.4.0 functionality. Let me know if you have any questions in the comments.

Oct
03
2017
--

MyRocks Metrics Now in PMM 1.3.0

MyRocks

One of the most exciting features shipped in the Percona Monitoring and Management (PMM) 1.3.0 release is support for MyRocks metrics via a new Metrics Monitor dashboard titled MySQL MyRocks Metrics. The support in PMM follows the recent Percona Server for MySQL 5.7.19 release from September 6, in which Percona delivered an EXPERIMENTAL version of MyRocks for non-production use.

The MyRocks storage engine from Facebook is based on RocksDB, a persistent key-value store for fast storage environments. MyRocks is optimized for fast storage and combines outstanding space and write efficiency with acceptable read performance. As a result, MyRocks has the following advantages compared to other storage engines (if your workload uses fast storage, such as SSD):

  • Requires less storage space
  • Provides more storage endurance
  • Ensures better IO capacity

MyRocks Database Operations

This graph helps you visualize MyRocks database operations for the Next and Seek attributes:

MyRocks Cache Activity

We also have a graph to help you visualize the count of Hits and Misses on the MyRocks cache:

MyRocks Cache Data Bytes Read/Write

Finally, another important MyRocks graph will help you understand the volume of data read and written to the MyRocks cache:

Please note that the MyRocks storage engine is not suitable (yet) for production workloads, but if you are testing this technology take a moment to install PMM in order to take advantage of our new MySQL MyRocks Metrics dashboard!

In PMM, you can view the metrics provided by the information schema, as well as various data reported by the RocksDB engine status of your MySQL database instance.
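
If you want to look at those raw sources yourself, here is a quick sketch (SHOW ENGINE ROCKSDB STATUS ships with MyRocks; the ROCKSDB_DBSTATS table name is an assumption about the information schema tables in this build, so verify that it exists first):

# Dump the raw RocksDB engine status that the graphs are built from.
mysql -e "SHOW ENGINE ROCKSDB STATUS\G"

# Global RocksDB counters (cache hits/misses, bytes read/written, etc.).
# ROCKSDB_DBSTATS is an assumed table name -- check your build.
mysql -e "SELECT * FROM information_schema.ROCKSDB_DBSTATS;"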

Sep
25
2017
--

Percona Monitoring and Management 1.3.0 Is Now Available

Percona Monitoring and Management 1.3.0

Percona announces the release of Percona Monitoring and Management 1.3.0 on September 26, 2017.

Percona Monitoring and Management 1.3.0 introduces basic support for the MyRocks storage engine. There is a special dashboard in Metrics Monitor that presents the essential metrics of MyRocks as separate graphs. Also, Metrics Monitor graphs now feature on-demand descriptions that remain visible as long as you hover over them.

For example, this graph helps you visualize MyRocks database operations of Next and Seek attributes:

There are many improvements to QAN (Query Analytics) both in the user interface design and in its capabilities. In this release, QAN starts supporting all types of MongoDB queries. For example, if you need to limit the list of available queries to only those that you are interested in, use the Query Filter field next to the database selection button:

Orchestrator is not enabled by default because leaving it in a non-configured state was confusing to users. It is still possible to enable it by passing an option to the docker run command.
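
For example, here is a sketch of what that might look like (the ORCHESTRATOR_ENABLED environment variable is an assumption on my part; check the deployment documentation below for the exact option):

# Start PMM Server with Orchestrator enabled.
# NOTE: ORCHESTRATOR_ENABLED is an assumed variable name -- consult
# the PMM deployment documentation for the exact option.
docker run -d \
   -p 80:80 \
   --volumes-from pmm-data \
   --name pmm-server \
   --restart always \
   -e ORCHESTRATOR_ENABLED=true \
   percona/pmm-server:1.3.0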

For install and upgrade instructions, see Deploying Percona Monitoring and Management.

New Features

  • PMM-1290: Basic support for the metrics of the MyRocks storage engine in MySQL via the mysqld-exporter.
  • PMM-1312: Metrics Monitor now features a MyRocks dashboard.
  • PMM-1330: Basic telemetry data are collected from PMM Servers.
  • PMM-1417: A new dashboard in Metrics Monitor designed to enable exploring any data in Prometheus.
  • PMM-1437: pmm-admin now allows passing parameters to exporters.
  • PMM-685: The EXPLAIN command is now supported in QAN.

Improvements

  • PMM-1262: The system checks for updates much faster.
  • PMM-1015: QAN now shows all collections from a mongod instance. Make sure that profiling is enabled in MongoDB (see the sketch after this list).
  • PMM-1057: QAN supports all MongoDB query types.
  • PMM-1270: In Metrics Monitor, the dashboard filter displays only MariaDB hosts.
  • PMM-1287: In pmm-admin, mongodb:queries is not experimental anymore, and the dev-enable option is no longer needed.
  • PMM-1446: In Metrics Monitor, the MySQL Active Threads graph displays data more accurately.
  • PMM-1455: Metrics Monitor now features descriptions of graphs.
  • PMM-1476: QAN2 is used by default on pmmdemo.percona.com.
  • PMM-1479: It is now possible to go to QAN directly from Metrics Monitor.
  • PMM-515: Orchestrator is disabled by default. It is possible to enable it when running your docker container.
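
Regarding PMM-1015 above: QAN depends on the MongoDB profiler being enabled. Here is a minimal sketch of turning it on for a single database (mydb is a placeholder name; level 2 profiles all operations):

# Enable the MongoDB profiler so QAN can collect query data.
# "mydb" is a placeholder database name.
mongo mydb --eval 'db.setProfilingLevel(2)'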

Bug fixes

  • PMM-1298: In QAN, the query abstract could be empty for low-ranking queries on MySQL hosts. This bug is fixed: the query abstract now contains Low Ranking Queries as its value.
  • PMM-1314: The selected time range in QAN could be applied incorrectly. This bug is now fixed.
  • PMM-1398: Prometheus memory was not updated after PMM upgrade. This bug is now fixed.
  • PMM-1427: The CPU Usage/Load graph in the MySQL Overview dashboard was displayed with slightly incorrect dimensions. This bug is now solved.
  • PMM-1439: If the EXPLAIN command was not supported for the selected query, a JavaScript error could appear.
  • PMM-1472: Monitoring of queries for MongoDB with replication could fail to be enabled.
  • PMM-943: InnoDB AHI Usage Graph had incorrect naming and hit ratio computation.

Sep
25
2017
--

Percona Live Europe: Tutorials Day

Percona Live Tutorials

Welcome to the first day of the Percona Live Open Source Database Conference Europe 2017: Tutorials day! Technically the first day of the conference, this day focused on providing hands-on tutorials for people interested in learning directly how to use open source tools and technologies.

Today attendees went to training sessions taught by open source database experts and got first-hand experience configuring, working with, and experimenting with various open source technologies and software.

The first full day (which includes opening keynote speakers and breakout sessions) starts Tuesday 9/26 at 9:15 am.

Some of the tutorial topics covered today were:

Monitoring MySQL Performance with Percona Monitoring and Management (PMM)

Michael Coburn, Percona

This was a hands-on tutorial covering how to set up monitoring for MySQL database servers using the Percona Monitoring and Management (PMM) platform. PMM is an open-source collection of tools for managing and monitoring MySQL and MongoDB performance. It provides thorough time-based analysis for database servers to ensure that they work as efficiently as possible.

We learned about:

  • The best practices on MySQL monitoring
  • Metrics and time series
  • Data collection, management and visualization tools
  • Monitoring deployment
  • How to use graphs to spot performance issues
  • Query analytics
  • Alerts
  • Trending and capacity planning
  • How to monitor HA

Hands-on ProxySQL

Rene Cannao, ProxySQL

ProxySQL is an open source proxy for MySQL that can provide HA and high performance with no changes in the application, using several built-in features and integration with clustering software. Those were only a few of the features we learned about in this hands-on tutorial.

MongoDB: Sharded Cluster Tutorial

Jason Terpko, ObjectRocket
Antonios Giannopoulos, ObjectRocket

This tutorial guided us through the many considerations when deploying a sharded cluster. It covered the services that make up a sharded cluster, configuration recommendations for these services, shard key selection, use cases, and how data is managed within a sharded cluster. Maintaining a sharded cluster also has its challenges. We reviewed these challenges and how you can prevent them with proper design or ways to resolve them if they exist today.

InnoDB Architecture and Performance Optimization

Peter Zaitsev, Percona

InnoDB is the most commonly used storage engine for MySQL and Percona Server for MySQL. It is the focus of most of the storage engine development by the MySQL and Percona Server for MySQL development teams.

In this tutorial, we looked at the InnoDB architecture, including new feature developments for InnoDB in MySQL 5.7 and Percona Server for MySQL 5.7. Peter explained how to use InnoDB in a database environment to get the best application performance, and provided specific advice on server configuration, schema design, application architecture and hardware choices.

Peter updated this tutorial from previous versions to cover new MySQL 5.7 and Percona Server for MySQL 5.7 InnoDB features.

Join us tomorrow for the first full day of the Percona Live Open Source Database Conference Europe 2017!

Sep
20
2017
--

sysbench Histograms: A Helpful Feature Often Overlooked

Sysbench Histograms

In this blog post, I will demonstrate how to run and use sysbench histograms.

One of the features of sysbench that I often see overlooked (and rarely used) is its ability to produce detailed query response time histograms, in addition to computing percentile numbers. Looking at histograms together with throughput or latency over time provides many additional insights into query performance.

Here is how you get detailed sysbench histograms and performance over time:

sysbench --rand-type=uniform --report-interval=1 --percentile=99 --time=300 --histogram --mysql-password=sbtest oltp_point_select --table_size=400000000 run

There are a few command line options to consider:

  • report-interval=1 prints out the current performance measurements every second, which helps you see whether performance is uniform or whether you have stalls or otherwise high variance
  • percentile=99 computes the 99th percentile response time rather than the 95th percentile (the default); I like looking at the 99th percentile as it is a better measure of performance
  • histogram produces a histogram at the end of the run (as shown below)

The first thing to note about this histogram is that it is exponential. This means the width of the buckets changes with higher values. It starts with 0.001 ms (one microsecond) and gradually grows. This design is used so that sysbench can deal with workloads with requests that take small fractions of milliseconds, as well as accommodate requests that take many seconds (or minutes).
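
To make this concrete, here is a small sketch of how logarithmically spaced bucket bounds can be computed (the range and bucket count below are illustrative only, not taken from the sysbench source):

# Print log-spaced histogram bucket bounds from 0.001 ms to 100000 ms.
# The bucket count (n) and the range are for illustration only.
awk 'BEGIN {
    lo = 0.001; hi = 100000; n = 80;
    step = (log(hi) - log(lo)) / n;
    for (i = 0; i <= n; i++)
        printf "bucket %2d upper bound: %.3f ms\n", i, exp(log(lo) + i * step)
}'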

Next, we learn some very interesting things about the typical request response time distribution for databases. You might think that this distribution would be close to some “academic” distribution, such as the normal distribution. In reality, what we often observe is something of a “camelback” distribution (not a real term) – and our “camel” can have more than two humps (especially for simple requests such as the single primary key lookup shown here).

Why do request response times tend to have this distribution? It is because requests can take multiple paths inside the database. For example, certain requests might get responses from the MySQL Query Cache (which will result in the first hump). A second hump might come from resolving lookups using the InnoDB Adaptive Hash Index. A third hump might come from finding all the data in memory (rather than through the Adaptive Hash Index). Finally, another hump might coalesce around the time (or times) it takes to execute requests that require disk IO.

You will also likely see some long-tail data that highlights the fact that MySQL and Linux are not hard real-time systems. As an example, this very simple run with a single thread (and thus no contention) has an outlier at around 18 ms. Most of the requests are served in 0.2 ms or less.

As you add contention, row-level locking, group commit and other issues, you are likely to see even more complicated diagrams – which can often show you something unexpected:

Latency histogram (values are in milliseconds)
      value  ------------- distribution ------------- count
      0.050 |                                         1
      0.051 |                                         2
      0.052 |                                         2
      0.053 |                                         54
      0.053 |                                         79
      0.054 |                                         164
      0.055 |                                         883
      0.056 |*                                        1963
      0.057 |*                                        2691
      0.059 |**                                       4047
      0.060 |****                                     9480
      0.061 |******                                   15234
      0.062 |********                                 20723
      0.063 |********                                 20708
      0.064 |**********                               26770
      0.065 |*************                            35928
      0.066 |*************                            34520
      0.068 |************                             32247
      0.069 |************                             31693
      0.070 |***************                          41682
      0.071 |**************                           37862
      0.073 |********                                 22691
      0.074 |******                                   15907
      0.075 |****                                     10509
      0.077 |***                                      7853
      0.078 |****                                     9880
      0.079 |****                                     10853
      0.081 |***                                      9243
      0.082 |***                                      9280
      0.084 |***                                      8947
      0.085 |***                                      7869
      0.087 |***                                      8129
      0.089 |***                                      9073
      0.090 |***                                      8364
      0.092 |***                                      6781
      0.093 |**                                       4672
      0.095 |*                                        3356
      0.097 |*                                        2512
      0.099 |*                                        2177
      0.100 |*                                        1784
      0.102 |*                                        1398
      0.104 |                                         1082
      0.106 |                                         810
      0.108 |                                         742
      0.110 |                                         511
      0.112 |                                         422
      0.114 |                                         330
      0.116 |                                         259
      0.118 |                                         203
      0.120 |                                         165
      0.122 |                                         126
      0.125 |                                         108
      0.127 |                                         87
      0.129 |                                         83
      0.132 |                                         55
      0.134 |                                         42
      0.136 |                                         45
      0.139 |                                         41
      0.141 |                                         149
      0.144 |                                         456
      0.147 |                                         848
      0.149 |*                                        2128
      0.152 |**                                       4586
      0.155 |***                                      7592
      0.158 |*****                                    13685
      0.160 |*********                                24958
      0.163 |*****************                        44558
      0.166 |*****************************            78332
      0.169 |*************************************    98616
      0.172 |**************************************** 107664
      0.176 |**************************************** 107154
      0.179 |****************************             75272
      0.182 |******************                       49645
      0.185 |****************                         42793
      0.189 |*****************                        44649
      0.192 |****************                         44329
      0.196 |******************                       48460
      0.199 |*****************                        44769
      0.203 |**********************                   58578
      0.206 |***********************                  61373
      0.210 |**********************                   58758
      0.214 |******************                       48012
      0.218 |*************                            34533
      0.222 |**************                           36517
      0.226 |*************                            34645
      0.230 |***********                              28694
      0.234 |*******                                  17560
      0.238 |*****                                    12920
      0.243 |****                                     10911
      0.247 |***                                      9208
      0.252 |****                                     10556
      0.256 |***                                      7561
      0.261 |**                                       5047
      0.266 |*                                        3757
      0.270 |*                                        3584
      0.275 |*                                        2951
      0.280 |*                                        2078
      0.285 |*                                        2161
      0.291 |*                                        1747
      0.296 |*                                        1954
      0.301 |*                                        2878
      0.307 |*                                        2810
      0.312 |*                                        1967
      0.318 |*                                        1619
      0.324 |*                                        1409
      0.330 |                                         1205
      0.336 |                                         1193
      0.342 |                                         1151
      0.348 |                                         989
      0.354 |                                         985
      0.361 |                                         799
      0.367 |                                         671
      0.374 |                                         566
      0.381 |                                         537
      0.388 |                                         351
      0.395 |                                         276
      0.402 |                                         214
      0.409 |                                         143
      0.417 |                                         80
      0.424 |                                         85
      0.432 |                                         54
      0.440 |                                         41
      0.448 |                                         29
      0.456 |                                         16
      0.464 |                                         15
      0.473 |                                         11
      0.481 |                                         4
      0.490 |                                         9
      0.499 |                                         4
      0.508 |                                         3
      0.517 |                                         4
      0.527 |                                         4
      0.536 |                                         2
      0.546 |                                         4
      0.556 |                                         4
      0.566 |                                         4
      0.587 |                                         1
      0.597 |                                         1
      0.608 |                                         5
      0.619 |                                         3
      0.630 |                                         2
      0.654 |                                         2
      0.665 |                                         5
      0.677 |                                         26
      0.690 |                                         298
      0.702 |                                         924
      0.715 |*                                        1493
      0.728 |                                         1027
      0.741 |                                         1112
      0.755 |                                         1127
      0.768 |                                         796
      0.782 |                                         574
      0.797 |                                         445
      0.811 |                                         415
      0.826 |                                         296
      0.841 |                                         245
      0.856 |                                         202
      0.872 |                                         210
      0.888 |                                         168
      0.904 |                                         217
      0.920 |                                         163
      0.937 |                                         157
      0.954 |                                         204
      0.971 |                                         155
      0.989 |                                         158
      1.007 |                                         137
      1.025 |                                         94
      1.044 |                                         79
      1.063 |                                         52
      1.082 |                                         36
      1.102 |                                         25
      1.122 |                                         25
      1.142 |                                         16
      1.163 |                                         8
      1.184 |                                         5
      1.205 |                                         7
      1.227 |                                         2
      1.250 |                                         4
      1.272 |                                         3
      1.295 |                                         3
      1.319 |                                         2
      1.343 |                                         2
      1.367 |                                         1
      1.417 |                                         2
      1.791 |                                         1
      1.996 |                                         2
      2.106 |                                         2
      2.184 |                                         1
      2.264 |                                         1
      2.347 |                                         2
      2.389 |                                         1
      2.433 |                                         1
      2.477 |                                         1
      2.568 |                                         2
      2.615 |                                         1
      2.710 |                                         1
      2.810 |                                         1
      2.861 |                                         1
      3.187 |                                         1
      3.488 |                                         1
      3.816 |                                         1
      4.028 |                                         1
      6.913 |                                         1
      7.565 |                                         1
      8.130 |                                         1
     17.954 |                                         1

I hope you give sysbench histograms a try, and see what you can discover!

Sep
14
2017
--

Percona Live Europe Featured Talks: Monitoring Open Source Databases with Icinga with Bernd Erk

Percona Live Europe 2017

Welcome to another post in our series of interview blogs for the upcoming Percona Live Europe 2017 in Dublin. This series highlights a number of talks that will be at the conference and gives a short preview of what attendees can expect to learn from the presenter.

This blog post is with Bernd Erk, CEO of Icinga. His talk is titled Monitoring Open Source Databases with Icinga. Icinga is a popular open source successor of Nagios that checks hosts and services, and notifies you of their statuses. But you also need metrics for performance and growth to deal with your scaling needs. Adding conditional behaviors and configuration in Icinga is not just intuitive, but also intelligently adaptive at runtime. In our conversation, we discussed how to intelligently monitor open source databases:

Percona: How did you get into database technology? What do you love about it?

Bernd: I started a position as a junior systems engineer in a large German mail order company. They were totally committed to Oracle databases and the tool stack around it. As Linux gained more and more attention, we became aware of MySQL very early and were fascinated by the simplicity of installation and administration. There were of course so many things Oracle had in those days that MySQL didn’t have, but most of our uses also didn’t require those extra (and of course expensive) features.

Percona: You’re presenting a session called “Monitoring Open Source Databases with Icinga”. Why is monitoring databases important, and what sort of things need to be monitored?

Bernd: Usually databases are a very important part of an IT infrastructure, and need to be online 24/7. I also had the personal experience of database downtime putting a lot of pressure on both the organization in general and the team in charge. Since most open source databases provide very good interfaces, it is not so hard to figure out if they are up and running. Like in many monitoring arenas, knowing what to monitor is the important information.

In addition to the basic local and remote availability checks, monitoring database replication is very important. We often see environments where the standby slave is outdated by years, or not able to keep up with the incoming load. From there you can go into database and application metrics to learn more about performance and IO behavior.

Percona: Why are you using Icinga specifically? What value does it provide above other monitoring solutions?

Bernd: I’ve been involved with Icinga from the beginning, so it is my number one choice in open source monitoring. In my opinion, the great advantage of Icinga 2 is that it keeps the simplicity of legacy systems like Nagios (or Icinga 1) while also supporting complex environments (such as application-based clustering). There is also the live configuration of the Icinga 2 monitoring core through our REST API. With all the supported tools for metrics, logs and management around it, Icinga 2 is for me the best match for open source monitoring.

Percona: What do you want attendees to take away from your session? Why should they attend?

Bernd: Attendees will get a short overview on Icinga 2, and why it is different to Nagios (Icinga 1). I will also guide them through practical monitoring examples and show implemented checks in a live demo. After my talk, they should be able to adapt and extend on-premise or cloud monitoring with Icinga 2 using the default open source plugins.

Percona: What are you most looking forward to at Percona Live Europe 2017?

Bernd: Getting together with the great database community in all aspects, and going to Dublin (to be honest). I have never been there, and so it is my first time.

Want to find out more about Bernd and database monitoring? Register for Percona Live Europe 2017, and see his talk Monitoring Open Source Databases with Icinga. Register now to get the best price! Use discount code SeeMeSpeakPLE17 to get 10% off your registration.

Percona Live Open Source Database Conference Europe 2017 in Dublin is the premier European open source event for the data performance ecosystem. It is the place to be for the open source community as well as businesses that thrive in the MySQL, MariaDB, MongoDB, time series database, cloud, big data and Internet of Things (IoT) marketplaces. Attendees include DBAs, sysadmins, developers, architects, CTOs, CEOs, and vendors from around the world.

The Percona Live Open Source Database Conference Europe will be September 25-27, 2017 at the Radisson Blu Royal Hotel, Dublin.

Sep
13
2017
--

Percona Live Europe Featured Talks: Visualize Your Data with Grafana Featuring Daniel Lee

Percona Live Europe 2017

Welcome to another post in our series of interview blogs for the upcoming Percona Live Europe 2017 in Dublin. This series highlights a number of talks that will be at the conference and gives a short preview of what attendees can expect to learn from the presenter.

This blog post is with Daniel Lee, a software developer at Grafana. His tutorial is Visualize Your Data With Grafana. This presentation teaches you how to create dashboards and graphs in Grafana and how to use them to gain insight into the behavior of your systems. In our conversation, we discussed how data visualization could benefit your database environment:

Percona: How did you get into database technology? What do you love about it?

Daniel: I’m a developer and my first job was working on a transport logistics system, which was mostly composed of Stored Procedures in SQL Server 2000. Today, I would not build a system with all the logic in Stored Procedures – but that database knowledge is the foundation that I built everything else on. Databases and their data flows will always be the core of most interesting systems. More recently, I have switched from Windows to working with MariaDB on Linux. Grafana Labs uses Percona Server for MySQL for most of our internal applications (worldPing and Hosted Grafana). Working with Grafana also means working with time series databases like Graphite, which is also very interesting.

I enjoy working with data as it is one of the ways to learn how users use a system. Design decisions are theories until you have data to either back them up or disprove them.

Percona: You’re presenting a session called “Visualize Your Data With Grafana”. How does monitoring make a DBA’s life easier, and how do graphs make this information easier for DBAs to apply?

Daniel: Good monitoring provides top-level metrics (throughput, number of errors, performance) for alerting, and other lower-level metrics to allow you to dig into the details and quickly diagnose and resolve an outage. Monitoring also helps you find any constraints (for example, finding bottlenecks for query performance: CPU, row locks, disk, buffer pool size, etc.). Performance monitoring allows you to see trends and lets you know when it is time to scale out or purchase more hardware.

Monitoring can also be used to communicate with business people. It is a way of translating lots of different system metrics into a measurable user experience. Visualizing your data with graphs is a very good way to communicate that information, both within your team and with your business stakeholders. Building dashboards with the metrics that are important to you rather than just the standard checklists (CPU, disk, network etc.) allows you to measure the user experience for your application and to see long-term trends.

Percona: Why Grafana? What does Grafana do better than other monitoring solutions?

Daniel: Grafana is the de facto standard in open source for visualizing time series data. It comes with tons of different ways to visualize your data (graphs, heat maps, gauges). Each data source comes with its own custom query editor that simplifies writing complex queries, and it is easy to create dynamic dashboards that look great on a TV.

Being open source, it can be connected to any data source/database, which makes it easy to unify different data sources in the same dashboard (for example, Prometheus or Graphite data combined with MySQL data). This also means your data is not subject to vendor lock-in like it is in other solutions. Grafana has a large and very active community that creates plugins and dashboards that extend Grafana into lots of niches, as well as providing ways to quickly get started with whatever you want to monitor.

Percona: What do you want attendees to take away from your session? Why should they attend?

Daniel: I want them to know that you can make the invisible visible, and with that knowledge start to make better decisions based on data. I hope that my session helps someone take the first step to being more proactive in their monitoring by showing them what can be done with Grafana and other tools in the monitoring space.

In my session, I will give an overview of monitoring and metrics, followed by an intro to Grafana. I plan to show how to monitor MySQL and finish off with a quick look at the new MySQL data source for Grafana.

Percona: What are you most looking forward to at Percona Live Europe 2017?

Daniel: Firstly, it is always great to have an excuse to visit Ireland (I’m an Irishman living in Sweden). I’m also looking forward to getting feedback from the community on Grafana’s new MySQL data source plugin, as well as just talking to people and hearing about their experiences with database monitoring.

Want to find out more about Daniel and data visualization? Register for Percona Live Europe 2017, and see his talk Visualize Your Data With Grafana. Register now to get the best price! Use discount code SeeMeSpeakPLE17 to get 10% off your registration.

Percona Live Open Source Database Conference Europe 2017 in Dublin is the premier European open source event for the data performance ecosystem. It is the place to be for the open source community as well as businesses that thrive in the MySQL, MariaDB, MongoDB, time series database, cloud, big data and Internet of Things (IoT) marketplaces. Attendees include DBAs, sysadmins, developers, architects, CTOs, CEOs, and vendors from around the world.

The Percona Live Open Source Database Conference Europe will be September 25-27, 2017 at the Radisson Blu Royal Hotel, Dublin.

Sep
11
2017
--

Updating InnoDB Table Statistics Manually

InnoDB Tables

In this post, we will discuss how to fix cardinality for InnoDB tables manually.

As a support engineer, I often see situations when the cardinality of a table is not correct. When InnoDB calculates the cardinality of an index, it does not scan the full table by default. Instead, it looks at random pages, as determined by the options innodb_stats_sample_pages, innodb_stats_transient_sample_pages and innodb_stats_persistent_sample_pages, or by the CREATE TABLE option STATS_SAMPLE_PAGES. The default value for persistent statistics is 20. This approach works fine when the number of unique values in your secondary key grows in step with the size of the table. But what if you have a column with a comparatively small number of unique values? This could be a common service: a many-to-many relationship table, for example, or just a table containing a list of sell orders that belong to one of a dozen shops owned by the company. Such tables could grow up to billions of rows with a small (less than 100) number of unique shop IDs.

At some point, InnoDB will report the wrong values for such indexes. Really! If 20 pages have 100 unique shop IDs, how many unique shop IDs would 20000 pages have? 100 times 1000? This seems logical, and after a certain number of rows such indexes will have extraordinarily large cardinality values.

ANALYZE TABLE will not help, because it uses the same algorithm. Increasing the number of “stats” sample pages would help, but it has its own downside: the more pages you have to examine, the slower ANALYZE TABLE runs. While this command is not blocking, it still creates side effects as described in this blog post. And the longer it runs, the less control you have.

Another issue with InnoDB statistics: even if they are persistent and STATS_AUTO_RECALC is set to 0, InnoDB still adds values for secondary indexes, as shown in lp:1538765. Eventually, after you insert millions of rows, your statistics get corrupted. ANALYZE TABLE can fix this only if you specify a very large number of “stats” sample pages.

Can we do anything about it?

InnoDB stores statistics in the “mysql” database, in the tables innodb_table_stats and innodb_index_stats. Since they are regular MySQL tables, privileged users can access them. We can update them and modify statistics as we like. And these statistics are used by the Optimizer!

I created a small example showing how to do this trick. I used Percona Server for MySQL version 5.7.19, but the trick will work on any supported MySQL and Percona Server for MySQL version.

First, let’s create the test tables. The first table, shops, holds a few shop profiles with the shop ID and name:

create table shops(
  shop_id int not null auto_increment primary key,
  name varchar(32)
) engine=innodb;

The second table refers to the “shops” table:

create table goods(
  id int not null auto_increment primary key,
  shop_id int not null,
  name varchar(32),
  create_date datetime DEFAULT NULL,
  key (shop_id, create_date)
) engine=innodb;

Let’s check how many unique shops we have:

mysql> select count(distinct shop_id) from shops;
+-------------------------+
| count(distinct shop_id) |
+-------------------------+
| 100                     |
+-------------------------+
1 row in set (0.02 sec)

With 100 distinct shops, and a key on (shop_id, create_date), we expect the cardinality in table goods to be not much different from this query result:

mysql> select count(distinct id) as `Cardinality for PRIMARY`,
    -> count(distinct shop_id) as `Cardinality for shop_id column in index shop_id`,
    -> count(distinct shop_id, create_date) as `Cardinality for create_date column in index shop_id`
    -> from goods\G
*************************** 1. row ***************************
Cardinality for PRIMARY: 8000000
Cardinality for shop_id column in index shop_id: 100
Cardinality for create_date column in index shop_id: 169861
1 row in set (2 min 8.74 sec)

However, SHOW INDEX returns dramatically different values for the column shop_id:

mysql> show index from goods;
+-------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| Table | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | Index_comment |
+-------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| goods |          0 |  PRIMARY |            1 |          id |         A |     7289724 |     NULL |   NULL |      |      BTREE |         |               |
| goods |          1 |  shop_id |            1 |     shop_id |         A |       13587 |     NULL |   NULL |      |      BTREE |         |               |
| goods |          1 |  shop_id |            2 | create_date |         A |      178787 |     NULL |   NULL |  YES |      BTREE |         |               |
+-------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
3 rows in set (0.09 sec)

ANALYZE TABLE does not help:

mysql> analyze table goods;
+------------+---------+----------+----------+
|      Table |      Op | Msg_type | Msg_text |
+------------+---------+----------+----------+
| test.goods | analyze |   status |       OK |
+------------+---------+----------+----------+
1 row in set (0.88 sec)
mysql> show index from goods;
+-------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| Table | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | Index_comment |
+-------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| goods |          0 |  PRIMARY |            1 |          id |         A |     7765796 |     NULL |   NULL |      |      BTREE |         |               |
| goods |          1 |  shop_id |            1 |     shop_id |         A |       14523 |     NULL |   NULL |      |      BTREE |         |               |
| goods |          1 |  shop_id |            2 | create_date |         A |      168168 |     NULL |   NULL |  YES |      BTREE |         |               |
+-------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
3 rows in set (0.00 sec)

As a result, if we join the two tables, the Optimizer chooses the wrong JOIN order and query execution plan:

mysql> explain select goods.* from goods join shops using(shop_id) where create_date BETWEEN CONVERT_TZ('2015-11-01 00:00:00', 'MET','GMT') AND CONVERT_TZ('2015-11-07 23:59:59', 'MET','GMT') and goods.shop_id in(4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,401,402,403,404,405,406,407,408,409,410,411,412,413,414,415,417,418,419,420,421,422,423,424,425,426,427,428,429,430,431,432,433,434,435,436,437,438,439,440,441,442,443,444,445,446,447,448,449,450,451,452,453,454,455,456,457,458,459,460,461,462,463,464,465,466,467,468,470,471,472,473,474,475,476,477,478,479,480,481,482,483,484,485,486);
+----+-------------+-------+------------+-------+---------------+---------+---------+--------------------+------+----------+--------------------------+
| id | select_type | table | partitions |  type | possible_keys |     key | key_len |                ref | rows | filtered |                    Extra |
+----+-------------+-------+------------+-------+---------------+---------+---------+--------------------+------+----------+--------------------------+
|  1 |      SIMPLE | shops |       NULL | index |       PRIMARY | PRIMARY |       4 |               NULL |  100 |   100.00 | Using where; Using index |
|  1 |      SIMPLE | goods |       NULL |   ref |       shop_id | shop_id |       4 | test.shops.shop_id |  534 |    11.11 |    Using index condition |
+----+-------------+-------+------------+-------+---------------+---------+---------+--------------------+------+----------+--------------------------+
2 rows in set, 1 warning (0.13 sec)
mysql> \P md5sum
PAGER set to 'md5sum'
mysql> select goods.* from goods join shops using(shop_id) where create_date BETWEEN CONVERT_TZ('2015-11-01 00:00:00', 'MET','GMT') AND CONVERT_TZ('2015-11-07 23:59:59', 'MET','GMT') and goods.shop_id in(4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,401,402,403,404,405,406,407,408,409,410,411,412,413,414,415,417,418,419,420,421,422,423,424,425,426,427,428,429,430,431,432,433,434,435,436,437,438,439,440,441,442,443,444,445,446,447,448,449,450,451,452,453,454,455,456,457,458,459,460,461,462,463,464,465,466,467,468,470,471,472,473,474,475,476,477,478,479,480,481,482,483,484,485,486);
4a94dabc4bfbfb7dd225bcb50278055b -
31896 rows in set (43.32 sec)

Compare this to the STRAIGHT_JOIN order:

mysql> explain select goods.* from goods straight_join shops on(goods.shop_id = shops.shop_id) where create_date BETWEEN CONVERT_TZ('2015-11-01 00:00:00', 'MET','GMT') AND CONVERT_TZ('2015-11-07 23:59:59', 'MET','GMT') and goods.shop_id in(4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,401,402,403,404,405,406,407,408,409,410,411,412,413,414,415,417,418,419,420,421,422,423,424,425,426,427,428,429,430,431,432,433,434,435,436,437,438,439,440,441,442,443,444,445,446,447,448,449,450,451,452,453,454,455,456,457,458,459,460,461,462,463,464,465,466,467,468,470,471,472,473,474,475,476,477,478,479,480,481,482,483,484,485,486);
+----+-------------+-------+------------+--------+---------------+---------+---------+--------------------+-------+----------+-----------------------+
| id | select_type | table | partitions |   type | possible_keys |     key | key_len |                ref |  rows | filtered |                 Extra |
+----+-------------+-------+------------+--------+---------------+---------+---------+--------------------+-------+----------+-----------------------+
|  1 |      SIMPLE | goods |       NULL |  range |       shop_id | shop_id |      10 |               NULL | 31997 |   100.00 | Using index condition |
|  1 |      SIMPLE | shops |       NULL | eq_ref |       PRIMARY | PRIMARY |       4 | test.goods.shop_id |     1 |   100.00 |           Using index |
+----+-------------+-------+------------+--------+---------------+---------+---------+--------------------+-------+----------+-----------------------+
2 rows in set, 1 warning (0.14 sec)
mysql> \P md5sum
PAGER set to 'md5sum'
mysql> select goods.* from goods straight_join shops on(goods.shop_id = shops.shop_id) where create_date BETWEEN CONVERT_TZ('2015-11-01 00:00:00', 'MET','GMT') AND CONVERT_TZ('2015-11-07 23:59:59', 'MET','GMT') and goods.shop_id in(4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,401,402,403,404,405,406,407,408,409,410,411,412,413,414,415,417,418,419,420,421,422,423,424,425,426,427,428,429,430,431,432,433,434,435,436,437,438,439,440,441,442,443,444,445,446,447,448,449,450,451,452,453,454,455,456,457,458,459,460,461,462,463,464,465,466,467,468,470,471,472,473,474,475,476,477,478,479,480,481,482,483,484,485,486);
4a94dabc4bfbfb7dd225bcb50278055b -
31896 rows in set (7.94 sec)

The time difference for a small 8M row table is around six times! For a big table with many columns, it would be even larger.

Is STRAIGHT_JOIN the only solution for this case?

No! It’s also not a great solution, because if the query is complicated and involves more than two tables, it may be affected by bug fixes and improvements in the Optimizer code. Then the join order might not be optimal for new versions and updates. Therefore, you’ll need to test such queries at each upgrade, including minor ones.

So why does ANALYZE TABLE not work? Because the default number of pages it uses to calculate statistics is too small to capture the difference. You can increase the table option STATS_SAMPLE_PAGES until you find a proper value. The drawback is that the greater you set STATS_SAMPLE_PAGES, the longer it takes for ANALYZE TABLE to finish. Also, if you update a large portion of the table, you are often affected by lp:1538765. At some point, the statistics will again be inaccurate.
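
For reference, here is a sketch of that sample-pages approach (the value 2048 is arbitrary, for illustration only):

# Sample more pages for persistent statistics, then recalculate them.
# The larger the value, the longer ANALYZE TABLE takes to run.
mysql test -e "ALTER TABLE goods STATS_SAMPLE_PAGES=2048"
mysql test -e "ANALYZE TABLE goods"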

Now let’s try our manual statistics update trick

InnoDB stores its persistent statistics in the tables mysql.innodb_table_stats and mysql.innodb_index_stats:

mysql> alter table goods stats_persistent=1, stats_auto_recalc=0;
Query OK, 0 rows affected (0.11 sec)
Records: 0 Duplicates: 0 Warnings: 0
mysql> select * from mysql.innodb_table_stats where table_name='goods';
+---------------+------------+---------------------+---------+----------------------+--------------------------+
| database_name | table_name |         last_update |  n_rows | clustered_index_size | sum_of_other_index_sizes |
+---------------+------------+---------------------+---------+----------------------+--------------------------+
|          test |      goods | 2017-09-05 00:21:12 | 7765796 |                34624 |                    17600 |
+---------------+------------+---------------------+---------+----------------------+--------------------------+
1 row in set (0.00 sec)
mysql> select * from mysql.innodb_index_stats where table_name='goods';
+---------------+------------+------------+---------------------+--------------+------------+-------------+-----------------------------------+
| database_name | table_name | index_name |         last_update |    stat_name | stat_value | sample_size |                  stat_description |
+---------------+------------+------------+---------------------+--------------+------------+-------------+-----------------------------------+
|          test |      goods |    PRIMARY | 2017-09-05 00:21:12 | n_diff_pfx01 |    7765796 |          20 |                                id |
|          test |      goods |    PRIMARY | 2017-09-05 00:21:12 | n_leaf_pages |      34484 |        NULL | Number of leaf pages in the index |
|          test |      goods |    PRIMARY | 2017-09-05 00:21:12 |         size |      34624 |        NULL |      Number of pages in the index |
|          test |      goods |    shop_id | 2017-09-05 00:21:12 | n_diff_pfx01 |      14523 |          20 |                           shop_id |
|          test |      goods |    shop_id | 2017-09-05 00:21:12 | n_diff_pfx02 |     168168 |          20 |               shop_id,create_date |
|          test |      goods |    shop_id | 2017-09-05 00:21:12 | n_diff_pfx03 |    8045310 |          20 |            shop_id,create_date,id |
|          test |      goods |    shop_id | 2017-09-05 00:21:12 | n_leaf_pages |      15288 |        NULL | Number of leaf pages in the index |
|          test |      goods |    shop_id | 2017-09-05 00:21:12 |         size |      17600 |        NULL |      Number of pages in the index |
+---------------+------------+------------+---------------------+--------------+------------+-------------+-----------------------------------+
8 rows in set (0.00 sec)

And we can update these tables directly:

mysql> update mysql.innodb_table_stats set n_rows=8000000 where table_name='goods';
Query OK, 1 row affected (0.18 sec)
Rows matched: 1 Changed: 1 Warnings: 0
mysql> update mysql.innodb_index_stats set stat_value=8000000 where stat_description in('id', 'shop_id,create_date,id') and table_name='goods';
Query OK, 2 rows affected (0.08 sec)
Rows matched: 2 Changed: 2 Warnings: 0
mysql> update mysql.innodb_index_stats set stat_value=100 where stat_description in('shop_id') and table_name='goods';
Query OK, 1 row affected (0.09 sec)
Rows matched: 1 Changed: 1 Warnings: 0
mysql> update mysql.innodb_index_stats set stat_value=169861 where stat_description in('shop_id,create_date') and table_name='goods';
Query OK, 1 row affected (0.08 sec)
Rows matched: 1 Changed: 1 Warnings: 0

I took index values from earlier, as calculated by this query:

select count(distinct id) as `Cardinality for PRIMARY`, count(distinct shop_id) as `Cardinality for shop_id column in index shop_id`, count(distinct shop_id, create_date) as `Cardinality for create_date column in index shop_id` from goods;

mysql> select * from mysql.innodb_table_stats where table_name='goods';
+---------------+------------+---------------------+---------+----------------------+--------------------------+
| database_name | table_name |         last_update |  n_rows | clustered_index_size | sum_of_other_index_sizes |
+---------------+------------+---------------------+---------+----------------------+--------------------------+
|          test |      goods | 2017-09-05 00:47:45 | 8000000 |                34624 |                    17600 |
+---------------+------------+---------------------+---------+----------------------+--------------------------+
1 row in set (0.00 sec)
mysql> select * from mysql.innodb_index_stats where table_name='goods';
+---------------+------------+------------+---------------------+--------------+------------+-------------+-----------------------------------+
| database_name | table_name | index_name |         last_update |    stat_name | stat_value | sample_size |                  stat_description |
+---------------+------------+------------+---------------------+--------------+------------+-------------+-----------------------------------+
|          test |      goods |    PRIMARY | 2017-09-05 00:48:32 | n_diff_pfx01 |    8000000 |          20 |                                id |
|          test |      goods |    PRIMARY | 2017-09-05 00:21:12 | n_leaf_pages |      34484 |        NULL | Number of leaf pages in the index |
|          test |      goods |    PRIMARY | 2017-09-05 00:21:12 |         size |      34624 |        NULL |      Number of pages in the index |
|          test |      goods |    shop_id | 2017-09-05 00:49:13 | n_diff_pfx01 |        100 |          20 |                           shop_id |
|          test |      goods |    shop_id | 2017-09-05 00:49:26 | n_diff_pfx02 |     169861 |          20 |               shop_id,create_date |
|          test |      goods |    shop_id | 2017-09-05 00:48:32 | n_diff_pfx03 |    8000000 |          20 |            shop_id,create_date,id |
|          test |      goods |    shop_id | 2017-09-05 00:21:12 | n_leaf_pages |      15288 |        NULL | Number of leaf pages in the index |
|          test |      goods |    shop_id | 2017-09-05 00:21:12 |         size |      17600 |        NULL |      Number of pages in the index |
+---------------+------------+------------+---------------------+--------------+------------+-------------+-----------------------------------+
8 rows in set (0.00 sec)

Now the statistics are up to date, but not used:

mysql> explain select goods.* from goods join shops using(shop_id) where create_date BETWEEN CONVERT_TZ('2015-11-01 00:00:00', 'MET','GMT') AND CONVERT_TZ('2015-11-07 23:59:59', 'MET','GMT') and goods.shop_id in(4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,401,402,403,404,405,406,407,408,409,410,411,412,413,414,415,417,418,419,420,421,422,423,424,425,426,427,428,429,430,431,432,433,434,435,436,437,438,439,440,441,442,443,444,445,446,447,448,449,450,451,452,453,454,455,456,457,458,459,460,461,462,463,464,465,466,467,468,470,471,472,473,474,475,476,477,478,479,480,481,482,483,484,485,486);
+----+-------------+-------+------------+-------+---------------+---------+---------+--------------------+------+----------+--------------------------+
| id | select_type | table | partitions |  type | possible_keys |     key | key_len |                ref | rows | filtered |                    Extra |
+----+-------------+-------+------------+-------+---------------+---------+---------+--------------------+------+----------+--------------------------+
|  1 |      SIMPLE | shops |       NULL | index |       PRIMARY | PRIMARY |       4 |               NULL |  100 |   100.00 | Using where; Using index |
|  1 |      SIMPLE | goods |       NULL |   ref |       shop_id | shop_id |       4 | test.shops.shop_id |  534 |    11.11 |    Using index condition |
+----+-------------+-------+------------+-------+---------------+---------+---------+--------------------+------+----------+--------------------------+
2 rows in set, 1 warning (0.04 sec)

To finalize the changes, we need to run FLUSH TABLE goods: InnoDB caches the persistent statistics in memory and re-reads them from the mysql.innodb_*_stats tables only when the table is reopened.

mysql> FLUSH TABLE goods;
Query OK, 0 rows affected (0.00 sec)
mysql> explain select goods.* from goods join shops using(shop_id) where create_date BETWEEN CONVERT_TZ('2015-11-01 00:00:00', 'MET','GMT') AND CONVERT_TZ('2015-11-07 23:59:59', 'MET','GMT') and goods.shop_id in(4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,401,402,403,404,405,406,407,408,409,410,411,412,413,414,415,417,418,419,420,421,422,423,424,425,426,427,428,429,430,431,432,433,434,435,436,437,438,439,440,441,442,443,444,445,446,447,448,449,450,451,452,453,454,455,456,457,458,459,460,461,462,463,464,465,466,467,468,470,471,472,473,474,475,476,477,478,479,480,481,482,483,484,485,486);
+----+-------------+-------+------------+--------+---------------+---------+---------+--------------------+-------+----------+-----------------------+
| id | select_type | table | partitions |   type | possible_keys |     key | key_len |                ref |  rows | filtered |                 Extra |
+----+-------------+-------+------------+--------+---------------+---------+---------+--------------------+-------+----------+-----------------------+
|  1 |      SIMPLE | goods |       NULL |  range |       shop_id | shop_id |      10 |               NULL | 31997 |   100.00 | Using index condition |
|  1 |      SIMPLE | shops |       NULL | eq_ref |       PRIMARY | PRIMARY |       4 | test.goods.shop_id |     1 |   100.00 |           Using index |
+----+-------------+-------+------------+--------+---------------+---------+---------+--------------------+-------+----------+-----------------------+
2 rows in set, 1 warning (0.28 sec)
mysql> \P md5sum
PAGER set to 'md5sum'
mysql> select goods.* from goods join shops using(shop_id) where create_date BETWEEN CONVERT_TZ('2015-11-01 00:00:00', 'MET','GMT') AND CONVERT_TZ('2015-11-07 23:59:59', 'MET','GMT') and goods.shop_id in(4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,401,402,403,404,405,406,407,408,409,410,411,412,413,414,415,417,418,419,420,421,422,423,424,425,426,427,428,429,430,431,432,433,434,435,436,437,438,439,440,441,442,443,444,445,446,447,448,449,450,451,452,453,454,455,456,457,458,459,460,461,462,463,464,465,466,467,468,470,471,472,473,474,475,476,477,478,479,480,481,482,483,484,485,486);
4a94dabc4bfbfb7dd225bcb50278055b -
31896 rows in set (7.79 sec)

Now everything is good: the Optimizer picked the range access plan on the shop_id index.

But FLUSH TABLE is a blocking operation, right? Won’t it block queries and create a worse scenario than described for ANALYZE TABLE in this post?

At first glance this is true. But we can use the same trick Percona Toolkit uses: set lock_wait_timeout to 1 and call FLUSH in a loop. To demonstrate how it works, I use a similar scenario as described in the ANALYZE TABLE blog post.

First, let’s reset the statistics to ensure our FLUSH works as expected:

mysql> analyze table goods;
+------------+---------+----------+----------+
|      Table |      Op | Msg_type | Msg_text |
+------------+---------+----------+----------+
| test.goods | analyze |   status |       OK |
+------------+---------+----------+----------+
1 row in set (0.38 sec)
mysql> show indexes from goods;
+-------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| Table | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | Index_comment |
+-------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| goods |          0 |  PRIMARY |            1 |          id |         A |     7765796 |     NULL |   NULL |      |      BTREE |         |               |
| goods |          1 |  shop_id |            1 |     shop_id |         A |       14523 |     NULL |   NULL |      |      BTREE |         |               |
| goods |          1 |  shop_id |            2 | create_date |         A |      168168 |     NULL |   NULL |  YES |      BTREE |         |               |
+-------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
3 rows in set (0.00 sec)

And then update the mysql.innodb_*_stats tables manually, and check that the Optimizer still sees the outdated statistics:

mysql> update mysql.innodb_table_stats set n_rows=8000000 where table_name='goods';
Query OK, 1 row affected (0.09 sec)
Rows matched: 1 Changed: 1 Warnings: 0
mysql> update mysql.innodb_index_stats set stat_value=8000000 where stat_description in('id', 'shop_id,create_date,id') and table_name='goods';
Query OK, 2 rows affected (0.09 sec)
Rows matched: 2 Changed: 2 Warnings: 0
mysql> update mysql.innodb_index_stats set stat_value=100 where stat_description in('shop_id') and table_name='goods';
Query OK, 1 row affected (0.11 sec)
Rows matched: 1 Changed: 1 Warnings: 0
mysql> update mysql.innodb_index_stats set stat_value=169861 where stat_description in('shop_id,create_date') and table_name='goods';
Query OK, 1 row affected (0.10 sec)
Rows matched: 1 Changed: 1 Warnings: 0
mysql> show indexes from goods;
+-------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| Table | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | Index_comment |
+-------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| goods |          0 |  PRIMARY |            1 |          id |         A |     7765796 |     NULL |   NULL |      |      BTREE |         |               |
| goods |          1 |  shop_id |            1 |     shop_id |         A |       14523 |     NULL |   NULL |      |      BTREE |         |               |
| goods |          1 |  shop_id |            2 | create_date |         A |      168168 |     NULL |   NULL |  YES |      BTREE |         |               |
+-------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
3 rows in set (0.00 sec)

Now let’s start a long-running query in one session that blocks our FLUSH TABLE command:

mysql> select sleep(1) from goods limit 1000, 300;

And let’s run FLUSH TABLE in a loop:

sveta@Thinkie:~/build/ps-5.7/mysql-test$ until (`mysqlmtr -P13001 -e "set lock_wait_timeout=1; flush table goods;" test`); do sleep 1; done
ERROR 1205 (HY000) at line 1: Lock wait timeout exceeded; try restarting transaction
ERROR 1205 (HY000) at line 1: Lock wait timeout exceeded; try restarting transaction
ERROR 1205 (HY000) at line 1: Lock wait timeout exceeded; try restarting transaction
...

Now let’s ensure we can access the table:

mysql> select * from goods order by id limit 10;
^C

We cannot! We cannot even connect to the database where the table is stored:

sveta@Thinkie:~/build/ps-5.7/mysql-test$ mysqlmtr -P13001 test
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
^C

The reason for this is that although each FLUSH TABLE command was killed once it exceeded the metadata lock wait timeout, while it waited it requested an exclusive lock on the table, and incoming queries queued up behind that pending request.
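If you want to watch this lock queue directly, MySQL 5.7 exposes it through the performance_schema.metadata_locks table. Below is a minimal sketch, assuming the wait/lock/metadata/sql/mdl instrument is enabled (it is off by default):

-- enable metadata lock instrumentation (off by default in 5.7)
update performance_schema.setup_instruments
set enabled='YES', timed='YES'
where name='wait/lock/metadata/sql/mdl';

-- show who holds and who waits for locks on our table
select object_schema, object_name, lock_type, lock_status, owner_thread_id
from performance_schema.metadata_locks
where object_schema='test' and object_name='goods';

While the loop runs, you should see the long SELECT holding a GRANTED shared lock on goods, the FLUSH attempt with a PENDING exclusive lock, and new queries stuck as PENDING behind it.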

But we can enclose FLUSH TABLE in LOCK TABLE ... WRITE; ... UNLOCK TABLES; operations. In this case, the LOCK TABLE command is blocked until all queries release their metadata locks on the table. Then it locks the table exclusively, FLUSH TABLE runs, and the script immediately unlocks the table afterwards. Since closing the session causes an implicit unlock, I used a PHP one-liner to have everything in a single session:

$ php -r '
> $link = new mysqli("127.0.0.1", "root", "", "test", 13001);
> $link->query("set lock_wait_timeout=1");
> while(!$link->query("lock table goods write")) {sleep(1);}
> $link->query("flush table goods");
> $link->query("unlock tables");'
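
As a design note, the only hard requirement is that LOCK TABLE, FLUSH TABLE and UNLOCK TABLES run in a single session, so any client that keeps one connection open will do. For example, the retry loop can be written with the mysql client alone, because a multi-statement -e batch runs over a single connection and, by default, aborts as a whole when LOCK TABLE hits the lock_wait_timeout. A minimal sketch, reusing the mysqlmtr wrapper and port from the examples above:

until mysqlmtr -P13001 -e "set lock_wait_timeout=1; lock table goods write; flush table goods; unlock tables;" test; do sleep 1; done

When LOCK TABLE finally succeeds, FLUSH TABLE and UNLOCK TABLES execute in the same session, so no other query can sneak in between acquiring the lock and flushing the table.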

We can confirm that a parallel session can still access the table while the script waits:

mysql> select * from goods order by id limit 10;
+----+---------+----------------------------------+---------------------+
| id | shop_id |                             name |         create_date |
+----+---------+----------------------------------+---------------------+
|  1 |      58 | 5K0z2sHTgjWKKdryTaniQdZmjGjA9wls | 2015-09-19 00:00:00 |
|  2 |      17 | xNll02kgUTWAFURj6j5lL1zXAubG0THG | 2013-10-19 00:00:00 |
|  3 |      30 | clHX7uQopKmoTtEFH5LYBgQncsxRtTIB | 2017-08-01 00:00:00 |
|  4 |      93 | bAzoQTN98AmFjPOZs7PGfbiGfaf9Ye4b | 2013-02-24 00:00:00 |
|  5 |      20 | rQuTO5GHjP60kDbN6WoPpE2S8TtMbrVL | 2017-08-05 00:00:00 |
|  6 |      37 | WxqxA5tBHxikaKbuvbIF84H9QuaCnqQ3 | 2013-10-18 00:00:00 |
|  7 |      13 | DoYnFpQZSVV8UswBsWklgGBUc8zW9mVW | 2017-02-06 00:00:00 |
|  8 |      81 | dkNxMQyZNZuTrONEX4gxRLa0DOedatIs | 2015-07-05 00:00:00 |
|  9 |      12 | Z0t2uQ9itexpPf01KUpa7qBWlT5fBmXR | 2014-06-25 00:00:00 |
| 10 |      90 | 6urABBQyaUVVyxljvd11D3kUxbdDRPRV | 2013-10-23 00:00:00 |
+----+---------+----------------------------------+---------------------+
10 rows in set (0.00 sec)
mysql> update goods set name='test' where id=100;
Query OK, 1 row affected (0.08 sec)
Rows matched: 1 Changed: 1 Warnings: 0

After the PHP script finishes its job, the statistics are corrected:

mysql> show index from goods;
+-------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| Table | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | Index_comment |
+-------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| goods |          0 |  PRIMARY |            1 |          id |         A |     8000000 |     NULL |   NULL |      |      BTREE |         |               |
| goods |          1 |  shop_id |            1 |     shop_id |         A |         100 |     NULL |   NULL |      |      BTREE |         |               |
| goods |          1 |  shop_id |            2 | create_date |         A |      169861 |     NULL |   NULL |  YES |      BTREE |         |               |
+-------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
3 rows in set (0.00 sec)

Conclusion

We can manually update persistent InnoDB statistics to fix Optimizer plans for our queries, with almost no impact on a live server.

May
22
2017
--

Webinar May 23, 2017: MongoDB Monitoring and Performance for the Savvy DBA

Join Percona’s Senior Technical Services Engineer Bimal Kharel on Tuesday, May 23, 2017, as he presents a webinar on MongoDB monitoring called How to Help Your DBA’s Sleep Better at Night at 10:00 am PDT / 1:00 pm EDT (UTC-7).

Are you trying to stay on top of your database before things turn ugly? Between metrics for throughput, database performance, resource utilization, resource saturation, errors (asserts) and many others, how do you know which one needs to be looked at NOW (and which can wait)?

Both DBAs and system admins must stay on top of the systems they manage. But filtering between metrics that need immediate attention and those that should be watched over time is challenging. In this webinar, Bimal narrows down the list of metrics that help you decide whether the on-call DBA gets their recommended eight hours of shuteye, or gets to run on caffeine with no sleep.

Bimal also discusses which graphs relate to each other, with examples from Percona’s Monitoring and Management (PMM) tool, to help you understand how things in MongoDB can impact other areas.

Please register for the webinar here.

Bimal Kharel, Senior Technical Services Engineer, Percona

Bimal is a MongoDB support engineer at Percona. Before Percona, he worked as a MongoDB DBA at EA and Charles Schwab. He has held various roles throughout his career, from graphics to web development to systems administration. MongoDB was the first database Bimal got into (he used MySQL for some websites, but never other relational databases).

May
15
2017
--

After acquisition by Juniper, cloud optimization service AppFormix adds support for VMware

Last December, Juniper acquired the cloud operations management and optimization startup AppFormix and said that it planned to integrate it with its own Contrail product line. That integration is moving swiftly, but under the aegis of Juniper, it’s also adding new features to its platform that are meant to better support Juniper’s customers.
