Sep
06
2017
--

Upcoming Webinar Thursday, September 7: Using PMM to Troubleshoot MySQL Performance Issues

Troubleshooting MySQL Performance

Join Percona’s Product Manager, Michael Coburn, as he presents Using Percona Monitoring and Management to Troubleshoot MySQL Performance Issues on Thursday, September 7, 2017, at 10:00 am PDT / 1:00 pm EDT (UTC-7).

 

Successful applications often become limited by MySQL performance. Michael will show you how to get great MySQL performance using Percona Monitoring and Management (PMM). There will be a demonstration of how to leverage the combination of the query analytics and metrics monitor when troubleshooting MySQL performance issues. We’ll review the essential components of PMM, and use some of the most common database slowness cases as examples of where to look and what to do.

By the end of the webinar you will have a better understanding of:

  • Query metrics, including bytes sent, lock time, rows sent, and more
  • Metrics monitoring
  • How to identify MySQL performance issues
  • Point-in-time visibility and historical trending of database performance

Register for the webinar here.

Michael Coburn, Product Manager

Michael joined Percona as a Consultant in 2012 after having worked with high volume stock photography websites and email service provider platforms. With a foundation in systems administration, Michael enjoys working with SAN technologies and high availability solutions. A Canadian, Michael currently lives in the Nicoya, Costa Rica area with his wife, two children, and two dogs.
Aug
30
2017
--

Nested Data Structures in ClickHouse

Nested Data Structures

In this blog post, we’ll look at nested data structures in ClickHouse and how this can be used with PMM to look at queries.

Nested structures are not common in Relational Database Management Systems. Usually, it’s just flat tables. Sometimes it would be convenient to store unstructured information in structured databases.

We are working to adapt ClickHouse as long-term storage for Percona Monitoring and Management (PMM), and in particular to store detailed information about queries. One of the problems we are trying to solve is counting the different errors that cause a particular query to fail.

For example, for date 2017-08-17 the query:

"SELECT foo FROM bar WHERE id=?"

was executed 1,000 times. It failed 25 times with error code “1212” and eight times with error code “1250”. Of course, the traditional way to store this in a relational database would be a table (Date, QueryID, ErrorCode, ErrorCnt) and then a JOIN against it. Unfortunately, columnar databases don’t perform well with multiple joins, and often the recommendation is to have de-normalized tables.
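
To make that concrete, a sketch of such a traditional layout could look like the following; the table and column names are made up for illustration and are not taken from PMM:

CREATE TABLE QueryErrors
(
    Period    DATE,
    QueryID   INT,
    ErrorCode VARCHAR(16),
    ErrorCnt  INT
);

-- Counting errors per query then requires joining back to the queries table:
SELECT q.QueryID, e.ErrorCode, SUM(e.ErrorCnt) AS total_errors
FROM queries AS q
JOIN QueryErrors AS e
    ON q.QueryID = e.QueryID AND q.Period = e.Period
GROUP BY q.QueryID, e.ErrorCode;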

We can create a column for each possible ErrorCode, but this is not an optimal solution. There could be thousands of them, and most of the time they would be empty.

In this case, ClickHouse proposes Nested data structures. For our case, these can be defined as:

CREATE TABLE queries
(
    Period Date,
    QueryID UInt32,
    Fingerprint String,
    Errors Nested
    (
        ErrorCode String,
        ErrorCnt UInt32
    )
)Engine=MergeTree(Period,QueryID,8192);

This solution has obvious questions: How do we insert data into this table? How do we extract it?

Let’s start with INSERT. Insert can look like:

INSERT INTO queries VALUES ('2017-08-17',5,'SELECT foo FROM bar WHERE id=?',['1220','1230','1212'],[5,6,2])

which means that on 2017-08-17 this query produced error 1220 five times, error 1230 six times, and error 1212 twice.

Now, during a different date, it might produce different errors:

INSERT INTO queries VALUES ('2017-08-18',5,'SELECT foo FROM bar WHERE id=?',['1220','1240','1260'],[3,16,12])

Let’s take a look at ways to SELECT data. A very basic SELECT:

SELECT *
FROM queries
┌─────Period─┬─QueryID─┬─Fingerprint─┬─Errors.ErrorCode───────┬─Errors.ErrorCnt─┐
│ 2017-08-17 │       5 │ SELECT foo  │ ['1220','1230','1212'] │ [5,6,2]         │
│ 2017-08-18 │       5 │ SELECT foo  │ ['1220','1240','1260'] │ [3,16,12]       │
└────────────┴─────────┴─────────────┴────────────────────────┴─────────────────┘

If we want to use a more familiar tabular output, we can use the ARRAY JOIN extension:

SELECT *
FROM queries
ARRAY JOIN Errors
┌─────Period─┬─QueryID─┬─Fingerprint─┬─Errors.ErrorCode─┬─Errors.ErrorCnt─┐
│ 2017-08-17 │       5 │ SELECT foo  │ 1220             │               5 │
│ 2017-08-17 │       5 │ SELECT foo  │ 1230             │               6 │
│ 2017-08-17 │       5 │ SELECT foo  │ 1212             │               2 │
│ 2017-08-18 │       5 │ SELECT foo  │ 1220             │               3 │
│ 2017-08-18 │       5 │ SELECT foo  │ 1240             │              16 │
│ 2017-08-18 │       5 │ SELECT foo  │ 1260             │              12 │
└────────────┴─────────┴─────────────┴──────────────────┴─────────────────┘

However, usually we want to see the aggregation over multiple periods, which can be done with traditional aggregation functions:

SELECT
    QueryID,
    Errors.ErrorCode,
    SUM(Errors.ErrorCnt)
FROM queries
ARRAY JOIN Errors
GROUP BY
    QueryID,
    Errors.ErrorCode
┌─QueryID─┬─Errors.ErrorCode─┬─SUM(Errors.ErrorCnt)─┐
│       5 │ 1212             │                    2 │
│       5 │ 1230             │                    6 │
│       5 │ 1260             │                   12 │
│       5 │ 1240             │                   16 │
│       5 │ 1220             │                    8 │
└─────────┴──────────────────┴──────────────────────┘

If we want to get really creative and return only one row per QueryID, we can do that as well:

SELECT
    QueryID,
    groupArray((ecode, cnt))
FROM
(
    SELECT
        QueryID,
        ecode,
        sum(ecnt) AS cnt
    FROM queries
    ARRAY JOIN
        Errors.ErrorCode AS ecode,
        Errors.ErrorCnt AS ecnt
    GROUP BY
        QueryID,
        ecode
)
GROUP BY QueryID
┌─QueryID─┬─groupArray(tuple(ecode, cnt))───────────────────────────────┐
│       5 │ [('1230',6),('1212',2),('1260',12),('1220',8),('1240',16)] │
└─────────┴─────────────────────────────────────────────────────────────┘
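
One more pattern worth a quick sketch (not part of the original example set): because ARRAY JOIN is applied before WHERE, you can filter on the unrolled error code to follow a single error across dates:

SELECT
    Period,
    QueryID,
    Errors.ErrorCnt AS cnt
FROM queries
ARRAY JOIN Errors
WHERE Errors.ErrorCode = '1220'
ORDER BY Period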

Conclusion

ClickHouse provides flexible ways to store data in a less structured manner and a variety of functions to extract and aggregate it, despite being a columnar database.

Happy data warehousing!

Aug
23
2017
--

Percona Monitoring and Management 1.2.2 is Now Available

Percona Monitoring and Management (PMM)

Percona announces the release of Percona Monitoring and Management 1.2.2 on August 23, 2017.

For install and upgrade instructions, see Deploying Percona Monitoring and Management.

This release contains bug fixes related to performance and introduces various improvements. It also contains an updated version of Grafana.

Changes in PMM Server

We introduced the following changes in PMM Server 1.2.2:

Bug fixes

  • PMM-927: The error “Cannot read property ‘hasOwnProperty’ of undefined” was displayed on the QAN page for MongoDB.

    After enabling monitoring and generating data for MongoDB, the PMM client showed the following error message on the QAN page: “Cannot read property ‘hasOwnProperty’ of undefined”. This bug is now fixed.

  • PMM-949: Percona Server was not detected properly, and the log_slow_* variables were not correctly detected.

  • PMM-1081: Performance Schema Monitor treated queries that didn’t show up in every snapshot as new queries, reporting an incorrect count between snapshots.

  • PMM-1272: MongoDB: the query abstract was empty. This bug is now fixed.

  • PMM-1277: The QPS graph used an incorrect Prometheus query. This bug is now fixed.

  • PMM-1279: The MongoDB summary did not work in QAN2 if MongoDB authentication was enabled. This bug is now fixed.

  • PMM-1284: Dashboards pointed to QAN2 instead of QAN. This bug is now fixed.

Improvements

  • PMM-586: The wsrep_evs_repl_latency parameter is now monitored in Grafana dashboards

  • PMM-624: The Grafana User ID remains the same in the pmm-server docker image

  • PMM-1209: OpenStack support is now enabled during the OVA image creation

  • PMM-1211: It is now possible to configure a static IP for an OVA image

    The root password can only be set from the console. If the root password is not changed from the default, a warning message appears on the console asking the user to change the root password on the root account’s first console login. Web/SSH users can neither use the root account password nor detect whether the root password is still set to the default value.

  • PMM-1221: Grafana updated to version 4.4.3

About Percona Monitoring and Management

Percona Monitoring and Management (PMM) is an open-source platform for managing and monitoring MySQL and MongoDB performance. Percona developed it in collaboration with experts in the field of managed database services, support and consulting.

PMM is a free and open-source solution that you can run in your own environment for maximum security and reliability. It provides thorough time-based analysis for MySQL and MongoDB servers to ensure that your data works as efficiently as possible.

A live demo of PMM is available at pmmdemo.percona.com.

We’re always happy to help! Please provide your feedback and questions on the PMM forum.

If you would like to report a bug or submit a feature request, please use the PMM project in JIRA.

Aug
16
2017
--

Percona Monitoring and Management 1.2.1 is Now Available

Percona Monitoring and Management (PMM)

Percona Monitoring and Management (PMM)Percona announces the release of Percona Monitoring and Management 1.2.1 on August 16, 2017.

For install and upgrade instructions, see Deploying Percona Monitoring and Management.

This hotfix release improves memory consumption.

Changes in PMM Server

We’ve introduced the following changes in PMM Server 1.2.1:

Bug fixes

  • PMM-1280: PMM Server was affected by NGINX CVE-2017-7529. An integer overflow exploit could result in a DoS (denial of service) for the affected nginx service when the max_ranges directive was not set. This problem is solved by setting the max_ranges directive to 1 in the NGINX configuration.

Improvements

  • PMM-1232: Update the default value of the METRICS_MEMORY configuration setting. Previous versions of PMM Server used a METRICS_MEMORY value that allowed Prometheus to use up to 768 MB of memory. PMM Server 1.2.0 switched to the storage.local.target-heap-size setting, whose default value is 256 MB. Unintentionally, this reduced the amount of memory available to Prometheus and hurt its performance. To restore Prometheus performance, the default storage.local.target-heap-size has been set to 768 MB.
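
As an illustration (not part of the release notes), this setting is typically passed as an environment variable when starting the PMM Server container. The value is expressed in kilobytes according to the PMM FAQ of the time, so 786432 corresponds to 768 MB; the container name, volumes, and image tag below are the conventional ones and may differ in your setup:

# Start PMM Server with an explicit Prometheus memory limit (value in KB).
docker run -d -p 80:80 \
  --volumes-from pmm-data \
  --name pmm-server \
  --restart always \
  -e METRICS_MEMORY=786432 \
  percona/pmm-server:1.2.2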

About Percona Monitoring and Management

Percona Monitoring and Management (PMM) is an open-source platform for managing and monitoring MySQL and MongoDB performance. Percona developed it in collaboration with experts in the field of managed database services, support and consulting.

PMM is a free and open-source solution that you can run in your own environment for maximum security and reliability. It provides thorough time-based analysis for MySQL and MongoDB servers to ensure that your data works as efficiently as possible.

A live demo of PMM is available at pmmdemo.percona.com.

We’re always happy to help! Please provide your feedback and questions on the PMM forum.

If you would like to report a bug or submit a feature request, please use the PMM project in JIRA.

Aug
04
2017
--

Saturation Metrics in PMM 1.2.0

Saturation Metrics

One of the new graphs added to Percona Monitoring and Management (PMM) is saturation metrics. This blog post explains how to use the information provided by these graphs.

You might have heard about Brendan Gregg’s USE Method  (Utilization-Saturation-Errors) as a way to analyze the performance of any system. Our goal in PMM is to support this method fully over time, and these graphs take us one step forward.

When it comes to utilization, there are many graphs available in PMM. There is the CPU Usage graph:

Saturation Metrics 1

There is also Disk IO Utilization:

Saturation Metrics 2

And there is Network Traffic:

Saturation Metrics 3

If you would like to look at saturation-type metrics, there is the classical Load Average graph:

Saturation Metrics 4

While Load Average is helpful for understanding system saturation in general, it does not really distinguish whether it is the CPU or the disk that is saturated. Load Average, as the name says, is already averaged, so we can’t observe short saturation spikes with it; it is averaged over at least one minute. Finally, Load Average does not take the number of CPU cores/threads into account. Suppose I have a CPU-bound Load Average of 16, for example. That is quite a load, and it will cause high saturation and queueing if you have two CPU threads. But if you have 64 threads, then 16 is a trivial load with no saturation at all.

Let’s take a look at the Saturation Metrics graph:

Saturation Metrics 5

It provides two metrics: one showing the CPU load and the other showing the IO load. These values roughly correspond to the “r” and “b” columns in vmstat output:

Saturation Metrics 6

These are sampled every second and then averaged over the reporting interval.

We also normalize the CPU load by dividing the raw number of runnable processes by the number of threads available. “Rocky” has 56 threads, which is why the normalized CPU load is about one even though the number of runnable processes shown by vmstat is around 50.
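
As a rough, back-of-the-envelope illustration of that normalization (this is not how PMM computes the metric internally), you can compare the vmstat runnable queue to the CPU thread count yourself:

# Take the second vmstat sample (the first line reports averages since boot),
# grab the runnable-queue length ("r", the first column), and divide it by
# the number of CPU threads reported by nproc.
runnable=$(vmstat 1 2 | tail -1 | awk '{print $1}')
threads=$(nproc)
echo "normalized CPU load: $(echo "scale=2; $runnable / $threads" | bc)"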

We do not normalize the IO load, as systems can have multiple IO devices, and the number of requests they can handle in parallel is largely unknown. If you want to understand the performance of a specific IO device, you should check out the Disk Performance Dashboard.

Testing Saturation Metrics in Practice

Let’s see if saturation graphs indeed show us when CPU saturation is the issue. I will use a sysbench CPU test for illustration, run as:

sysbench cpu  --cpu-max-prime=100000 --threads=1 --time=60 run

This uses the specified number of threads to execute compute jobs, each of which computes the given number of prime numbers. If we have enough CPU resources available, with no saturation, the latency of executing such requests should be about the same. When we overload the system so that there are not enough CPU execution units to process everything in parallel, the average latency should increase.

root@ts140i:/mnt/data# sysbench cpu  --cpu-max-prime=100000 --threads=1 --time=300 run
sysbench 1.0.7 (using bundled LuaJIT 2.1.0-beta2)
Running the test with following options:
Number of threads: 1
Initializing random number generator from current time
Prime numbers limit: 100000
Initializing worker threads...
Threads started!
General statistics:
   total time:                          300.0234s
   total number of events:              12784
Latency (ms):
        min:                                 23.39
        avg:                                 23.47
        max:                                 28.07
        95th percentile:                     23.52
        sum:                             300018.06

As we can see, with one thread working, the average time it takes to handle a single request is 23ms. Obviously, there is no saturation happening in this case:

Saturation Metrics 7

“Ts140i” has four CPU cores, and as you can see, the Normalized CPU Load stays below one. You may wonder why it isn’t closer to 0.25 in this case, with one active thread and four cores available. The reason is that at exactly the time the metrics are captured, there often happen to be an additional two to three threads active to facilitate the process. They are only active for a few milliseconds at a time, so they do not produce much load, but they do tend to skew the number a little bit.

Let’s now run with four threads. The number of threads matches the number of CPU cores available (and they are true cores in this case, with no hyperthreading). In this case, we shouldn’t expect much of an increase in event processing time.

root@ts140i:/mnt/data# sysbench cpu  --cpu-max-prime=100000 --threads=4 --time=300 run
sysbench 1.0.7 (using bundled LuaJIT 2.1.0-beta2)
Running the test with following options:
Number of threads: 4
Initializing random number generator from current time
Prime numbers limit: 100000
Initializing worker threads...
Threads started!
General statistics:
   total time:                          300.0215s
   total number of events:              48285
Latency (ms):
        min:                                 24.19
        avg:                                 24.85
        max:                                 43.61
        95th percentile:                     24.83
        sum:                            1200033.93

As you can see, the test confirms the theory: average latency increased by only about 6%, with the Normalized CPU Load in the saturation metrics mostly hovering between one and two:

Saturation Metrics 8

Let’s now do the test with 16 threads, which is four times more than the available CPU cores. We should see the latency increase dramatically due to CPU overload (or saturation). The same will happen to your CPU-bound MySQL queries if you have more concurrency than CPUs available.

root@ts140i:/mnt/data# sysbench cpu  --cpu-max-prime=100000 --threads=16 --time=300 run
sysbench 1.0.7 (using bundled LuaJIT 2.1.0-beta2)
Running the test with following options:
Number of threads: 16
Initializing random number generator from current time
Prime numbers limit: 100000
Initializing worker threads...
Threads started!
General statistics:
   total time:                          300.0570s
   total number of events:              48269
Latency (ms):
        min:                                 27.83
        avg:                                 99.44
        max:                                189.05
        95th percentile:                    121.08
        sum:                            4799856.52

We can see it takes about four times longer to process each request due to CPU overload and queueing. Let’s see what saturation metrics tell us:

Saturation Metrics 9

As you can see, the Normalized CPU Load floats between four and five on the graph, consistent with the saturation we’re observing.

You may ask whether the CPU utilization graph helps us here. Not really. You will see 100% CPU usage for both the four-thread and the 16-thread runs, while request latencies are completely different.

Summary

As we can see from our test, Normalized CPU Load is very helpful for understanding when the CPU is overloaded. An overloaded CPU causes response times to increase and performance to degrade. Furthermore, you can use it to (roughly) gauge how serious the overload is. As a rule of thumb, a Normalized CPU Load above two indicates your CPUs are overloaded.

Aug
01
2017
--

Group Replication: the Sweet and the Sour

Group Replication

In this blog, we’ll look at Group Replication and how it deals with flow control (FC) and replication lag.

Overview

In the last few months, we have had two main actors in the MySQL ecosystem: ProxySQL and Group Replication (with its evolution into InnoDB Cluster).

While I have extensively covered the first, my last serious work on Group Replication dates back to a lab version from years past.

Given that Oracle declared it GA, and that Percona decided to provide some level of Group Replication support, I decided it was time to take another look at it.

We’ve already seen plenty of coverage of Group Replication topics: articles about Group Replication and performance, Group Replication and basic functionality (or the lack of it, like automatic node provisioning), Group Replication and ProxySQL, and so on.

But one question kept coming up over and over in my mind. If Group Replication and InnoDB Cluster have to work as an alternative to other (virtually) synchronous replication mechanisms, what changes do our customers need to consider if they want to move from one to the other?

Solutions using Galera (like Percona XtraDB Cluster) must take into account a central concept: clusters are data-centric. What matters is the data and the data state. Both must be the same on each node at any given time (commit/apply). To guarantee this, Percona XtraDB Cluster (and other solutions) use a set of data validation and Flow Control processes that work to ensure a consistent cluster data set on each node.

The upshot of this principle is that an application can query ANY node in a Percona XtraDB Cluster and get the same data, or write to ANY node and know that the data is visible everywhere in the cluster at (virtually) the same time.

Last but not least, inconsistent nodes should be excluded and either rebuilt or fixed before rejoining the cluster.

If you think about it, this is very useful. Guaranteeing consistency across nodes allows you to transparently split write/read operations, failover from one node to another with very few issues, and more.

When I conceived of this blog on Group Replication (or InnoDB Cluster), I put myself in the customer’s shoes. I asked myself: “Aside from all the other things we know (see above), what is the real impact of moving from Percona XtraDB Cluster to Group Replication/InnoDB Cluster for my application? Since Group Replication still (basically) uses replication with binlogs and relay logs, is there also a Flow Control mechanism?” An alarm bell started to ring in my mind.

My answer is: “Let’s do a proof of concept (PoC), and see what is really going on.”

The POC

I set up a simple set of servers using Group Replication, with a very basic application performing writes on a single writer node and (eventually) reads on the other nodes.

You can find the schema definition here. Mainly, I used the four tables from my windmills test suite; nothing special or specifically designed for Group Replication. I’ve used this test a lot for Percona XtraDB Cluster in the past, so it was a perfect fit.

Test Definition

The application will do very simple work, and I wanted to test four main cases:

  1. One thread performing one insert at each transaction
  2. One thread performing 50 batched inserts at each transaction
  3. Eight threads performing one insert at each transaction
  4. Eight threads performing 50 batched inserts at each transaction

As you can see, a pretty simple set of operations. Then I decided to test it using the following four conditions on the servers (a configuration sketch follows the list):

  1. One slave worker FC as default
  2. One slave worker FC set to 25
  3. Eight slave workers FC as default
  4. Eight slave workers FC set to 25
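
As a rough sketch of how these four conditions can be expressed on a MySQL 5.7 Group Replication node (the variable names below are the stock MySQL 5.7 ones; the exact values I applied per test case are not spelled out in this post, and changing the worker count requires restarting the applier):

-- Applier parallelism: single worker (default) vs. eight workers
SET GLOBAL slave_parallel_type    = 'LOGICAL_CLOCK';
SET GLOBAL slave_parallel_workers = 8;   -- 0/1 for the single-worker runs

-- Flow control threshold: default (25000) vs. 25 pending transactions
SET GLOBAL group_replication_flow_control_applier_threshold   = 25;
SET GLOBAL group_replication_flow_control_certifier_threshold = 25;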

Again nothing weird or strange from my point of view. I used four nodes:

  1. Gr1 Writer
  2. Gr2 Reader
  3. Gr3 Reader with minimal network latency (~10ms)
  4. Gr4 Reader with minimal network latency (~10ms)

Finally, I had to be sure I measured the lag in a way that allowed me to reference it consistently on all nodes. 

I think we can safely say that the incoming GTID (Received_transaction_set from replication_connection_status) is definitely the last change applied on the master that the slave node knows about. More recent changes could have occurred, but network delay can prevent them from being “received.” The other point of reference is GTID_EXECUTED, which refers to the latest GTID processed on the node itself.

The closest query that can track the distance will be:

select @last_exec:=SUBSTRING_INDEX(SUBSTRING_INDEX(SUBSTRING_INDEX( @@global.GTID_EXECUTED,':',-2),':',1),'-',-1) last_executed;
select @last_rec:=SUBSTRING_INDEX(SUBSTRING_INDEX(SUBSTRING_INDEX( Received_transaction_set,':',-2),':',1),'-',-1) last_received FROM performance_schema.replication_connection_status WHERE Channel_name = 'group_replication_applier';
select (@last_rec - @last_exec) as real_lag;

Or in the case of a single worker:

select @last_exec:=SUBSTRING_INDEX(SUBSTRING_INDEX( @@global.GTID_EXECUTED,':',-1),'-',-1) last_executed;
select @last_rec:=SUBSTRING_INDEX(SUBSTRING_INDEX(Received_transaction_set,':',-1),'-',-1) last_received FROM performance_schema.replication_connection_status WHERE Channel_name = 'group_replication_applier';
select (@last_rec - @last_exec) as real_lag;

The result will be something like this:

+---------------+
| last_executed |
+---------------+
| 23607         |
+---------------+
+---------------+
| last_received |
+---------------+
| 23607         |
+---------------+
+----------+
| real_lag |
+----------+
|        0 |
+----------+
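
To watch that number evolve during a run, a small shell loop around the single-worker query is enough. This is just a sketch; the host name, user, and password are placeholders:

# Sample the replication lag once per second on one reader node.
while true; do
  mysql -h gr2 -u monitor -p"$PASS" -N -e "
    SELECT @last_exec := SUBSTRING_INDEX(SUBSTRING_INDEX(@@global.GTID_EXECUTED, ':', -1), '-', -1);
    SELECT @last_rec := SUBSTRING_INDEX(SUBSTRING_INDEX(Received_transaction_set, ':', -1), '-', -1)
      FROM performance_schema.replication_connection_status
     WHERE Channel_name = 'group_replication_applier';
    SELECT (@last_rec - @last_exec) AS real_lag;" | tail -n 1
  sleep 1
done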

The whole set of tests can be found here, with all the commands you need to run the application (you can find it here) and replicate the tests. I will focus on the results (otherwise this blog post would be far too long), but I invite you to see the details.

The Results

Efficiency on Writer by Execution Time and Rows/Sec

Using the raw data from the tests (Excel spreadsheet available here), I was interested in identifying if and how the Writer is affected by the use of Group Replication and flow control.

Reviewing the graph, we can see that the Writer has a linear increase in the execution time (when using default flow control) that matches the increase in the load. Nothing there is concerning, and all-in-all we see what is expected if the load is light. The volume of rows at the end justifies the execution time.

It’s a different scenario if we use flow control. The execution time increases significantly in both cases (single worker/multiple workers). In the worst case (eight threads, 50 inserts batch) it becomes four times higher than the same load without flow control.

What happens to the inserted rows? In the application, I traced the rows inserted/sec. It is easy to see what is going on there:

We can see that the Writer with flow control activated inserts less than a third of the rows it processes without flow control. 

We can definitely say that flow control has a significant impact on the Writer performance. To clarify, let’s look at this graph:

Without flow control, the Writer processes a high volume of rows in a limited amount of time (results from the test of eight workers, eight threads, 50 insert batch). With flow control, the situation changes drastically. The Writer takes a long time processing a significantly smaller number of rows/sec. In short, performance drops significantly.

But hey, I’m OK with that if it means having a consistent data set across all nodes. In the end, Percona XtraDB Cluster and similar solutions pay a significant performance price to honor the data-centric principle.

Let’s see what happens on the other nodes.

Entries Lag

Well, this scenario is not so good:

When NOT using flow control, the nodes lag behind the writer significantly. Remember that by default, flow control in Group Replication kicks in at 25000 entries (that is 25K entries!).

What happens is that as soon as I put some salt (see load) on the Writer, the slave nodes start to lag. When using the default single worker, this has a significant impact. With multiple workers, the lag happens mainly on the node(s) with minimal (10ms) network latency. The sad thing is that the lag does not really go down compared with the single-threaded worker, indicating that a minimal latency of 10ms is enough to affect replication.

Time to activate the flow control and have no lag:

Unfortunately, this is not the case. As we can see, the lag of the single worker remains high for Gr2 (154 entries). With multiple workers, the Gr3/Gr4 nodes perform much better, with significantly less lag (but still high, at ~1K entries).

It is important to remember that at this point the Writer is processing one-third or less of the rows it is normally able to. It is also important to note that I set the flow control entry limit to 25, and the Gr3 (and Gr4) nodes are still lagging more than 1K entries behind.

To clarify, let’s check the two graphs below:

Using the Writer (Master) as a baseline in entry #N, without flow control, the nodes (slaves) using Group Replication start to significantly lag behind the writer (even with a light load).

The distance in this PoC ranged from very minimal (58 entries) up to much higher values under load (3849 entries):

Using flow control, the Writer (Master) diverges less, as expected. But even with its significant drop in performance (to one-third or less), the nodes still lag. The worst case is up to 1363 entries.

I need to underline here that we have no further way (that I am aware of, anyway) to tune the lag and prevent it from happening.

This means an application cannot transparently split writes/reads and expect consistency. The gap is too high.

A Graph That Tells Us a Story

I used Percona Monitoring and Management (PMM) to keep an eye on the nodes while doing the tests. One of the graphs really showed me that Group Replication still has some “limits” as the replication mechanism for a cluster:

This graph shows the MySQL queries executed on all four nodes, in the test using eight threads with 50-insert batches and flow control.

As you can see, Gr1 (the Writer) is the first to take off, followed by Gr2. Nodes Gr3 and Gr4 need a bit longer, given the binlog transmission (and the 10ms delay). Once the data is there, they match (inconsistently) the Gr2 node. This is an effect of flow control asking the Master to slow down. But as previously seen, the nodes never match the Writer. When the load test is over, the nodes continue to process the queue for an additional ~130 seconds. Considering that the whole load takes 420 seconds on the Writer, this means that roughly one-third of the total Writer time is spent syncing the slaves AFTERWARDS.

The above graph shows the same test without flow control. It is interesting to see how the Writer moved above 300 queries/sec, while Gr2 stayed around 200 and Gr3/4 stayed far below. The Writer was able to process the whole load in ~120 seconds instead of 420, while Gr3/4 continued to process the load for an additional ~360 seconds.

This means that without flow control set, the nodes lag around 360 seconds behind the Master. With flow control set to 25, they lag 130 seconds.

This is a significant gap.

Conclusions

Going back to the reason I started this PoC: it looks like my application(s) are not a good fit for Group Replication, given that I have set up Percona XtraDB Cluster to scale out reads and to efficiently move my writer to another node when I need to.

Group Replication is still based on asynchronous replication (as my colleague Kenny said). It makes sense in many other cases, but it doesn’t compare to solutions based on virtually synchronous replication. It still requires a lot of refinement.

On the other hand, for applications that can afford to have a significant gap between writers and readers it is probably fine. But … doesn’t standard replication already cover that? 

Reviewing the Oracle documentation (https://dev.mysql.com/doc/refman/5.7/en/group-replication-background.html), I can see why Group Replication, as part of InnoDB Cluster, could help improve high availability when compared to standard replication.

But I also think it is important to understand that Group Replication (and derived solutions like InnoDB Cluster) is not comparable to, or a replacement for, data-centric solutions such as Percona XtraDB Cluster. At least not yet.

Good MySQL to everyone.

Jul
14
2017
--

Percona Monitoring and Management 1.2.0 is Now Available

Percona Monitoring and Management (PMM)

Percona announces the release of Percona Monitoring and Management 1.2.0 on July 14, 2017.

For installation instructions, see the Deployment Guide.


Changes in PMM Server

PMM Server 1.2.0 introduced the following changes:

New Features

  • PMM-737: New graphs in System Overview dashboard:
      • Memory Advanced Details
      • Saturation Metrics

  • PMM-1090: Added ESXi support for PMM Server virtual appliance.

UI Fixes

  • PMM-707: Fixed QPS metric in MySQL Overview dashboard to always show queries per second regardless of the selected interval.
  • PMM-708: Fixed tooltips for graphs that displayed incorrectly.
  • PMM-739, PMM-797: Fixed PMM Server update feature on the landing page.
  • PMM-823: Fixed arrow padding for collapsible blocks in QAN.
  • PMM-887: Disabled the Add button when no table is specified for showing query info in QAN.
  • PMM-888: Disabled the Apply button in QAN settings when nothing is changed.
  • PMM-889: Fixed the switch between UTC and local time zone in the QAN time range selector.
  • PMM-909: Added message No query example when no example for a query is available in QAN.
  • PMM-933: Fixed empty tooltips for Per Query Stats column in the query details section of QAN.
  • PMM-937: Removed the percentage of total query time in query details for the TOTAL entry in QAN (because it is 100% by definition).
  • PMM-951: Fixed the InnoDB Page Splits graph formula in the MySQL InnoDB Metrics Advanced dashboard.
  • PMM-953: Enabled stacking for graphs in MySQL Performance Schema dashboard.
  • PMM-954: Renamed Top Users by Connections graph in MySQL User Statistics dashboard to Top Users by Connections Created and added the Connections/sec label to the Y-axis.
  • PMM-957: Refined titles for Client Connections and Client Questions graphs in ProxySQL Overview dashboard to mention that they show metrics for all host groups (not only the selected one).
  • PMM-961: Fixed the formula for Client Connections graph in ProxySQL Overview dashboard.
  • PMM-964: Fixed the gaps for high zoom levels in MySQL Connections graph on the MySQL Overview dashboard.
  • PMM-976: Fixed Orchestrator handling by supervisorctl.
  • PMM-1129: Updated the MySQL Replication dashboard to support new connection_name label introduced in mysqld_exporter for multi-source replication monitoring.
  • PMM-1054: Fixed typo in the tooltip for the Settings button in QAN.
  • PMM-1055: Fixed link to Query Analytics from Metrics Monitor when running PMM Server as a virtual appliance.
  • PMM-1086: Removed HTML code that showed up in the QAN time range selector.

Bug Fixes

  • PMM-547: Added warning page to Query Analytics app when there are no PMM Clients running the QAN service.
  • PMM-799: Fixed Orchestrator to show correct version.
  • PMM-1031: Fixed initialization of Query Profile section in QAN that broke after upgrading Angular.
  • PMM-1087: Fixed QAN package building.

Other Improvements

  • PMM-348: Added daily log rotation for nginx.
  • PMM-968: Added Prometheus build information.
  • PMM-969: Updated the Prometheus memory usage settings to leverage new flag. For more information about setting memory consumption by PMM, see FAQ.

Changes in PMM Client

PMM Client 1.2.0 introduced the following changes:

New Features

  • PMM-1114: Added PMM Client packages for Debian 9 (“stretch”).

Bug Fixes

  • PMM-481, PMM-1132: Fixed fingerprinting for queries with multi-line comments.
  • PMM-623: Fixed mongodb_exporter to display correct version.
  • PMM-927: Fixed bug with empty metrics for MongoDB query analytics.
  • PMM-1126: Fixed promu build for node_exporter.
  • PMM-1201: Fixed node_exporter version.

Other Improvements

  • PMM-783: Directed mongodb_exporter log messages to stderr and excluded many generic messages from the default INFO logging level.
  • PMM-756: Merged upstream node_exporter version 0.14.0.
    PMM deprecated several collectors in this release:

    • gmond – Out of scope.
    • megacli – Requires forking, to be moved to textfile collection.
    • ntp – Out of scope.

    It also introduced the following breaking change:

    • Collector errors are now a separate metric: node_scrape_collector_success, not a label on node_exporter_scrape_duration_seconds
  • PMM-1011: Merged upstream mysqld_exporter version 0.10.0.
    This release introduced the following breaking change:

    • mysql_slave_... metrics now include an additional connection_name label to support MariaDB multi-source replication.

About Percona Monitoring and Management

Percona Monitoring and Management (PMM) is an open-source platform for managing and monitoring MySQL and MongoDB performance. Percona developed it in collaboration with experts in the field of managed database services, support and consulting.

Percona Monitoring and Management is a free and open-source solution that you can run in your own environment for maximum security and reliability. It provides thorough time-based analysis for MySQL and MongoDB servers to ensure that your data works as efficiently as possible.

A live demo of PMM is available at pmmdemo.percona.com.

Please provide your feedback and questions on the PMM forum.

If you would like to report a bug or submit a feature request, use the PMM project in JIRA.

Jun
21
2017
--

Percona Monitoring and Management 1.1.5 is Now Available

Percona Monitoring and Management (PMM)

Percona announces the release of Percona Monitoring and Management 1.1.5 on June 21, 2017.

For installation instructions, see the Deployment Guide.


Changes in PMM Server

  • PMM-667: Fixed the Latency graph in the ProxySQL Overview dashboard to plot microsecond values instead of milliseconds.

  • PMM-800: Fixed the InnoDB Page Splits graph in the MySQL InnoDB Metrics Advanced dashboard to show correct page merge success ratio.

  • PMM-1007: Added links to Query Analytics from MySQL Overview and MongoDB Overview dashboards. The links also pass selected host and time period values.

    NOTE: These links currently open QAN2, which is still considered experimental.

Changes in PMM Client

  • PMM-931: Fixed pmm-admin script when adding MongoDB metrics monitoring for secondary in a replica set.

About Percona Monitoring and Management

Percona Monitoring and Management (PMM) is an open-source platform for managing and monitoring MySQL and MongoDB performance. Percona developed it in collaboration with experts in the field of managed database services, support and consulting.

PMM is a free and open-source solution that you can run in your own environment for maximum security and reliability. It provides thorough time-based analysis for MySQL and MongoDB servers to ensure that your data works as efficiently as possible.

A live demo of PMM is available at pmmdemo.percona.com.

Please provide your feedback and questions on the PMM forum.

If you would like to report a bug or submit a feature request, use the PMM project in JIRA.

Jun
15
2017
--

Three Methods of Installing Percona Monitoring and Management

Installing Percona Monitoring and Management

In this blog post, we’ll look at three different methods for installing Percona Monitoring and Management (PMM).

Percona offers multiple methods of installing Percona Monitoring and Management, depending on your environment and scale. I’ll also share comments on which installation methods we’ve decided to forego for now. Let’s begin by reviewing the three supported methods:

  1. Virtual Appliance
  2. Amazon Machine Image
  3. Docker

Virtual Appliance

We ship an OVF/OVA method to make installation as simple as possible, with the least amount of effort required and at the lowest cost to you. You can leverage the investment in your virtualization deployment platform. OVF is an open standard for packaging and distributing virtual appliances, designed to be run in virtual machines.

Using OVA with VirtualBox as a first step is common in order to quickly play with a working PMM system, and get right to adding clients and observing activity within your own environment against your MySQL and MongoDB instances. But you can also use the OVA file for enterprise deployments. It is a flexible file format that can be imported into other popular hypervisor systems such as VMware, Red Hat Virtualization, XenServer, Microsoft System Centre Virtual Machine Manager and others.

We’d love to hear your feedback on this installation method!

AWS AMI

We also have an AWS AMI in order to provide easy scaling of PMM Server in AWS, so that you can deploy onto whatever instance size your monitoring workload requires. Depending on the AWS region you’re in, you’ll need to choose the appropriate AMI ID. Soon we’ll be moving to the AWS Marketplace for even easier deployment. When this is implemented, you will no longer need to clone an existing AMI ID.

Docker

Docker is our most common production deployment method. It is easy (three commands) and scalable (tuning is passed on the command line to docker run). While we recognize that Docker is still a relatively new deployment system for many users, it is gaining adoption rapidly. It is also where Percona is investing the bulk of our development efforts. We deploy PMM Server as two Docker containers: one for storing the data that persists across restarts/upgrades, and the other for running the actual PMM Server binaries (Grafana, Prometheus, consul, Orchestrator, QAN, etc.).
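
For context, the two-container deployment typically looks like the sketch below. The volume paths and image tag follow the PMM 1.x documentation of that period and may differ for your release, so treat this as an illustration rather than the canonical procedure:

# Data container: holds everything that must survive restarts and upgrades.
docker create \
  -v /opt/prometheus/data -v /opt/consul-data \
  -v /var/lib/mysql -v /var/lib/grafana \
  --name pmm-data percona/pmm-server:1.2.0 /bin/true

# Application container: runs the actual PMM Server services.
docker run -d -p 80:80 \
  --volumes-from pmm-data \
  --name pmm-server --restart always \
  percona/pmm-server:1.2.0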

Where are the RPM/DEB/tar.gz packages?!

A common question I hear is: why doesn’t Percona support binary-based installation?

We hear you: RPM/DEB/tar.gz methods are commonly used today for many of your own applications. Percona is striving for simplicity in our deployment of PMM Server, and we spend considerable development and QA effort validating that the specific versions of Grafana/Prometheus/QAN/consul/Orchestrator all work seamlessly together.

Percona wants to ensure OS compatibility and long-term support of PMM, and doing binary distribution “right” can quickly get expensive to build and QA across all the popular Linux distributions available today. We’re in no way against binary distributions. For example, see our list of the nine supported platforms for which we provide bug fix support.

Percona decided to focus our development efforts on stability and features, and less on the number of supported platforms. Hence the hyper-focus on Docker. We don’t have any current plans to move to a binary deployment method for PMM, but we are always open to hearing your feedback. If there is considerable interest, then please let me know via the comments below. We’ll take these thoughts into consideration for PMM planning in the second half of 2017.

Which other methods of installing Percona Monitoring and Management would you like to see?

May
29
2017
--

Percona Monitoring and Management 1.1.4 is Now Available

Percona Monitoring and Management

Percona announces the release of Percona Monitoring and Management 1.1.4 on May 29, 2017.

For installation instructions, see the Deployment Guide.

This release includes experimental support for MongoDB in Query Analytics, including an updated QAN interface.

Query Analytics for MongoDB

To enable MongoDB query analytics, use the mongodb:queries alias when adding the service. As an experimental feature, it also requires the --dev-enable option:

sudo pmm-admin add --dev-enable mongodb:queries

NOTE: Currently, it monitors only collections that are present when you enable MongoDB query analytics. Query data for collections that you add later is not gathered. This is a known issue and it will be fixed in the future.
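
Once the service is added, you can confirm that it registered by listing the services monitored by the client (a quick sanity check; the output format varies between PMM versions):

sudo pmm-admin list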

Query Analytics Redesign

The QAN web interface was updated for better usability and functionality (including the new MongoDB query analytics data). The new UI is experimental and available by specifying /qan2 after the URL of PMM Server.

New Query Analytics web interface

NOTE: The button on the main landing page still points to the old QAN interface.

You can check out the new QAN web UI at https://pmmdemo.percona.com/qan2

New in PMM Server

  • PMM-724: Added the Index Condition Pushdown (ICP) graph to the MySQL InnoDB Metrics dashboard.
  • PMM-734: Fixed the MySQL Active Threads graph in the MySQL Overview dashboard.
  • PMM-807: Fixed the MySQL Connections graph in the MySQL Overview dashboard.
  • PMM-850: Updated the MongoDB RocksDB and MongoDB WiredTiger dashboards.
  • Removed the InnoDB Deadlocks and Index Collection Pushdown graphs from the MariaDB dashboard.
  • Added tooltips with descriptions for graphs in the MySQL Query Response Time dashboard. Similar tooltips will be gradually added to all graphs.

New in PMM Client

  • PMM-801: Improved PMM Client upgrade process to preserve credentials that are used by services.
  • Added options for pmm-admin to enable MongoDB cluster connections.

About Percona Monitoring and Management

Percona Monitoring and Management is an open-source platform for managing and monitoring MySQL and MongoDB performance. Percona developed it in collaboration with experts in the field of managed database services, support and consulting.

PMM is a free and open-source solution that you can run in your own environment for maximum security and reliability. It provides thorough time-based analysis for MySQL and MongoDB servers to ensure that your data works as efficiently as possible.

A live demo of PMM is available at pmmdemo.percona.com.

Please provide your feedback and questions on the PMM forum.

If you would like to report a bug or submit a feature request, use the PMM project in JIRA.
