Dec
21
2017
--

Three P’s of a Successful Black Friday: Percona, Pepper Media Holding, and PMM

Successful Black Friday

As we close out the holiday season, let’s look at some data that tells us how to guarantee a successful Black Friday (from a database perspective).

There are certain peak times of the year where companies worldwide hold their breath in the hope that their databases do not become overloaded or unresponsive. A large percentage of yearly profits are achieved in a matter of hours during peak events. It is critical that the database environment remains online and responsive. According to a recent survey, users will not wait more than 2.5 seconds for a site to load before navigating elsewhere. Percona has partnered with many clients over the years to ensure success during these critical events. Our goal is always to provide our clients with the most responsive, stable open-source database environments in order to meet their business needs.

First Stop: Germany

In this blog post, we are going to take a closer look at what happened during Black Friday for a high-demand, high-traffic, business-critical application. Pepper Media Holding runs global deals sites where users post and vote on top deals on products in real-time. To give you a better idea of what the user sees, there is a screenshot below from mydealz.de, Pepper Media Holding’s Germany branch.

Successful Black Friday

As you can imagine, Black Friday results in a huge spike in traffic and user contribution. In order to ensure success during these crucial times, Pepper Media Holding utilizes Percona’s fully managed service offering. Percona’s Managed Services team has become an extension of Pepper Media Holding’s team by helping plan, prepare, and implement MySQL best-practices across their entire database environment.

Pepper Media Holding and Percona thought it would be interesting to reflect on Black Friday 2017 and how we worked together to flourish under huge spikes in query volume and user connections.

Below is a graph of MySQL query volume for Germany servers supporting the mydealz.de front-end. This graph is taken from Percona’s Managed Service Team’s installation of Percona Monitoring and Management (PMM), which they use to monitor Pepper Media’s environment.

As expected, MySQL query volume peaked shortly before and during midnight local time. It also spiked early in the morning as users were waking up. The traffic waned throughout the day. The most interesting data point is the spike from 5 AM to 9 AM, which saw an 800% increase from the post-midnight dip. The sustained two-day traffic surge was on average a 200% increase compared to normal, day-to-day query traffic hitting the database.

For more statistics on how mydealz.de fared from a front-end and user perspective, visit Pepper Media Holding’s newsroom, where Pepper Media has given a breakdown of various statistics related to website traffic during Black Friday.

Next Stop: United Kingdom

Another popular Pepper Media Holding branch is in the United Kingdom – better known as HotUKDeals. HotUKDeals hosts user-aggregated and voted-on deals for UK users. This is the busiest Pepper Media Holding database environment on average. Below is a screenshot of the user interface.

The graphs below are from our Managed Service Team’s Percona Monitoring and Management installation and are representative of the UK servers supporting HotUKDeals website traffic.

The first graph we are taking a look at is MySQL Replication Delay. As you can see, the initial midnight wave of Black Friday deals caused a negligible replica delay. The Percona Monitoring and Management MySQL Replication Delay graph is based on seconds_behind_master, which is an integer value only. This means the delay was somewhere between 0 and 1 second most of the time. Only once did it climb to between 1 and 2 seconds over the entire course of Black Friday traffic.
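
For reference, the value behind that graph comes from the replica itself. A minimal sketch of how to check it by hand (the heartbeat suggestion is a common alternative for finer precision, not something PMM requires):

SHOW SLAVE STATUS\G
-- Seconds_Behind_Master is an integer, so sub-second lag is reported as 0;
-- for sub-second precision, a heartbeat table (pt-heartbeat, for example) is the usual workaround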

The graph below highlights the MySQL Traffic seen on the UK servers during the Black Friday traffic spike. One interesting note with this graph is the gradual lead-up to the midnight Black Friday spike. It looks like Black Friday is overstepping its boundaries into Gray Thursday. The traffic spikes here mimic the ones we saw in Germany: an initial spike at midnight on Black Friday and then another spike as shoppers wake up for their day. The UK servers saw a 361% spike in traffic on the morning of Black Friday.

MySQL connections also saw an expected and significant spike during this time. Neglecting to consider the max_connections system parameter during an event rush might result in “ERROR 1040: Too many connections.” However, our CEO, Peter Zaitsev, cautions against absent-mindedly setting this parameter at an unreachable level just to avoid this error. In a blog post, he explained best practices for this scenario.
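
As a rough sketch of how you might keep an eye on this yourself (the limit value below is purely illustrative, not a recommendation), compare the configured ceiling with actual and peak usage, and only raise it if the server has the memory and file descriptors to back it:

SHOW GLOBAL VARIABLES LIKE 'max_connections';
SHOW GLOBAL STATUS WHERE Variable_name IN ('Threads_connected', 'Max_used_connections');
-- Raise the ceiling only after confirming the server can actually sustain that many sessions
SET GLOBAL max_connections = 2000;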

The MySQL query graph below shows a 400% spike in MySQL queries during the peak Black Friday morning traffic rush. The average number of queries hitting the database over this two-day period was significantly higher than normal – approximately 183%.

Conclusion

Percona reported no emergencies during the Black Friday period for its Managed Service customers – including Pepper Media Holding. We saw similarly high traffic spikes among our customers during this 2017 Black Friday season. I hope that this run-down of a few PMM graphs taken during Pepper Media Holding’s Black Friday traffic period was informative and interesting. Special thanks to Pepper Media Holding for working with us to create this blog post.

Note: Check out our Pepper Media case study on how Percona helps them manage their database environment.

If you would like to further explore the graphs and statistics that Percona Monitoring and Management has to offer, we have a live demo available at https://pmmdemo.percona.com. To discuss how Percona Managed Services can help your database thrive during event-based traffic spikes (and all year round), please call us at +1-888-316-9775 (USA), +44 203 608 6727 (Europe), or have us contact you.

Dec
19
2017
--

Percona Monitoring and Management a Trend Setting Product for 2018

DBTA Trend-Setting Products in Data and Information Management for 2018

…but don’t just take our word for it…

Percona Monitoring and Management
DBTA Trend-Setting Products for 2018

The online industry news magazine Database Trends and Applications (DBTA) has included Percona Monitoring and Management in its roundup of Trend-Setting Products in Data and Information Management for 2018.

“…each year, Database Trends and Applications magazine looks for offerings that promise to help organizations derive greater benefit from their data, make better decisions, work more efficiently, achieve greater security, and address emerging challenges.” [December 7, 2017]

As well as including PMM in its selection of products to watch in 2018, DBTA has included a product spotlight penned by Michael Coburn, Product Manager for PMM. This provides a synopsis of the features and benefits of the product for DBAs.

While here at Percona we know that the PMM team are doing excellent work, it’s great they’ve had independent recognition. This news marks a welcome end to a busy and productive year.

The team isn’t resting on its laurels though! We are looking forward to producing further enhancements to PMM throughout 2018.

Watch this space!

About Percona Monitoring and Management

Percona Monitoring and Management (PMM) is a free and open-source platform for managing and monitoring MySQL® and MongoDB® performance. PMM incorporates some best-of-breed open source tools to provide a comprehensive database monitoring and management facility. You can run PMM in your own environment for maximum security and reliability. It provides thorough time-based analysis for MySQL, MariaDB® and MongoDB servers to ensure that your data works as efficiently as possible. Recent developments have included enhanced support for Amazon RDS and Amazon Aurora.

Dec
18
2017
--

Percona Monitoring and Management 1.5.3 Is Now Available

Percona Monitoring and Management

Percona announces the release of Percona Monitoring and Management 1.5.3. This release contains fixes for bugs found after the release of Percona Monitoring and Management 1.5.2, as well as some important fixes and improvements not related to the previous release.

Improvements

  • PMM-1874: The read timeout of the proxy server (/prometheus) has been increased from the default of 60 seconds to avoid nginx gateway timeout errors when loading data-rich dashboards.
  • PMM-1863: We improved our handling of temporary Grafana credentials.

Bug fixes

  • PMM-1828: On CentOS 6.9, pmm-admin list incorrectly reported that no monitoring services were running.
  • PMM-1842: It was not possible to restart the mysql:queries monitoring service after PMM Client was upgraded from version 1.0.4.
  • PMM-1797: It was not possible to update the CloudWatch data source credentials.
  • PMM-1829: When the user clicked a link in the Query Abstract column, an outdated version of QAN would open.
  • PMM-1836: PMM Server installed in a Docker container could not be started if the updating procedure had been temporarily interrupted.
  • PMM-1871: In some cases, RDS instances could not be discovered.
  • PMM-1845: Converted FLUSH SLOW LOGS to FLUSH NO_WRITE_TO_BINLOG SLOW LOGS so that a GTID event isn’t created.
  • PMM-1816: Fixed a rendering error in Firefox.
Dec
06
2017
--

Percona Monitoring and Management 1.5.2 Is Now Available

Percona Monitoring and Management

Percona announces the release of Percona Monitoring and Management 1.5.2. This release contains fixes to bugs found after Percona Monitoring and Management 1.5.1 was released.

Bug fixes

  • PMM-1790: QAN displayed query metrics even for a host that was not configured for mysql:queries or mongodb:queries. We have fixed the behaviour to display an appropriate warning message when there are no query metrics for the selected host.
  • PMM-1826: If PMM Server 1.5.0 was deployed via Docker, the Update button would not upgrade the instance to a later release.
  • PMM-1830: If PMM Server 1.5.0 was deployed via an AMI (Amazon Machine Image) instance, the Update button would not upgrade the instance to a later release.
Dec
04
2017
--

Internal Temporary Tables in MySQL 5.7

InnoDB row operations graph from PMM

In this blog post, I investigate a case of spiking InnoDB Rows inserted in the absence of a write query, and find internal temporary tables to be the culprit.

Recently I was investigating an interesting case for a customer. We could see regular spikes on a graph depicting the “InnoDB rows inserted” metric (jumping from 1K/sec to 6K/sec); however, we were not able to correlate those spikes with other activity. The innodb_row_inserted graph (picture from PMM demo) looked similar to this (but on a much larger scale):

InnoDB row operations graph from PMM

Other graphs (Com_*, Handler_*) did not show any spikes like that. I’ve examined the logs (we were not able to enable general log or change the threshold of the slow log), performance_schema, triggers, stored procedures, prepared statements and even reviewed the binary logs. However, I was not able to find any single write query which could have caused the spike to 6K rows inserted.

Finally, I figured out that I was focusing on the wrong queries. I was trying to correlate the spikes on the InnoDB Rows inserted graph to the DML queries (writes). However, the spike was caused by SELECT queries! But why would SELECT queries cause the massive InnoDB insert operation? How is this even possible?

It turned out that this is related to temporary tables on disk. In MySQL 5.7, the default setting for internal_tmp_disk_storage_engine is InnoDB. That means that if a SELECT needs to create a temporary table on disk (e.g., for GROUP BY), it will use the InnoDB storage engine.
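
As a quick sketch, you can confirm this setting and see how often statements actually spill temporary tables to disk on your own server:

SHOW GLOBAL VARIABLES LIKE 'internal_tmp_disk_storage_engine';
-- Compare Created_tmp_disk_tables with Created_tmp_tables to see how often queries spill to disk
SHOW GLOBAL STATUS LIKE 'Created_tmp%tables';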

Is that bad? Not necessarily. Krunal Bauskar originally published a blog post about InnoDB intrinsic table performance in MySQL 5.7. The InnoDB internal temporary tables are not redo/undo logged, so in general performance is better. However, here is what we need to watch out for:

  1. Change of the place where MySQL stores temporary tables. InnoDB temporary tables are stored in ibtmp1 tablespace file. There are a number of challenges with that:
    • Location of the ibtmp1 file. By default it is located inside the InnoDB datadir. Originally, MyISAM temporary tables were stored in tmpdir. We can configure the size of the file, but the location is always relative to the InnoDB datadir, so to move it to tmpdir we need something like this: 
      innodb_temp_data_file_path=../../../tmp/ibtmp1:12M:autoextend
    • Like other tablespaces, it never shrinks back (though it is truncated on restart). A huge temporary table can fill the disk and hang MySQL (a bug has been opened). One way to mitigate that is to set a maximum size for the ibtmp1 file (a query for checking its current size is sketched after this list): 
      innodb_temp_data_file_path=ibtmp1:12M:autoextend:max:1G
    • Like other InnoDB tables it has all the InnoDB limitations, i.e., InnoDB row or column limits. If it exceeds these, it will return “Row size too large” or “Too many columns” errors. The workaround is to set internal_tmp_disk_storage_engine to MYISAM.
  2. When all temp tables go to InnoDB, it may increase the total engine load as well as affect other queries. For example, if originally all datasets fit into the buffer pool and temporary tables were created outside of InnoDB, they did not affect the InnoDB memory footprint. Now, if a huge temporary table is created as an InnoDB table, it will use the InnoDB buffer pool and may “evict” existing pages, so other queries may perform slower.
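
To keep an eye on how large ibtmp1 has grown, and whether a maximum size is in place, one option in MySQL 5.7 is to query INFORMATION_SCHEMA.FILES. This is only a sketch of the columns that are usually of interest:

SELECT FILE_NAME, TABLESPACE_NAME, ENGINE,
       TOTAL_EXTENTS * EXTENT_SIZE AS allocated_bytes,  -- current allocated size
       MAXIMUM_SIZE                                     -- configured cap, if any
FROM INFORMATION_SCHEMA.FILES
WHERE TABLESPACE_NAME = 'innodb_temporary';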

Conclusion

Beware of this change in MySQL 5.7: internal temporary tables (those that are created for SELECTs when a temporary table is needed) are stored in the InnoDB ibtmp file. In most cases this is faster. However, it can change the original behavior. If needed, you can switch the creation of internal temp tables back to MyISAM: 

set global internal_tmp_disk_storage_engine=MYISAM

Dec
01
2017
--

Percona Monitoring and Management 1.5: QAN in Grafana Interface

Percona-Monitoring-and-Management-1.5-QAN-1 small

In this post, we’ll examine how we’ve improved the GUI layout for Percona Monitoring and Management 1.5 by moving the Query Analytics (QAN) functions into the Grafana interface.

For Percona Monitoring and Management users, you might notice that QAN appears a little differently in our 1.5 release. We’ve taken steps to unify the PMM interface so that it feels more natural to move from reviewing historical trends in Metrics Monitor to examining slow queries in QAN.  Most significantly:

  1. QAN moves from a stand-alone application into Metrics Monitor as a dashboard application
  2. We updated the color scheme of QAN to match Metrics Monitor (but you can toggle a button if you prefer to still see QAN in white!)
  3. Date picker and host selector now use the same methods as Metrics Monitor

Percona Monitoring and Management 1.5 QAN 1

Starting from the PMM landing page, you still see two buttons – one for Metrics Monitor and another for Query Analytics (this hasn’t changed):

Percona Monitoring and Management 1.5 QAN 2

Once you select Query Analytics on the left, you see the new Metrics Monitor dashboard page for PMM Query Analytics. It is now hosted as a Metrics Monitor dashboard, and notice the URL is no longer /qan:

Percona Monitoring and Management 1.5 QAN 3

Another advantage of the Metrics Monitor dashboard integration is that the QAN inherits the host selector from Grafana, which supports partial string matching. This makes it simpler to find the host you’re searching for if you have more than a handful of instances:

Percona Monitoring and Management 1.5 QAN 4

The last feature enhancement worth mentioning is the native Grafana time selector, which lets you select time frames down to minute resolution. This was a frequent source of feature requests; previously, PMM limited you to our pre-defined default ranges. Keep in mind that QAN has an internal archiving job that caps QAN history at eight days.

Percona Monitoring and Management 1.5 QAN 5

Last but not least is the ability to toggle between the default dark interface and the optional white. Look for the small lightbulb icon at the bottom left of any QAN screen (Percona Monitoring and Management 1.5 QAN 6) and give it a try!

Percona Monitoring and Management 1.5 QAN 7

We hope you enjoy the new interface, and we look forward to your feedback on these improvements!

Nov
28
2017
--

Percona Monitoring and Management 1.5.0 Is Now Available

Percona Monitoring and Management

Percona announces the release of Percona Monitoring and Management 1.5.0 on November 28, 2017.

This release focuses on the following features:

  • Enhanced support for MySQL on Amazon RDS and Amazon Aurora – Dedicated Amazon Aurora dashboard offers maximum visibility into key database characteristics, eliminating the need for additional monitoring nodes. We renamed Amazon RDS OS Metrics to Amazon RDS / Aurora MySQL Metrics.
  • Simpler configuration – Percona Monitoring and Management now offers easier configuration of key Amazon RDS and Amazon Aurora settings via a web interface
  • One-click data collection – One button retrieves vital information on server performance to assist with troubleshooting
  • Improved interface – A simple, consistent user interface makes it faster and more fluid to switch between Query Analytics and Metrics Monitor

Highlights from our new Amazon RDS / Aurora MySQL Metrics dashboard:

Shared elements for Amazon Aurora MySQL and RDS MySQL

Amazon Aurora MySQL unique elements

Amazon RDS for MySQL unique elements

We’ve integrated Query Analytics into Metrics Monitor, and it appears as a separate dashboard known as PMM Query Analytics.

With this release, Percona Monitoring and Management introduces a new deployment option via AWS Marketplace. This is in addition to our distribution method of Amazon Machine Images (AMI).

We have upgraded Grafana and Prometheus in this release. PMM now includes Grafana 4.6.1. One of the most prominent features that the upgraded Grafana offers is the support of annotations. You can mark a point or select a region in a graph and give it a meaningful description. For more information, see the release highlights.

Prometheus version 1.8.2, shipped with this release, offers a number of bug fixes. For more information, see the Prometheus change log.

New features

  • PMM-434: PMM enables monitoring of Amazon RDS and Amazon Aurora metrics
  • PMM-1133: Query Analytics is available from Grafana as a dashboard
  • PMM-1470: Integrated Cloudwatch metrics into Prometheus
  • PMM-699: Combined AWS RDS and Amazon Aurora metrics into one dashboard
  • PMM-722: Distributed the MariaDB dashboard graph elements among other existing dashboards and removed the MariaDB dashboard. Further, we renamed the MyISAM dashboard to MyISAM/Aria Metrics.
  • PMM-1258: The DISABLE_UPDATES option enables preventing manual updates when PMM Server is run from a Docker container.
  • PMM-1500: Added InnoDB Buffer Disk Reads to graph InnoDB Buffer Pool Requests to better understand missed InnoDB BP cache hits

Improvements

  • PMM-1577: Updated Prometheus to version 1.8.2
  • PMM-1603: Updated Grafana to version 4.6.1
  • PMM-1669: The representation of numeric values in the Context Switches graph in the System Overview dashboard was changed to improve readability.
  • PMM-1575: Templating rules were improved for the MyRocks and TokuDB dashboards so that only those instances with these storage engines are displayed

Bug fixes

  • PMM-1082: The CPU Usage graph on the Trends dashboard showed incorrect spikes
  • PMM-1549: The authentication of the mongodb:queries monitoring service did not work properly when the name of the database to authenticate was not provided.
  • PMM-1673: Fixed display issue with Microsoft Internet Explorer 11
Nov
14
2017
--

Webinars on Wednesday November 15, 2017: Proxy Wars and Percona Software Update for Q4

Webinars double bill

Do you need to get to grips with MySQL proxies? Or maybe you could do with discovering the latest developments and plans for Percona’s software?

Well, wait no more, because on Wednesday, November 15, 2017, we bring you a webinar double bill.

Join Percona’s Chief Evangelist, Colin Charles, as he presents “The Proxy Wars – MySQL Router, ProxySQL, MariaDB MaxScale” at 7:00 am PST / 10:00 am EST (UTC-8).

Reflecting on his past experience with MySQL proxies, Colin will provide a short review of three open source solutions. He’ll run through a comparison of MySQL Router, MariaDB MaxScale and ProxySQL and talk about the reasons for using the right tool for an application.

 

Meanwhile, return a little later in the day at 10:00 am PST / 1:00 pm EST (UTC-8) to hear Percona CEO Peter Zaitsev discuss what’s new in Percona open source software. In “Percona Software News and Roadmap Update – Q4 2017”, Peter will talk about new features in Percona software, show some quick demos and share highlights from the Percona open source software roadmap. He will also talk about new developments in Percona commercial services and finish with a Q&A.

 

You are, of course, very welcome to register for either one or both webinars. Please register for your place soon!


Peter Zaitsev, Percona CEO and Co-Founder

Peter Zaitsev co-founded Percona and assumed the role of CEO in 2006. As one of the foremost experts on MySQL strategy and optimization, Peter leveraged both his technical vision and entrepreneurial skills to grow Percona from a two-person shop to one of the most respected open source companies in the business. With over 150 professionals in 30+ countries, Peter’s venture now serves over 3000 customers – including the “who’s who” of internet giants, large enterprises and many exciting startups. Percona was named to the Inc. 5000 in 2013, 2014, 2015 and 2016. Peter was an early employee at MySQL AB, eventually leading the company’s High Performance Group. A serial entrepreneur, Peter co-founded his first startup while attending Moscow State University, where he majored in Computer Science. Peter is a co-author of High Performance MySQL: Optimization, Backups, and Replication, one of the most popular books on MySQL performance. Peter frequently speaks as an expert lecturer at MySQL and related conferences, and regularly posts on the Percona Database Performance Blog. Fortune and DZone have both tapped Peter as a contributor, and his recent ebook Practical MySQL Performance Optimization is one of percona.com’s most popular downloads.

Colin Charles, Chief Evangelist

Colin Charles is the Chief Evangelist at Percona. He was previously on the founding team for MariaDB Server in 2009, has worked on MySQL since 2005, and has been a MySQL user since 2000. Before joining MySQL, he worked actively on the Fedora and OpenOffice.org projects. He’s well known within many open source communities and has spoken on the conference circuit.

 

Nov
09
2017
--

MySQL and Linux Context Switches

Context Switches

In this blog post, I’ll look at MySQL and Linux context switches and what a normal number per second is for a database environment.

You might have heard many times about the importance of looking at the number of context switches to indicate whether MySQL is suffering from internal contention issues. I often get asked what a “normal” or “acceptable” number is, and at what point you should worry about the number of context switches per second.

First, let’s talk about what context switches are in Linux. This StackOverflow Thread provides a good discussion, with a lot of details, but basically it works like this:  

The process (or thread, in MySQL’s case) is running its computations. Sooner or later, it has to do some blocking operation: disk IO, network IO, blocking on a mutex, or a yield. The execution then switches to another process, and this is called a voluntary context switch. On the other hand, the process/thread may be preempted by the scheduler because it used up its allotted amount of CPU time (and other tasks need to run) or because a higher-priority task needs to run. These are called involuntary context switches. When the context switches for all processes in the system are added together, you get the system-wide number of context switches reported (using, for example, vmstat):

root@nuc2:~# vmstat 10
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
17  0      0 12935036 326152 2387388    0    0     0     5     0      1  9  0 91  0  0
20  0      0 12933936 326152 2387384    0    0     0     3 32228 124791 77 22  1  0  0
17  0      0 12933348 326152 2387364    0    0     0    11 33212 124575 78 22  1  0  0
16  0      0 12933380 326152 2387364    0    0     0    78 32470 126100 78 22  1  0  0

This is a global number. In many cases, however, it is better to look at it as context switches per CPU logical core. This is because cores execute tasks independently. As such, they have mostly independent causes for context switches. If you have a large number of cores, there can be quite a difference:

MySQL Context Switches

The number of context switches per second on this system looks high (at more than 1,000,000). Considering it has 56 logical cores, however, it is only about 30,000 per second per logical core (which is not too bad).

So how do we judge if the number of context switches is too high in your system? One answer is that it is too high if you’re wasting too much CPU on context switches. This brings up the question: how many context switches can the system handle if it is only doing context switches?

It is easy to find this out!  

Sysbench has a “threads” test designed specifically to measure this. For example:

sysbench --thread-locks=128 --time=7200 --threads=1024 threads run

Check the vmstat output or the Context Switches PMM graph:

MySQL Context Switches 1

We can see this system can handle up to 35 million context switches per second in total (or some 500K per logical CPU core on average).

I don’t recommend using more than 10% of CPU resources on context switching, so I would try to keep the number of the context switches at no more than 50K per logical CPU core.

Now let’s think about context switches from the other side: how many context switches do we expect to have at the very minimum for a given load? Even if all the stars align and your query to MySQL doesn’t need any disk IO or context switches due to waiting for mutexes, you should expect at least two context switches: one to the client thread which processes the query and one for the query response sent to the client.

Using this logic, if we have 100,000 queries/sec we should expect 200,000 context switches at the very minimum.

In the real world, though, I would not worry about contention being a big issue if you have less than ten context switches per query.

It is worth noting that in MySQL not every contention results in a context switch. InnoDB implements its own mutexes and RW-locks, which often try to “spin” to wait for a resource to become available. This wastes CPU time directly rather than doing a context switch.

Summary:

  • Look at the number of context switches per logical core rather than the total for easier-to-compare numbers
  • Find out how many context switches your system can handle per second, and don’t get too concerned if your context switches are no more than 10% of that number
  • Think about the number of context switches per query: the minimum possible is two, and values less than 10 make contention an unlikely issue
  • Not every MySQL contention results in a high number of context switches
Oct
31
2017
--

MySQL Dashboard Improvements in Percona Monitoring and Management 1.4.0

In this blog post, I’ll walk through some of the improvements to the Percona Monitoring and Management (PMM) MySQL dashboard in release 1.4.0.

As the part of Percona Monitoring and Management development, we’re constantly looking for better ways to visualize information and help you to spot and resolve problems faster. We’ve made some updates to the MySQL dashboard in the 1.4.0 release. You can see those improvements in action in our Percona Monitoring and Management Demo Site: check out the MySQL Overview and MySQL InnoDB Metrics dashboards.

MySQL Client Thread Activity

Percona Monitoring and Management 1

One of the best ways to characterize a MySQL workload is to look at the number of MySQL server-client connections (Threads Connected). You should compare this number to how many of those threads are actually doing something on the server side (Threads Running), rather than just sitting idle waiting for a client to send the next request.

MySQL can handle thousands of connected threads quite well. However, many threads (hundreds) running concurrently often increases query latency. Increased internal contention can make the situation much worse.

The problem with those metrics is that they are extremely volatile – one second you might have a lot of threads connected and running, and then none. This is especially true when stalls at the MySQL level (or higher) cause pile-ups.

To provide better insight, we now show Peak Threads Connected and Peak Threads Running to help easily spot such potential pile-ups, as well as Avg Threads Running. These stats allow you to look at a high number of threads connected and running and see whether these are just minor spikes (which tend to happen in many systems on a regular basis) or something more prolonged that warrants deeper investigation.

To simplify it even further: Threads Running spiking for a few seconds is OK, but spikes persisting for 5-10 seconds or more are often signs of problems that are impacting users (or problems about to happen).
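
If you want to compare the graphs against the raw counters on the server, the underlying values come from the global status variables. A minimal sketch (a single point-in-time snapshot, so it will miss the short spikes the graphs are designed to catch):

SHOW GLOBAL STATUS WHERE Variable_name IN ('Threads_connected', 'Threads_running', 'Max_used_connections');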

InnoDB Logging Performance

Percona Monitoring and Management 2

Since I wrote a blog post about Choosing MySQL InnoDB Log File Size, I thought it would be great to check out how long the log file space would last (instead of just looking at how much log space is written per hour). Knowing how long the innodb_log_buffer_size lasts is also helpful for tuning this variable, in general.

This graph shows you how much data is written to the InnoDB Log Files, which helps to understand your disk bandwidth consumption. It also tells you how long it will take to go through your combined Redo Log Space and InnoDB Log Buffer Size (at this rate).

As I wrote in the blog post, there are a lot of considerations for choosing the InnoDB log file size, but having enough log space to accommodate all the changes for an hour is a good rule of thumb. As we can see, this system is close to full at around 50 minutes.

When it comes to innodb_log_buffer_size, even if InnoDB is not configured to flush the log at every transaction commit, it is going to be flushed every second by default. This means a buffer that covers 10-15 seconds of writes is usually good enough to accommodate the spikes. This system has it sized for about 40 seconds (which is more than enough).
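
As a rough way to redo this calculation by hand (a sketch using only standard status and system variables), sample Innodb_os_log_written twice, work out the write rate per second, and divide your total redo log space by that rate:

-- Bytes written to the redo log since startup; sample twice, a minute apart, to get a rate
SHOW GLOBAL STATUS LIKE 'Innodb_os_log_written';
-- Total redo space is innodb_log_file_size * innodb_log_files_in_group
SHOW GLOBAL VARIABLES LIKE 'innodb_log_file%';
SHOW GLOBAL VARIABLES LIKE 'innodb_log_buffer_size';
-- time to fill (seconds) ~= (innodb_log_file_size * innodb_log_files_in_group) / bytes written per second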

InnoDB Read-Ahead

Percona Monitoring and Management 3

This graph helps you understand how InnoDB Read-Ahead is working out, and is a pretty advanced graph.

In general, Innodb Read-Ahead is not very well understood. I think in most cases it is hard to tell if it is helping or hurting the current workload in its current configuration.

The goal of Read-Ahead in any system (not just InnoDB) is to pre-fetch data before it is really needed (in order to reduce latency and improve performance). The risk, however, is pre-fetching data that isn’t needed. This is wasteful.

InnoDB has two Read-Ahead options: Linear Read-Ahead (designed to speed up workloads that have physically sequential data access) and Random Read-Ahead (designed to help workloads that tend to access the data in the same vicinity but not in a linear order).

Due to potential overhead, only Linear Read-Ahead is enabled by default. You need to enable Random Read-Ahead separately if you want to determine its impact on your workload.

Back to the graph in question: we show the number of pages pre-fetched by Linear and Random Read-Ahead to confirm whether these are even in use with your workload. We show Number of Pages Fetched but Never Accessed (evicted without access) – shown as both the number of pages and as a percent of pages. If Fetched but Never Accessed is more than 30% or so, Read-Ahead might be producing more waste than benefit for your workload. It might need tuning.

We also show the portion of IO requests that InnoDB Read-Ahead served, which can help you understand the share of resources spent on InnoDB Read-Ahead.

Due to the timing of how InnoDB increments counters, the percentages of IO used for Read-Ahead and pages evicted without access show up better on larger-scale graphs.
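
If you want to cross-check these graphs against the server’s own counters, the relevant status variables are the Innodb_buffer_pool_read_ahead family. A minimal sketch:

-- Linear read-ahead pages, random read-ahead pages, and pages fetched but evicted unused
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read_ahead%';
-- A high Innodb_buffer_pool_read_ahead_evicted value relative to Innodb_buffer_pool_read_ahead
-- suggests read-ahead is fetching pages the workload never touches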

Conclusion

I hope you find these graphs helpful. We’ll continue making Percona Monitoring and Management more helpful for troubleshooting database systems and getting better performance!
