Oct 31, 2017

MySQL Dashboard Improvements in Percona Monitoring and Management 1.4.0

In this blog post, I’ll walk through some of the improvements to the Percona Monitoring and Management (PMM) MySQL dashboard in release 1.4.0.

As part of Percona Monitoring and Management development, we’re constantly looking for better ways to visualize information and help you spot and resolve problems faster. We’ve made some updates to the MySQL dashboard in the 1.4.0 release. You can see those improvements in action on our Percona Monitoring and Management Demo Site: check out the MySQL Overview and MySQL InnoDB Metrics dashboards.

MySQL Client Thread Activity


One of the best ways to characterize a MySQL workload is to look at the number of MySQL server-client connections (Threads Connected). You should compare this number to how many of those threads are actually doing something on the server side (Threads Running), rather than just sitting idle waiting for a client to send the next request.
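If you want to spot-check these two counters manually, outside of PMM, the standard status variables are all you need (a minimal example using stock MySQL commands):

-- Threads_connected: client connections currently open
-- Threads_running: connections actively executing a statement
SHOW GLOBAL STATUS WHERE Variable_name IN ('Threads_connected', 'Threads_running');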

MySQL can handle thousands of connected threads quite well. However, hundreds of threads running concurrently often increase query latency. Increased internal contention can make the situation much worse.

The problem with these metrics is that they are extremely volatile – one second you might have many threads connected and running, and the next none. This is especially true when stalls at the MySQL level (or higher) cause pile-ups.

To provide better insight, we now show Peak Threads Connected and Peak Threads Running to help you easily spot such potential pile-ups, as well as Avg Threads Running. These stats let you look at a high number of threads connected and running and see whether it is just a minor spike (which tends to happen in many systems on a regular basis) or something more prolonged that warrants deeper investigation.

To simplify it even further: Threads Running spiking for a few seconds is OK, but spikes persisting for 5-10 seconds or more are often signs of problems that are impacting users (or problems about to happen).

InnoDB Logging Performance


Since I wrote a blog post about Choosing MySQL InnoDB Log File Size, I thought it would be great to check out how long the log file space would last (instead of just looking at how much log space is written per hour). Knowing how long the innodb_log_buffer_size lasts is also helpful for tuning this variable, in general.

This graph shows you how much data is written to the InnoDB Log Files, which helps to understand your disk bandwidth consumption. It also tells you how long it will take to go through your combined Redo Log Space and InnoDB Log Buffer Size (at this rate).

As I wrote in the blog post, there are a lot of considerations for choosing the InnoDB log file size, but having enough log space to accommodate all the changes for an hour is a good rule of thumb. As we can see, this system is close to full at around 50 minutes.

When it comes to innodb_log_buffer_size, even if InnoDB is not configured to flush the log at every transaction commit, it is going to be flushed every second by default. This means a buffer that lasts 10-15 seconds is usually enough to accommodate the spikes. This system has it set at about 40 seconds (which is more than enough).
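If you want to approximate these numbers by hand rather than reading them off the graph, you can sample the redo write counter and divide your configured space by the write rate (a sketch; sample over a period representative of your workload):

-- Redo bytes written since startup: sample this twice, e.g. 60 seconds apart
SHOW GLOBAL STATUS LIKE 'Innodb_os_log_written';
-- Configured redo space and log buffer
SHOW GLOBAL VARIABLES LIKE 'innodb_log_file%';       -- size and number of redo log files
SHOW GLOBAL VARIABLES LIKE 'innodb_log_buffer_size';
-- bytes_per_sec    = (sample2 - sample1) / 60
-- redo space lasts:  innodb_log_file_size * innodb_log_files_in_group / bytes_per_sec seconds
-- log buffer lasts:  innodb_log_buffer_size / bytes_per_sec seconds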

InnoDB Read-Ahead


This is a pretty advanced graph; it helps you understand how InnoDB Read-Ahead is working out.

In general, InnoDB Read-Ahead is not very well understood. I think in most cases it is hard to tell whether it is helping or hurting the current workload in its current configuration.

The goal of Read-Ahead in any system (not just InnoDB) is to pre-fetch data before it is really needed, in order to reduce latency and improve performance. The risk, however, is pre-fetching data that isn’t needed. This is wasteful.

InnoDB has two Read-Ahead options: Linear Read-Ahead (designed to speed up workloads that have physically sequential data access) and Random Read-Ahead (designed to help workloads that tend to access the data in the same vicinity but not in a linear order).

Due to potential overhead, only Linear Read-Ahead is enabled by default. You need to enable Random Read-Ahead separately if you want to determine its impact on your workload.

Back to the graph in question: we show the number of pages pre-fetched by Linear and Random Read-Ahead to confirm whether these are even in use with your workload. We also show the Number of Pages Fetched but Never Accessed (evicted without access) – both as a number of pages and as a percentage of pages. If Fetched but Never Accessed is more than 30% or so, Read-Ahead might be producing more waste than benefit for your workload. It might need tuning.
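The underlying counters are exposed by MySQL itself, so you can also check them without PMM (a minimal example; the SET statement at the end is only needed if you want to try Random Read-Ahead):

-- Pages pre-fetched by linear and random read-ahead, and those evicted unused
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read_ahead%';
-- waste % = Innodb_buffer_pool_read_ahead_evicted / Innodb_buffer_pool_read_ahead * 100
-- Random Read-Ahead is off by default; enable it to measure its impact:
SET GLOBAL innodb_random_read_ahead = ON;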

We also show the portion of IO requests that InnoDB Read-Ahead served, which can help you understand the share of resources spent on InnoDB Read-Ahead.

Due to the timing of how InnoDB increments counters, the percentages of IO used for Read-Ahead and pages evicted without access show up better on larger-scale graphs.

Conclusion

I hope you find these graphs helpful. We’ll continue making Percona Monitoring and Management more helpful for troubleshooting database systems and getting better performance!

Oct 25, 2017

JSON Output of the pmm-admin list Command

In this blog post, we’ll look at Percona Monitoring and Management’s pmm-admin list command.

The pmm-admin list command shows all monitoring services you have added using the pmm-admin add command. Starting with version 1.4.0, Percona Monitoring and Management (PMM) also lists external monitoring services when you run pmm-admin list, i.e., those services that monitor the backends not supported out of the box (such as PostgreSQL databases).

In the output, the external monitoring services appear at the bottom:

The tabular output of the pmm-admin list command

JSON Output for Automatic Verification

But there is also another feature of pmm-admin list. If you run this command with the --json parameter, it gives you a JSON document as output. Thanks to JSON’s strict syntax rules, this makes the list of monitoring services machine-readable. JSON has become a de facto standard for exchanging data between tools, and the JSON output of the pmm-admin list command can be consumed by configuration management tools such as Ansible or Chef.

The output is captured as keys and values. The general information about the computer where this pmm-client is installed is given as top-level elements:

  • Version
  • ServerAddress
  • ServerSecurity
  • ClientName
  • ClientAddress
  • ClientBindAddress
  • Platform
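A trimmed illustration of what this top level might look like (all values here are hypothetical; run the command on your own pmm-client to see real ones):

{
  "Version": "1.4.0",
  "ServerAddress": "192.168.100.1",
  "ServerSecurity": "",
  "ClientName": "db01",
  "ClientAddress": "192.168.100.2",
  "ClientBindAddress": "",
  "Platform": "linux; amd64"
}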

You can quickly determine if there are any errors in built-in monitoring services by inspecting the Err top level element in the JSON output. Similarly, the ExternalErr element reports errors on external services:

The JSON parsing friendly version produced by the pmm-admin list command
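For scripted verification, these elements are easy to extract with a tool like jq (a sketch, assuming jq is installed; an empty value means no errors were reported):

pmm-admin list --json | jq -r '.Err'          # errors from built-in monitoring services
pmm-admin list --json | jq -r '.ExternalErr'  # errors from external monitoring services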

Representing Monitoring Services

Two elements contain lists as their values. The Services top-level element contains a list of documents that represent enabled monitoring services. The ExternalServices element contains a list of documents that represent enabled external monitoring services. Each attribute in the Services and ExternalServices elements provides the same information as a column in the tabular output.
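Iterating over these lists works the same way. For example, to print one attribute per enabled service (the Name attribute here is hypothetical – use whichever attribute names appear in your own output, which mirror the tabular columns):

pmm-admin list --json | jq -r '.Services[].Name'
pmm-admin list --json | jq -r '.ExternalServices[].Name'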

I hope this brief post provides some valuable information regarding the new Percona Monitoring and Management 1.4.0 functionality. Let me know about any questions in the comments.

Oct 5, 2017

Graph Descriptions for Metrics Monitor in Percona Monitoring and Management 1.3.0


The Metrics Monitor of Percona Monitoring and Management 1.3.0 (PMM) provides graph descriptions to display more information about the monitored data without cluttering the interface.

Percona Monitoring and Management 1.3.0 is a free and open-source platform for managing and monitoring MySQL®, MariaDB® and MongoDB® performance. You can run PMM in your own environment for maximum security and reliability. It provides thorough time-based analysis for MySQL, MariaDB® and MongoDB servers to ensure that your data works as efficiently as possible.

Each dashboard graph in PMM contains a lot of information, and sometimes it is not easy to understand what the plotted line represents. Metric labels and plotted data are constrained by the space available in the dashboard, so it is simply not possible to display all of the additional information that might help when interpreting the monitored metrics.

The new version of the PMM dashboards introduces on-demand descriptions with more details about the metrics in the given graph and about the data.


These on-demand descriptions appear in small boxes when you hover the mouse pointer over the icon at the top left corner of each graph, so they do not use up valuable dashboard space. If more information exists about the monitored metrics, the description contains a link to the associated documentation.


Release 1.3.0 of PMM is only the start for this convenient tool in Metrics Monitor. In subsequent releases, graph descriptions are going to become a standard attribute of each Metrics Monitor dashboard.

Oct 2, 2017

Big Dataset: All Reddit Comments – Analyzing with ClickHouse


In this blog, I’ll use ClickHouse and Tabix to look at a new very large dataset for research.

It is hard to come across interesting datasets, especially big ones (and by big I mean one billion rows or more). Previously, I’ve used the on-time airline performance data available from the Bureau of Transportation Statistics. Another recent example is the NYC Taxi and Uber Trips data, with over one billion records.

However, today I want to mention an interesting dataset I found recently that has been available since 2015. This is Reddit’s comments and submissions dataset, made possible thanks to Reddit’s generous API. The dataset was first mentioned in “I have every publicly available Reddit comment for research,” and currently you can find it at pushshift.io. However, there is no guarantee that pushshift.io will provide this dataset in the future. I think it would be valuable if Amazon or another cloud provider made this dataset available for researchers, just as Amazon provides https://aws.amazon.com/public-datasets/.

The dataset contains 2.86 billion records through the end of 2016 and is 709GB in size, uncompressed. It is valuable for a variety of research scenarios, from simple stats to natural language processing and machine learning.
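The queries below run against a table I call commententry1. For orientation, here is a hypothetical sketch of such a table covering only the columns used in this post, in the 2017-era MergeTree syntax (the real dump carries many more fields, and your schema may differ):

CREATE TABLE commententry1
(
    created_date Date,    -- comment date, used as the partitioning column
    author String,
    subreddit String,
    link_id String,       -- identifies the post a comment belongs to
    body String
    -- the real dump has many more fields (id, score, ups, and so on)
) ENGINE = MergeTree(created_date, (subreddit, created_date), 8192);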

Now let’s see what simple info we can collect from this dataset using ClickHouse and https://tabix.io/, a GUI tool for ClickHouse. In this first round, we’ll figure out some basic stats, like the number of comments per month, the number of authors per month and the number of subreddits. I also added the average number of comments left per post.

Queries to achieve this:

SELECT toYYYYMM(created_date) dt, count(*) comments
FROM commententry1 GROUP BY dt ORDER BY dt
;;
SELECT toYYYYMM(created_date) dt, count(DISTINCT author) authors
FROM commententry1 GROUP BY dt ORDER BY dt
;;
SELECT toYYYYMM(created_date) dt, count(DISTINCT subreddit) subreddits
FROM commententry1 GROUP BY dt ORDER BY dt
;;
SELECT toYYYYMM(created_date) dt, count(*)/count(DISTINCT link_id) comments_per_post
FROM commententry1 GROUP BY dt ORDER BY dt

And the graphical result:
(graphs: comments, authors, subreddits and comments-per-post by month)
It is impressive to see the constant growth in comments (up to 70 million per month by the end of 2016) and authors (up to 3.5 million for the same time period). There is something interesting happening with subreddits, which jump up and down. It’s also interesting to see that the average count of comments per post stays stable, with a slight decline to 13 comments per post by the end of 2016.

Now let’s check most popular subreddits:

SELECT subreddit,count(*) cnt FROM commententry1 GROUP BY subreddit ORDER BY cnt DESC limit 100
DRAW_TREEMAP
{
    path:'subreddit.cnt'
}

and using a treemap (available in Tabix.io):
(treemap: top 100 subreddits by comment count)

We can measure subreddits that get the biggest increase in comments in 2016 compared to 2015:

SELECT subreddit, cntnew - cntold diff
FROM
(
    SELECT subreddit, count(*) cntnew
    FROM commententry1
    WHERE toYear(created_date) = 2016
    GROUP BY subreddit
)
ALL INNER JOIN
(
    SELECT subreddit, count(*) cntold
    FROM commententry1
    WHERE toYear(created_date) = 2015
    GROUP BY subreddit
) USING (subreddit)
ORDER BY diff DESC
LIMIT 50
 DRAW_TREEMAP
{
    path:'subreddit.diff'
}

(treemap: subreddits with the biggest comment growth in 2016 vs. 2015)

Obviously, Reddit was affected by the United States Presidential Election 2016, but not just that. The gaming community saw an increase in Overwatch, PokemonGO and Dark Souls 3.

Now we can try to run our own DB-Ranking, but based only on Reddit comments. Here is how I can do this for MySQL, PostgreSQL and MongoDB:

SELECT toStartOfQuarter(created_date) Quarter,
sum(if(positionCaseInsensitive(body,'mysql')>0,1,0)) mysql,
sum(if(positionCaseInsensitive(body,'postgres')>0,1,0)) postgres,
sum(if(positionCaseInsensitive(body,'mongodb')>0,1,0)) mongodb
FROM commententry1
GROUP BY Quarter ORDER BY Quarter;

I would say the result is aligned with https://db-engines.com/en/ranking, where MySQL is the most popular of the three, followed by PostgreSQL and then MongoDB. There is an interesting spike for PostgreSQL in the second quarter of 2015, caused by a bot in “leagueoflegends” tournament threads. The bot was actively announcing in the comments that it is powered by PostgreSQL, like this: http://reddit.com/r/leagueoflegends/comments/37cvc3/c/crln2ef.

To highlight more ClickHouse features: along with standard SQL functions, it provides a variety of statistical functions (for example, quantile calculations). We can look at the distribution of the number of comments left by each author:

SELECT
    quantileExact(0.1)(cnt),
    quantileExact(0.2)(cnt),
    quantileExact(0.3)(cnt),
    quantileExact(0.4)(cnt),
    quantileExact(0.5)(cnt),
    quantileExact(0.6)(cnt),
    quantileExact(0.7)(cnt),
    quantileExact(0.8)(cnt),
    quantileExact(0.9)(cnt),
    quantileExact(0.99)(cnt)
FROM
(
    SELECT
        author,
        count(*) AS cnt
    FROM commententry1
    WHERE author != '[deleted]'
    GROUP BY author
)

The result is:

quantileExact(0.1)(cnt) - 1
quantileExact(0.2)(cnt) - 1
quantileExact(0.3)(cnt) - 1
quantileExact(0.4)(cnt) - 2
quantileExact(0.5)(cnt) - 4
quantileExact(0.6)(cnt) - 7
quantileExact(0.7)(cnt) - 16
quantileExact(0.8)(cnt) - 42
quantileExact(0.9)(cnt) - 160
quantileExact(0.99)(cnt) - 2271

This means that 30% of authors left only one comment, and 50% of authors left four comments or fewer.

In general, ClickHouse was a pleasure to use when running analytical queries. However, I should note that the missing support for WINDOW functions is a huge limitation. Even MySQL 8.0, which was recently released as an RC, provides support for WINDOW functions. I hope ClickHouse will implement them as well.
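To illustrate what is missing, here is the kind of query that WINDOW function support enables, written in MySQL 8.0 syntax (a hypothetical example against a monthly_counts summary table – not something ClickHouse could run at the time of writing):

SELECT
    dt,
    comments,
    SUM(comments) OVER (ORDER BY dt) AS running_total  -- cumulative comments across months
FROM monthly_counts
ORDER BY dt;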

Sep 20, 2017

sysbench Histograms: A Helpful Feature Often Overlooked

In this blog post, I will demonstrate how to run and use sysbench histograms.

One of the features of sysbench that I often see overlooked (and rarely used) is its ability to produce detailed query response time histograms, in addition to computing percentile numbers. Looking at histograms together with throughput or latency over time provides many additional insights into query performance.

Here is how you get detailed sysbench histograms and performance over time:

sysbench --rand-type=uniform --report-interval=1 --percentile=99 --time=300 --histogram --mysql-password=sbtest oltp_point_select --table_size=400000000 run

There are a few command line options to consider:

  • report-interval=1 prints out the current performance measurements every second, which helps you see whether performance is uniform, whether you have stalls, or otherwise high variance
  • percentile=99 computes the 99th percentile response time, rather than the 95th percentile (the default); I like looking at 99th percentile stats, as it is a better measure of performance
  • histogram=on produces a histogram at the end of the run (as shown below)
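Note that this run assumes the test table already exists. A prepare step along these lines would create and populate it first (a sketch – adjust connection options and table size to your environment):

sysbench --mysql-password=sbtest oltp_point_select --table_size=400000000 prepare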

The first thing to note about this histogram is that it is exponential. This means the width of the buckets changes with higher values. It starts with 0.001 ms (one microsecond) and gradually grows. This design is used so that sysbench can deal with workloads with requests that take small fractions of milliseconds, as well as accommodate requests that take many seconds (or minutes).

Next, we learn some very interesting things about typical request response time distribution for databases. You might think that this distribution would be close to some “academic” distribution, such as the normal distribution. In reality, what we often observe is something of a “camelback” distribution (not a real term) – and our “camel” can have more than two humps (especially for simple requests such as the single primary key lookup shown here).

Why do request response times tend to have this distribution? It is because requests can take multiple paths inside the database. For example, certain requests might get responses from the MySQL Query Cache (which will result in the first hump). A second hump might come from resolving lookups using the InnoDB Adaptive Hash Index. A third hump might come from finding all the data in memory through a regular lookup (rather than via the Adaptive Hash Index). Finally, another hump might coalesce around the time (or times) it takes to execute requests that require disk IO.

You also will likely see some long-tail data that highlights the fact that MySQL and Linux are not hard, real-time systems. As an example, this very simple run with a single thread (and thus no contention) has an outlier at around 18ms. Most of the requests are served within 0.2ms or less.

As you add contention, row-level locking, group commit and other issues, you are likely to see even more complicated diagrams – which can often show you something unexpected:

Latency histogram (values are in milliseconds)
      value  ------------- distribution ------------- count
      0.050 |                                         1
      0.051 |                                         2
      0.052 |                                         2
      0.053 |                                         54
      0.053 |                                         79
      0.054 |                                         164
      0.055 |                                         883
      0.056 |*                                        1963
      0.057 |*                                        2691
      0.059 |**                                       4047
      0.060 |****                                     9480
      0.061 |******                                   15234
      0.062 |********                                 20723
      0.063 |********                                 20708
      0.064 |**********                               26770
      0.065 |*************                            35928
      0.066 |*************                            34520
      0.068 |************                             32247
      0.069 |************                             31693
      0.070 |***************                          41682
      0.071 |**************                           37862
      0.073 |********                                 22691
      0.074 |******                                   15907
      0.075 |****                                     10509
      0.077 |***                                      7853
      0.078 |****                                     9880
      0.079 |****                                     10853
      0.081 |***                                      9243
      0.082 |***                                      9280
      0.084 |***                                      8947
      0.085 |***                                      7869
      0.087 |***                                      8129
      0.089 |***                                      9073
      0.090 |***                                      8364
      0.092 |***                                      6781
      0.093 |**                                       4672
      0.095 |*                                        3356
      0.097 |*                                        2512
      0.099 |*                                        2177
      0.100 |*                                        1784
      0.102 |*                                        1398
      0.104 |                                         1082
      0.106 |                                         810
      0.108 |                                         742
      0.110 |                                         511
      0.112 |                                         422
      0.114 |                                         330
      0.116 |                                         259
      0.118 |                                         203
      0.120 |                                         165
      0.122 |                                         126
      0.125 |                                         108
      0.127 |                                         87
      0.129 |                                         83
      0.132 |                                         55
      0.134 |                                         42
      0.136 |                                         45
      0.139 |                                         41
      0.141 |                                         149
      0.144 |                                         456
      0.147 |                                         848
      0.149 |*                                        2128
      0.152 |**                                       4586
      0.155 |***                                      7592
      0.158 |*****                                    13685
      0.160 |*********                                24958
      0.163 |*****************                        44558
      0.166 |*****************************            78332
      0.169 |*************************************    98616
      0.172 |**************************************** 107664
      0.176 |**************************************** 107154
      0.179 |****************************             75272
      0.182 |******************                       49645
      0.185 |****************                         42793
      0.189 |*****************                        44649
      0.192 |****************                         44329
      0.196 |******************                       48460
      0.199 |*****************                        44769
      0.203 |**********************                   58578
      0.206 |***********************                  61373
      0.210 |**********************                   58758
      0.214 |******************                       48012
      0.218 |*************                            34533
      0.222 |**************                           36517
      0.226 |*************                            34645
      0.230 |***********                              28694
      0.234 |*******                                  17560
      0.238 |*****                                    12920
      0.243 |****                                     10911
      0.247 |***                                      9208
      0.252 |****                                     10556
      0.256 |***                                      7561
      0.261 |**                                       5047
      0.266 |*                                        3757
      0.270 |*                                        3584
      0.275 |*                                        2951
      0.280 |*                                        2078
      0.285 |*                                        2161
      0.291 |*                                        1747
      0.296 |*                                        1954
      0.301 |*                                        2878
      0.307 |*                                        2810
      0.312 |*                                        1967
      0.318 |*                                        1619
      0.324 |*                                        1409
      0.330 |                                         1205
      0.336 |                                         1193
      0.342 |                                         1151
      0.348 |                                         989
      0.354 |                                         985
      0.361 |                                         799
      0.367 |                                         671
      0.374 |                                         566
      0.381 |                                         537
      0.388 |                                         351
      0.395 |                                         276
      0.402 |                                         214
      0.409 |                                         143
      0.417 |                                         80
      0.424 |                                         85
      0.432 |                                         54
      0.440 |                                         41
      0.448 |                                         29
      0.456 |                                         16
      0.464 |                                         15
      0.473 |                                         11
      0.481 |                                         4
      0.490 |                                         9
      0.499 |                                         4
      0.508 |                                         3
      0.517 |                                         4
      0.527 |                                         4
      0.536 |                                         2
      0.546 |                                         4
      0.556 |                                         4
      0.566 |                                         4
      0.587 |                                         1
      0.597 |                                         1
      0.608 |                                         5
      0.619 |                                         3
      0.630 |                                         2
      0.654 |                                         2
      0.665 |                                         5
      0.677 |                                         26
      0.690 |                                         298
      0.702 |                                         924
      0.715 |*                                        1493
      0.728 |                                         1027
      0.741 |                                         1112
      0.755 |                                         1127
      0.768 |                                         796
      0.782 |                                         574
      0.797 |                                         445
      0.811 |                                         415
      0.826 |                                         296
      0.841 |                                         245
      0.856 |                                         202
      0.872 |                                         210
      0.888 |                                         168
      0.904 |                                         217
      0.920 |                                         163
      0.937 |                                         157
      0.954 |                                         204
      0.971 |                                         155
      0.989 |                                         158
      1.007 |                                         137
      1.025 |                                         94
      1.044 |                                         79
      1.063 |                                         52
      1.082 |                                         36
      1.102 |                                         25
      1.122 |                                         25
      1.142 |                                         16
      1.163 |                                         8
      1.184 |                                         5
      1.205 |                                         7
      1.227 |                                         2
      1.250 |                                         4
      1.272 |                                         3
      1.295 |                                         3
      1.319 |                                         2
      1.343 |                                         2
      1.367 |                                         1
      1.417 |                                         2
      1.791 |                                         1
      1.996 |                                         2
      2.106 |                                         2
      2.184 |                                         1
      2.264 |                                         1
      2.347 |                                         2
      2.389 |                                         1
      2.433 |                                         1
      2.477 |                                         1
      2.568 |                                         2
      2.615 |                                         1
      2.710 |                                         1
      2.810 |                                         1
      2.861 |                                         1
      3.187 |                                         1
      3.488 |                                         1
      3.816 |                                         1
      4.028 |                                         1
      6.913 |                                         1
      7.565 |                                         1
      8.130 |                                         1
     17.954 |                                         1

I hope you give sysbench histograms a try, and see what you can discover!

Sep 19, 2017

Percona Live Europe Featured Talks: Automatic Database Management System Tuning Through Large-Scale Machine Learning with Dana Van Aken

Welcome to another post in our series of interview blogs for the upcoming Percona Live Europe 2017 in Dublin. This series highlights a number of talks that will be at the conference and gives a short preview of what attendees can expect to learn from the presenter.

This blog post is with Dana Van Aken, a Ph.D. student in Computer Science at Carnegie Mellon University. Her talk is titled Automatic Database Management System Tuning Through Large-Scale Machine Learning. DBMSs are difficult to manage because they have hundreds of configuration “knobs” that control factors such as the amount of memory to use for caches and how often to write data to storage. Organizations often hire experts to help with tuning activities, but experts are prohibitively expensive for many. In this talk, Dana will present OtterTune, a new tool that can automatically find good settings for a DBMS’s configuration knobs. In our conversation, we discussed how machine learning helps DBAs manage DBMSs:

Percona: How did you get into database technology? What do you love about it?

Dana: I got involved with research as an undergrad and ended up working on a systems project with a few Ph.D. students. It turned out to be a fantastic experience and is what convinced me to go for my Ph.D. I visited potential universities and chatted with many faculty members. I met with my current advisor at Carnegie Mellon University, Andy Pavlo, for a half hour and left his office excited about databases and the research problems he was interested in. Three years later, I’m even more excited about databases and the progress we’ve made in developing smarter auto-tuning techniques.

Percona: You’re presenting a session called “Automatic Database Management System Tuning Through Large-Scale Machine Learning”. How does automation make a DBA’s life easier in a DBMS production environment?

Dana: The role of the DBA is becoming more challenging due to the advent of new technologies and increasing scalability requirements of data-intensive applications. Many DBAs are constantly having to adjust their responsibilities to manage more database servers or support new platforms to meet an organization’s needs as they change over time. Automation is critical for reducing the DBA’s workload to a manageable size so that they can focus on higher-value tasks. Many organizations now automate at least some of the repetitive tasks that were once DBA responsibilities: several have adopted public/private cloud-based services whereas others have built their own automated solutions internally.

The problem is that the tasks that have now become the biggest time sinks for DBAs are much harder to automate. For example, DBMSs have dozens of configuration options. Tuning them is an essential but tedious task for DBAs, because it’s a trial and error approach even for experts. What makes this task even more time-consuming is that the best configuration for one DBMS may not be the best for another. It depends on the application’s workload and the server’s hardware. Given this, successfully automating DBMS tuning is a big win for DBAs since it would streamline common configuration tasks and give DBAs more time to deal with other issues. This is why we’re working hard to develop smarter tuning techniques that are mature and practical enough to be used in a production environment.

Percona: What do you want attendees to take away from your session? Why should they attend?

Dana: I’ll be presenting OtterTune, a new tool that we’re developing at Carnegie Mellon University that can automatically find good settings for a DBMS’s configuration knobs. I’ll first discuss the practical aspects and limitations of the tool. Then I’ll move on to our machine learning (ML) pipeline. All of the ML algorithms that we use are popular techniques that have both practical and theoretical work backing their effectiveness. I’ll discuss each algorithm in our pipeline using concrete examples from MySQL to give better intuition about what we are doing. I will also go over the outputs from each stage (e.g., the configuration parameters that the algorithm finds to be the most impactful on performance). I will then talk about lessons I learned along the way, and finally wrap up with some exciting performance results that show how OtterTune’s configurations compare to those created by top-notch DBAs!

My talk will be accessible to a general audience. You do not need a machine learning background to understand our research.

Percona: What are you most looking forward to at Percona Live Europe 2017?

Dana: This is my first Percona Live conference, and I’m excited about attending. I’m looking forward to talking with other developers and DBAs about the projects they’re working on and the challenges they’re facing and getting feedback on OtterTune and our ideas.

Want to find out more about Dana and machine learning for DBMS management? Register for Percona Live Europe 2017, and see her talk Automatic Database Management System Tuning Through Large-Scale Machine Learning. Register now to get the best price! Use discount code SeeMeSpeakPLE17 to get 10% off your registration.

Percona Live Open Source Database Conference Europe 2017 in Dublin is the premier European open source event for the data performance ecosystem. It is the place to be for the open source community as well as businesses that thrive in the MySQL, MariaDB, MongoDB, time series database, cloud, big data and Internet of Things (IoT) marketplaces. Attendees include DBAs, sysadmins, developers, architects, CTOs, CEOs, and vendors from around the world.

The Percona Live Open Source Database Conference Europe will be September 25-27, 2017 at the Radisson Blu Royal Hotel, Dublin.

Sep 18, 2017

Webinar Tuesday, September 19, 2017: A Percona Support Engineer Walkthrough for pt-stalk

Join Percona Principal Support Engineer Marcos Albe as he presents A Percona Support Engineer Walkthrough for pt-stalk on Tuesday, September 19, 2017, at 10:00 am PDT / 1:00 pm EDT (UTC-7).

As a support engineer, I get dozens of pt-stalk captures from our customers containing samples of iostat, vmstat, top, ps, SHOW ENGINE INNODB STATUS, SHOW PROCESSLIST and a multitude of other diagnostic outputs.

These are the tools of the trade for performance and troubleshooting, and we must learn to digest these outputs in an effective and systematic way. This allows us to provide high-quality service to a large volume of customers.

In this presentation, I will share the knowledge we’ve gained working with this data, and how to apply it to your database environment. We will learn how to set up pt-stalk, capture data, write plugins to trigger collection and capture custom data, look at our systematic approach, and learn what data to read first and how to unwind the tangled threads of pt-stalk.

By the end of this presentation, you will have expert knowledge of how to capture diagnostic metrics at the right time, and a generic approach to digesting the captured data. This allows you to diagnose and solve many of the problems common to MySQL setups.

Register for the webinar here.

Marcos Albe, Principal Technical Services Engineer

Marcos Albe has been doing web development for over ten years, providing solutions for various media and technology companies of different sizes. He is now a member of the Percona Support Team. Born and raised in the city of Montevideo, Uruguay, he became passionate about computers at the age of 11, when he got a 25MHz i386-SX. Ten years later, he became one of the pioneers of telecommuting in Uruguay while leading the IT efforts for the second-largest newspaper in the country.

Sep 18, 2017

Percona Live Europe Featured Talks: Debugging with Logs (and Other Events) Featuring Charity Majors

Welcome to another post in our series of interview blogs for the upcoming Percona Live Europe 2017 in Dublin. This series highlights a number of talks that will be at the conference and gives a short preview of what attendees can expect to learn from the presenter.

This blog post is with Charity Majors, CEO/Cofounder of Honeycomb. Her talk is Debugging with Logs (and Other Events). Her presentation covers some of the lessons every engineer should know (and often learns the hard way): why good logging solutions are so expensive, why treating your logs as strings can be costly and dangerous, how logs can impact code efficiency and add/fix/change race conditions in your code. In our conversation, we discussed debugging your database environment:

Percona: How did you get into database technology? What do you love about it?

Charity: Oh dear, I don’t. I hate databases. Data is the scariest, hardest part of computing. The stakes are highest and the mistakes the most permanent. Data is where you can kill any company with the smallest number of errors. That’s why I always end up in charge of the databases – I just don’t trust anybody else with the power. (Also, I’m an adrenaline junkie who gets off on high stakes. I could gamble or I could do databases, and I know too much math to gamble.) Literally, nobody loves databases. If they tell you anything different, they are either lying to you or they’re nowhere near production.

I got into databases from operations. I’ve been on call since I was 17, over half my life. I am really stubborn, have an inflated sense of my own importance and like solving problems, so operations was a natural fit. I started diving on the database grenades when I worked at Linden Lab and MySQL was repeatedly killing us. It seemed impossible, so I volunteered to own it. I’ve been doing that ever since.

Percona: You’re presenting a session called “Debugging with Logs (and Other Events)”. What is the importance of logs for databases and DBAs?

Charity: I mean, it’s not really about logs. I might change my title. It’s about understanding WTF is going on. Logs are one way of extracting events in a format that humans can understand. My startup is all about “what’s happening right now; what’s just happened?” Which is something we are pretty terrible at as an industry. Databases are just another big complex piece of software, and the only reason we have DBAs is because the tooling has historically been so bad that you had to specialize in this piece of software as your entire career.

The tooling is getting better. With the right tools, you don’t have to skulk around like a detective trying to model and predict what might be happening, as though it were a living organism. You can simply sum up the lock time being held, and show what actor is holding it. It’s extremely important that we move away from random samples and pre-aggregated metrics, toward dynamic sampling and rich events. That’s the only way you will ever truly understand what is happening under the hood in your database. That’s part of what my company was built to do.

Percona: How can logging be used in debugging to track down database issues? Can logging affect performance?

Charity: Of course logging can affect performance. For any high traffic website, you should really capture your logs (events) by streaming tcpdump over the wire. Most people know how to do only one thing with db logs: look for slow queries. But those slow queries can be actively misleading! A classic example is when somebody says “this query is getting slow” and they look at source control and the query hasn’t been modified in years. The query is getting slower either because the data volume is growing (or data shape is changing), or because reads can yield but writes can’t, and the write volume has grown to the point where reads are spending all their time waiting on the lock.

Yep, most db logs are terrible.

Percona: What do you want attendees to take away from your session? Why should they attend?

Charity: Lots of cynicism. Everything in computers is terrible, but especially so with data. Everything is a tradeoff, all you can hope to do is be aware of the tradeoffs you are making, and what costs you are incurring whenever you solve a given problem. Also, I hope people come away trembling at the thought of adding any more strings of logs to production. Structure your logs, people! Grep is not the answer to every single question! It’s 2017, nearly 2018, and unstructured logs do not belong anywhere near production.

Percona: What are you most looking forward to at Percona Live Europe 2017?

Charity: My coauthor Laine and I are going to be signing copies of our book Database Reliability Engineering and giving a short keynote on the changes in our field. I love the db community, miss seeing Mark Callaghan and all my friends from the MongoDB and MySQL world, and cannot wait to laugh at them while they cry into their whiskey about locks or concurrency or other similar nonsense. Yay!

Want to find out more about Charity and database debugging? Register for Percona Live Europe 2017, and see her talk Debugging with Logs (and Other Events). Register now to get the best price! Use discount code SeeMeSpeakPLE17 to get 10% off your registration.

Percona Live Open Source Database Conference Europe 2017 in Dublin is the premier European open source event for the data performance ecosystem. It is the place to be for the open source community as well as businesses that thrive in the MySQL, MariaDB, MongoDB, time series database, cloud, big data and Internet of Things (IoT) marketplaces. Attendees include DBAs, sysadmins, developers, architects, CTOs, CEOs, and vendors from around the world.

The Percona Live Open Source Database Conference Europe will be September 25-27, 2017 at the Radisson Blu Royal Hotel, Dublin.

Sep 14, 2017

Percona Live Europe Featured Talks: Monitoring Open Source Databases with Icinga with Bernd Erk

Welcome to another post in our series of interview blogs for the upcoming Percona Live Europe 2017 in Dublin. This series highlights a number of talks that will be at the conference and gives a short preview of what attendees can expect to learn from the presenter.

This blog post is with Bernd Erk, CEO of Icinga. His talk is titled Monitoring Open Source Databases with Icinga. Icinga is a popular open source successor of Nagios that checks hosts and services, and notifies you of their statuses. But you also need metrics for performance and growth to deal with your scaling needs. Adding conditional behaviors and configuration in Icinga is not just intuitive, but also intelligently adaptive at runtime. In our conversation, we discussed how to intelligently monitor open source databases:

Percona: How did you get into database technology? What do you love about it?

Bernd: I started a position as a junior systems engineer in a large German mail order company. They were totally committed to Oracle databases and the tool stack around it. As Linux gained more and more attention, we became aware of MySQL very early and were fascinated by the simplicity of installation and administration. There were of course so many things Oracle had in those days that MySQL didn’t have, but most of our uses also didn’t require those extra (and of course expensive) features.

Percona: You’re presenting a session called “Monitoring Open Source Databases with Icinga”. Why is monitoring databases important, and what sort of things need to be monitored?

Bernd: Usually databases are a very important part of an IT infrastructure, and need to be online 24/7. I also had the personal experience of database downtime putting a lot of pressure on both the organization in general and the team in charge. Since most open source databases provide very good interfaces, it is not so hard to figure out if they are up and running. Like in many monitoring arenas, knowing what to monitor is the important information.

In addition to the basic local and remote availability checks, monitoring database replication is very important. We often see environments where the standby slave is outdated by years, or not able to keep up with the incoming load. From there you can go into database and application metrics to learn more about performance and IO behavior.

Percona: Why are you using Icinga specifically? What value does it provide above other monitoring solutions?

Bernd: I’ve been involved with Icinga from the beginning, so it is my number one choice in open source monitoring. In my opinion, the great advance of Icinga 2 is that it keeps the simplicity of legacy systems like Nagios (or Icinga 1) while also supporting complex environments (such as application-based clustering). There is also the live configuration of the Icinga 2 monitoring core through our REST API. With all the supported tools for metrics, logs and management around it, for me Icinga 2 is the best match for open source monitoring.

Percona: What do you want attendees to take away from your session? Why should they attend?

Bernd: Attendees will get a short overview of Icinga 2, and why it is different from Nagios (Icinga 1). I will also guide them through practical monitoring examples and show implemented checks in a live demo. After my talk, they should be able to adapt and extend on-premise or cloud monitoring with Icinga 2 using the default open source plugins.

Percona: What are you most looking forward to at Percona Live Europe 2017?

Bernd: Getting together with the great database community in all aspects, and going to Dublin (to be honest). I have never been there, and so it is my first time.

Want to find out more about Bernd and database monitoring? Register for Percona Live Europe 2017, and see his talk Monitoring Open Source Databases with Icinga. Register now to get the best price! Use discount code SeeMeSpeakPLE17 to get 10% off your registration.

Percona Live Open Source Database Conference Europe 2017 in Dublin is the premier European open source event for the data performance ecosystem. It is the place to be for the open source community as well as businesses that thrive in the MySQL, MariaDB, MongoDB, time series database, cloud, big data and Internet of Things (IoT) marketplaces. Attendees include DBAs, sysadmins, developers, architects, CTOs, CEOs, and vendors from around the world.

The Percona Live Open Source Database Conference Europe will be September 25-27, 2017 at the Radisson Blu Royal Hotel, Dublin.

Sep 13, 2017

Percona Live Europe Featured Talks: Visualize Your Data with Grafana Featuring Daniel Lee

Welcome to another post in our series of interview blogs for the upcoming Percona Live Europe 2017 in Dublin. This series highlights a number of talks that will be at the conference and gives a short preview of what attendees can expect to learn from the presenter.

This blog post is with Daniel Lee, a software developer at Grafana. His tutorial is Visualize Your Data With Grafana. This presentation teaches you how to create dashboards and graphs in Grafana and how to use them to gain insight into the behavior of your systems. In our conversation, we discussed how data visualization could benefit your database environment:

Percona: How did you get into database technology? What do you love about it?

Daniel: I’m a developer and my first job was working on a transport logistics system, which was mostly composed of Stored Procedures in SQL Server 2000. Today, I would not build a system with all the logic in Stored Procedures – but that database knowledge is the foundation that I built everything else on. Databases and their data flows will always be the core of most interesting systems. More recently, I have switched from Windows to working with MariaDB on Linux. Grafana Labs uses Percona Server for MySQL for most of our internal applications (worldPing and Hosted Grafana). Working with Grafana also means working with time series databases like Graphite, which is also very interesting.

I enjoy working with data as it is one of the ways to learn how users use a system. Design decisions are theories until you have data to either back them up or disprove them.

Percona: You’re presenting a session called “Visualize Your Data With Grafana”. How does monitoring make a DBA’s life easier, and how do graphs make this information easier for DBAs to apply?

Daniel: Good monitoring provides top-level metrics (throughput, number of errors, performance) for alerting, and other lower-level metrics to allow you to dig into the details and quickly diagnose and resolve an outage. Monitoring also helps you find any constraints (for example, finding bottlenecks for query performance: CPU, row locks, disk, buffer pool size, etc.). Performance monitoring allows you to see trends and lets you know when it is time to scale out or purchase more hardware.

Monitoring can also be used to communicate with business people. It is a way of translating lots of different system metrics into a measurable user experience. Visualizing your data with graphs is a very good way to communicate that information, both within your team and with your business stakeholders. Building dashboards with the metrics that are important to you rather than just the standard checklists (CPU, disk, network etc.) allows you to measure the user experience for your application and to see long-term trends.

Percona: Why Grafana? What does Grafana do better than other monitoring solutions?

Daniel: Grafana is the de facto standard in open source for visualizing time series data. It comes with tons of different ways to visualize your data (graphs, heat maps, gauges). Each data source comes with its own custom query editor that simplifies writing complex queries, and it is easy to create dynamic dashboards that look great on a TV.

Being open source, it can be connected to any data source/database, which makes it easy to unify different data sources in the same dashboard (for example, Prometheus or Graphite data combined with MySQL data). This also means your data is not subject to vendor lock-in like it is in other solutions. Grafana has a large and very active community that creates plugins and dashboards that extend Grafana into lots of niches, as well as providing ways to quickly get started with whatever you want to monitor.

Percona: What do you want attendees to take away from your session? Why should they attend?

Daniel: I want them to know that you can make the invisible visible, and with that knowledge start to make better decisions based on data. I hope that my session helps someone take the first step toward being more proactive in their monitoring by showing them what can be done with Grafana and other tools in the monitoring space.

In my session, I will give an overview of monitoring and metrics, followed by an intro to Grafana. I plan to show how to monitor MySQL and finish off with a quick look at the new MySQL data source for Grafana.

Percona: What are you most looking forward to at Percona Live Europe 2017?

Daniel: Firstly, it is always great to have an excuse to visit Ireland (I’m an Irishman living in Sweden). I’m also looking forward to getting feedback from the community on Grafana’s new MySQL data source plugin, as well as just talking to people and hearing about their experiences with database monitoring.

Want to find out more about Daniel and data visualization? Register for Percona Live Europe 2017, and see his tutorial Visualize Your Data With Grafana. Register now to get the best price! Use discount code SeeMeSpeakPLE17 to get 10% off your registration.

Percona Live Open Source Database Conference Europe 2017 in Dublin is the premier European open source event for the data performance ecosystem. It is the place to be for the open source community as well as businesses that thrive in the MySQL, MariaDB, MongoDB, time series database, cloud, big data and Internet of Things (IoT) marketplaces. Attendees include DBAs, sysadmins, developers, architects, CTOs, CEOs, and vendors from around the world.

The Percona Live Open Source Database Conference Europe will be September 25-27, 2017 at the Radisson Blu Royal Hotel, Dublin.
