May 23, 2018
--

Percona Monitoring and Management 1.11.0 Is Now Available

Percona Monitoring and Management (PMM) is a free and open-source platform for managing and monitoring MySQL® and MongoDB® performance. You can run PMM in your own environment for maximum security and reliability. It provides thorough time-based analysis for MySQL® and MongoDB® servers to ensure that your data works as efficiently as possible.

In PMM Release 1.11.0, we deliver the following changes:

  • Configurable MySQL Slow Log Rotation – enable or disable rotation, and specify how many files to keep on disk
  • Predictable Graphs – we’ve updated our formulas to use aggregation functions over time for more reliable graphs
  • MySQL Exporter Parsing of my.cnf – we’ve improved how we read my.cnf
  • Annotation improvements – passing multiple strings now results in a single annotation being written

The release includes one new feature & improvement, and eleven bug fixes, as listed below.

MySQL Slow Log Rotation Improvements

We spent some time this release going over how we handle MySQL’s Slow Log rotation logic. Query Analytics requires that slow logging be enabled (either to file, or to PERFORMANCE_SCHEMA), and we found that users of Percona Server for MySQL overwhelmingly choose logging to a file in order to take advantage of log_slow_verbosity, which provides enhanced InnoDB usage information. The challenge with MySQL’s Slow Log, however, is that it is very verbose, so the number one concern is disk space. PMM strives to do no harm, and so MySQL Slow Log rotation was a natural fit; but until this release we were very strict and did not expose any configuration of these parameters.

Percona Server for MySQL users have long known about Slow Query Log Rotation and Expiration, but until now had no way of using the built-in Percona Server for MySQL feature while ensuring that PMM wasn’t missing any queries from the Slow Log during file rotation. Or perhaps your use case is that you want to do Slow Log rotation using logrotate or some other facility. With Release 1.11, this is now possible!

We’ve made two significant changes:

  1. You can now specify the number of Slow Log files to remain on disk, and let PMM handle deleting the oldest files first. The default remains unchanged – 1 Slow Log remains on disk.
  2. Slow Log rotation can now be disabled, for example if you want to manage rotation using logrotate or Percona Server for MySQL’s Slow Query Log Rotation and Expiration feature (see the sketch after this list). The default remains unchanged – Slow Log rotation is ON.
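For reference, Percona Server for MySQL’s built-in rotation is controlled by the max_slowlog_size and max_slowlog_files variables. A minimal sketch, with example values only (not recommendations):

-- Example only: rotate the Slow Log at 1GB and keep 30 files,
-- using Percona Server for MySQL's built-in rotation feature
SET GLOBAL max_slowlog_size = 1073741824;
SET GLOBAL max_slowlog_files = 30;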

Number of Slow Logs Retained on Disk

Slow Logs Rotation – On or Off

You specify each of these two new controls when setting up the MySQL service. The following example specifies that 5 Slow Log files should remain on disk:

pmm-admin add mysql ... --retain-slow-logs=5

While the following example specifies that Slow Log rotation is to be disabled (flag value of false), with the assumption that you will perform your own Slow Log Rotation:

pmm-admin add mysql ... --slow-log-rotation=false

We don’t currently support modifying option parameters for an existing service definition. This means you must remove, then re-add the service with the new options included, as sketched below.
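A minimal sketch of that workflow (the instance name and trailing options here are illustrative):

# Hypothetical example: re-create the MySQL service definition with new options
pmm-admin remove mysql mysql-prod
pmm-admin add mysql ... --retain-slow-logs=5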

We’re including a logrotate script in this post to get you started. It is designed to keep 30 copies of Slow Logs at 1GB each. Note that you’ll need to update the Slow Log location, and ensure that a MySQL user account with the SUPER and RELOAD privileges is used, for this script to execute successfully.

Example logrotate
/var/mysql/mysql-slow.log {
    nocompress
    create 660 mysql mysql
    size 1G
    dateext
    missingok
    notifempty
    sharedscripts
    postrotate
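       # Temporarily raise long_query_time so no queries are logged during
       # rotation, flush to reopen the Slow Log file, then restore the threshold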
       /bin/mysql -e 'SELECT @@global.long_query_time INTO @LQT_SAVE; SET GLOBAL long_query_time=2000; SELECT SLEEP(2); FLUSH SLOW LOGS; SELECT SLEEP(2); SET GLOBAL long_query_time=@LQT_SAVE;'
    endscript
    rotate 30
}

Predictable Graphs

We’ve updated the logic on four dashboards to better handle predictability, and also to allow zooming in to look at shorter time ranges. For example, refreshing PXC/Galera graphs prior to 1.11 led to graphs spiking at different points during the metric series. We’ve reviewed each of these graphs and their corresponding queries and added <aggregation>_over_time() functions so that graphs display a consistent view of the metric series (see the sketch after the list below). This improves your ability to drill in on the dashboards: no matter how short your time range, you will still observe the same spikes and troughs in your metric series. The four dashboards affected by this improvement are:

  • Home Dashboard
  • PXC/Galera Graphs Dashboard
  • MySQL Overview Dashboard
  • MySQL InnoDB Metrics Dashboard
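As a rough illustration of this kind of change (the metric name and interval below are illustrative PromQL, not the exact expressions the PMM dashboards ship):

# Before: a raw gauge, whose samples can land on different scrape points
# as you zoom or refresh
mysql_global_status_threads_running

# After: aggregate over the graph interval, so the same spikes and troughs
# appear no matter where the evaluation points fall
max_over_time(mysql_global_status_threads_running[$interval])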

MySQL Exporter parsing of my.cnf

In earlier releases, the MySQL Exporter expected only key=value type flags. It would ignore options without values (e.g., disable-auto-rehash), and could sometimes read the wrong section of the my.cnf file. We’ve updated the parsing engine to be more MySQL-compatible.

Annotation improvements

Annotations permit the display of an event on all dashboards in PMM. Users reported that passing more than one string to pmm-admin annotate would generate an error, so we updated the parsing logic so that all strings passed during annotation creation generate a single annotation event. Previously you needed to enclose your strings in quotes so that they would be parsed as a single string.
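For example, with 1.11 both of these forms now produce a single annotation (previously only the quoted form worked):

pmm-admin annotate Application deployed to production
pmm-admin annotate "Application deployed to production"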

Issues in this release

New Features & Improvements

  • PMM-2432 – Configurable MySQL Slow Log File Rotation

Bug fixes

  • PMM-1187 – Graphs breaks at tight resolution 
  • PMM-2362 – Explain is a part of query 
  • PMM-2399 – RPM for pmm-server is missing some files 
  • PMM-2407 – Menu items are not visible on PMM QAN dashboard 
  • PMM-2469 – Parsing of a valid my.cnf can break the mysqld_exporter 
  • PMM-2479 – PXC/Galera Cluster Overview dashboard: typo in metric names 
  • PMM-2484 – PXC/Galera Graphs display unpredictable results each time they are refreshed 
  • PMM-2503 – Wrong InnoDB Adaptive Hash Index Statistics 
  • PMM-2513 – QAN-agent always changes max_slowlog_size to 0 
  • PMM-2514 – pmm-admin annotate help – fix typos
  • PMM-2515 – pmm-admin annotate – more than 1 annotation 

How to get PMM

PMM is available for installation using three methods:

Help us improve our software quality by reporting any bugs you encounter using our bug tracking system.


Apr 20, 2018
--

The Final Countdown: Are You Ready for Percona Live 2018?

It’s hard to believe Percona Live 2018 starts on Monday! We’re looking forward to seeing everyone in Santa Clara next week! Here are some quick highlights to remember:

  • In addition to all the amazing sessions and keynotes we’ve announced, we’ll be hosting the MySQL Community Awards and the Lightning Talks on Monday during the Opening Reception.
  • We’ve also got a great lineup of demos in the exhibit hall all day Tuesday and Wednesday – be sure to stop by and learn more about open source database products and tools.
  • On Monday, we have a special China Track now available from Alibaba Cloud, PingCAP and Shannon Systems. We’ve just put a $20.00 ticket on sale for that track, and if you have already purchased any of our other tickets, you are also welcome to attend those four sessions.
  • Don’t forget to make your reservation at the Community Dinner. It’s a great opportunity to socialize with everyone and Pythian is always a wonderful host!

Thanks to everyone who is sponsoring, presenting and attending! The community is who makes this event successful and so much fun to be a part of!


Apr 20, 2018
--

Percona Monitoring and Management (PMM) 1.10.0 Is Now Available

Percona Monitoring and Management (PMM) is a free and open-source platform for managing and monitoring MySQL® and MongoDB® performance. You can run PMM in your own environment for maximum security and reliability. It provides thorough time-based analysis for MySQL® and MongoDB® servers to ensure that your data works as efficiently as possible.

We focused mainly on two features in 1.10.0, but there are also several notable improvements worth highlighting:

  • Annotations – Record and display Application Events as Annotations using pmm-admin annotate
  • Grafana 5.0 – Improved visualization effects
  • Switching between Dashboards – Restored functionality to preserve host when switching dashboards
  • New Percona XtraDB Cluster Overview graphs – Added Galera Replication Latency graphs on Percona XtraDB Cluster Overview dashboard with consistent colors

The release includes four new features & improvements, and eight bug fixes.

Annotations

Application events are one of the contributors to changes in database performance characteristics, and in this release PMM now supports receiving events and displaying them as Annotations using the new command pmm-admin annotate. A recent Percona survey reveals that Database and DevOps Engineers highly value visibility into the Application layer.  By displaying Application Events on top of your PMM graphs, Engineers can now correlate Application Events (common cases: Application Deploys, Outages, and Upgrades) against Database and System level metric changes.

Usage

For example, you have just completed an Application deployment to version 1.2, which is relevant to UI only, so you want to set tags for the version and interface impacted:

pmm-admin annotate "Application deploy v1.2" --tags "UI, v1.2"

Using the optional --tags allows you to filter which Annotations are displayed on the dashboard via a toggle option.  Read more about Annotations utilization in the Documentation.

Grafana 5.0

We’re extremely pleased to see Grafana ship 5.0, and we were fortunate enough to be at Grafanacon, including Percona’s very own Dimitri Vanoverbeke (Dim0), who presented What we Learned Integrating Grafana and Prometheus!

Grafana 5.0 includes a number of dramatic improvements, and in future Percona Monitoring and Management releases we plan to extend our usage of each of these features. The one we like best is the virtually unlimited way you can size and shape graphs: no longer are you bound by panel constraints that keep all objects at the same fixed height! This improvement indirectly addresses a visualization error in PMM Server where some graphs would appear to be on two lines and ended up wasting screen space.

Switching between Dashboards

PMM now allows you to navigate between dashboards while maintaining the same host under observation, so that for example you can start on MySQL Overview looking at host serverA, switch to MySQL InnoDB Advanced dashboard and continue looking at serverA, thus saving you a few clicks in the interface.

New Percona XtraDB Cluster Galera Replication Latency Graphs

We have added new Percona XtraDB Cluster Replication Latency graphs on our Percona XtraDB Cluster Galera Cluster Overview dashboard so that you can compare latency across all members in a cluster in one view.

Issues in this release

New Features & Improvements

  • PMM-2330 – Application Annotations DOC Update
  • PMM-2332 – Grafana 5 DOC Update
  • PMM-2293 – Add Galera Replication Latency Graph to Dashboard PXC/Galera Cluster Overview
  • PMM-2295 – Improve color selection on Dashboard PXC/Galera Cluster Overview

Bugs fixed

  • PMM-2311 – Fix misalignment in Query Analytics Metrics table
  • PMM-2341 – Typo in text on password page of OVF
  • PMM-2359 – Trim leading and trailing whitespaces for all fields on AWS/OVF Installation wizard
  • PMM-2360 – Include a “What’s new?” link for Update widget
  • PMM-2346 – Arithmetic on InnoDB AHI Graphs are invalid
  • PMM-2364 – QPS are wrong in QAN
  • PMM-2388 – Query Analytics does not render fingerprint section in some cases
  • PMM-2371 – Pass host when switching between Dashboards

How to get PMM

PMM is available for installation using three methods:

Help us improve our software quality by reporting any Percona Monitoring and Management bugs you encounter using our bug tracking system.


Apr 19, 2018
--

Sysbench-tpcc Supports PostgreSQL (No, Really This Time)

This time, we really mean it when we say sysbench-tpcc supports PostgreSQL.

When I initially announced sysbench-tpcc, I mentioned it potentially could run against PostgreSQL, but it was more like wishful thinking than reality. The reality was that even though both databases speak SQL, the difference in dialects was too big and the queries written for MySQL could not run without modification on PostgreSQL.

Well, we introduced needed changes, and now you can use sysbench-tpcc with PostgreSQL. Just try the latest commit to https://github.com/Percona-Lab/sysbench-tpcc.

If you’re interested, here is a quick overview of what changes we had to make:

  1. It appears that PostgreSQL does not support the tinyint and datetime data types. We had to use smallint and timestamp fields instead, even though using smallint makes the database size bigger.

  2. PostgreSQL does not have a simple equivalent for MySQL’s SHOW TABLES. The best replacement we found is:

    select * from pg_catalog.pg_tables where schemaname != 'information_schema' and schemaname != 'pg_catalog'

  3. PostgreSQL does not have a way to disable Foreign Key checks like MySQL’s SET FOREIGN_KEY_CHECKS=0. With PostgreSQL, we needed to create and load tables in a very specific order to avoid Foreign Key violations.

  4. PostgreSQL requires index names to be unique across the whole database, while MySQL requires them to be unique only per table. So instead of using:

    CREATE INDEX idx_customer ON customer1 (c_w_id,c_d_id,c_last,c_first)
    CREATE INDEX idx_customer ON customer2 (c_w_id,c_d_id,c_last,c_first)

    We need to use:

    CREATE INDEX idx_customer1 ON customer1 (c_w_id,c_d_id,c_last,c_first)
    CREATE INDEX idx_customer2 ON customer2 (c_w_id,c_d_id,c_last,c_first)

  5. PostgreSQL does not have a STRAIGHT_JOIN hint, so we had to remove it from queries. It is worth mentioning that we use STRAIGHT_JOIN mostly as a hack to force MySQL to use a correct execution plan for one of the queries.

  6. PostgreSQL is very strict about GROUP BY queries: all fields that are not in the GROUP BY clause must use an aggregation function. So PostgreSQL complained about queries like:

    SELECT d_w_id,sum(d_ytd)-w_ytd diff FROM district,warehouse WHERE d_w_id=w_id AND w_id=1 GROUP BY d_w_id

    even though we know that only a single value of w_ytd is possible. We had to rewrite this query as:

    SELECT d_w_id,SUM(d_ytd)-MAX(w_ytd) diff FROM district,warehouse WHERE d_w_id=w_id AND w_id=1 GROUP BY d_w_id

So you can see there is some work involved when migrating even a simple application from MySQL to PostgreSQL.

Hopefully, now that sysbench-tpcc supports PostgreSQL, it will be a useful tool for evaluating PostgreSQL performance. If you find that we do not execute some transaction optimally, please let us know!


Apr 18, 2018
--

Why Analyze Raw MySQL Query Logs?

In this blog post, I’ll examine when looking at raw MySQL query logs can be more useful than working with tools that only have summary data.

In my previous blog post, I wrote about analyzing MySQL Slow Query Logs with ClickHouse and ClickTail. One of the follow-up questions I got was: when do you want to do that, compared to just using tools like Percona Monitoring and Management or VividCortex, which provide a beautiful interface for detailed analysis (rather than a spartan SQL interface)?

MySQL Logs

A lot of folks are confused about what query logs MySQL has, and what you can use them for. First, MySQL has a “General Query Log”. As the name implies, this is a general-purpose query log. You would think this is the first log you should use, but it is, in fact, pretty useless:

2018-03-31T15:38:44.521650Z      2356 Query SELECT c FROM sbtest1 WHERE id=164802
2018-03-31T15:38:44.521790Z      2356 Query SELECT c FROM sbtest1 WHERE id BETWEEN 95241 AND 95340
2018-03-31T15:38:44.522168Z      2356 Query SELECT SUM(k) FROM sbtest1 WHERE id BETWEEN 1 AND 100
2018-03-31T15:38:44.522500Z      2356 Query SELECT c FROM sbtest1 WHERE id BETWEEN 304556 AND 304655 ORDER BY c
2018-03-31T15:38:44.522941Z      2356 Query SELECT DISTINCT c FROM sbtest1 WHERE id BETWEEN 924 AND 1023 ORDER BY c
2018-03-31T15:38:44.523525Z      2356 Query UPDATE sbtest1 SET k=k+1 WHERE id=514

As you can see, it only has very limited information about queries: no query execution time or which user is running the query. This type of log is helpful if you want to see very clean, basic information on what queries your application is really running. It can also help debug MySQL crashes because, unlike other log formats, the query is written to this log file before MySQL attempts to execute the query.
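If you do want to capture the General Query Log briefly (for example, to see exactly which statements an application sends), it can be toggled at runtime. A quick sketch:

-- Send the general log to a file and enable it temporarily
SET GLOBAL log_output = 'FILE';
SET GLOBAL general_log = 'ON';
-- ... reproduce the behavior you want to capture, then:
SET GLOBAL general_log = 'OFF';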

The MySQL Slow Log is, in my opinion, much more useful (especially with Percona Server Slow Query Log Extensions). Again, as the name implies, you would think it is used only for slow queries (and by default, it is). However, you can set long_query_time to 0 (with a few other options) to get all queries here, with lots of rich information about query execution:

# Time: 2018-03-31T15:48:55.795145Z
# User@Host: sbtest[sbtest] @ localhost []  Id: 2332
# Schema: sbtest  Last_errno: 0 Killed: 0
# Query_time: 0.000143  Lock_time: 0.000047 Rows_sent: 1  Rows_examined: 1 Rows_affected: 0
# Bytes_sent: 188  Tmp_tables: 0 Tmp_disk_tables: 0  Tmp_table_sizes: 0
# QC_Hit: No  Full_scan: No Full_join: No  Tmp_table: No Tmp_table_on_disk: No
# Filesort: No  Filesort_on_disk: No  Merge_passes: 0
#   InnoDB_IO_r_ops: 0  InnoDB_IO_r_bytes: 0  InnoDB_IO_r_wait: 0.000000
#   InnoDB_rec_lock_wait: 0.000000  InnoDB_queue_wait: 0.000000
#   InnoDB_pages_distinct: 0
# Log_slow_rate_type: query  Log_slow_rate_limit: 10
SET timestamp=1522511335;
SELECT c FROM sbtest1 WHERE id=2428336;

Finally, there is the MySQL Audit Log, which is part of the MySQL Enterprise offering, and the format-compatible Percona Server for MySQL Audit Log Plugin. This log is designed for auditing access to the server, and the details it records match that purpose. Unlike the first two log formats, it is designed first and foremost to be machine-readable, and it supports JSON, XML and CSV output formats:

{"audit_record":{"name":"Query","record":"743017006_2018-03-31T01:03:12","timestamp":"2018-03-31T15:53:42 UTC","command_class":"select","connection_id":"2394","status":0,"sqltext":"SELECT SUM(k) FROM sbtest1 WHERE id BETWEEN 3 AND 102","user":"sbtest[sbtest] @ localhost []","host":"localhost","os_user":"","ip":"","db":"sbtest"}}
{"audit_record":{"name":"Query","record":"743017007_2018-03-31T01:03:12","timestamp":"2018-03-31T15:53:42 UTC","command_class":"select","connection_id":"2394","status":0,"sqltext":"SELECT c FROM sbtest1 WHERE id BETWEEN 2812021 AND 2812120 ORDER BY c","user":"sbtest[sbtest] @ localhost []","host":"localhost","os_user":"","ip":"","db":"sbtest"}}
{"audit_record":{"name":"Query","record":"743017008_2018-03-31T01:03:12","timestamp":"2018-03-31T15:53:42 UTC","command_class":"select","connection_id":"2394","status":0,"sqltext":"SELECT DISTINCT c FROM sbtest1 WHERE id BETWEEN 1 AND 100 ORDER BY c","user":"sbtest[sbtest] @ localhost []","host":"localhost","os_user":"","ip":"","db":"sbtest"}}

As you can see, there are substantial differences in the purposes of the different MySQL log formats, along with the information they provide.

Why analyze raw MySQL query logs

In my opinion, there are two main reasons to look directly at raw log files without aggregation (you might find others):

  • Auditing, where the Audit Log is useful (Vadim recently blogged about it)
  • Advanced MySQL/application debugging, where an aggregated summary might not allow you to drill down to the fullest level of detail

When you’re debugging using MySQL logs, the Slow Query Log, set to log all queries with no sampling, is the most useful. Of course, this can cause significant additional overhead in many workloads, so it is best to do it in a development environment (if you can repeat the situation you’re looking to analyze). At the very least, don’t do it during peak time.

For Percona Server for MySQL, these options ensure it logs all queries to the query log with no sampling:

log_output=file
slow_query_log=ON
long_query_time=0
log_slow_rate_limit=1
log_slow_verbosity=full
log_slow_admin_statements=ON
log_slow_slave_statements=ON
slow_query_log_always_write_time=1

Now that we have full queries, we can easily use Linux command line tools like grep and others to look into what is going on. However, this isn’t always convenient. This is where loading the logs into a store that you can conveniently query is a good solution, such as the one sketched below.
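One such store is ClickHouse, loaded via ClickTail as in my previous post (the dataset name and file path here are examples, and this assumes ClickTail’s MySQL slow log parser):

# Sketch: load the full Slow Log into ClickHouse with ClickTail
clicktail --dataset='clicktail.mysql_slow_log' --parser=mysql --file=/var/log/mysql/mysql-slow.log --backfill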

Let’s look into some specific and interesting cases.

Were any queries killed?

SELECT
   _time,
   query,
   query_time
FROM mysql_slow_log
WHERE killed > 0
┌───────────────_time─┬─query───────────────────────────────┬─query_time─┐
│ 2018-04-02 19:02:56 │ select benchmark(10000000000,"1+1") │  10.640794 │
└─────────────────────┴─────────────────────────────────────┴────────────┘
1 rows in set. Elapsed: 0.242 sec. Processed 929.14 million rows, 1.86 GB (3.84 billion rows/s., 7.67 GB/s.)

Yes. A query got killed after running for 10 seconds.

Did any query fail? With what error codes?

SELECT
   error_num,
   min(_time),
   max(_time),
   count(*)
FROM mysql_slow_log
GROUP BY error_num
┌─error_num─┬──────────min(_time)─┬──────────max(_time)─┬───count()─┐
│         0 │ 2018-04-02 18:59:49 │ 2018-04-07 19:39:27 │ 925428375 │
│      1160 │ 2018-04-02 19:02:56 │ 2018-04-02 19:02:56 │         1 │
│      1213 │ 2018-04-02 19:00:00 │ 2018-04-07 19:18:14 │   3709520 │
│      1054 │ 2018-04-07 19:38:14 │ 2018-04-07 19:38:14 │         1 │
└───────────┴─────────────────────┴─────────────────────┴───────────┘
4 rows in set. Elapsed: 2.391 sec. Processed 929.14 million rows, 7.43 GB (388.64 million rows/s., 3.11 GB/s.)

You can resolve error codes with the perror command:

root@rocky:~# perror 1054
MySQL error code 1054 (ER_BAD_FIELD_ERROR): Unknown column '%-.192s' in '%-.192s'

This command has many uses. You can use it to hunt down application issues (like in this example of a missing column — likely due to bad or old code). It can also help you to spot SQL injection attempts that often cause queries with bad syntax, and troubleshoot deadlocks or foreign key violations.

Are there any nasty, long transactions?

SELECT
   transaction_id,
   max(_time) - min(_time) AS run_time,
   count(*) AS num_queries,
   sum(rows_affected) AS rows_changed
FROM mysql_slow_log
WHERE transaction_id != ''
GROUP BY transaction_id
ORDER BY rows_changed DESC
LIMIT 10
┌─transaction_id─┬─run_time─┬─num_queries─┬─rows_changed─┐
│ 17E070082      │        0 │           1 │         9999 │
│ 17934C73C      │        2 │           6 │            4 │
│ 178B6D346      │        0 │           6 │            4 │
│ 17C909086      │        2 │           6 │            4 │
│ 17B45EFAD      │        5 │           6 │            4 │
│ 17ABAB840      │        0 │           6 │            4 │
│ 17A36AD3F      │        3 │           6 │            4 │
│ 178E037A5      │        1 │           6 │            4 │
│ 17D1549C9      │        0 │           6 │            4 │
│ 1799639F2      │        1 │           6 │            4 │
└────────────────┴──────────┴─────────────┴──────────────┘
10 rows in set. Elapsed: 15.574 sec. Processed 930.58 million rows, 18.23 GB (59.75 million rows/s., 1.17 GB/s.)

Finding transactions that modify a lot of rows, like transaction 17E070082 above, can be very helpful to ensure you control MySQL replication slave lag. It is also critical if you’re looking to migrate to MySQL Group Replication or Percona XtraDB Cluster.

What statements were executed in a long transaction?

SELECT
   _time,
   _ms,
   query
FROM mysql_slow_log
WHERE transaction_id = '17E070082'
ORDER BY
   _time ASC,
   _ms ASC
LIMIT 10
┌───────────────_time─┬────_ms─┬─query─────────────────────────────────┐
│ 2018-04-07 20:08:43 │ 890693 │ update sbtest1 set k=0 where id<10000 │
└─────────────────────┴────────┴───────────────────────────────────────┘
1 rows in set. Elapsed: 2.361 sec. Processed 931.04 million rows, 10.79 GB (394.27 million rows/s., 4.57 GB/s.)

I used transaction 17E070082 from the previous query above (which modified 9999 rows). Note that this schema improves compression by storing the seconds and microseconds parts of the timestamp in different columns.

Were any queries dumping large numbers of rows from the database?

SELECT
   _time,
   query,
   rows_sent,
   bytes_sent
FROM mysql_slow_log
WHERE rows_sent > 10000
┌───────────────_time─┬─query────────────────────────────────────────────┬─rows_sent─┬─bytes_sent─┐
│ 2018-04-07 20:21:08 │ SELECT /*!40001 SQL_NO_CACHE */ * FROM `sbtest1`  │  10000000 │ 1976260712 │
└─────────────────────┴───────────────────────────────────────────────────┴───────────┴────────────┘
1 rows in set. Elapsed: 0.294 sec. Processed 932.19 million rows, 3.73 GB (3.18 billion rows/s., 12.71 GB/s.)

Did someone update a record?

SELECT
   _time,
   query
FROM mysql_slow_log
WHERE (rows_affected > 0) AND (query LIKE '%id=3301689%')
LIMIT 1
┌───────────────_time─┬─query─────────────────────────────────────┐
│ 2018-04-02 19:04:48 │ UPDATE sbtest1 SET k=k+1 WHERE id=3301689 │
└─────────────────────┴───────────────────────────────────────────┘
1 rows in set. Elapsed: 0.046 sec. Processed 2.29 million rows, 161.60 MB (49.57 million rows/s., 3.49 GB/s.)

Note that I’m cheating here by assuming we know an update used a primary key, but it is practically helpful in a lot of cases.

These are just some of the examples of what you can find out by querying raw slow query logs. They contain a ton of information about query execution (especially in Percona Server for MySQL) that allows you to use them both for performance analysis and some security and auditing purposes.


Apr 11, 2018
--

Calling All Polyglots: Percona Live 2018 Keynote Schedule Now Available!

We’ve posted the keynote addresses for the seventh annual Percona Live Open Source Database Conference 2018, taking place April 23-25, 2018 at the Santa Clara Convention Center in Santa Clara, CA.

This year’s keynotes explore topics ranging from how cloud and open source database adoption accelerates business growth, to leading-edge emerging technologies, to the importance of MySQL 8.0, to the growing popularity of PostgreSQL.

We’re excited by the great lineup of speakers, including our friends at Alibaba Cloud, Grafana, Microsoft, Oracle, Upwork and VividCortex, the innovative leaders on the Cool Technologies panel, and Brendan Gregg from Netflix, who will discuss how to get the most out of your database on a Linux OS, using his experiences at Netflix to highlight examples.  

With the theme of “Championing Open Source Databases,” the conference will feature multiple tracks, including MySQL, MongoDB, Cloud, PostgreSQL, Containers and Automation, Monitoring and Ops, and Database Security. Once again, Percona will be offering a low-cost database 101 track for beginning users who want to learn how to use and operate open source databases.

The Percona Live 2018 keynotes include:

Tuesday, April 24, 2018

  • Open Source for the Modern Business – Peter Zaitsev of Percona will discuss how, as open source database adoption continues to grow in enterprise organizations, the expectations and definitions of what constitutes success continue to change. A single technology for everything is no longer an option; welcome to the polyglot world. The talk will include several compelling open source projects and trends of interest to the open source database community, and will be followed by a round of lightning talks taking a closer look at some of those projects.
  • Cool Technologies Showcase – Four industry leaders will introduce key emerging industry developments. Andy Pavlo of Carnegie Mellon University will discuss the requirements for enabling autonomous database optimizations. Nikolay Samokhvalov of PostgreSQL.org will discuss new PostgreSQL tools. Sugu Sougoumarane of PlanetScale Data will explore how Vitess became a high-performance, scalable and available MySQL clustering cloud solution in line with today’s NewSQL storage systems. Shuhao Wu of Shopify explains how to use Ghostferry as a data migration tool for incompatible cloud platforms.
  • State of the Dolphin 8.0 – Tomas Ulin of Oracle will discuss the focus, strategy, investments and innovations that are evolving MySQL to power next-generation web, mobile, cloud and embedded applications – and why MySQL 8.0 is the most significant MySQL release in its history.
  • Linux Performance 2018 – Brendan Gregg of Netflix will summarize recent performance features to help users get the most out of their Linux systems, whether they are databases or application servers. Topics include the KPTI patches for Meltdown, eBPF for performance observability, Kyber for disk I/O scheduling, BBR for TCP congestion control, and more.

Wednesday, April 25, 2018

  • Panel Discussion: Database Evolution in the Cloud – An expert panel of industry leaders, including Lixun Peng of Alibaba, Sunil Kamath of Microsoft, and Baron Schwartz of VividCortex, will discuss the rapid changes occurring with databases deployed in the cloud and what that means for the future of databases, management and monitoring and the role of the DBA and developer.
  • Future Perfect: The New Shape of the Data Tier – Baron Schwartz of VividCortex will discuss the impact of macro trends such as cloud computing, microservices, containerization, and serverless applications. He will explore where these trends are headed, touching on topics such as whether we are about to see basic administrative tasks become more automated, the role of open source and free software, and whether databases as we know them today are headed for extinction.
  • MongoDB at Upwork – Scott Simpson of Upwork, the largest freelancing website for connecting clients and freelancers, will discuss how MongoDB is used at Upwork, how the company chose the database, and how Percona helps make the company successful.

We will also present the Percona Live 2018 Community Awards and Lightning Talks on Monday, April 23, 2018, during the Opening Night Reception. Don’t miss the first day of tutorials and Opening Night Reception!

Register for the conference on the Percona Live Open Source Database Conference 2018 website.

Sponsorships

Limited Sponsorship opportunities for Percona Live 2018 Open Source Database Conference are still available, and offer the opportunity to interact with the DBAs, sysadmins, developers, CTOs, CEOs, business managers, technology evangelists, solution vendors, and entrepreneurs who typically attend the event. Contact live@percona.com for sponsorship details.

  • Diamond Sponsors – Percona, VividCortex
  • Platinum – Alibaba Cloud, Microsoft
  • Gold Sponsors – Facebook, Grafana
  • Bronze Sponsors – Altinity, BlazingDB, Box, Dynimize, ObjectRocket, Pingcap, Shannon Systems, SolarWinds, TimescaleDB, TwinDB, Yelp
  • Contributing Sponsors – cPanel, Github, Google Cloud, NaviCat
  • Media Sponsors – Database Trends & Applications, Datanami, EnterpriseTech, HPCWire, ODBMS.org, Packt


Apr 05, 2018
--

Percona Live 2018 Featured Talk: The Accidental DBA with Jenni Snyder

Welcome to another interview blog for the rapidly-approaching Percona Live 2018. Each post in this series highlights a Percona Live 2018 featured talk at the conference and gives a short preview of what attendees can expect to learn from the presenter.

This blog post highlights Jenni Snyder, Engineering Manager – Operations at Yelp. Her tutorial talk is titled The Accidental DBA. Open source relational databases like MySQL and PostgreSQL power some of the world’s largest websites. They can be used out of the box with few adjustments, and rarely require a dedicated Database Administrator (DBA) right away. This means that System Administrators, Site Reliability Engineers or Developers are usually the first to respond to some of the more interesting issues that can arise as you scale your databases. In our conversation, we discussed how people become “accidental” DBAs:

Percona: Who are you, and how did you get into databases? What was your path to your current responsibilities?

I’m Jenni Snyder and currently work as Manager of the Database Reliability Engineering (DRE) Team at Yelp. And, I got into databases by accident. After graduating from college, my first job was as a software engineer. As I remember it, I got a desk in the corner, and under that desk was another computer. I hit a power strip switch as I went to plug in my workstation on my first day. This cut power to the other machine, which turned out to be the database host for their development environment (running Sybase). My new boss came over and walked me through starting it back up.

After that, it became pretty clear to me that I preferred systems administration over development and was put in charge of our database migration scripts. I figured out query and server tuning and later configured and deployed a more complicated environment using Oracle. I got my first official MySQL Database Administrator (DBA) job about four years later.

My degrees are in Sociology as well as Computer Science, so I have always been interested in social networks and media. I have worked for Tribe.net and the Cisco Media Solutions Group, and have been with Yelp now for almost seven years.

Percona: Your tutorial is titled The Accidental DBA. What do you mean by this term, and how did you arrive at it?

I explained this a bit above: few people seek out a career in database administration early in their career. One of the benefits of MySQL and other open source relational databases is that they’re relatively easy to get started with. You don’t need a DBA, DRE, or another owner right away. However, with success comes growth, and any open source database with a default configuration can quickly become slow if not tuned for your workload.

As a result, I think that many DBAs and DREs end up in their position by being the right person at the right time, being curious, and loving it.

Percona: What are some of the most important things an “Accidental DBA” needs to know right away?

I’d say they are: how to make sure that MySQL is running, how to interpret any errors found in the client or server logs, and how to monitor MySQL.

This is going to be a very interactive talk and I’m hoping for lots of questions and a discussion. Everyone’s experience is different, and if I don’t include something in my slides, that doesn’t mean we can’t cover it!

Percona: What are three important database management lessons a good DBA needs to master?

    1. Put information in the hands of your developers so that they can learn about the database themselves

    2. Use automation early and often

    3. Use open source tools and contribute back to the community

Percona: Why should people attend your tutorial? What do you hope people will take away from it?

People should come if they’re interested in a broad overview of running MySQL. They should want to learn where they can make the most impact while tuning MySQL, how to avoid common problems, and discover some great open source tools that will make their jobs easier.

Percona: What are you looking forward to at Percona Live (besides your talk)?

Unfortunately, Percona chose to schedule Shlomi Noach’s Orchestrator High Availability Tutorial at the time of my talk, so I’m going to miss out on the number one tutorial I wanted to see!

Want to find out more about this Percona Live 2018 featured talk, and becoming an accidental DBA? Register for Percona Live 2018, and see Jenni’s tutorial talk The Accidental DBA. Register now to get the best price! Use the discount code SeeMeSpeakPL18 for 10% off.

Percona Live Open Source Database Conference 2018 is the premier open source event for the data performance ecosystem. It is the place to be for the open source community. Attendees include DBAs, sysadmins, developers, architects, CTOs, CEOs, and vendors from around the world.

The Percona Live Open Source Database Conference will be April 23-25, 2018 at the Hyatt Regency Santa Clara & The Santa Clara Convention Center.


Apr 04, 2018
--

Percona Monitoring and Management 1.9.0 Is Now Available

PMM (Percona Monitoring and Management) is a free and open-source platform for managing and monitoring MySQL® and MongoDB® performance. You can run PMM in your own environment for maximum security and reliability. It provides thorough time-based analysis for MySQL® and MongoDB® servers to ensure that your data works as efficiently as possible.

There are a number of significant updates in 1.9.0 that we hope you will like. Some of the key highlights include:

  • Faster loading of the index page: We have enabled performance optimizations using gzip and HTTP2.
  • AWS improvements: We have added metrics from CloudWatch RDS to 6 dashboards, as well as changed our AWS add instance workflow, and made some changes to credentials handling.
  • Percona Snapshot Server: If you are a Percona customer you can now securely share your dashboards with Percona Engineers.
  • Exporting PMM Server logs: Retrieve logs from PMM Server for troubleshooting using a single button click, avoiding the need to log in manually to the docker container
  • Low RAM support: We have reduced the memory requirement so PMM Server will run on systems with 512MB of RAM
  • Dashboard improvements: We have changed MongoDB instance identification for MongoDB graphs, and set maximum graph Y-axis on Prometheus Exporter Status dashboard

AWS Improvements

CloudWatch RDS metrics

Since we are already consuming Amazon Cloudwatch metrics and persisting them in Prometheus, we have improved six node-specific dashboards to now display Amazon RDS node-level metrics:

  • Cross_Server (Network Traffic)
  • Disk Performance (Disk Latency)
  • Home Dashboard (Network IO)
  • MySQL Overview (Disk Latency, Network traffic)
  • Summary Dashboard (Network Traffic)
  • System Overview (Network Traffic)

AWS Add Instance changes

We have changed our AWS add instance interface and workflow to be clearer about the information needed to add an Amazon Aurora MySQL or Amazon RDS MySQL instance. We have also provided some clarity on how to locate your AWS credentials.

AWS Settings

We have improved our documentation to highlight connectivity best practices, and authentication options – IAM Role or IAM User Access Key.

Enabling Enhanced Monitoring

Credentials Screen

Low RAM Support

You can now run PMM Server on instances with memory as low as 512MB RAM, which means you can deploy to the free tier of many cloud providers if you want to experiment with PMM. Our memory calculation is now:

# Give the metrics service 40% of the memory above a 256MB base
METRICS_MEMORY_MULTIPLIED=$(( (${MEMORY_AVAIABLE} - 256*1024*1024) / 100 * 40 ))
# Enforce a 128MB floor (-lt performs the numeric comparison)
if [[ $METRICS_MEMORY_MULTIPLIED -lt $((128*1024*1024)) ]]; then
    METRICS_MEMORY_MULTIPLIED=$((128*1024*1024))
fi

Percona Snapshot Server

Snapshots are a way of sharing PMM dashboards, via a link, with individuals who do not normally have access to your PMM Server. If you are a Percona customer, you can now securely share your dashboards with Percona Engineers. We have replaced the button that shared dashboards to Grafana’s publicly hosted platform with one that shares to a platform administered by Percona. Your dashboard will be written to Percona Snapshots, and only Percona Engineers will be able to retrieve the data. We automatically expire old snapshots after 90 days, but when sharing you will have the option to configure a shorter retention period.

Export of PMM Server Logs

In this release, the logs from PMM Server can be exported using a single button click, avoiding the need to log in manually to the docker container. This simplifies the troubleshooting process for a PMM Server, and especially for Percona customers, this feature provides a more consistent data-gathering task that you can perform on behalf of requests from Percona Engineers.

Faster Loading of the Index Page

In version 1.8.0, the index page was redesigned to reveal more useful information about the performance of your hosts, as well as immediate access to essential components of PMM. However, the index page had to load a lot of data dynamically, resulting in a noticeably longer load time. In this release we enabled gzip and HTTP2 to improve the load time of the index page. The following screenshots demonstrate the results of our tests on webpagetest.org, where we reduced page load time by half. We will continue to look for opportunities to improve the performance of the index page, and expect another improvement when we upgrade to Prometheus 2.

The load time of the index page of PMM version 1.8.0

The load time of the index page of PMM version 1.9.0

Issues in this release

New Features

  • PMM-781: Plot new PXC 5.7.17, 5.7.18 status variables on new graphs for PXC Galera, PXC Overview dashboards
  • PMM-1274: Export PMM Server logs as zip file to the browser
  • PMM-2058: Percona Snapshot Server

Improvements

  • PMM-1587: Use mongodb_up variable for the MongoDB Overview dashboard to identify if a host is MongoDB.
  • PMM-1788: AWS Credentials form changes
  • PMM-1823: AWS Install wizard improvements
  • PMM-2010: System dashboards update to be compatible with RDS nodes
  • PMM-2118: Update grafana config for metric series that will not go above 1.0
  • PMM-2215: PMM Web speed improvements
  • PMM-2216: PMM can now be started on systems without memory limit capabilities in the kernel
  • PMM-2217: PMM Server can now run in Docker with 512 Mb memory
  • PMM-2252: Better handling of variables in the navigation menu

Bug fixes

  • PMM-605: pt-mysql-summary requires additional configuration
  • PMM-941: ParseSocketFromNetstat finds an incorrect socket
  • PMM-948: Wrong load reported by QAN due to mis-alignment of time intervals
  • PMM-1486: MySQL passwords containing the dollar sign ($) were not processed properly.
  • PMM-1905: In QAN, the Explain command could fail in some cases.
  • PMM-2090: Minor formatting issues in QAN
  • PMM-2214: Setting Send real query examples for Query Analytic OFF still shows the real query in example.
  • PMM-2221: no Rate of Scrapes for MySQL & MySQL Errors
  • PMM-2224: Exporter CPU Usage glitches
  • PMM-2227: Auto Refresh for dashboards
  • PMM-2243: Long host names in Grafana dashboards are not displayed correctly
  • PMM-2257: PXC/galera cluster overview Flow control paused time has a percentage glitch
  • PMM-2282: No data is displayed on dashboards for OVA images
  • PMM-2296: The mysql:metrics service will not start on Ubuntu LTS 16.04

Help us improve our software quality by reporting any bugs you encounter using our bug tracking system.


Apr 02, 2018
--

Plot MySQL Data in Real Time Using Percona Monitoring and Management (PMM)

In this blog post, we’ll show that you can plot MySQL data in real time using Percona Monitoring and Management (PMM).

In my previous blog post, I showed how we could load any metrics or benchmarks into MySQL and visualize them with PMM. But that’s not all! We can visualize almost any kind of data from MySQL in real time. I am falling in love with the MySQL plugin for Grafana: it just makes things so easy and smooth.

This graph shows us the number of visitors to a website in real time (refreshing every 5 seconds).

We have the following table in MySQL:

CREATE TABLE `page_stats` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  `visitors` int(11) unsigned DEFAULT NULL,
  PRIMARY KEY (`id`),
  KEY `time` (`time`)
) ENGINE=InnoDB AUTO_INCREMENT=9232 DEFAULT CHARSET=latin1

We store the number of visitors every second. I am not saying you have to update this table hundreds or thousands of times per second; it depends on how many visitors you have. You could, for example, use Redis to store and increment this counter, and save it into MySQL every second.
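Feeding the table can be as simple as one insert per second, since the time column defaults to CURRENT_TIMESTAMP (the value below is just an example):

-- Sketch: record the current counter once per second; `time` fills itself in
INSERT INTO page_stats (visitors) VALUES (744);

Here are my metrics: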

mysql> select * from page_stats order by id desc limit 10;
+------+---------------------+----------+
| id   | time                | visitors |
+------+---------------------+----------+
| 9446 | 2018-02-27 21:44:12 |      744 |
| 9445 | 2018-02-27 21:44:11 |      703 |
| 9444 | 2018-02-27 21:44:10 |      791 |
| 9443 | 2018-02-27 21:44:09 |      734 |
| 9442 | 2018-02-27 21:44:08 |      632 |
| 9441 | 2018-02-27 21:44:07 |      646 |
| 9440 | 2018-02-27 21:44:06 |      656 |
| 9439 | 2018-02-27 21:44:05 |      678 |
| 9438 | 2018-02-27 21:44:04 |      673 |
| 9437 | 2018-02-27 21:44:03 |      660 |
+------+---------------------+----------+

We can easily add my MySQL query to Grafana, and it will visualize it for us:
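The graph’s query is not shown here, but a sketch of what it might look like, using the table above and the time_sec/value/metric convention described just below:

-- Sketch: per-second visitors as a Grafana time series
SELECT
    UNIX_TIMESTAMP(time) AS time_sec,
    visitors AS value,
    'visitors' AS metric
FROM page_stats
WHERE $__timeFilter(time)
ORDER BY time ASC;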

You might ask “what is $__timeFilter?” I discussed that in the previous post, but let me copy the manual here as well:

Time series:
- return column named time_sec (UTC in seconds), use UNIX_TIMESTAMP(column)
- return column named value for the time point value
- return column named metric to represent the series name
Table:
- return any set of columns
Macros:
- $__time(column) -> UNIX_TIMESTAMP(column) as time_sec
- $__timeFilter(column) ->  UNIX_TIMESTAMP(time_date_time) > 1492750877 AND UNIX_TIMESTAMP(time_date_time) < 1492750877
- $__unixEpochFilter(column) ->  time_unix_epoch > 1492750877 AND time_unix_epoch < 1492750877
- $__timeGroup(column,'5m') -> (extract(epoch from "dateColumn")/extract(epoch from '5m'::interval))::int
Or build your own conditionals using these macros which just return the values:
- $__timeFrom() ->  FROM_UNIXTIME(1492750877)
- $__timeTo() ->  FROM_UNIXTIME(1492750877)
- $__unixEpochFrom() ->  1492750877
- $__unixEpochTo() ->  1492750877

What can I visualize?

It’s true! Basically, if you can write a query, you can graph it. For example, let’s count all the visitors in every minute. Here is the query:

select
    UNIX_TIMESTAMP(ps.time) as time_sec,
    sum(visitors) as value,
    'visitors' as metric
from page_stats as ps
WHERE $__timeFilter(time)
GROUP BY DATE_FORMAT(`time`, '%Y-%m-%d %H:%i')
ORDER BY ps.time ASC;

And it gives us the following graph:

See, it’s easy!

Conclusion

There is no longer any excuse for not visualizing your data! Percona Monitoring and Management lets you plot MySQL data in real time. You do not have to move it anywhere or change anything! Just grant PMM read access, and you can start creating your own graphs!


Mar 29, 2018
--

Analyze MySQL Audit Logs with ClickHouse and ClickTail

In this blog post, I’ll look at how you can analyze MySQL audit logs (Percona Server for MySQL) with ClickHouse and ClickTail.

Audit logs are available with a free plugin for Percona Server for MySQL (https://www.percona.com/doc/percona-server/LATEST/management/audit_log_plugin.html). Besides providing insights about activity on your server, you might need the logs for compliance purposes.

However, on an active server, the logs can get very large. Under a sysbench-tpcc workload, for example, I was able to generate 24GB worth of logs just within one hour.

So we are going to use the ClickTail tool, which Peter Zaitsev mentioned in Analyze Your Raw MySQL Query Logs with ClickHouse and the Altinity team describes in the ClickTail Introduction.

ClickTail extracts all fields available in Percona Server for MySQL’s audit log in JSON format, as you can see in its schema. I used the command:

clicktail --dataset='clicktail.mysql_audit_log' --parser=mysqlaudit --file=/mnt/nvmi/mysql/audit.log --backfill

In my setup, ClickTail imported records at a rate of 1.5 to 2 million records/minute. Once we have ClickTail set up, we can do some work on the audit logs. Below are some examples of queries.

Check if some queries were run with errors:

SELECT
    status AS c1,
    count(*)
FROM mysql_audit_log
GROUP BY c1
┌───c1─┬──count()─┐
│    0 │ 46197504 │
│ 1160 │        1 │
│ 1193 │     1274 │
│ 1064 │     5096 │
└──────┴──────────┘
4 rows in set. Elapsed: 0.018 sec. Processed 46.20 million rows, 184.82 MB (2.51 billion rows/s., 10.03 GB/s.)

First, it is very impressive to see analysis performance of 2.5 billion rows/s. And second, there really are some queries with non-zero (error) statuses.

We can dig in and check what exactly caused a 1193 error (MySQL Error Code 1193: Unknown system variable):

SELECT *
FROM mysql_audit_log
WHERE status = 1193
LIMIT 1
Row 1:
──────
_time:           2018-03-12 20:34:49
_date:           2018-03-12
_ms:             0
command_class:   select
connection_id:   1097
db:
host:            localhost
ip:
name:            Query
os_user:
os_login:
os_version:
mysql_version:
priv_user:
proxy_user:
record:          39782055_2018-03-12T20:21:21
sqltext:         SELECT @@query_response_time_stats
status:          1193
user:            root[root] @ localhost []
startup_options:

So this was SELECT @@query_response_time_stats, which I believe comes from the Percona Monitoring and Management (PMM) MySQL Metrics exporter.

Similarly, we can check what query types were run on MySQL:

SELECT
    command_class,
    count(*)
FROM mysql_audit_log
GROUP BY command_class
┌─command_class────────┬──count()─┐
│                      │    15882 │
│ show_storage_engines │     1274 │
│ select               │ 26944474 │
│ error                │     5096 │
│ show_slave_status    │     1274 │
│ begin                │  1242555 │
│ update               │  9163866 │
│ show_tables          │      204 │
│ show_status          │     6366 │
│ insert_select        │      170 │
│ delete               │   539058 │
│ commit               │  1237074 │
│ create_db            │        2 │
│ show_engine_status   │     1274 │
│ show_variables       │      450 │
│ set_option           │     8102 │
│ create_table         │      180 │
│ rollback             │     5394 │
│ create_index         │      120 │
│ insert               │  7031060 │
└──────────────────────┴──────────┘
20 rows in set. Elapsed: 0.120 sec. Processed 46.20 million rows, 691.84 MB (385.17 million rows/s., 5.77 GB/s.)

There are more fields available, like:

db String,
host String,
ip String,

to understand who accessed a MySQL instance, and from where.
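For example, here is a sketch of a query that breaks activity down by user, host and database:

SELECT
    user,
    host,
    db,
    count(*)
FROM mysql_audit_log
GROUP BY
    user,
    host,
    db
ORDER BY count(*) DESC
LIMIT 10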

If you ever need to do some advanced work with MySQL audit logs, consider doing it with ClickHouse and ClickTail!

The post Analyze MySQL Audit Logs with ClickHouse and ClickTail appeared first on Percona Database Performance Blog.
