Mar 11, 2015
--

Easy query metrics with MySQL Performance Schema

The MySQL Performance Schema exposes so much data that it’s not trivial to learn, configure, and use. With the recently released Percona Agent 1.0.11 you can get query metrics (like min, max, and average query execution time) with a few clicks:

Configure PCT Query Analytics for Performance Schema

Click “Apply” and about two minutes later you’ll have query metrics from Performance Schema, collected and sent every minute.

Percona Cloud Tools (PCT) and Percona Agent handle all the details. You’ll need MySQL (or Percona Server) 5.6 and Percona Agent 1.0.11 or newer. One caveat at the moment: it only works for local MySQL instances (so not Amazon RDS). This limitation should be fixed soon; we’re already working on it.

Why use Performance Schema? We prefer Query Analytics with MySQL slow logs (especially Percona Server slow logs, which expose extra metrics) because the slow log provides the most metrics of any source. Sometimes, however, the slow log just isn’t an option, so Performance Schema is the next best choice. That choice comes with tradeoffs: for example, Performance Schema does not expose actual query examples (just fingerprints), so EXPLAIN does not work.

For those who’ve been following PCT development, you know that Percona Agent 1.0.5 first introduced support for Performance Schema. What’s new in 1.0.11 is everything – we completely rewrote this part of the agent. It’s so much better that it’s now the required minimum version for using Query Analytics with Performance Schema. Upgrading is really easy: just run the single command line you used to install the agent and it will auto-update.

MySQL Performance Schema exposes a lot of data and insights into the server, but query metrics are perhaps the most important because the primary job of your MySQL database is to execute queries. That’s why Percona Cloud Tools makes Query Analytics with Performance Schema (and slow logs) so easy: to help you focus on the essential and leave the details to the tools.

Percona Cloud Tools is in beta, so it’s still free to sign up and free to use all the tools and features.

The post Easy query metrics with MySQL Performance Schema appeared first on MySQL Performance Blog.

Feb 02, 2015
--

Faster fingerprints and Go packages for MySQL

I’m happy to announce Go packages for MySQL. Particularly exciting is a new query fingerprint function which is very fast and efficient, but I’ll talk about that later. First, go-mysql is currently three simple Go packages for parsing and aggregating MySQL slow logs. If you’ve been following Percona development, you’ve no doubt heard of Percona Cloud Tools (PCT), a somewhat new performance management web service for MySQL.

One tool in PCT is “Query Analytics” which continuously analyzes query metrics from the slow log. The slow log provides the most metrics and therefore the most performance insight into MySQL. percona-agent, the open-source agent for PCT, uses go-mysql to parse and analyze the slow log, so the code has both heavy formal testing and heavy real-world testing. If you’re working with Go, MySQL, and MySQL slow logs, we invite you to try go-mysql.

Last October we implemented a completely new query fingerprint function. (See “Fingerprints” in the pt-query-digest docs for background on query fingerprints.) Since mysqldumpslow, the very first slow log parser circa 2000, fingerprinting has been done with regular expressions. This approach is normally fine, but percona-agent needs to be faster and more efficient than normal to reduce the cost of observation. Regex patterns are like little state machines. One regex can be very fast, but several are required to produce a good fingerprint, so the regex approach processes the same query several times. Even worse: a regex can backtrack, which means a single logical pass through the query can result in several physical passes. In short: regular expressions are a quick and easy solution, but they are very inefficient.
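To see why that amounts to multiple passes, here is a toy regex fingerprinter (hypothetical patterns, not pt-query-digest’s actual rules): every substitution rescans the entire query string.

```python
import re

# Toy regex fingerprint: each re.sub() is a separate pass over the query,
# and each pattern may also backtrack internally.
def fingerprint_regex(query: str) -> str:
    q = query.lower()               # pass 1: case-fold
    q = re.sub(r"'[^']*'", "?", q)  # pass 2: quoted strings -> ?
    q = re.sub(r"\b\d+\b", "?", q)  # pass 3: numbers -> ?
    q = re.sub(r"\s+", " ", q)      # pass 4: collapse whitespace
    return q.strip()

print(fingerprint_regex("SELECT * FROM users WHERE id = 42"))
# select * from users where id = ?
```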

Several years ago, a former colleague suggested a different approach: a single pass, purpose-built, character-level state machine. The resulting code is rather complicated, but the resulting performance is a tremendous improvement: 3-5x faster in informal benchmarks on my machine, and it handles more edge cases. In simplest terms: the new fingerprint function does more with less, which makes percona-agent and Query Analytics better.
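Here is a toy sketch of the single-pass idea (not the actual go-mysql code, and far simpler than the real state machine): one character-level scan that folds case, collapses whitespace, and replaces numbers and quoted strings with ?.

```python
# Toy single-pass fingerprint: one character-level scan, no backtracking.
def fingerprint_scan(query: str) -> str:
    out = []
    i, n = 0, len(query)
    while i < n:
        c = query[i]
        if c == "'":                      # quoted string -> ?
            i += 1
            while i < n and query[i] != "'":
                i += 2 if query[i] == "\\" else 1  # skip escaped chars
            i += 1                        # skip closing quote
            out.append("?")
        elif c.isdigit():                 # run of digits -> ?
            while i < n and query[i].isdigit():
                i += 1
            out.append("?")
        elif c.isspace():                 # collapse whitespace runs
            while i < n and query[i].isspace():
                i += 1
            out.append(" ")
        else:
            out.append(c.lower())
            i += 1
    return "".join(out).strip()

print(fingerprint_scan("SELECT  *  FROM users WHERE name = 'bob' AND id = 42"))
# select * from users where name = ? and id = ?
```

Each character is examined exactly once, which is where the speedup over repeated regex passes comes from.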

Check out  github.com/percona/go-mysql, browse the percona-agent source code if you wish, and try Percona Cloud Tools for free.

The post Faster fingerprints and Go packages for MySQL appeared first on MySQL Performance Blog.

Apr 23, 2014
--

Percona Live 2014 behind; MySQL ahead

I started using MySQL 11 years ago.  That’s not very long compared to others in the industry, but nonetheless here’s my perspective on the state of the MySQL industry after attending Percona Live MySQL Conference & Expo 2014.

In short, the attitude around MySQL has changed from “Does it work?” to “Is it fast and reliable?” to “How do we manage it?” To generalize further, the first two periods correspond roughly to the original MySQL AB team and then to Percona and Oracle; the last period is the current one, so its key players are only now emerging, like WebScaleSQL.

Does it work?

Peter Zaitsev said in one of his keynote talks that MySQL used to be considered a toy.  Today that assessment is wrong, or at least very difficult to defend.  The proof is that nearly all of the largest and most successful web companies today use it (Facebook, Twitter, Google, Tumblr, Box, etc.), and myriad other web and traditional companies use it too.  MySQL works, but it’s not a panacea and successful companies realize this.  I’ll talk more about this later.

Is it fast and reliable?

There have been rough spots, like MySQL 5.0 and the MySQL-Sun-Oracle transition, but MySQL is past these.  The history is, of course, more nuanced but generally speaking those rough spots gave rise to Percona.  Fast and reliable has been at the heart of Percona Server even before it was Percona Server (i.e. when it was still a collection of patches).  Other projects and companies were created during this time, but in my biased opinion Percona held the fort.  When MySQL became an Oracle product, the collective MySQL conscience waited to see if they would kill or revive it.  They have revived it.  MySQL 5.6 is great, and 5.7 is looking good so far too.  The work Percona did and still does, combined with Oracle’s work, has made MySQL a product you can bet the business on.  In other words: MySQL won’t fail you.  Moreover, the work at companies like Fusion-io proves that the state of the art apropos performance is alive and well, as highlighted by Nisha Talagala’s excellent keynote “The Evolution of MySQL in the All-Flash Datacenter.”

How do we manage it?

MySQL has become business.  Let me put it another way that’s difficult to say because I consider myself a hacker, but I think it’s true nonetheless (and I’ve heard others say it too): MySQL isn’t cool any more.  “Cool” in the context of technology is a weird euphemism for “new and unstable but useful and promising”.  MySQL was all these in past years, but now it’s mature, proven to be stable and useful, and it has delivered on the promise of being a free, open-source RDBMS.  As a business product, the concern is manageability: deploying, scaling, monitoring, maintaining, etc.  These are not new concerns; the difference today is focus: in the past these mattered less because we still had core usability and performance issues, but today MySQL usability and performance are solved problems.  Mark Callaghan’s PLMCE 2012 keynote was aptly titled “What Comes Next?”  In 2012 he saw that MySQL at its core was stable, so he turned his attention to the things around it which can be pain points, like migrating shards and row compression.  In other words, his message was not “here’s what we still need to fix in MySQL”, it was “here’s what we need to manage MySQL sanely.”  He reiterated this message in a recent blog post, “Modern databases”:

“We have much needed work on write-optimized database algorithms – Tokutek, LevelDB, RocksDB, HBase, Cassandra. We also get reports of amazing performance. I think there is too much focus on peak performance and not enough on predictable performance and manageability.”

In my humble opinion, this is the current state of the MySQL industry: learning, developing, and establishing how to manage MySQL.  Although the new WebScaleSQL collaboration is focused prima facie on performance at scale, as Mark said in his blog post, “Predictable performance is part of manageability.”  There are many other companies and projects for managing various aspects of MySQL, like ClusterControl for MySQL Galera by Severalnines and Propagator by Outbrain (both were at PLMCE this year).

Earlier I said “MySQL works, but it’s not a panacea and successful companies realize this.”  Successful companies like Dyn (who presented this year) use MySQL and other technologies.  It’s important to realize that MySQL is one part of a business.  The other parts are Hadoop, Redis, memcached, etc.  OpenStack and other cloud platforms are increasingly mentioned, too.  Therefore, managing MySQL is only half the story.  The other half is understanding MySQL’s place in and interaction with other business technologies.

In summary, for me Percona Live MySQL Conference & Expo 2014 highlighted how MySQL has become one castle in the kingdom whereas 10 years ago it was an outpost on the frontier.  People no longer ask “is MySQL fast and reliable?” Instead they ask, “how can we manage 100 MySQL instances and a handful of other technologies with 2 full-time DBAs?”  The MySQL industry will continue to add features and improve performance, but we have shifted from doing that in the service of making a stable product to making a manageable product.

The post Percona Live 2014 behind; MySQL ahead appeared first on MySQL Performance Blog.

Jul 29, 2013
--

Your troubles are over, pt-heartbeat 2.1

Poor pt-heartbeat 2.1: it was perfectly fine through Percona Toolkit 2.1.7, but it’s had a rough life since. Its troubles are finally over as of Percona Toolkit 2.1.10. At a conference recently, a user joked to me that “Percona Toolkit is awesome, but you really broke pt-heartbeat.” It’s true, and if you haven’t heard or experienced the story, here it is for the record.

Since the dawn of time, pt-heartbeat had computed slave lag like:

t1=time (Perl) --> replicate --> t2=time (Perl); lag = t2 - t1

Surprisingly, that worked for many years despite what should have been an obvious problem: different time zones: t1 could be in PST and t2 in EST and even if replication took effectively zero seconds, lag would be reported because 00:00 PST (t1) is 03:00 EST (t2).
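The skew is easy to demonstrate. A minimal sketch (fixed UTC offsets stand in for real PST/EST zone rules): the same instant produces naive local timestamps three hours apart, so the subtraction reports three hours of phantom lag.

```python
from datetime import datetime, timedelta, timezone

PST = timezone(timedelta(hours=-8))  # fixed offsets; real zones also have DST
EST = timezone(timedelta(hours=-5))

instant = datetime(2013, 6, 28, 8, 0, 0, tzinfo=timezone.utc)  # one real moment
t1 = instant.astimezone(PST).replace(tzinfo=None)  # master's wall clock: 00:00
t2 = instant.astimezone(EST).replace(tzinfo=None)  # replica's wall clock: 03:00
lag = (t2 - t1).total_seconds()
print(lag)  # 10800.0: three hours of "lag" for zero actual delay
```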

This problem was not reported until bug 886059, which we fixed in PT 2.1.8. The solution jumped out at us: use only UTC timestamps. We did, and this is when the troubles of pt-heartbeat 2.1 began.

The change in 2.1.8 broke pt-heartbeat in two ways. First, using only UTC timestamps made 2.1.8 backwards-incompatible with 2.1.7 and earlier versions. We’re usually very attentive to maintaining backwards-compatibility within minor versions, but our failing in this case was the mistaken assumption that everyone running 2.1 would upgrade all their servers at once. That is not true: people run mixes of minor versions, relying on developers to maintain backwards-compatibility within the minor version. So those running 2.1.8 alongside 2.1.7 or earlier versions of pt-heartbeat ran into the very problem we were trying to fix (different time zones), because 2.1.8 used only UTC whereas earlier versions did not.

The second break was switching from Perl time to MySQL time, which caused a loss of precision. Most MySQL servers have only 1-second timestamps, like 2013-06-28 00:00:01, but Perl time (when using the Time::HiRes module) gives us microsecond timestamps like 2013-06-28 00:00:01.123456. Microsecond timestamps are required to achieve what the tool’s docs state:

pt-heartbeat has a maximum resolution of 0.01 second.

So those running 2.1.8 saw a complete loss of high-precision slave lag readings, which made the tool pretty much useless: MySQL already provides 1-second slave lag resolution via Seconds_Behind_Master (even though the way it computes that value differs from how pt-heartbeat computes slave lag).
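A quick sketch of why the precision matters: 50 ms of real lag is measurable with microsecond timestamps but rounds away entirely at 1-second resolution.

```python
# 50 ms of real replication lag:
t1 = 1372377601.123456   # heartbeat written (microsecond precision, Time::HiRes style)
t2 = 1372377601.173456   # heartbeat read back on the replica
print(round(t2 - t1, 3))  # 0.05: visible with microsecond timestamps
print(int(t2) - int(t1))  # 0: invisible with 1-second timestamps
```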

We fixed those two bugs in PT 2.1.9, bug 1099665 and bug 1103221 respectively, by adding the --utc option. Now pt-heartbeat 2.1.9 was back to its original 2.1.7 self: microsecond timestamps from Perl were used again, and users could specify --utc to use UTC timestamps. Unfortunately, the troubles of pt-heartbeat 2.1 did not end there.

Certain MySQL functions are “time zone aware”, like UNIX_TIMESTAMP() for which the docs state:

The server interprets date as a value in the current time zone and converts it to an internal value in UTC.

So although we wrote UTC timestamps, we read them back in whatever the system time zone was. The fix: SET time_zone='+0:00' to truly make everything UTC. This was bug 1163372: “pt-heartbeat --utc --check always returns 0”. Given this change and those before it, bug 1137556, “pt-heartbeat docs don’t account for --utc”, was perhaps inevitable. Not only had we been breaking and fixing pt-heartbeat’s handling of time, but reading the docs would have given the wrong information, too.
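The effect is easy to sketch outside MySQL: interpreting the same written timestamp string in two different session time zones yields two different epoch values (fixed offsets stand in for real zone rules here).

```python
from datetime import datetime, timedelta, timezone

ts = "2013-06-28 00:00:01"            # UTC wall-clock time that pt-heartbeat wrote
wall = datetime.strptime(ts, "%Y-%m-%d %H:%M:%S")

# UNIX_TIMESTAMP() interprets the string in the session time zone:
as_utc = wall.replace(tzinfo=timezone.utc).timestamp()
as_est = wall.replace(tzinfo=timezone(timedelta(hours=-5))).timestamp()
print(as_est - as_utc)  # 18000.0: five hours of skew unless the session is UTC
```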

Finally, those bugs have been fixed in Percona Toolkit 2.1.10 and I think it’s ok to say now: pt-heartbeat 2.1, your troubles are over. 2.1.10 will probably be the last release in the 2.1 series; users should upgrade to 2.2. pt-heartbeat 2.2.4 and later have all the fixes of pt-heartbeat 2.1.10.

Sorry pt-heartbeat 2.1, and sorry users of pt-heartbeat 2.1. We’ve learned from this tragic tale, and we hope never to repeat it.

The post Your troubles are over, pt-heartbeat 2.1 appeared first on MySQL Performance Blog.

May 06, 2013
--

Percona Toolkit 2.2.2 released; bug fixes include pt-heartbeat & pt-archiver

During the Percona Live MySQL Conference & Expo 2013 the week before last, we quietly released Percona Toolkit 2.2.2 with a few bug fixes:

  • pt-archiver --bulk-insert may corrupt data
  • pt-heartbeat --utc --check always returns 0
  • pt-query-digest 2.2 prints unwanted debug info on tcpdump parsing errors
  • pt-query-digest 2.2 prints too many string values
  • Some tools don’t have --help or --version
  • Some tools use @@hostname without /*!50038*/
  • pt-stalk prints the wrong variable name in verbose mode when --function is used
  • And more

The first two bugs are important. Those using pt-archiver --bulk-insert with UTF-8 data should definitely upgrade. Those using pt-heartbeat --utc should also definitely upgrade.

Users may note the revival of the --show-all option in pt-query-digest. It had been removed in 2.2.1, but its removal resulted in too much output in certain cases, so it’s back.

A new --recursion-method was added to pt-table-checksum: cluster. This method attempts to auto-discover cluster nodes, alleviating the need to specify cluster node DSNs in a DSN table (--recursion-method=dsn).

See https://launchpad.net/percona-toolkit/+milestone/2.2.2 for the full list of bugs fixed in Percona Toolkit 2.2.2.

Visit http://www.percona.com/software/percona-toolkit to download Percona Toolkit 2.2.2; it’s also in our main repos.

The post Percona Toolkit 2.2.2 released; bug fixes include pt-heartbeat & pt-archiver appeared first on MySQL Performance Blog.

Mar 25, 2013
--

MySQL 5.6 Compatible Percona Toolkit 2.2 Released

A new Percona Toolkit series has been released: Percona Toolkit 2.2 for MySQL 5.6. Several months in the making, with many new features, changes, and improvements, this is a great new series. It replaces the 2.1 series, for which we plan to do only one more bug fix release (2.1.10) before retiring it. 2.2 is largely backward-compatible with 2.1, but some tools were changed significantly (e.g. pt-query-digest, pt-upgrade, and pt-online-schema-change).  Here are some highlights:

Official support for MySQL 5.6 and Percona XtraDB Cluster

We started beta support for MySQL 5.6 in 2.1.8 when 5.6 was still beta. Now that MySQL 5.6 is GA, so is our support for it. Check out the Percona Toolkit supported platforms and versions. When you upgrade to MySQL 5.6, be sure to upgrade to Percona Toolkit 2.2, too.

We also started beta support for Percona XtraDB Cluster in 2.1.8, but now that support is official in 2.2 because we have had many months to work with PXC and figure out which tools work with it and how. There’s still one noticeable omission: pt-table-sync. It’s still unclear if or how one would sync a cluster that, in theory, doesn’t become out-of-sync. As Percona XtraDB Cluster develops, Percona Toolkit will continue to evolve to support it.

pt-online-schema-change (pt-osc) is much more resilient

pt-online-schema-change 2.1 has been a great success, and people have been using it for ever more difficult and challenging tasks. Consequently, we needed to make it “try harder”, even though it already tried pretty hard to keep working despite recoverable errors and such. Whereas pt-osc 2.1 only retries certain operations, pt-osc 2.2 retries every critical operation, and the number of tries and the wait time between tries are configurable for all operations. Also, we removed --lock-wait-timeout, which set innodb_lock_wait_timeout, because it now conflicts with, or is at least confused with, lock_wait_timeout (introduced in MySQL 5.5) for metadata locks. Now --set-vars is used to set both of these (or any) system variables. For a quick intro to metadata locks and how they may affect you, see Ovais’s article.

In short: pt-online-schema-change 2.2 is far more resilient out of the box. It’s also aware of metadata locks now, whereas 2.1 was not really aware of them. And it’s highly configurable, so you can make the tool try very hard to keep working.

pt-upgrade is brand-new

pt-upgrade was written once long ago and not updated since.  Now that we have four base versions of MySQL (5.0, 5.1, 5.5, and 5.6), plus at least four major forks (Percona Server, MariaDB, Percona XtraDB Cluster, and MariaDB Galera Cluster), upgrades are fashionable, so to speak. Problem is: “original” pt-upgrade was too noisy and too complex. pt-upgrade 2.2 is far simpler and far easier to use. It’s basically what you expect from such a tool. Moreover, it has a really helpful new feature: “reference results”, i.e. saved results from running queries on a server. Granted, this can take a lot of disk space, but it allows you to “run now, compare later.”

If you’re thinking about upgrading, give pt-upgrade a try. It also reads every type of log now (slow, general, binary, and tcpdump), so you shouldn’t have a problem finding queries to run and compare.

pt-query-digest is simpler

pt-query-digest 2.2 has fewer options now. Basically, we re-focused it on its primary objective: analyzing MySQL query logs. So the ability to parse memcached, Postgres, Apache, and other logs was removed. We also removed several options that probably nobody ever used, and changed/renamed other options to be more logical. The result is a simpler, more focused tool, i.e. less overwhelming.

Also, pt-query-digest 2.2 can save results in JSON format (--output=json). This feature is still in development while we determine the optimal JSON structure.

Version check is on by default

In 2.1.4, released September/October 2012, we introduced a feature called “version check” into most tools. It’s like a lot of software that automatically checks for updates, but it’s also more: it’s a free service from Percona that advises when certain programs (Percona Toolkit tools, MySQL, Perl, etc.) are either out of date or are known bad versions. For example, there are two versions of the DBD::mysql Perl module that have problems. And there are certain versions of MySQL that have critical bugs. Version check will warn you about these if your system is running them.

What’s new in 2.2 is that this feature (specifically, the --version-check option in tools) used to be off by default, and now it’s on by default. If the IO::Socket::SSL Perl module is installed (easily available through your package manager), it will use a secure (https) connection over the web; otherwise it will use a standard (http) connection.

pt-stalk and pt-mysql-summary have built-in MySQL options

No more “pt-stalk -- -h db1 -u me”. pt-stalk 2.2 and pt-mysql-summary 2.2 have all the standard MySQL options built-in, like other tools: --user, --host, --port, --password, --socket, --defaults-file. So now the command line is what you expect: pt-stalk -h db1 -u me.

pt-stalk --no-stalk is no longer magical

Originally, pt-stalk --no-stalk was meant to simulate pt-collect, i.e. collect once and exit. To do that, the tool magically set some options and clobbered others, resulting in no way to do repeated collections at intervals. Now --no-stalk means only that: don’t stalk, just collect, respecting --interval and --iterations as usual. So to collect once and exit: pt-stalk --no-stalk --iterations 1.

pt-fk-error-logger and pt-deadlock-logger are standardized

Similar to the pt-stalk --no-stalk changes, pt-fk-error-logger and pt-deadlock-logger received mini overhauls in 2.2 to make their run-related options (--run-time, --interval, --iterations) standard. If you hadn’t noticed, one tool would run forever by default, while the other would run once and exit, and each treated its run-related options a little differently. This magic is gone now: both tools run forever by default, so specify --iterations or --run-time to limit how long they run.


There are more changes in addition to those highlights.  For example, three tools were removed, and there were several bug fixes. See https://launchpad.net/percona-toolkit/+milestone/2.2.1 for the full list.  If you upgrade from 2.1 to 2.2, be sure to re-read tools’ documentation to see what has changed.

As the first release in a new series, 2.2 features are not yet finalized. In other words, we may change things like the pt-query-digest --output=json format in future releases after receiving real-world feedback.

In summary, Percona Toolkit 2.2 for MySQL is an exciting release with many new and helpful features. Users are encouraged to begin upgrading, particularly given that, except for the forthcoming 2.1.10 release, no more work will be done on 2.1 (unless you’re a Percona customer with a support contract or other agreement).

The post MySQL 5.6 Compatible Percona Toolkit 2.2 Released appeared first on MySQL Performance Blog.

Feb 28, 2013
--

MySQL 5.5 lock_wait_timeout: patience is a virtue, and a locked server

Like Ovais said in Implications of Metadata Locking Changes in MySQL 5.5, the hot topic these days is MySQL 5.6, but there was an important metadata locking change in MySQL 5.5.  As I began to dig into the Percona Toolkit bug he reported concerning this change apropos pt-online-schema-change, I discovered something about lock_wait_timeout that shocked me.  From the MySQL 5.5 docs for lock_wait_timeout:

This variable specifies the timeout in seconds for attempts to acquire metadata locks. The permissible values range from 1 to 31536000 (1 year). The default is 31536000.

Translation: by default, MySQL 5.5 will “meta-block” for 1 year!  In my humble opinion, this is a bug, especially given the various subtle and sometimes quiet ways that metadata locking can lock the server as Ovais describes in his blog post.  The default for innodb_lock_wait_timeout, by comparison, is 50 seconds.  That’s reasonable, but 31536000 is not.  I would only set a timeout, or any kind of wait or interval value, to such a high value to play a practical joke on someone.
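The arithmetic behind that default:

```python
# MySQL 5.5 default lock timeouts, in seconds:
lock_wait_timeout_default = 365 * 24 * 60 * 60  # metadata locks
innodb_lock_wait_timeout_default = 50           # InnoDB row locks

print(lock_wait_timeout_default)  # 31536000: exactly one non-leap year in seconds
print(lock_wait_timeout_default // innodb_lock_wait_timeout_default)  # 630720x longer
```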

Second, and finally, if: “This variable specifies the timeout in seconds for attempts to acquire metadata locks,” then why isn’t it called metadata_lock_wait_timeout?

In conclusion: be careful when upgrading to MySQL 5.5 because the solstices may pass before your DDL statement completes.

The post MySQL 5.5 lock_wait_timeout: patience is a virtue, and a locked server appeared first on MySQL Performance Blog.

Feb 22, 2013
--

CentOS 5.8 users: your UTF-8 data is in peril with Perl MySQL

CentOS 5.8 and earlier use Perl module DBD::mysql v3.0007 which has a bug that causes Perl not to flag UTF-8 data as being UTF-8.  Presuming that the MySQL table/column is using UTF-8, and the Perl MySQL connection is also using UTF-8, then a correct system returns:

PV = 0x9573840 "\343\203\213 \303\250"\0 [UTF8 "\x{30cb} \x{e8}"]

That’s a Devel::Peek inside a Perl scalar variable which clearly shows that Perl has recognized and flagged the data as UTF-8.  With DBD::mysql v3.0007, however, an incorrect system returns:

PV = 0x92df9a8 "\343\203\213 \303\250"\0

Notice that it’s the same byte sequence (in octal), but there’s no UTF-8 flag.  As far as Perl is concerned, this is Latin1 data.

What does this mean for you?  In general, it means that Perl could corrupt the data by treating UTF-8 data as Latin1 data.  If the program doesn’t alter the data, then the problem is “overlooked” and compensated for by the fact that MySQL knows that the data is UTF-8 and treats it as such.  We have found, however, that a program can modify the data without corrupting it, but this is risky and really only works by luck, so you shouldn’t rely on it.
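The same bytes, viewed both ways (a Python sketch of what the Perl UTF-8 flag controls): decoded as UTF-8 they are “ニ è”, but treated as Latin1 and re-encoded to UTF-8 they become double-encoded mojibake.

```python
data = b"\xe3\x83\x8b \xc3\xa8"  # the octal dump above: UTF-8 bytes for 'ニ è'
print(data.decode("utf-8"))      # ニ è (correct: bytes treated as UTF-8)

# If a program treats the same bytes as Latin1 (no UTF-8 flag) and re-encodes:
corrupted = data.decode("latin-1").encode("utf-8")
print(corrupted)  # b'\xc3\xa3\xc2\x83\xc2\x8b \xc3\x83\xc2\xa8' (mojibake)
```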

I’d like to clarify two things.  First, DBD::mysql v3.0007 was released in September 2006, but this very old problem still exists today because CentOS 5 is still a popular Linux distro.  So this isn’t “breaking news”: Perl and DBD::mysql have handled UTF-8 correctly for nearly a decade.  Second, just a reminder: all Percona Toolkit tools that connect to MySQL have a --charset option and an “A” DSN option for setting the character set.

In conclusion, if you

  1. Run CentOS 5
  2. Have UTF-8 data in MySQL
  3. Use Perl to access that data
  4. Have not upgraded DBD::mysql (perl-DBD-MySQL)

then your UTF-8 data is in peril with Perl.  The solution is simple: upgrade to any newer version of DBD::mysql (except 4.014).

The post CentOS 5.8 users: your UTF-8 data is in peril with Perl MySQL appeared first on MySQL Performance Blog.
