Calling all MySQL DBAs: How do you use Percona Toolkit?

Percona Toolkit is one of our most mature open source applications. Derived from Maatkit and Aspersa, Percona Toolkit has evolved significantly over the years. The software now contains 32 tools, over 4,000 tests, and has been downloaded over 250,000 times. Anyone who manages a database – from DBAs to system administrators to even software developers – benefits from Percona Toolkit’s ability to perform a variety of MySQL server and system tasks that are too difficult or complex to perform manually.

We continue to make Percona Toolkit better each month. Over the last 9 months alone, Percona has shipped 6 releases and resolved nearly 50 issues.


While Percona team members in Support, Consulting, and Managed Services are big drivers of identifying bugs and new features (driven mostly by Percona customer needs), the community of Percona Toolkit users plays a significant role in making the open source software what it is today.

We’d like to learn how we can make Percona Toolkit even better for your needs. Please take a brief survey so we can learn how you actually use the software. As a thank you for taking the survey, we are randomly giving away five $50 gift cards to participants. It’s a small token but one that we hope you’ll appreciate.

Recent additions to Percona Toolkit have included better Percona XtraDB Cluster support as well as multiple fixes and improvements to pt-online-schema-change, pt-kill, pt-query-digest, and pt-stalk, along with preparation for the MySQL 5.7 GA. Help us continue to improve Percona Toolkit by taking part in our survey. If you use Percona Toolkit and are attending Percona Live next month, please keep an eye out for me. I’d like to hear about your experiences.

The post Calling all MySQL DBAs: How do you use Percona Toolkit? appeared first on MySQL Performance Blog.


Percona Toolkit 2.1.9 bug raffle


Since we’re very busy working on Percona Toolkit 2.2 and other projects, I thought 2.1.8 (the current latest Percona Toolkit release) would be the last release in that series, but it introduced a new bug in pt-heartbeat (despite all the tool’s tests) that I’d like to fix.

A single bug fix is probably underkill for a full release (unless it’s a hotfix), so for 2.1.9 let’s fix whatever you want: our first*-ever “bug raffle.”

“Space is limited”, so we can only take on a few bugs (let’s say 10 at most), depending on their complexity and reproducibility. If you have a bug** you want fixed in 2.1.9, please post a link to it in a comment.

I’ll go first:

* Maybe not the first: I have a vague memory of doing something similar for Maatkit.
** An actual bug, not a feature request.



How to generate per-database traffic statistics using mk-query-digest

We often encounter customers who have partitioned their applications among a number of databases within the same instance of MySQL (think application service providers who have a separate database per customer organization … or wordpress-mu type of apps). For example, take the following single MySQL instance with multiple (identical) databases:

+----------+
| Database |
+----------+
| db1      |
| db2      |
| db3      |
| db4      |
| mysql    |
+----------+

Separating the data in this manner is a great setup for being able to scale by simply migrating a subset of the databases to a different physical host when the existing host begins to get overloaded. But MySQL doesn’t allow us to examine statistics on a per-database basis.

Enter Maatkit.

There is an often-ignored gem in Maatkit’s mk-query-digest: the --group-by argument. It can be used to aggregate information by tables, hosts, users, or databases (full documentation is available via perldoc).

%> perl mk-query-digest --limit 100% --group-by db slow.log
# Rank Query ID Response time Calls R/Call Item
# ==== ======== ============= ===== ====== ====
#    1 0x       6000 60.0%    6000  0.5124 db3
#    2 0x       2000 20.0%    2000  0.0112 db1
#    3 0x       1500 15.0%    1500  0.1665 db2
#    4 0x        500  5.0%     500  0.0022 db4

So here, we can see that the majority (60%, to be exact) of execution time is spent in db3. If the server is reaching its capacity and the next most useful performance optimization is to migrate a database to a different server, you know exactly which database to move (db3) and how much room that will give you on the original host (60% headroom) and on the new host (40% headroom), which may have a direct bearing on your hardware selection.
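The capacity arithmetic above can be scripted directly against the digest output. Here is a minimal sketch, not part of the original post, that assumes the exact profile column layout shown in the example; the sample lines are inlined via a heredoc, whereas in practice you would feed the tool’s actual output:

```shell
# Pull each database's share of response time out of mk-query-digest's
# profile lines, then compute the headroom a migration would create.
# The heredoc reproduces the sample output above; the parsing assumes
# that exact column layout (percentage in the 5th field, db name last).
result=$(awk '/^#[[:space:]]*[0-9]+[[:space:]]+0x/ {
    pct = $(NF - 3); sub(/%/, "", pct); pct += 0   # "60.0%" -> 60
    printf "%s consumes %.1f%% of response time\n", $NF, pct
    if (pct > max) { max = pct; top = $NF }
}
END {
    # Moving the heaviest database frees its share on the source host
    # and leaves (100 - share)% headroom on the destination.
    printf "move %s: source frees %.0f%%, destination keeps %.0f%% headroom\n",
           top, max, 100 - max
}' <<'EOF'
#    1 0x       6000 60.0%    6000  0.5124 db3
#    2 0x       2000 20.0%    2000  0.0112 db1
#    3 0x       1500 15.0%    1500  0.1665 db2
#    4 0x        500  5.0%     500  0.0022 db4
EOF
)
printf '%s\n' "$result"
```

The same one-liner works on a live run by piping mk-query-digest’s output into the awk program instead of using the heredoc.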

Let Baron know how awesome you think this is by getting him a gift from his Amazon Wish List!

Entry posted by Ryan Lowe

