pt-pmp is a profiling tool that creates and summarizes full stack traces of processes on Linux. It was inspired by http://poormansprofiler.org and has helped Percona Support resolve many performance issues. In this blog post, I will present an improved pt-pmp that can collect stack traces with minimal impact on the production environment. TL;DR: Starting from Percona Toolkit […]
eu-stack Support and Other Improvements in pt-pmp of Percona Toolkit
Evaluation of PMP Profiling Tools
In this blog post, we’ll look at some of the available PMP profiling tools.
While debugging or analyzing issues with Percona Server for MySQL, we often need a quick understanding of what’s happening on the server. Percona experts frequently use the pt-pmp tool from Percona Toolkit (inspired by http://poormansprofiler.org).
The pt-pmp tool collects application stack traces with GDB and then post-processes them. From this you get a condensed, ordered list of the stack traces. The list helps you understand where the application spends most of its time: either running something or waiting for something.
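To make the idea concrete, here is a rough sketch of the underlying approach, adapted from the poormansprofiler.org one-liner that pt-pmp is based on. This is not pt-pmp itself; it assumes mysqld is the target process and that GDB is installed.
# Take one round of backtraces from every thread, collapse each thread's
# stack into a single comma-separated line, then count identical stacks.
gdb -ex "set pagination 0" -ex "thread apply all bt" -batch -p $(pidof mysqld) 2>/dev/null | \
awk '
  /^Thread/ { if (s != "") print s; s = "" }            # new thread: flush the previous stack
  /^#/      { if (s == "") s = $4; else s = s "," $4 }  # crude: field 4 is usually the function name
  END       { if (s != "") print s }
' | sort | uniq -c | sort -rn                           # most common stacks first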
Getting a profile with pt-pmp is handy, but it has a cost: it's quite intrusive. In order to get stack traces, GDB has to attach to each thread of your application, which results in interruptions. Under high loads, these stops can be quite significant (up to 15, 30, or even 60 seconds). This means that the pt-pmp approach is not really usable in production.
Below I’ll describe how to reduce GDB overhead, and also what other tools can be used instead of GDB to get stack traces.
- GDB
By default, the symbol resolution process in GDB is very slow. As a result, getting stack traces with GDB is quite intrusive (especially under high loads). There are two options available that can notably reduce GDB tracing overhead:
- Use the readnever patch. RHEL and other distros based on it include a GDB with the readnever patch applied. This patch allows you to avoid unnecessary symbol resolving with the --readnever option. As a result, you get up to 10 times better speed.
- Use gdb_index. This feature was added to address the symbol-resolving issue by creating and embedding a special index into the binaries. The index is quite compact: I created and embedded a gdb_index for the Percona Server binary, and it increased the size by around 7-8 MB. Adding the gdb_index speeds up obtaining stack traces/resolving symbols two to three times.
# to check if the index already exists:
readelf -S mysqld | grep gdb_index
# to generate the index:
gdb -batch mysqld -ex "save gdb-index /tmp" -ex "quit"
# to embed the index:
objcopy --add-section .gdb_index=/tmp/mysqld.gdb-index --set-section-flags .gdb_index=readonly mysqld mysqld
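Whether --readnever is available depends on the GDB build, so it can be worth checking before scripting around it. A minimal sketch of such a check, which simply relies on GDB rejecting unknown options:
# Sketch: a GDB without readnever support exits with an
# "unrecognized option" error, so the exit code tells us what we need.
if gdb --readnever --batch -ex "quit" >/dev/null 2>&1; then
    echo "this gdb supports --readnever"
else
    echo "this gdb does not support --readnever"
fi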
- eu-stack (elfutils)
The eu-stack utility from the elfutils package prints the stack for each thread in a process or core file. Symbol resolving is also not very optimized in eu-stack; by default, running it under load takes even more time than GDB. However, eu-stack allows you to skip resolving completely, so it can get the stack frames quickly and resolve them later without any impact on the workload (see the sketch after the command lines below).
- Quickstack
Quickstack is a tool from Facebook that gets stack traces with minimal overhead.
Now let’s compare all the above profilers. We will measure how long each one needs to collect all the stack traces from Percona Server for MySQL under a high load (sysbench OLTP_RW with 512 threads).
The results show that eu-stack (without resolving) got all the stack traces in less than a second, and that Quickstack and GDB (with the readnever patch) came very close to that. For the other profilers, the time was around two to five times higher, which is quite unacceptable for profiling (especially in production).
One more note regarding the pt-pmp tool: the current version only supports GDB as the profiler. However, there is a development version of this tool that supports GDB, Quickstack, eu-stack, and eu-stack with offline symbol resolving. It also allows you to look at stack traces for specific threads (tids), so in the case of Percona Server for MySQL, for instance, we can analyze just the purge, cleaner, or IO threads.
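As an illustration of that kind of per-thread filtering (a sketch only, not the development pt-pmp): the OS thread ids of, say, the purge threads can be looked up in performance_schema.threads and then used to pick the matching stacks out of eu-stack output. The thread-name pattern, the temporary file paths, and the assumption that eu-stack labels each stack with a "TID <n>:" line are all illustrative.
# Sketch: collect all stacks once, then keep only the InnoDB purge threads.
# Assumes thread_os_id is available in performance_schema.threads (5.7+)
# and that connection options for the mysql client are already configured.
mysql -NBe "SELECT thread_os_id FROM performance_schema.threads WHERE name LIKE '%purge%'" > /tmp/purge_tids.txt
eu-stack -p $(pidof mysqld) > /tmp/stacks.txt
while read -r tid; do
    awk -v tid="$tid" '$1 == "TID" { show = ($2 == tid ":") } show' /tmp/stacks.txt
done < /tmp/purge_tids.txt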
Below are the command lines used in testing:
# gdb & gdb+gdb_index
time gdb -ex "set pagination 0" -ex "thread apply all bt" -batch -p `pidof mysqld` > /dev/null
# gdb+readnever
time gdb --readnever -ex "set pagination 0" -ex "thread apply all bt" -batch -p `pidof mysqld` > /dev/null
# eu-stack
time eu-stack -s -m -p `pidof mysqld` > /dev/null
# eu-stack without resolving
time eu-stack -q -p `pidof mysqld` > /dev/null
# quickstack - 1 sample
time quickstack -c 1 -p `pidof mysqld` > /dev/null
# quickstack - 1000 samples
time quickstack -c 1000 -p `pidof mysqld` > /dev/null
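For completeness, the raw addresses captured by the "eu-stack without resolving" variant can be symbolized later, off the critical path. A minimal sketch, assuming the server binary lives at /usr/sbin/mysqld and ignoring the load-offset adjustment a position-independent binary would need (taken from /proc/<pid>/maps):
# 1) capture raw frames quickly, with minimal stall on the server
eu-stack -q -p `pidof mysqld` > /tmp/raw-stacks.txt
# 2) resolve the captured addresses later, offline
grep -oE '0x[0-9a-f]+' /tmp/raw-stacks.txt | addr2line -C -f -e /usr/sbin/mysqld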
MySQL QA Episode 3: How to use the debugging tool GDB
Welcome to MySQL QA Episode 3: “Debugging: GDB, Backtraces, Frames and Library Dependencies”
In this episode you’ll learn how to use the debugging tool GDB. The following debugging topics are covered:
1. GDB Introduction
2. Backtrace, Stack trace
3. Frames
4. Commands & Logging
5. Variables
6. Library dependencies
7. c++filt
8. Handy references
– GDB Cheat sheet (page #2): https://goo.gl/rrmB9i
– From Crash to testcase: https://goo.gl/3aSvVW
The episode also expands on live debugging and more, and is in HD quality (set your player to 720p!).
Optimizing MySQL Performance: Choosing the Right Tool for the Job
Next Wednesday, I will present a webinar about MySQL performance profiling tools that every MySQL DBA should know.
Application performance is a key aspect of ensuring a good experience for your end users. But finding and fixing performance bottlenecks is difficult in the complex systems that define today’s web applications. Having a method and knowing how to use the tools available can significantly reduce the amount of time between problems manifesting and fixes being deployed.
In the webinar, titled “Optimizing MySQL Performance: Choosing the Right Tool for the Job,” we’ll start with the basics, such as top, iostat, and vmstat, and then move on to advanced tools like GDB, OProfile, and strace.
I’m looking forward to this webinar and invite you to join us April 16th at 10 a.m. Pacific time. You can learn more and also register here to reserve your spot. I also invite you to submit questions ahead of time by leaving them in the comments section below. Thanks for reading and see you next Wednesday!