Analyzing SQL Queries with Percona Toolkit, Feb. 25-28, 9-11 a.m. PST
Next week I’m teaching an online Percona Training class called Analyzing SQL Queries with Percona Toolkit. It’s a guided tour of best practices for pt-query-digest, the best tool for evaluating where your database response time is being spent.
This month we saw the GA release of MySQL 5.6, and I wanted to check whether any improvement had been made to the slow query log in table format. Users of some hosted MySQL services, such as Amazon RDS, must use table logs, since they can’t access the file-based logs on the server.
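As a refresher, switching a stock MySQL server to table-based slow query logging looks something like the following (on RDS the equivalent settings go into a DB parameter group); the long_query_time threshold here is only an example value:

-- Route the slow query log to the mysql.slow_log table instead of a file
SET GLOBAL log_output = 'TABLE';
SET GLOBAL slow_query_log = ON;
-- Example threshold: log anything that runs longer than half a second
SET GLOBAL long_query_time = 0.5;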
I read the logging code in MySQL 5.6.10 source, in file sql/log.cc. I discovered that they have refactored the way they write to file-based logs versus table-based logs, but one particular feature I was looking for has not changed.
When writing slow query information to the log file, MySQL outputs the query time and lock time as floating-point numbers with microsecond precision:
sql/log.cc, MYSQL_QUERY_LOG::write():
1945 /* For slow query log */
1946 sprintf(query_time_buff, "%.6f", ulonglong2double(query_utime)/1000000.0);
1947 sprintf(lock_time_buff, "%.6f", ulonglong2double(lock_utime)/1000000.0);
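For comparison, here is what an entry in the file-based slow query log looks like with that sub-second resolution (the query and its timings are invented for illustration):

# Time: 130218 10:15:03
# User@Host: appuser[appuser] @ localhost []
# Query_time: 0.002316  Lock_time: 0.000042 Rows_sent: 10  Rows_examined: 1024
SET timestamp=1361182503;
SELECT * FROM orders WHERE customer_id = 42;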
However, just like in prior releases of MySQL, when writing slow query information to the log table, query times and lock times are truncated to integers:
sql/log.cc, Log_to_csv_event_handler::log_slow():
610 longlong query_time= (longlong) (query_utime/1000000);
611 longlong lock_time= (longlong) (lock_utime/1000000);
All the times recorded in slow query log tables are rounded down to the nearest whole second, meaning any query that executes in less than 1.0 seconds counts as 0 seconds. This will skew your average query time report.
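You can see the effect by querying the table directly. On a 5.6 server with table logging enabled, sub-second queries come back looking like this (the rows are invented for illustration):

mysql> SELECT start_time, query_time, lock_time, sql_text FROM mysql.slow_log LIMIT 2;
+---------------------+------------+-----------+------------------------------------+
| start_time          | query_time | lock_time | sql_text                           |
+---------------------+------------+-----------+------------------------------------+
| 2013-02-18 10:15:03 | 00:00:00   | 00:00:00  | SELECT * FROM orders WHERE id = 42 |
| 2013-02-18 10:15:07 | 00:00:02   | 00:00:00  | SELECT COUNT(*) FROM big_table     |
+---------------------+------------+-----------+------------------------------------+

Any sub-second query shows up as 00:00:00, and a 2.9-second query shows up as 00:00:02.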
Still, if you use Amazon RDS or a similar instance without filesystem access, you may want to run a report against the query log table in spite of this. You could at least get a report of the most frequent queries, even though the response times are imprecise. But pt-query-digest and similar tools don’t read the query log table; they only read the query log file.
To work around this, you can use a client script to dump the contents of the query log table into a flat file in the format of a conventional query log file, which you can then use as input to pt-query-digest. I’ve seen several implementations of this type of script in the past, but here’s mine:
# Dump the contents of mysql.slow_log in the format of a file-based slow query log.
# Any extra arguments (e.g. -h, -u, -p) are passed through to the mysql client.
cat <<'GO' | mysql --raw --skip-column-names --quick --silent --no-auto-rehash --compress "$@"
SELECT CONCAT(
  '# Time: ', DATE_FORMAT(start_time, '%y%m%d %H:%i:%s'), CHAR(10),
  '# User@Host: ', user_host, CHAR(10),
  '# Query_time: ', TIME_TO_SEC(query_time),
  ' Lock_time: ', TIME_TO_SEC(lock_time),
  ' Rows_sent: ', rows_sent,
  ' Rows_examined: ', rows_examined, CHAR(10),
  'SET timestamp=', UNIX_TIMESTAMP(start_time), ';', CHAR(10),
  -- Administrator commands are stored as the bare command name; label them the way the file-based log does.
  IF(FIND_IN_SET(sql_text, 'Sleep,Quit,Init DB,Query,Field List,Create DB,Drop DB,Refresh,Shutdown,Statistics,Processlist,Connect,Kill,Debug,Ping,Time,Delayed insert,Change user,Binlog Dump,Table Dump,Connect Out,Register Slave,Prepare,Execute,Long Data,Close stmt,Reset stmt,Set option,Fetch,Daemon,Error'),
     CONCAT('# administrator command: ', sql_text), sql_text),
  ';'
) AS `# slow-log`
FROM `mysql`.`slow_log`;
GO
# Append a final "#" comment line after the dumped log entries.
echo "#"
I host the script above in my bk-tools GitHub project, where it’s called export-slow-log-table. I distribute it under the terms of the GNU General Public License, version 3.
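Assuming you’ve saved the script as export-slow-log-table and made it executable, a run against a remote instance might look like this (the hostname and credentials are placeholders), with the resulting file fed to pt-query-digest:

./export-slow-log-table -h mydb.example.com -u dbuser -p > slow-from-table.log
pt-query-digest slow-from-table.log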
Do you want to learn more about best practices for manipulating query logs and running reports on them? Register to join me February 25-28, 9-11 a.m. Pacific Time, for my online class Analyzing SQL Queries with Percona Toolkit.