Jun 07, 2013
--

Private, custom Percona MySQL Training

Alexei Rodriguez presenting at PLMCE13

During the Percona Live MySQL Community Event this year we had the pleasure of hearing Alexei Rodriguez discuss why Evernote uses Percona’s private, custom MySQL training. Alexei is VP of Operations at Evernote, and you can watch a video of his presentation on YouTube.

Alexei said he values having Percona tailor training to their needs, the expert delivery by our practitioner-trainers, and the ability to speak freely about operational issues during private training.

If you would like to talk with our team about private custom training, please contact Sally Martinez at (855) 55TRAIN or
(925) 271-5054, or via e-mail at sally.martinez@percona.com.


Feb 18, 2013
--

Analyzing Slow Query Table in MySQL 5.6

Analyzing SQL Queries with Percona Toolkit, February 25-28, 2013, 9-11 a.m. Pacific

Next week I’m teaching an online Percona Training class called Analyzing SQL Queries with Percona Toolkit. It is a guided tour of best practices for pt-query-digest, the best tool for evaluating where your database response time is being spent.

This month we saw the GA release of MySQL 5.6, and I wanted to check whether any improvements had been made to the slow query log in table format. Users of some MySQL appliances, such as Amazon RDS, must use table logs, since they cannot access the file-based logs on the server.

I read the logging code in the MySQL 5.6.10 source, in the file sql/log.cc. I discovered that they have refactored the way they write to file-based logs versus table-based logs, but one particular behavior I was looking for has not changed.

When writing slow query information to the log file, the server outputs the query time and lock time as floating-point values with microsecond precision:

sql/log.cc, MYSQL_QUERY_LOG::write():

1945   /* For slow query log */
1946   sprintf(query_time_buff, "%.6f", ulonglong2double(query_utime)/1000000.0);
1947   sprintf(lock_time_buff, "%.6f", ulonglong2double(lock_utime)/1000000.0);
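
For example, a query that ran for about half a second would appear in the file-based log with full sub-second precision (the values below are illustrative, not from a real server):

# Query_time: 0.500107  Lock_time: 0.000071 Rows_sent: 1  Rows_examined: 0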

However, just as in prior releases of MySQL, when writing slow query information to the log table, query times and lock times are truncated to integers:

sql/log.cc, Log_to_csv_event_handler::log_slow():

610   longlong query_time= (longlong) (query_utime/1000000);
611   longlong lock_time= (longlong) (lock_utime/1000000);

All the times recorded in slow query log tables are rounded down to the nearest whole second, meaning any query that executes in less than 1.0 seconds counts as 0 seconds.  This will skew your average query time report.
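
You can see the truncation for yourself. Here is a minimal sketch, assuming you have privileges to change the global log settings; long_query_time is lowered at the session level only for the demonstration:

SET GLOBAL log_output = 'TABLE';
SET GLOBAL slow_query_log = 'ON';
SET SESSION long_query_time = 0;  -- log every query in this session, for the demo

SELECT SLEEP(0.5);  -- runs for roughly half a second

SELECT start_time, query_time, sql_text
FROM mysql.slow_log
ORDER BY start_time DESC
LIMIT 1;
-- query_time reads '00:00:00': the 0.5 seconds were truncated away.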

Still, if you use Amazon RDS or a similar instance without filesystem access, you may want to run a report on the query log in spite of this. You could still get a report of the most frequent queries, even if the response times are imprecise. But pt-query-digest and similar tools don’t read the query log table; they only read the query log file.

To work around this, you can use a client script to dump the contents of the query log table into a flat file in the format of a conventional query log file, which you can then use as input to pt-query-digest. I’ve seen several implementations of this type of script in the past, but here’s mine:

#!/bin/sh
# Export mysql.slow_log in the conventional slow-query-log file format.
# Any arguments are passed through to the mysql client (host, user, password, etc.).
cat <<'GO' | mysql --raw --skip-column-names --quick --silent --no-auto-rehash --compress "$@"
SELECT CONCAT(
'# Time: ', DATE_FORMAT(start_time, '%y%m%d %H:%i:%s'), CHAR(10),
'# User@Host: ', user_host, CHAR(10),
'# Query_time: ', TIME_TO_SEC(query_time),
' Lock_time: ', TIME_TO_SEC(lock_time),
' Rows_sent: ', rows_sent,
' Rows_examined: ', rows_examined, CHAR(10),
'SET timestamp=', UNIX_TIMESTAMP(start_time), ';', CHAR(10),
IF(FIND_IN_SET(sql_text, 'Sleep,Quit,Init DB,Query,Field List,Create DB,Drop DB,Refresh,Shutdown,Statistics,Processlist,Connect,Kill,Debug,Ping,Time,Delayed insert,Change user,Binlog Dump,Table Dump,Connect Out,Register Slave,Prepare,Execute,Long Data,Close stmt,Reset stmt,Set option,Fetch,Daemon,Error'),
CONCAT('# administrator command: ', sql_text), sql_text),
';'
) AS `# slow-log`
FROM `mysql`.`slow_log`;
GO
# Append a trailing comment line to the output.
echo "#"
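
For example, you could run it like this, redirecting the output to a file and then feeding that file to pt-query-digest (the hostname and user below are placeholders):

export-slow-log-table -h db.example.com -u dbuser -p > slow.log
pt-query-digest slow.log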

I host the script above in my bk-tools GitHub project. It’s called export-slow-log-table, and I distribute it under the terms of the GNU General Public License v3.

Do you want to learn more about best practices for manipulating query logs and running reports on them?  Register to join me February 25-28, 9AM-11AM Pacific Time, for my online class Analyzing SQL Queries with Percona Toolkit.


Jan 02, 2013
--

MySQL Training from Percona: January – March 2013

Now that we are in the New Year, it is time to settle back into work and make plans for 2013. As part of your professional development planning, consider Percona MySQL Training.

Percona will be holding the following MySQL Training classes in the first quarter:

  • January
    • Live Virtual Training – DBA Training for MySQL: January 7-10, 2013
    • Chicago, Illinois, USA: January 14-17, 2013
    • London, UK: January 14-17, 2013
  • February
    • Frankfurt, DE: February 4-7, 2013
    • San Francisco, CA, USA: February 4-7, 2013
  • March
    • New York, NY, USA: March 11-14, 2013

To view these training events, and others, go to percona.com/training.

As a thank-you for making 2012 a fantastic year for Percona Training, we are offering 10% off any public training class purchased in January. The class does not need to be taken in January. You will receive the discount when you check out of our online store by entering coupon code MPB10J.

If you have a team that you would like to train, it may be more cost-effective to bring us to you. If you would like to be contacted about our custom training, go to our Contact Me form and send us your request.

