Aug
09
2013
--

The top 5 proactive measures to minimize MySQL downtime

I’m happy to announce that the recording for my recent webinar “5 Proactive Measures to Minimize MySQL Downtime” is now available, along with the slides. They can both be found here.

My webinar focused on the top 5 operational measures that would have prevented or reduced both the downtime and the related business impact in a significant number of customer emergency scenarios.

As a senior consultant on Percona’s 24×7 Emergency Consulting team, I’ve helped resolve a myriad of client emergencies related to MySQL downtime, and I know that every emergency situation is unique and has its own set of challenges. However, when cases are systematically studied and analyzed, patterns emerge of what typically causes MySQL downtime and how it can best be avoided. That is what my webinar focused on. Thanks to everyone who attended, asked questions, and sent me thank-you emails after the fact.

If you were not able to attend but are interested in the material, be sure to watch the recording, as the slides include only a small part of the information presented. If you have questions, please leave them in the comments section below and I’ll answer them as best I can.


Jul
10
2013
--

MySQL Webinar: 5 proactive measures to help minimize MySQL downtime, July 17

Failure is an unavoidable part of IT life, but it does not always have to imply MySQL service downtime. In the years I have been working at Percona, I have been involved in many customer emergency cases, and I have seen a good number of problems that could have been avoided or alleviated if relatively simple proactive measures had been taken.

On July 17th, 2013, at 10 a.m. PDT, I will be delivering a webinar focused on presenting the top 5 such measures, with the objective of helping you assess your current situation and implement any needed changes to avoid production downtime in the future. The webinar is titled “5 Proactive Measures to Minimize MySQL Downtime.”

Later that day, I will be delivering the same presentation, but live and in Spanish, at the Montevideo MySQL Meetup group. My colleague Martin Arrieta will be presenting and doing a hands-on demo of Percona XtraDB Cluster after my talk there.

MySQL Performance Blog readers in or near Montevideo that day are welcome to join us!

You can sign up for the webinar here, and find more information about the meetup here.


Dec
19
2012
--

Percona Toolkit Webinar followup Q&A

First, a thank-you to everyone who attended today’s webinar; I appreciate your time and kind comments. As promised, here are answers to the questions that couldn’t be answered during the talk:

Q: How do you install the tools?

The manual has full details, but it’s important to know that the latest release of every tool is also available for direct download at a URL like this: http://www.percona.com/get/<tool-name>. I use this a lot when I just need a specific tool and I’m working on a server without the toolkit installed.
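
For example, you can fetch and run a single tool like this (pt-summary is just an illustration; substitute the tool you need):

    # Download the latest release of a single tool
    wget http://www.percona.com/get/pt-summary
    chmod +x pt-summary
    ./pt-summary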

Q: How do I make the tools connect to MySQL?

The slides are now available here, and slide 7 provides a basic introduction to Data Source Names, with more details available in the manual.
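
As a quick sketch (the host, user, and password below are placeholders), a DSN is just a comma-separated list of key=value pairs given where a tool expects a server:

    # h=host, P=port, u=user, p=password are the most common DSN keys
    pt-table-checksum h=master.example.com,P=3306,u=percona,p=secret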


Q: pt-table-checksum can’t connect to my slave server

As part of its safeguards, pt-table-checksum monitors lag on replicas and pauses as needed to let them catch up (it also pauses if replication stops on a replica). To do this, it must connect to all replicas of the master server, and it tries to use the same credentials used in the DSN specified for the master. This is why I recommended having consistent credentials across hosts, but if your setup is different, you can control the way the tool discovers and connects to slaves with the --recursion-method option.
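
A hedged sketch of both approaches (the hostnames, credentials, and the percona.dsns table name are made up for the example):

    # Default: discover replicas via SHOW PROCESSLIST on the master
    pt-table-checksum h=master.example.com,u=percona,p=secret

    # Alternative: list the replicas yourself in a DSN table
    pt-table-checksum h=master.example.com,u=percona,p=secret \
      --recursion-method dsn=D=percona,t=dsns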


Q: In the case of a slave that is behind its master, can you explain how to run pt-table-checksum and pt-table-sync from the slave?

This was answered during the webinar, but I think it is important enough to repeat here: both of these tools should be run from the master. Writing directly to slaves is the #1 cause I have seen for replication problems, so it only makes sense that the tools are designed to write to the master.
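
A minimal sketch of that workflow (server names are placeholders): checksum from the master first, then repair differences while still writing through the master:

    # 1) Checksum all tables, connecting to the master
    pt-table-checksum h=master.example.com,u=percona,p=secret

    # 2) Fix the differences found on a replica; all changes are
    #    executed on the master so they replicate down safely
    pt-table-sync --execute --replicate percona.checksums \
      --sync-to-master h=replica.example.com,u=percona,p=secret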

Q: How about pt-table-checksum with databases greater than 1TB?

It will work just fine, though it will obviously take more time! The tool processes the database one table at a time, and each table one chunk at a time, checksumming each chunk with a single query. It monitors the master’s load and suspends itself if there is an impact, and it has other safeguards (as mentioned in a previous reply), so it is safe to use even on very large data sets.
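
If you want to tune those safeguards explicitly for a very large data set, the relevant options look like this (the thresholds below are the documented defaults, shown for illustration, not as recommendations):

    # --chunk-time: target seconds of work per chunk (adaptive sizing)
    # --max-load:   pause when this SHOW GLOBAL STATUS threshold is hit
    # --max-lag:    pause while any replica lags more than this
    pt-table-checksum h=master.example.com,u=percona,p=secret \
      --chunk-time 0.5 --max-load Threads_running=25 --max-lag 1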

Q: Can I see a live example of pt-table-sync?

Here’s a Vagrant box that my colleague Marcos Albe and I prepared for our Percona Toolkit tutorial at the last Percona Live New York; it includes examples of pt-table-checksum and pt-table-sync, among others.


Q: Which options were used to get this output?

Unfortunately, I don’t know which specific output this question refers to, so I’ll review all the slides that include output from the tools, just in case:

  • Slide 37 is the default report output by pt-query-digest. You can see the command line used to generate it at the top; it’s just pt-query-digest master/data/telecaster-slow.log. As mentioned during the webinar, the tool auto-detects the log type, so typically you will invoke it this way.
  • Slide 38 is just the detail for the first query found; it is part of the same output from which the previous screenshot was taken.
  • Slide 41 is obtained by using --report-all and --review with pt-query-digest. For example, with the sandbox server I used for the examples, the command would be pt-query-digest --review S=/tmp/mysql_sandbox5527.sock,u=msandbox,p=msandbox,D=percona,t=reviews --report-all master/data/telecaster-slow.log
  • Slide 45 is the default output of pt-upgrade, obtained with the invocation on slide 44.
  • Slide 48 is the default output of pt-online-schema-change, run as detailed on slide 47.
  • Slide 57 is the detail screen for one capture as shown by pt-sift. This is printed after a capture is selected from the timestamp list (the timestamp list is the view you get when first running pt-sift).

Q: For pt-online-schema-change, does renaming the table take a long time, and what about transactions in the meantime?

The table rename itself should be fast, but since it needs to lock the table, long-running transactions that lock (part of) the table can make the tool abort with a ‘Lock wait timeout exceeded’ error. In that case, the table and triggers created during the process are dropped, and the original table is left unmodified.

The same attendee asked about performance differences when using the tool with and without innodb_file_per_table. The first thing that comes to mind is that with a single shared tablespace, running this on very large tables can make your ibdata file grow a lot, and, as is well known, that space won’t be recovered when the auxiliary table is dropped unless you’re using innodb_file_per_table.
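
For reference, a typical invocation is sketched below (the database, table, and column names are hypothetical); the rename discussed above happens at the very end of this process:

    # Rehearse with --dry-run first, then run for real with --execute
    pt-online-schema-change --alter "ADD COLUMN notes TEXT" \
      D=appdb,t=orders --execute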


Q: Does pt-online-schema-change work for fast index drops and creates?

Yes, though you may want to test first: even for large data sets, it may be faster to run the ALTER TABLE command directly (especially so when dropping an index, in my experience).
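
In other words, benchmark both paths; a hypothetical comparison (table and index names made up):

    # Direct DDL: often the fastest way to drop an index
    mysql -e "ALTER TABLE appdb.orders DROP INDEX idx_notes"

    # The pt-online-schema-change equivalent, which rebuilds the table
    # but keeps it available and throttles itself along the way
    pt-online-schema-change --alter "DROP INDEX idx_notes" \
      D=appdb,t=orders --execute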


Q: I see a lot of info on Cacti. Is it all going to be translated into Nagios plugins, given that Nagios plugins don’t output performance data?

The Percona Monitoring Plugins include both Cacti templates and Nagios plugins, and each should be used for its appropriate use case; it is the Cacti templates that provide very good performance metrics.


Q: Does pt-sift read pt-stalk’s output?

Yes. pt-sift does not collect any diagnostic data itself; it is only a way to get a bird’s-eye view of a capture (or a set of captures) obtained with pt-stalk. If the latest release is invoked without arguments, it looks for captures under /var/lib/pt-stalk/.
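
Usage is minimal; for example (the capture prefix below is made up):

    # Browse all captures under the default /var/lib/pt-stalk directory
    pt-sift

    # Or jump straight to one capture by its timestamp prefix
    pt-sift /var/lib/pt-stalk/2012_12_19_09_30_02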


Q: Is there a way with pt-table-sync to determine when the differences are large enough to favor manual synchronization?


This is a very good question, and in the end it is the DBA’s judgment call. As I explained in the webinar, once you have data differences between two (or more) servers, and you are using statement-based replication, it is possible that new differences are introduced while the sync is happening. In other words, bad data usually leads to more bad data! My preference is to use pt-table-sync if the damage is contained (i.e., one or a few tables, in rows that probably won’t be updated in the short term), or if the data set size, storage engine, network speed, or other factors make cloning a server too difficult; otherwise, I just clone the host. If you’re using InnoDB only, streaming a backup with XtraBackup can be a good and safe alternative.
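
For completeness, a hedged sketch of that streaming approach (the host name and paths are placeholders; the destination datadir must be empty, and the backup must be prepared before MySQL is started):

    # On the source: back up and stream straight to the new host
    innobackupex --stream=tar /tmp | \
      ssh user@newhost "tar -ixf - -C /var/lib/mysql"

    # On the destination: apply the transaction logs before use
    innobackupex --apply-log /var/lib/mysql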



Thanks again to all those who attended. If you have further questions, just reply here or drop me an email.


Dec
12
2012
--

Webinar: 10 Percona Toolkit tools every MySQL DBA should know about

On December 19th at 1 PM Eastern / 10 AM Pacific, I will be giving a webinar entitled “10 Percona Toolkit Tools Every MySQL DBA Should Know About”.

As a Consultant, I make heavy use of Percona Toolkit tools while solving customers’ problems, and I have used this experience to select what I consider to be the top 10 tools everyone working with MySQL should know about. I will present each of the selected tools using examples based on real-world cases.

You can sign up by clicking the “Register” button here. Hope to see you next week!

Dec
09
2010
--

Recordings for webinars are now online

We’ve just published the recordings for our webinars: Migrating MyISAM to InnoDB and Introduction to Percona Server, XtraDB and Xtrabackup. To listen, click “Watch: Recorded Webinar” at each of these links.

We’re planning future webinars for next year. If you have any suggestions, feel free to leave a comment.


Entry posted by Morgan Tocker

Dec
01
2010
--

Doing a Re-run of the Migrating MyISAM to InnoDB Webinar

We had some technical difficulties with WebEx today, which prevented many registered attendees from joining the webinar. I personally would be very upset to have booked my time to attend an event, pre-registered to ensure a space was available, and then been unable to join at the time of the event (which is frankly exactly what happened to me).

As a result, we have scheduled the Migrating MyISAM to InnoDB webinar to run again on Tuesday, December 7 at 9 AM Pacific (the same time). Here is a direct link if you would like to register.

Note that we also have a webinar on Percona Server, Xtrabackup and XtraDB next week, which is great if you would like to learn more about any of these technologies.


Entry posted by Peter Zaitsev

Nov
26
2010
--

Webinar: Introduction to Percona Server, XtraDB and Xtrabackup

We’ve had such a large number of signups for our Migrating MyISAM to InnoDB webinar that we decided to hold one more before the year is out:

December 8th, 9 AM PST: Introduction to Percona Server, XtraDB and Xtrabackup.

As the title suggests, this presentation is an introduction. We will run through the main aspects that change and where you can expect to get the most benefit.

This will be a technical talk, but perhaps less technical than some of our other posts. I recommend inviting a few friends and colleagues who are already familiar with MySQL but perhaps not regular readers of our blog.

(Time for Q&A will follow the main presentation.)


Entry posted by Morgan Tocker
