In this blog post, we provide answers to the Q&A from the MySQL In the Cloud: Migration, Best Practices, High Availability, Scaling webinar.
First, we want to thank everybody for attending the June 7, 2017 webinar. The recording and slides for the webinar are available here. Below is the list of your questions that we were unable to answer during the webinar:
How does Percona XtraDB Cluster work with AWS for MySQL clustering?
Percona XtraDB Cluster works especially well in cloud environments, including Amazon EC2. Because Percona XtraDB Cluster requires only one network round trip per write transaction commit and keeps all reads local, you can deploy high-performance multi-AZ and even multi-region clusters. Since each Percona XtraDB Cluster node contains all the data, it can avoid reliance on EBS storage. You can run Percona XtraDB Cluster on NVMe-based i3 EC2 instances to achieve high performance even with very IO-intensive workloads. Automatic provisioning and cluster self-healing allow you to scale the cluster easily. We have a simple tutorial on how to deploy Percona XtraDB Cluster on AWS: check it out here.
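For illustration, here is a minimal sketch of the wsrep settings you might provision on each EC2 node of a three-node cluster. The config path, cluster name, node names, and IP addresses are placeholders for your environment, not a definitive configuration:

```bash
# Hypothetical provisioning snippet for one Percona XtraDB Cluster node.
# All names and addresses below are placeholders.
cat > /etc/percona-xtradb-cluster.conf.d/wsrep.cnf <<'EOF'
[mysqld]
wsrep_provider=/usr/lib/galera3/libgalera_smm.so
wsrep_cluster_name=pxc-cluster
# Private IPs of all three nodes, one per availability zone
wsrep_cluster_address=gcomm://10.0.1.10,10.0.2.10,10.0.3.10
wsrep_node_name=pxc-node-1
wsrep_node_address=10.0.1.10
# State snapshot transfer via Percona XtraBackup
wsrep_sst_method=xtrabackup-v2
# Galera requires row-based binary logging
binlog_format=ROW
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
EOF

# Bootstrap the first node; start MySQL normally on the other two
systemctl start mysql@bootstrap.service
```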
How do you approach the master-master model? Are there enough reasons to use it for multi-site scaling?
There are two distinct multi-master modes. A synchronous master-master solution, like the one offered by Percona XtraDB Cluster (virtually synchronous, to be exact), guarantees there are no data conflicts as you connect to nodes located at different sites. The downside of this model is that writes can be expensive. As such, it works well in environments with low latency between sites, or where the added latency for updates can be tolerated. Percona XtraDB Cluster is heavily optimized in that it requires only one network round trip to commit a transaction, which significantly reduces the added latency compared to many other solutions.
In contrast, asynchronous master-master means you can perform writes locally, without waiting on a network round trip, but with the downside of possible data conflicts. In MySQL, it can be implemented using MySQL Replication. At this point, however, MySQL Replication can only detect conflicts: it stops when it finds one, and has no good built-in conflict resolution. Ensuring conflicts do not happen at the application level is hard and error prone, and is only recommended in rare cases. Most applications do not use active master-master, but rather design an architecture where each database replication set operates with only a single writable node.
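To illustrate the single-writer pattern, a common safeguard is to keep every node except the active master read-only. This is a sketch assuming MySQL 5.7+ (where super_read_only is available); the hostnames are placeholders:

```bash
# On the active master: allow writes
mysql -h master-a.example.com -e "SET GLOBAL read_only = OFF;"

# On the passive master and replicas: reject writes, even from
# non-replication superusers (super_read_only implies read_only)
mysql -h master-b.example.com -e "SET GLOBAL super_read_only = ON;"
```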
Do the Percona tools work in the cloud, like in Amazon Aurora?
We try to make Percona software work in the cloud when it makes sense. For example, Percona Toolkit and Percona Monitoring and Management support Amazon RDS and Amazon Aurora. Percona XtraBackup does not, as it requires physical access to the database files (Amazon RDS and Aurora don’t provide that). Having said that, Amazon recently updated its Aurora migration documentation to include the use of XtraBackup: Amazon Aurora supports importing data from backups taken with Percona XtraBackup.
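As an example of running Percona Toolkit against a remote instance, here is a hypothetical pt-online-schema-change dry run against an Amazon RDS endpoint. The endpoint, user, schema, and table names are all placeholders:

```bash
# Dry run of an online schema change against an RDS endpoint.
# h=, u=, D=, t= are standard DSN options; names are placeholders.
pt-online-schema-change \
  --alter "ADD COLUMN created_at DATETIME" \
  --dry-run \
  --ask-pass \
  h=mydb.abc123.us-east-1.rds.amazonaws.com,u=admin,D=mydb,t=orders
```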
What is the fastest way to verify and validate backups created by XtraBackup for databases around 2-3TB?
In the big picture, you test backups by doing some sort of restore and validation. This can be done manually, but is much better if automated. There are three levels of such validation:
- Basic Validation. Run --apply-log and ensure it completes successfully. Start the MySQL instance and run some basic queries to ensure it works. Running some queries to confirm that recent data is present is often a good idea.
- Consistency Validation. Additionally, run CHECK TABLE on all tables to ensure there is no corruption. This validates both table and index data structures.
- Full Validation. Restore the backup and connect the restored instance as a MySQL slave (possibly to one of the existing slaves). Let it catch up, then run pt-table-checksum to validate consistency and ensure that the data in the backup matches what is on the source. A sketch covering all three levels follows this list.
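Here is a minimal sketch of how the three levels might be automated, assuming Percona XtraBackup 2.4 and a backup already copied to a scratch directory. Paths, hostnames, and the sample query are placeholders:

```bash
#!/bin/bash
set -e

BACKUP_DIR=/restore             # placeholder path to the copied backup

# Level 1: Basic Validation -- prepare the backup and start MySQL on it
# (xtrabackup --prepare is the equivalent of innobackupex --apply-log)
xtrabackup --prepare --target-dir="$BACKUP_DIR"
mysqld --datadir="$BACKUP_DIR" --socket=/tmp/restore.sock \
       --skip-networking &
sleep 30                        # crude wait; poll mysqladmin ping in real scripts
mysql --socket=/tmp/restore.sock \
      -e "SELECT MAX(created_at) FROM mydb.orders;"   # confirm recent data

# Level 2: Consistency Validation -- CHECK TABLE on everything
mysqlcheck --check --all-databases --socket=/tmp/restore.sock

# Level 3: Full Validation -- attach the restored instance as a slave,
# let it catch up, then compare checksums against the source.
# (Replication setup from the source is omitted here.)
pt-table-checksum h=source-master.example.com
```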
Running CHECK TABLE on databases on AWS IO-optimized instances takes up to eight hours. Any other suggestions on how to replace CHECK TABLE in validation?
Without knowing the table size, it is hard for me to assess whether eight hours is reasonable for your environment. Generally speaking, however, you should not run a Full Validation on every backup. Full Validation first and foremost validates the backup and restore pipeline. If you’re not seeing issues, doing it once per month is plenty. You want to do lighter checks on a daily and weekly basis.
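As an example of a lighter daily check, here is a hypothetical snippet that only confirms the backup run itself succeeded, leaving the heavier CHECK TABLE and pt-table-checksum stages for weekly and monthly schedules. The log path and alert address are placeholders:

```bash
# Daily: confirm last night's backup completed cleanly.
# xtrabackup prints "completed OK!" as its final line on success.
if ! tail -n 1 /var/log/xtrabackup/latest.log | grep -q "completed OK!"; then
    echo "backup verification failed" | mail -s "backup alert" dba@example.com
fi
```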
What approach would you recommend for a data warehouse needing about 80,000 IOPS, currently on FusionIO bare metal? Which cloud solution would be my best bet?
This is a complicated question, and answering it properly requires more information: we need to know what types of operations your database performs. Working with a Percona consultant on an architecture and design review of your environment would give you the best answer. In general, though, EBS (even with a large number of provisioned IOPS) will not match FusionIO in IO request latency. i3 high-IO instances with NVMe storage are a closer match. If budget is not a concern, you can also look into X1 instances. These can have up to 2TB of memory, which often allows keeping all (or a large portion) of the database in memory for even higher performance.
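If you want to compare candidate storage options yourself, a quick way is to measure random-read latency with fio on each instance type. This is a rough sketch, not a full benchmark methodology; block size, file size, and queue depth should be tuned to match your actual workload:

```bash
# Hypothetical fio run approximating InnoDB random reads (16KB pages)
fio --name=innodb-randread --ioengine=libaio --direct=1 \
    --rw=randread --bs=16k --iodepth=32 --numjobs=4 \
    --size=4G --runtime=60 --time_based --group_reporting
```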
Thanks for attending the MySQL In the Cloud: Migration, Best Practices, High Availability, Scaling webinar! If you have any more questions about MySQL in the cloud, post them in the comments below.