Better high availability: MySQL and Percona XtraDB Cluster with good application design
High Availability
Have you ever wondered if your application should be able to work in read-only mode? How important is that question?
MySQL seems to be the most popular database solution for web-based products. Most typical Internet application workloads consist of many reads and usually few writes. There are exceptions of course – MMO games for instance – but often the number of reads is much bigger than the number of writes. So when your database infrastructure loses its ability to accept writes, either because a traditional MySQL replication topology lost its master or a Galera cluster lost its quorum, why would you want the application to declare total downtime? During this scenario, imagine all the users who are just browsing the application (not contributing content): they don’t care if the database cannot accept new data. People also prefer to have access to an application, even if its functionality is notably reduced, rather than see 500 error pages. In some disaster scenarios it is a seriously time-consuming task to perform PITR or recover some valuable data: it is better to at least give users read access to a recent backup.
My advice: design your application with a possible read-only partial outage in mind, and test how the application works in that mode during its development life cycle. I think it will pay off greatly and increase the perceived availability of your product. As an example, check out how some of the big open source projects implement this concept, like MediaWiki or Drupal (and also some commercial products).
PXC
Having said that, I want to highlight a pretty new (and IMHO important) improvement in this regard, introduced in Galera replication as of PXC version 5.6.24. It was already mentioned by my colleague Stéphane in his blog post earlier this year.
Focus on data consistency
As you probably know, one of Galera’s key advantages is its great care for data consistency and its data-centric approach. No matter where you write in the cluster, all nodes must have the same data. This is important when you realize what happens when a data inconsistency is detected between the nodes. Inconsistent nodes, which cannot apply a writeset due to missing rows or duplicate unique key values for instance, will have to abort and perform an emergency shutdown. This happens in order to remove contaminated members from the cluster, and not spread the data “illness” further. If it does happen that the majority of nodes perform an emergency abort, the remaining minority may lose the cluster quorum and will stop serving further client requests. So the price for data consistency protection is availability.
Anti-split-brain
Sometimes a node, or several cluster members, lose connectivity to the others in such a way that they can no longer communicate with more than 50% of the nodes. Connectivity is lost all of a sudden, without a proper “goodbye” message from the “disappeared” nodes. These nodes don’t know the reason for the lost connection – were the peers killed, or was the network split? In that situation, nodes declare a non-Primary cluster state and go into SQL-disabled mode. This is because a member of a cluster without a quorum (majority), hence not acting as Primary Component, is not trusted as it may have inconsistent or old data. Because of this state, it won’t allow the clients to access it.
This is for two reasons. First and unquestionably, we don’t want to allow writes when there is a risk of a network split, where the other part of the cluster may still form the Primary Component and keep operating. Second, we may also want to disallow reads of stale data when there is a possibility that the other part of the infrastructure already has much newer information.
In standard MySQL replication there are no such precautions – if replication is broken in a master-master topology, both masters can still accept writes, and you can read anything from the slaves regardless of how much they lag or whether they are connected to their masters at all. In Galera though, even if too much lag in the applying queue is detected (similar to the replication lag concept), the cluster will pause writes using a Flow Control mechanism. If replication is broken as described above, it will even stop the reads.
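As a side note, Flow Control activity can be observed on any node through the standard Galera status counters; a minimal check (variable names as in PXC 5.6, output omitted here):

mysql> show status like 'wsrep_flow_control%';
mysql> show status like 'wsrep_local_recv_queue%';

Here wsrep_flow_control_paused reports the fraction of time replication was paused by Flow Control since the last FLUSH STATUS, and the local recv queue counters show how far the applying queue has grown.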
Dirty reads from Galera cluster
This behavior may seem too strict, especially if you just migrated from MySQL replication to PXC, and you were used to database “slave” nodes serving read traffic even when separated from the “master.” Or if your application does not rely on writes but mostly on access to existing content. In that case, you can either enable the new wsrep_dirty_reads variable dynamically (per session only, if needed), or set up your cluster to run with this option by default by placing wsrep_dirty_reads = ON in my.cnf (setting the global value in the config file is possible since PXC 5.6.26).
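For illustration, a minimal sketch of both ways to enable it:

To allow dirty reads in the current session only:
mysql> set session wsrep_dirty_reads = ON;

Or by default for the whole node, in my.cnf:
[mysqld]
wsrep_dirty_reads = ON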
The Galera topology below is something we very often see at customer sites, where WAN locations are configured to communicate via VPN:
I think this failure scenario is a perfect use case for wsrep_dirty_reads – where none of the cluster parts is able to work at full functionality alone, but they could still successfully serve read queries to the clients.
So let’s quickly see how the cluster member behaves with the wsrep_dirty_reads option disabled and enabled (for the test I blocked network communication on port 4567):
percona3 mysql> show status like 'wsrep_cluster_status';
+----------------------+-------------+
| Variable_name        | Value       |
+----------------------+-------------+
| wsrep_cluster_status | non-Primary |
+----------------------+-------------+
1 row in set (0.00 sec)

percona3 mysql> show variables like '%dirty_reads';
+-------------------+-------+
| Variable_name     | Value |
+-------------------+-------+
| wsrep_dirty_reads | OFF   |
+-------------------+-------+
1 row in set (0.01 sec)

percona3 mysql> select * from test.g1;
ERROR 1047 (08S01): WSREP has not yet prepared node for application use
And when enabled:
percona2 mysql> show status like 'wsrep_cluster_status';
+----------------------+-------------+
| Variable_name        | Value       |
+----------------------+-------------+
| wsrep_cluster_status | non-Primary |
+----------------------+-------------+
1 row in set (0.00 sec)

percona2 mysql> show variables like '%dirty_reads';
+-------------------+-------+
| Variable_name     | Value |
+-------------------+-------+
| wsrep_dirty_reads | ON    |
+-------------------+-------+
1 row in set (0.00 sec)

percona2 mysql> select * from test.g1;
+----+-------+
| id | a     |
+----+-------+
|  1 | dasda |
|  2 | dasda |
+----+-------+
2 rows in set (0.00 sec)

percona2 mysql> insert into test.g1 set a="bb";
ERROR 1047 (08S01): WSREP has not yet prepared node for application use
MySQL
In traditional replication, you are probably using the slaves for reads anyway. So if the master crashes, and for some reason a failover toolkit like MHA or PRM is not configured or also fails, then in order to keep the application working you should direct new connections meant for the master to one of the slaves. If you use a load balancer, you could simply have the slaves configured as backups for the master in the write pool. This may help to achieve a better user experience during the downtime, where everyone can at least use the existing information. As noted above, however, the application must be prepared to work that way.
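Purely as an illustration (the post does not prescribe a specific load balancer), a write pool with the slaves marked as backup servers could look roughly like this in HAProxy; host names and addresses are made up:

listen mysql-writes
    bind 0.0.0.0:3306
    mode tcp
    option tcpka
    server master1 10.0.0.1:3306 check
    server slave1  10.0.0.2:3306 check backup
    server slave2  10.0.0.3:3306 check backup

With the backup flag, the slaves only receive connections once the master fails its health check, so under normal conditions all writes still go to the master.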
There are caveats to this implementation, as the read_only mode that is usually used on slaves is not 100% read-only: it makes an exception for users with the SUPER privilege. In this case, the new super_read_only variable comes to the rescue (available in Percona Server 5.6 as well as in stock MySQL 5.7). With it enabled, there is no risk that after pointing database connections to one of the slaves, some special users will change the data.
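A minimal sketch of locking a slave down this way (assuming a server version where super_read_only is available):

mysql> set global read_only = ON;
mysql> set global super_read_only = ON;

Note that with super_read_only enabled even SUPER accounts can no longer write, so remember to disable it on whichever node you later promote for writes.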
If a disaster is severe enough, it may be necessary to recover data from a huge SQL dump, and it’s often hard to find enough spare servers to serve the traffic with an old binary snapshot. It’s worth noting that InnoDB has a special read-only mode, meant to be used in a read-only medium, that is lightweight compared to full InnoDB mode.
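As a rough sketch, a spare server temporarily serving such an old snapshot could start InnoDB in that mode (the option must be set at startup, it is not dynamic):

[mysqld]
innodb_read_only = 1
read_only = 1

This assumes the data directory already contains the restored snapshot; with innodb_read_only enabled the instance cannot modify InnoDB data at all, which is what makes it lighter than the full read-write mode.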
Useful links
If you are looking for more information about Galera/PXC availability problems and recovery tips, these earlier blog posts may be interesting:
Percona XtraDB Cluster (PXC): How many nodes do you need?
Percona XtraDB Cluster: Quorum and Availability of the cluster
Galera replication – how to recover a PXC cluster
Percona XtraDB Cluster (PXC): How many nodes do you need?
A question I often hear when customers want to set up a production PXC cluster is: “How many nodes should we use?”
Three nodes is the most common deployment, but when are more nodes needed? They also ask: “Do we always need to use an even number of nodes?”
This is what we’ll clarify in this post.
This is all about quorum
I explained in a previous post that a quorum vote is held each time one node becomes unreachable. With this vote, the remaining nodes will estimate whether it is safe to keep on serving queries. If quorum is not reached, all remaining nodes will set themselves in a state where they cannot process any query (even reads).
To get the right size for your cluster, the only question you should answer is: how many nodes can simultaneously fail while leaving the cluster operational?
- If the answer is 1 node, then you need 3 nodes: when 1 node fails, the two remaining nodes have quorum.
- If the answer is 2 nodes, then you need 5 nodes.
- If the answer is 3 nodes, then you need 7 nodes.
- And so on and so forth.
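In other words, to tolerate f simultaneous node failures a strict majority must survive, which gives the familiar sizing rule N = 2f + 1. For example, f = 2 gives N = 5, matching the list above.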
Remember that group communication is not free, so the more nodes in the cluster, the more expensive group communication will be. That’s why it would be a bad idea to have a cluster with 15 nodes for instance. In general we recommend that you talk to us if you think you need more than 10 nodes.
What about an even number of nodes?
The recommendation above always specifies an odd number of nodes, so is there anything bad with an even number of nodes? Let’s take a 4-node cluster and see what happens if nodes fail:
- If 1 node fails, 3 nodes are remaining: they have quorum.
- If 2 nodes fail, 2 nodes are remaining: they no longer have quorum (remember 50% is NOT quorum).
Conclusion: availability of a 4-node cluster is no better than the availability of a 3-node cluster, so why bother with a 4th node?
The next question is: is a 4-node cluster less available than a 3-node cluster? Many people think so, specifically after reading this sentence from the manual:
Clusters that have an even number of nodes risk split-brain conditions.
Many people read this as “as soon as one node fails, this is a split-brain condition and the whole cluster stops working”. This is not correct! In a 4-node cluster, you can lose 1 node without any problem, exactly like in a 3-node cluster. This is not better, but not worse either.
By the way, the manual is not wrong! The sentence makes sense in its context.
There could actually be reasons why you might want to have an even number of nodes, but we will discuss that topic in the next section.
Quorum with multiple data centers
To provide more availability, spreading nodes in several datacenters is a common practice: if power fails in one DC, nodes are available elsewhere. The typical implementation is 3 nodes in 2 DCs:
Notice that while this setup can handle any single node failure, it can’t handle all single DC failures: if we lose DC1, 2 nodes leave the cluster and the remaining node does not have quorum. You can try with 4, 5 or any number of nodes and it will be easy to convince yourself that in all cases, losing one DC can make the whole cluster stop operating.
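A quick count for the typical 2+1 layout: losing the 2-node DC leaves 1 node out of 3, about 33%, which is below the required majority. With 4 nodes split 2+2, losing either DC leaves exactly 50%, which is still not quorum; with 5 nodes split 3+2, losing the larger DC leaves 40%. Two datacenters are never enough on their own.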
If you want to be resilient to a single DC failure, you must have 3 DCs, for instance like this:
Other considerations
Sometimes other factors will make you choose a higher number of nodes. For instance, look at these requirements:
- All traffic is directed to a single node.
- The application should be able to fail over to another node in the same datacenter if possible.
- The cluster must keep operating even if one datacenter fails.
The following architecture is an option (and yes, it has an even number of nodes!):
Conclusion
Regarding availability, it is easy to estimate the number of nodes you need for your PXC cluster. But node failures are not the only aspect to consider: resilience to a datacenter failure can, for instance, influence the number of nodes you will be using.