A little autumn cleaning underway at LinkedIn. The company is quietly retiring InMaps, a tool that let you map out how your LinkedIn network looks. A notice on the InMaps homepage notes that today, September 1, is the last day for you to use it, and that it is being discontinued so that LinkedIn “can focus on developing new ways to visualize your professional network.” As of… Read More
According to Net Applications, Windows XP’s global desktop market share fell from 24.82 percent in July to 23.89 percent in August. That’s just under one percentage point in a full month. At that pace, it would take more than two years for Windows XP’s market share to fully dissipate. Read More
Percona is glad to announce the new release of Percona XtraDB Cluster 5.6 on September 1st, 2014. Binaries are available from the downloads area or from our software repositories.
Based on Percona Server 5.6.20-68.0, including all of its bug fixes, Galera Replicator 3.7, and Codership wsrep API 25.7, Percona XtraDB Cluster 5.6.20-25.7 is now the current General Availability release. All of Percona’s software is open source and free, and all the details of the release can be found in the 5.6.20-25.7 milestone at Launchpad.
New features:
- A new session variable, wsrep_sync_wait, has been implemented to control the causality check. The old session variable wsrep_causal_reads is deprecated but is kept for backward compatibility (#1277053).
- systemd integration with RHEL/CentOS 7 is now available for Percona XtraDB Cluster from our testing repository (#1342223).
Bugs fixed:
- Running START TRANSACTION WITH CONSISTENT SNAPSHOT (used by tools such as mydumper) with the binlog disabled would lead to a server crash. Bug fixed #1353644.
- The percona-xtradb-cluster-garbd-3.x package was installed incorrectly on Debian/Ubuntu. Bug fixed #1360633.
- Fixed the netcat invocation in the SST script for CentOS 7, which ships nmap-ncat. Bug fixed #1359767.
- TO (total order) isolation was run even when the wsrep plugin was not loaded. Bug fixed #1358681.
- A network read error was not handled in native MySQL mode. This could cause a duplicate key error if there was an unfinished transaction at the time of shutdown, because it would be committed during startup recovery. Bug fixed #1358264.
- The netcat in the garbd init script has been replaced with nmap for compatibility with CentOS 7. Bug fixed #1349384.
- SHOW STATUS was generating debug output in the error log. Bug fixed #1347818.
- An incorrect source string length could lead to a server crash. This fix allows a maximum of 3500 bytes of key material to be populated; longer keys are truncated. Bug fixed #1347768.
- The wsrep consistency check is now enabled for REPLACE ... SELECT as well. This was implemented because pt-table-checksum uses REPLACE ... SELECT during checksumming. Bug fixed #1343209.
- Client connections were closed unconditionally before generating an SST request. Fixed by avoiding closing connections when wsrep is initialized before the storage engines. Bug fixed #1258658.
- Setting binlog_format to STATEMENT is now allowed, to support pt-table-checksum. A warning (not to use it otherwise) has also been added to the error log.
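As a hedged illustration of the first item above, a session can switch from the deprecated variable to the new one; the statements below are a sketch based on the release notes, where bit 1 of the wsrep_sync_wait bitmask covers READ statements:

```sql
-- Deprecated, but still accepted for backward compatibility:
SET SESSION wsrep_causal_reads = ON;
-- New equivalent: wsrep_sync_wait is a bitmask; value 1 enforces the
-- causality check before READ statements such as SELECT:
SET SESSION wsrep_sync_wait = 1;
```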
Release notes for Percona XtraDB Cluster 5.6.20-25.7 are available in our online documentation along with the installation and upgrade instructions.
Help us improve our software quality by reporting any bugs you encounter using our bug tracking system. As always, thanks for your continued support of Percona!
The post Percona XtraDB Cluster 5.6.20-25.7 is now available appeared first on MySQL Performance Blog.
If you know about Vandrico, it’s probably because of the company’s cool wearables database we featured earlier this year. That was just a byproduct of Vandrico’s research, however. What the company is really about is bringing wearables into the workplace. Read More
Galera replication for MySQL brings not only great new features to our ecosystem, but also introduces completely new maintenance techniques. Are you concerned about adding such new complexity to your MySQL environment? Perhaps that concern is unnecessary.
I am going to present some simple tips that will hopefully help fresh Galera users prevent headaches when they need to recover part of a cluster, or a whole one, in certain situations. I used Percona XtraDB Cluster (a project based on Percona Server and the Galera library plus MySQL extensions from Codership) to prepare this post, but most if not all of the scenarios should also apply to whichever MySQL+Galera solution you actually chose, whether that is binaries from Codership, MariaDB Galera Cluster, or your own builds.
Unlike standard MySQL replication, a PXC cluster acts like one logical entity that takes care of the status and consistency of each node as well as the status of the cluster as a whole. This allows it to maintain much better data integrity than you may expect from traditional asynchronous replication, while allowing safe writes on multiple nodes at the same time. The price, though, is more possible scenarios in which the database service is stopped with no node able to serve requests.
Let’s assume the simplest case: a cluster of three nodes, A, B and C, and a few possible scenarios where some or all of the nodes are out of service. What may happen, and what do we have to do to bring them (or the whole cluster) back up?
Scenario 1

Node A is gracefully stopped, likely for the purpose of maintenance, a configuration change, etc.
In this case, the other nodes receive a “good bye” message from that node, so the cluster size is reduced and some properties, like quorum calculation or auto increment, are automatically adjusted. Once we start node A again, it will join the cluster based on its wsrep_cluster_address setting in my.cnf. This process is much different from normal replication: the joiner node won’t serve any requests until it is again fully synchronized with the cluster, so connecting to its peers isn’t enough; state transfer must succeed first. If the writeset cache (gcache.size) on nodes B and/or C still has all the transactions that were executed while this node was down, joining will be possible via (usually fast and light) IST. Otherwise, a full SST will be needed, which in fact is a full binary data snapshot copy. Hence it may be important to determine the best donor, as shown in this article. If IST is impossible due to missing transactions in the donor’s gcache, the fallback decision is made by the donor and SST is started automatically instead.
Scenario 2

Nodes A and B are gracefully stopped. Similar to the previous case, the cluster size is reduced to 1, so even the single remaining node C forms the primary component and serves client requests. To get the nodes back into the cluster, you just need to start them. However, node C will switch to the “Donor/Desynced” state, as it will have to provide state transfer to at least the first joining node. It is still possible to read/write to it during that process, but it may be much slower, depending on how large a state transfer it needs to send. Also, some load balancers may consider the donor node as not operational and remove it from the pool. So it is best to avoid situations where only one node is up.
Note, though, that if you restart A and then B in that order, you may want to make sure B won’t use A as the state transfer donor, as A may not have all the needed writesets in its gcache. So just specify node C as the donor this way (“nodeC” is the name you specify with the wsrep_node_name variable):
service mysql start --wsrep_sst_donor=nodeC
Scenario 3

All three nodes are gracefully stopped, and the cluster is deformed. In this case, the problem is how to initialize it again. Here it is important to know that during a clean shutdown, a PXC node writes its last executed position into the grastate.dat file. By comparing the seqno number inside, you will see which node is the most advanced one (most likely the last one stopped). The cluster must be bootstrapped using this node, otherwise nodes that had a more advanced position would have to perform a full SST to join the cluster initialized from the less advanced one (and some transactions would be lost). To bootstrap the first node, invoke the startup script like this:
service mysql bootstrap-pxc
service mysql start --wsrep_new_cluster
service mysql start --wsrep-cluster-address="gcomm://"
systemctl start mysql@bootstrap.service
In older PXC versions, to bootstrap the cluster you had to edit my.cnf and replace the previous wsrep_cluster_address line with an empty value, like this: wsrep_cluster_address=gcomm:// and then start mysql normally. More details can be found in the Galera documentation on restarting a cluster.
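The seqno comparison described above is easy to script. Below is a hypothetical sketch: the three sample files stand in for /var/lib/mysql/grastate.dat copied from each node (the file names and seqno values are made up for illustration), and the loop picks the most advanced node to bootstrap from:

```shell
#!/bin/sh
# Sample stand-ins for each node's grastate.dat after a clean shutdown
# (in practice, copy the real files from nodes A, B and C first):
printf '# GALERA saved state\nseqno: 15\n' > /tmp/grastate.nodeA
printf '# GALERA saved state\nseqno: 17\n' > /tmp/grastate.nodeB
printf '# GALERA saved state\nseqno: 16\n' > /tmp/grastate.nodeC

best_node=""; best_seqno=-1
for f in /tmp/grastate.node?; do
    # Extract the committed sequence number recorded in the file
    seqno=$(awk '/^seqno:/ {print $2}' "$f")
    if [ "$seqno" -gt "$best_seqno" ]; then
        best_seqno=$seqno; best_node=$f
    fi
done
echo "bootstrap first: $best_node (seqno $best_seqno)"
```

With the sample values above, node B (seqno 17) is the one to bootstrap; the others then simply join it.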
Scenario 4
Node A disappears from the cluster. By disappear I mean a power outage, hardware failure, kernel panic, mysqld crash, kill -9 on the mysqld pid, OOM killer, etc. The two remaining nodes notice the connection to node A is down and keep trying to re-connect to it. After some timeouts, both agree that node A is really down and remove it “officially” from the cluster. Quorum is preserved (2 out of 3 nodes are up), so no service disruption happens. After restarting, A rejoins automatically, the same way as in scenario 1.
Scenario 5

Nodes A and B disappear. Node C is not able to form a quorum alone, so the cluster switches into non-primary mode, in which MySQL refuses to serve any SQL queries. In this state, the mysqld process on C is still running and you can connect to it, but any statement related to data fails with:
mysql> select * from test.t1;
ERROR 1047 (08S01): Unknown command
Actually, reads will be possible for a moment, until C decides that it cannot reach A and B, but new writes are refused immediately, thanks to the certification-based replication in Galera. This is what we are going to see in the remaining node’s log:
140814 0:42:13 [Note] WSREP: commit failed for reason: 3
140814 0:42:13 [Note] WSREP: conflict state: 0
140814 0:42:13 [Note] WSREP: cluster conflict due to certification failure for threads:
140814 0:42:13 [Note] WSREP: Victim thread:
THD: 7, mode: local, state: executing, conflict: cert failure, seqno: -1
SQL: insert into t values (1)
The single node C then waits for its peers to show up again, and in some cases, like when there was a network outage and those nodes were up all the time, the cluster will be formed again automatically. Also, if nodes A and B were just network-severed from node C but can still reach each other, they will keep functioning, as they still form the quorum. If A and B crashed (due to data inconsistency, a bug, etc.) or are off due to a power outage, you need to take manual action to enable the primary component on node C before you can bring A and B back. This way, we tell node C: “Hey, you can now form a new cluster alone, forget A and B!”. The command to do this is:
SET GLOBAL wsrep_provider_options='pc.bootstrap=true';
However, you should double-check to be very sure the other nodes are really down before doing that! Otherwise, you will most likely end up with two clusters holding different data.
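Before and after forcing the bootstrap, it may help to confirm what node C itself thinks of the cluster; a hedged sketch of the checks, using the wsrep_cluster_status variable:

```sql
-- On node C: confirm it is stuck in a non-primary component
SHOW STATUS LIKE 'wsrep_cluster_status';  -- expect: non-Primary
-- Only once A and B are verified dead, promote C on its own:
SET GLOBAL wsrep_provider_options = 'pc.bootstrap=true';
SHOW STATUS LIKE 'wsrep_cluster_status';  -- should now report: Primary
```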
Scenario 6

All nodes went down without a proper shutdown procedure. Such a situation may happen in the case of a datacenter power failure, or after hitting some MySQL or Galera bug that leads to a crash on all nodes, but also as a result of data consistency being compromised, where the cluster detects that each node has different data. In each of those cases, the grastate.dat file is not updated and does not contain a valid sequence number (seqno). It may look like this:
# GALERA saved state
version: 2.1
uuid: 220dcdcb-1629-11e4-add3-aec059ad3734
seqno: -1
In this case, we cannot be sure that all nodes were consistent with each other, so it is crucial to find the most advanced one in order to bootstrap the cluster using it. Before starting the mysql daemon on any node, you have to extract the last sequence number by checking its transactional state. You can do it this way:
[root@percona3 ~]# mysqld_safe --wsrep-recover
140821 15:57:15 mysqld_safe Logging to '/var/lib/mysql/percona3_error.log'.
140821 15:57:15 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
140821 15:57:15 mysqld_safe WSREP: Running position recovery with --log_error='/var/lib/mysql/wsrep_recovery.6bUIqM' --pid-file='/var/lib/mysql/percona3-recover.pid'
140821 15:57:17 mysqld_safe WSREP: Recovered position 4b83bbe6-28bb-11e4-a885-4fc539d5eb6a:2
140821 15:57:19 mysqld_safe mysqld from pid file /var/lib/mysql/percona3.pid ended
So the last committed transaction sequence number on this node was 2. Now you just need to bootstrap from the most advanced node first and then start the others.
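When doing this across several nodes, it is handy to pull the uuid:seqno pair out of the recovery output programmatically. A hypothetical sketch (the log path is made up, and the sample line is copied from the output above; in practice you would capture the real output of mysqld_safe --wsrep-recover on each node):

```shell
#!/bin/sh
# Sample stand-in for the error log produced by `mysqld_safe --wsrep-recover`:
log=/tmp/wsrep_recover.log
echo "140821 15:57:17 mysqld_safe WSREP: Recovered position 4b83bbe6-28bb-11e4-a885-4fc539d5eb6a:2" > "$log"

# Grab the "uuid:seqno" token after "Recovered position"
pos=$(grep -o 'Recovered position .*' "$log" | awk '{print $3}')
uuid=${pos%:*}    # everything before the last colon
seqno=${pos##*:}  # everything after the last colon
echo "uuid=$uuid seqno=$seqno"
```

Running this per node and comparing the seqno values tells you which node to bootstrap first.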
However, the above procedure won’t be needed in recent Galera versions (3.6+), available since PXC 5.6.19. There is a new option, pc.recovery (enabled by default), which saves the cluster state into a file named gvwstate.dat on each member node. As the option’s name suggests (pc stands for primary component), it only saves the state of a cluster that is in the PRIMARY state. An example content of that file may look like this:
my_uuid: 76de8ad9-2aac-11e4-8089-d27fd06893b9
view_id: 3 6c821ecc-2aac-11e4-85a5-56fe513c651f 3
bootstrap: 0
member: 6c821ecc-2aac-11e4-85a5-56fe513c651f 0
member: 6d80ec1b-2aac-11e4-8d1e-b2b2f6caf018 0
member: 76de8ad9-2aac-11e4-8089-d27fd06893b9 0
Above we can see a three-node cluster with all members up. Thanks to this new feature, in the case of a power outage in our datacenter, the nodes will read the last state on startup once power is back, and will try to restore the primary component once all the members can again see each other. This makes a PXC cluster recover automatically from being powered down, without any manual intervention! In the logs we will see:
140823 15:28:55 [Note] WSREP: restore pc from disk successfully
140823 15:29:59 [Note] WSREP: declaring 6c821ecc at tcp://192.168.90.3:4567 stable
140823 15:29:59 [Note] WSREP: declaring 6d80ec1b at tcp://192.168.90.4:4567 stable
140823 15:29:59 [Warning] WSREP: no nodes coming from prim view, prim not possible
140823 15:29:59 [Note] WSREP: New COMPONENT: primary = no, bootstrap = no, my_idx = 2, memb_num = 3
140823 15:29:59 [Note] WSREP: Flow-control interval: [28, 28]
140823 15:29:59 [Note] WSREP: Received NON-PRIMARY.
140823 15:29:59 [Note] WSREP: New cluster view: global state: 4b83bbe6-28bb-11e4-a885-4fc539d5eb6a:11, view# -1: non-Primary, number of nodes: 3, my index: 2, protocol version -1
140823 15:29:59 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
140823 15:29:59 [Note] WSREP: promote to primary component
140823 15:29:59 [Note] WSREP: save pc into disk
140823 15:29:59 [Note] WSREP: New COMPONENT: primary = yes, bootstrap = yes, my_idx = 2, memb_num = 3
140823 15:29:59 [Note] WSREP: STATE EXCHANGE: Waiting for state UUID.
140823 15:29:59 [Note] WSREP: clear restored view
140823 15:29:59 [Note] WSREP: Bootstrapped primary 00000000-0000-0000-0000-000000000000 found: 3.
140823 15:29:59 [Note] WSREP: Quorum results:
version = 3,
component = PRIMARY,
conf_id = -1,
members = 3/3 (joined/total),
act_id = 11,
last_appl. = -1,
protocols = 0/6/2 (gcs/repl/appl),
group UUID = 4b83bbe6-28bb-11e4-a885-4fc539d5eb6a
140823 15:29:59 [Note] WSREP: Flow-control interval: [28, 28]
140823 15:29:59 [Note] WSREP: Restored state OPEN -> JOINED (11)
140823 15:29:59 [Note] WSREP: New cluster view: global state: 4b83bbe6-28bb-11e4-a885-4fc539d5eb6a:11, view# 0: Primary, number of nodes: 3, my index: 2, protocol version 2
140823 15:29:59 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
140823 15:29:59 [Note] WSREP: REPL Protocols: 6 (3, 2)
140823 15:29:59 [Note] WSREP: Service thread queue flushed.
140823 15:29:59 [Note] WSREP: Assign initial position for certification: 11, protocol version: 3
140823 15:29:59 [Note] WSREP: Service thread queue flushed.
140823 15:29:59 [Note] WSREP: Member 1.0 (percona3) synced with group.
140823 15:29:59 [Note] WSREP: Member 2.0 (percona1) synced with group.
140823 15:29:59 [Note] WSREP: Shifting JOINED -> SYNCED (TO: 11)
140823 15:29:59 [Note] WSREP: Synchronized with group, ready for connections
Scenario 7

The cluster loses its primary state due to a split-brain situation. For the purpose of this example, let’s assume we have a cluster formed from an even number of nodes, six, with three of them in one location and the other three in a second location (datacenter), and network connectivity between them is broken. Of course, the best practice is to avoid such a topology: if you can’t have an odd number of real nodes, you can at least use an additional arbitrator (garbd) node or set a higher pc.weight on some nodes. But when split brain happens anyway, none of the separated groups can maintain the quorum: all nodes must stop serving requests, and both parts of the cluster just keep trying to re-connect. If you want to restore service even before the network link is restored, you can make one of the groups primary again using the same command as in scenario 5:
SET GLOBAL wsrep_provider_options='pc.bootstrap=true';
After that, you are able to work on the manually restored part of the cluster, and the second half should be able to re-join automatically via incremental state transfer (IST) once the network link is restored. But beware: if you set the bootstrap option on both separated parts, you will end up with two living cluster instances, with data likely diverging from each other. Restoring the network link in that case won’t make them re-join until the nodes are restarted and try to re-connect to the members specified in the configuration file. Then, as the Galera replication model truly cares about data consistency, once an inconsistency is detected, nodes that cannot apply a row change due to different data will perform an emergency shutdown, and the only way to bring them back into the cluster will be via a full SST.
I hope I have covered most of the possible failure scenarios of Galera-based clusters, and made the recovery procedures a bit clearer.
The post Galera replication – how to recover a PXC cluster appeared first on MySQL Performance Blog.
I was talking with a Percona Support customer earlier this week who was looking for Galera data on Percona Cloud Tools. (Percona Cloud Tools, now in free beta, is a hosted service providing access to query performance insights for all MySQL users.)
The customer mentioned they were already keeping track of some Galera stats on Cacti, and given they were inclined to use Percona Cloud Tools more and more, they wanted to know if it already supported Percona XtraDB Cluster. My answer was: “No, not yet: you can install agents on each node (the regular way on the first node, then manually on the other nodes; when prompted, say ‘No’ to creating a MySQL user and provide the one you’re already using) and monitor them as autonomous MySQL servers, but the concept of a cluster, and especially the ‘Galera bits,’ has yet to be implemented there.”
Except I was wrong.
By “concept of cluster” I mean supporting the notion of grouped instances, which should allow a single cluster-wide view for metrics and query reports, such as slow queries (which are recorded locally on the node where the query was run and are thus observed in a detached way). This still needs to be implemented, but it’s on the roadmap.
The “Galera bits” I mentioned above are the various wsrep_ status variables. In fact, those refer to the Write Set REPlication patches (based on the wsrep API), a set of hooks applied to the InnoDB/XtraDB storage engine and other components of MySQL that modify the way replication works (to put it in a very simplified way), which in turn are used by the Galera library to provide a “generic Synchronous Multi-Master replication plugin for transactional applications.” A patched version of Percona Server together with the Galera library composes the base of Percona XtraDB Cluster.
As I found out only now, Percona Cloud Tools does collect data from the various wsrep_ variables, and it is available for use; it’s just not shown by default. A user only needs to add a dashboard/chart manually in PCT to see these metrics.
Now, I need to call that customer …
Monitoring the cluster
Since I’m touching this topic, I thought it would be useful to include some additional information on monitoring a Galera cluster (Percona XtraDB Cluster in particular), even though most of what I mention below has already been published in different posts here on the MySQL Performance Blog. There’s a succinct documentation page bearing the same title as this section that indicates the main wsrep_ variables you should monitor to check the health status of the cluster and how well it copes with load over time (performance). Remember you can get a grasp of the full set of variables at any time by issuing the following command from one (or each) of the nodes:
mysql> SHOW GLOBAL STATUS LIKE "wsrep_%";
And for a broader, real-time view of the wsrep_ status variables you can use Jay Janssen’s myq_gadgets toolkit, which he modified a couple of years ago to account for Galera.
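One of the most useful health signals those tools derive is the fraction of time replication was paused by flow control. As a hypothetical sketch, this can be computed from two samples of the cumulative wsrep_flow_control_paused_ns counter (a real Galera status variable, in nanoseconds); the sample readings below are made up, and in practice you would take them from SHOW GLOBAL STATUS on a live node:

```shell
#!/bin/sh
# Two made-up samples of wsrep_flow_control_paused_ns, taken 10 s apart:
t0_ns=1000000000
t1_ns=1300000000
interval_s=10

# Fraction of the sampling interval spent paused, as a percentage
paused_pct=$(awk -v a="$t0_ns" -v b="$t1_ns" -v i="$interval_s" \
    'BEGIN { printf "%.1f", (b - a) / (i * 1e9) * 100 }')
echo "flow control paused: ${paused_pct}% of the last ${interval_s}s"
```

A value creeping toward double digits usually means a slow node is throttling the whole cluster.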
There’s also a specific Galera template available in our Percona Monitoring Plugins (PMP) package that you can use in your Cacti server. That covers the “how well the cluster copes with load over time” part, providing historical graphing. And while there isn’t a Galera-specific plugin for Nagios in PMP, Jay explains in another post how you can customize pmp-check-mysql-status to “check any status variable you like,” describing his approach to keeping a cluster’s “health status” in check by setting alerts on specific stats, on a per-node basis.
VividCortex recently added a set of templates for Galera to their product, and you can also rely on Severalnines’ ClusterControl monitoring features to get that “global view” of your cluster that Percona Cloud Tools doesn’t yet provide. Even though ClusterControl is a complete solution for cluster deployment and management, focusing on the automation of the whole process, the monitoring part alone is particularly interesting, as it encompasses cluster-wide information in a customized way, including the “Galera bits”. You may want to give it a try, as the monitoring features are available in the Community version of the product (and if you’re a Percona customer with a Support contract covering Percona XtraDB Cluster, then you’re entitled to get support for it from us).
One thing I should note that differentiates the monitoring solutions above: while you can install Cacti, Nagios and ClusterControl as servers in your own infrastructure, both Percona Cloud Tools and VividCortex are hosted, cloud-based services. Having said that, neither one nor the other uploads sensitive data to the cloud, and both provide options for query obfuscation.
Contrary to what I believed, Percona Cloud Tools does provide support for the “Galera bits” (the wsrep_ status variables), even though it has yet to implement support for the notion of grouped instances, which will allow for a cluster-wide view of metrics and query reports. You can also rely on the Galera template for Cacti provided by Percona Monitoring Plugins for historical graphing, and on some clever use of Nagios’ pmp-check-mysql-status for customized cluster alerts. VividCortex and ClusterControl also provide monitoring for Galera.
Percona Cloud Tools, now in free beta, is a hosted service providing access to query performance insights for all MySQL users. After a brief setup, unlock new information about your database and how to improve your applications. Sign up to request access to the beta today.
The post Galera data on Percona Cloud Tools (and other MySQL monitoring tools) appeared first on MySQL Performance Blog.