This is the second part of my series on High Availability with mysqlnd_ms. In my first post, “Simple MySQL Master HA with mysqlnd_ms,” I showed a simple HA solution using asynchronous MySQL replication. This time we will see how to leverage an all-primary cluster where you can write to all nodes. In this post I used Percona XtraDB Cluster, but you should also be able to do the same with MySQL NDB Cluster or Tungsten Replicator.
To start with, here is the mysqlnd_ms configuration I used, mysqlnd_ms_mm.ini. All of these files are available from my GitHub repository. Below, I have three Percona XtraDB Cluster nodes, all defined as masters, with no slaves. I've configured a roundrobin filter where all connections will go to the first node, in this case 192.168.56.44. If the first node fails, the second node will be used, and so forth until no more nodes are available. Another interesting configuration option here is the loop_before_master failover strategy: if a connection or a statement to the current server fails, it will be retried silently on the remaining nodes before an error is returned to the user. More on this below.
{
    "primary": {
        "master": {
            "master_1": {
                "host": "192.168.56.44",
                "port": "3306"
            },
            "master_2": {
                "host": "192.168.56.43",
                "port": "3306"
            },
            "master_3": {
                "host": "192.168.56.42",
                "port": "3306"
            }
        },
        "slave": { },
        "filters": { "roundrobin": [ ] },
        "failover": { "strategy": "loop_before_master", "remember_failed": true }
    }
}
Similar to my previous post, I also used a custom INI file for PHP to use, this time aptly named master-master.ini:
mysqlnd_ms.enable = 1
mysqlnd_ms.disable_rw_split = 1
mysqlnd_ms.multi_master = 1
mysqlnd_ms.force_config_usage = 1
mysqlnd_ms.config_file = /home/revin/git/demo-me/phpugph201407/mysqlnd_ms_mm.ini
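As a quick sanity check (this is not from the original post), you can confirm the plugin is actually loaded under this configuration before running any of the scripts:

php -c master-master.ini -r 'var_dump(extension_loaded("mysqlnd_ms"));'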
A new addition to this configuration is mysqlnd_ms.multi_master; when enabled, it allows you to use all nodes, or just one while treating the others as passive. The PHP script I used this time is called master-master.php. It is largely similar to master-slave-ng.php, with a few differences (a simplified sketch follows the list):
- There is no need for the /tmp/PRIMARY_HAS_FAILED sentinel, as all nodes are writable.
- There is no need for the /*ms=master*/ SQL hint when validating a connection from the connect_mysql function, since all nodes act as masters.
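The actual script is in the repository, but to illustrate the idea, here is a minimal sketch of how such a script works. It is not the real master-master.php; the credentials, database name, and table are placeholders. The key point is that mysqlnd_ms intercepts the host argument of mysqli_connect and, when it matches a section name in the config file (here, "primary"), picks one of the configured nodes for you:

<?php
// Minimal sketch, not the actual master-master.php from the repo.
// mysqlnd_ms matches the "host" argument against the section name in
// mysqlnd_ms_mm.ini ("primary") and selects a real node behind the scenes.
function connect_mysql() {
    $link = mysqli_connect('primary', 'user', 'pass', 'test');
    if (mysqli_connect_errno()) {
        return false;
    }
    return $link;
}

for ($i = 0; $i < 10; $i++) {
    $link = connect_mysql();
    if ($link === false) {
        echo "ERROR: could not connect to any node\n";
        continue;
    }
    // No /*ms=master*/ hint needed: every node accepts writes.
    mysqli_query($link, "INSERT INTO t (ts) VALUES (NOW())");
    printf("Last value %d from host %s and thread id %d\n",
        mysqli_insert_id($link), mysqli_get_host_info($link),
        mysqli_thread_id($link));
    mysqli_close($link);
    sleep(1);
}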
So here is a quick test, first with the roundrobin filter. After four INSERTs, I shut down 192.168.56.44, which sends my connection to the next server in the configuration, 192.168.56.43. When I brought 192.168.56.44 back up, the script resumed connections there. Pretty cool, right?
[revin@forge phpugph201407]$ php -c master-master.ini master-master.php
Last value 3564 from host 192.168.56.44 via TCP/IP and thread id 19
Last value 3565 from host 192.168.56.44 via TCP/IP and thread id 20
Last value 3566 from host 192.168.56.44 via TCP/IP and thread id 21
Last value 3567 from host 192.168.56.44 via TCP/IP and thread id 22
Warning: connect_mysql(): MySQL server has gone away in /home/revin/git/demo-me/phpugph201407/master-master.php on line 63
Warning: connect_mysql(): Error while reading greeting packet. PID=23464 in /home/revin/git/demo-me/phpugph201407/master-master.php on line 63
ERRROR: 192.168.56.43 via TCP/IP [2006] MySQL server has gone away on line 30
Last value 0 from host and thread id 0
Last value 3568 from host 192.168.56.43 via TCP/IP and thread id 1552
Last value 3569 from host 192.168.56.43 via TCP/IP and thread id 1553
[...]
Last value 3584 from host 192.168.56.43 via TCP/IP and thread id 1568
Last value 3585 from host 192.168.56.44 via TCP/IP and thread id 18
Here’s another test using the random filter, which allows you to write to all nodes: in my mysqlnd_ms_mm.ini above, I just changed roundrobin to random. As you can see, all three nodes were being used, in random order. You can also see the connect_mysql errors around the point where I shut down 192.168.56.44, and that the server was used again near the bottom after I started it back up. Still pretty cool, right?
[revin@forge phpugph201407]$ php -c master-master.ini master-master.php
Last value 3590 from host 192.168.56.42 via TCP/IP and thread id 2060
Last value 3591 from host 192.168.56.43 via TCP/IP and thread id 1569
Last value 3592 from host 192.168.56.43 via TCP/IP and thread id 1570
Warning: connect_mysql(): MySQL server has gone away in /home/revin/git/demo-me/phpugph201407/master-master.php on line 63
Warning: connect_mysql(): Error while reading greeting packet. PID=23919 in /home/revin/git/demo-me/phpugph201407/master-master.php on line 63
ERRROR: 192.168.56.43 via TCP/IP [2006] MySQL server has gone away on line 30
Last value 0 from host and thread id 0
Last value 3593 from host 192.168.56.42 via TCP/IP and thread id 2061
Last value 3594 from host 192.168.56.42 via TCP/IP and thread id 2062
Last value 3595 from host 192.168.56.42 via TCP/IP and thread id 2063
Last value 3596 from host 192.168.56.42 via TCP/IP and thread id 2064
Last value 3597 from host 192.168.56.43 via TCP/IP and thread id 1576
Last value 3598 from host 192.168.56.43 via TCP/IP and thread id 1577
Last value 3599 from host 192.168.56.43 via TCP/IP and thread id 1578
Last value 3600 from host 192.168.56.43 via TCP/IP and thread id 1579
Last value 3601 from host 192.168.56.42 via TCP/IP and thread id 2065
Last value 3602 from host 192.168.56.43 via TCP/IP and thread id 1581
Last value 3603 from host 192.168.56.43 via TCP/IP and thread id 1582
Last value 3604 from host 192.168.56.42 via TCP/IP and thread id 2066
Last value 3605 from host 192.168.56.44 via TCP/IP and thread id 19
Last value 3606 from host 192.168.56.43 via TCP/IP and thread id 1583
Last value 3607 from host 192.168.56.44 via TCP/IP and thread id 21
So here are some issues I’ve observed during these tests:
- remember_failed during failover does not work as advertised. Supposedly, a failed node should not be used again on subsequent connection requests, but in my tests this was not the case. See more from this bug. This means that if two out of three nodes have failed in this scenario, the overhead of retrying both failed connections on every request would be significant. Perhaps some sort of in-memory shared TTL could be used to overcome this (a rough sketch of this idea follows the list)? I’m not sure.
- If you look closely around line 7 of my last output above, the error displayed is kind of misleading. In particular, it says ERRROR: 192.168.56.43 via TCP/IP, but it was not 192.168.56.43 that failed, it was 192.168.56.44. This is because, under the hood, mysqlnd_ms cycles to the next node immediately after a failure, especially since we have loop_before_master configured. I do have a bug in my script that should capture the host_info properly, but this is something to keep in mind so you don’t keep scratching your head.
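For what it’s worth, here is a rough sketch of the in-memory TTL idea mentioned above. This is purely hypothetical, application-level code that sidesteps mysqlnd_ms entirely (the plugin offers no such hook in its config): failed hosts are remembered in APCu, which is shared across requests within a PHP process pool, for a short TTL, so the application can skip known-dead nodes instead of retrying them on every request. All names here are my own illustration.

<?php
// Hypothetical application-side workaround for the remember_failed issue:
// remember failed hosts in APCu with a TTL so we skip known-dead nodes
// instead of paying the reconnect timeout on every request.
define('FAILED_TTL', 30); // seconds to keep a node blacklisted

function mark_failed($host) {
    apcu_store('ms_failed_' . $host, time(), FAILED_TTL);
}

function is_failed($host) {
    return apcu_exists('ms_failed_' . $host);
}

function connect_first_alive(array $hosts, $user, $pass, $db) {
    foreach ($hosts as $host) {
        if (is_failed($host)) {
            continue; // failed within the last FAILED_TTL seconds, skip it
        }
        $link = @mysqli_connect($host, $user, $pass, $db);
        if ($link !== false) {
            return $link;
        }
        mark_failed($host);
    }
    return false; // no node reachable
}

The obvious trade-off is that you lose the plugin’s own load balancing and have to maintain the node list in two places, so this is only a stopgap until the remember_failed bug is fixed.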
So we’ve seen these two forms of possibilities, and they definitely have their use cases and advantages. On the other hand, because of the issues found so far (I’ve reported four bugs in the PHP bugs database during the course of these tests, including one crash), I recommend testing seriously before putting this into production.