This is the third post in our MySQL Fabric series. If you missed the previous two, we started with an overall introduction, and then a discussion of MySQL Fabric’s high-availability (HA) features. MySQL Fabric was still a release candidate when we started this series, but it went GA recently. You can read the press release here, and see this blog post from Oracle’s Mats Kindahl for more details.

In our previous post, we showed a simple HA setup managed with MySQL Fabric, including some basic failure scenarios. Today, we’ll present a similar scenario from an application developer’s point of view, using the Python Connector for the examples.

If you’re following the examples in these posts, you’ll notice that the server UUIDs change between posts. That’s because we rebuild the environment between runs; the symbolic names stay the same. That said, here’s our usual 3 node setup:
[vagrant@store ~]$ mysqlfabric group lookup_servers mycluster
Command :
{ success = True
return = [{'status': 'SECONDARY', 'server_uuid': '3084fcf2-df86-11e3-b46c-0800274fb806', 'mode': 'READ_ONLY', 'weight': 1.0, 'address': '192.168.70.101'}, {'status': 'SECONDARY', 'server_uuid': '35cc3529-df86-11e3-b46c-0800274fb806', 'mode': 'READ_ONLY', 'weight': 1.0, 'address': '192.168.70.102'}, {'status': 'PRIMARY', 'server_uuid': '3d3f6cda-df86-11e3-b46c-0800274fb806', 'mode': 'READ_WRITE', 'weight': 1.0, 'address': '192.168.70.103'}]
activities =
}
For our tests, we will be using this simple script:
import mysql.connector
from mysql.connector import fabric
from mysql.connector import errors
import time

config = {
    'fabric': {
        'host': '192.168.70.100',
        'port': 8080,
        'username': 'admin',
        'password': 'admin',
        'report_errors': True
    },
    'user': 'fabric',
    'password': 'f4bric',
    'database': 'test',
    'autocommit': 'true'
}

fcnx = None

print "starting loop"
while 1:
    if fcnx is None:
        print "connecting"
        fcnx = mysql.connector.connect(**config)
        fcnx.set_property(group='mycluster', mode=fabric.MODE_READWRITE)
    try:
        print "will run query"
        cur = fcnx.cursor()
        cur.execute("select id, sleep(0.2) from test.test limit 1")
        for (id) in cur:
            print id
        print "will sleep 1 second"
        time.sleep(1)
    except errors.DatabaseError:
        print "sleeping 1 second and reconnecting"
        time.sleep(1)
        del fcnx
        fcnx = mysql.connector.connect(**config)
        fcnx.set_property(group='mycluster', mode=fabric.MODE_READWRITE)
        fcnx.reset_cache()
        try:
            cur = fcnx.cursor()
            cur.execute("select 1")
        except errors.InterfaceError:
            fcnx = mysql.connector.connect(**config)
            fcnx.set_property(group='mycluster', mode=fabric.MODE_READWRITE)
            fcnx.reset_cache()
This simple script requests a MODE_READWRITE connection and then issues selects in a loop. The reason it requests a RW connection is that this makes it easier for us to provoke a failure: there are two SECONDARY nodes that could serve the queries if we requested a MODE_READONLY connection instead. The select includes a short sleep to make it easier to catch in SHOW PROCESSLIST. In order to work, this script needs the test.test table to exist in the mycluster group. Running the following statements on the PRIMARY node will create it:
mysql> create database if not exists test;
mysql> create table if not exists test.test (id int unsigned not null auto_increment primary key) engine = innodb;
mysql> insert into test.test values (null);
Dealing with failure
With everything set up, we can start the script and then cause a PRIMARY failure. In this case, we’ll simulate a failure by shutting down mysqld on it:
mysql> select @@hostname;
+-------------+
| @@hostname |
+-------------+
| node3.local |
+-------------+
1 row in set (0.00 sec)
mysql> show processlist;
+----+--------+--------------------+------+------------------+------+-----------------------------------------------------------------------+----------------------------------------------+
| Id | User | Host | db | Command | Time | State | Info |
+----+--------+--------------------+------+------------------+------+-----------------------------------------------------------------------+----------------------------------------------+
| 5 | fabric | store:39929 | NULL | Sleep | 217 | | NULL |
| 6 | fabric | node1:37999 | NULL | Binlog Dump GTID | 217 | Master has sent all binlog to slave; waiting for binlog to be updated | NULL |
| 7 | fabric | node2:49750 | NULL | Binlog Dump GTID | 216 | Master has sent all binlog to slave; waiting for binlog to be updated | NULL |
| 16 | root | localhost | NULL | Query | 0 | init | show processlist |
| 20 | fabric | 192.168.70.1:55889 | test | Query | 0 | User sleep | select id, sleep(0.2) from test.test limit 1 |
+----+--------+--------------------+------+------------------+------+-----------------------------------------------------------------------+----------------------------------------------+
5 rows in set (0.00 sec)
[vagrant@node3 ~]$ sudo service mysqld stop
Stopping mysqld: [ OK ]
While this happens, here’s the output from the script:
will sleep 1 second
will run query
(1, 0)
will sleep 1 second
will run query
(1, 0)
will sleep 1 second
will run query
(1, 0)
will sleep 1 second
will run query
sleeping 1 second and reconnecting
will run query
(1, 0)
will sleep 1 second
will run query
(1, 0)
will sleep 1 second
will run query
(1, 0)
The ‘sleeping 1 second and reconnecting’ line means the script got an exception while running a query (when the PRIMARY node was stopped), waited one second, and then reconnected. The next lines confirm that everything went back to normal after the reconnection. The relevant piece of code that handles the reconnection is this:
fcnx = mysql.connector.connect(**config)
fcnx.set_property(group='mycluster', mode=fabric.MODE_READWRITE)
fcnx.reset_cache()
If fcnx.reset_cache() is not invoked, the driver won’t contact the XML-RPC server again, but will use its local cache of the group’s status instead. As the PRIMARY node is offline, this causes the reconnect attempt to fail. By resetting the cache, we force the driver to connect to the XML-RPC server and fetch up-to-date group status information. If more failures happen and there is no PRIMARY (or candidate for promotion) node left in the group, the following error is received:
will run query
(1, 0)
will sleep 1 second
will run query
sleeping 1 second and reconnecting
will run query
Traceback (most recent call last):
File "./reader_test.py", line 34, in <module>
cur = fcnx.cursor()
File "/Library/Python/2.7/site-packages/mysql/connector/fabric/connection.py", line 1062, in cursor
self._connect()
File "/Library/Python/2.7/site-packages/mysql/connector/fabric/connection.py", line 1012, in _connect
exc))
mysql.connector.errors.InterfaceError: Error getting connection: No MySQL server available for group 'mycluster'
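As an aside, the reconnect-and-reset-cache logic can be wrapped into a small helper. Here is a minimal sketch (not part of the original script; the fallback to a read-only connection when no PRIMARY is available is our own addition, and may not be what you want if the application must write):

import time

import mysql.connector
from mysql.connector import fabric, errors

def fabric_reconnect(config, group, attempts=3, delay=1):
    # Try for a read-write connection first; if no PRIMARY is available,
    # fall back to read-only so reads can continue during a failover.
    for mode in (fabric.MODE_READWRITE, fabric.MODE_READONLY):
        for _ in range(attempts):
            try:
                cnx = mysql.connector.connect(**config)
                cnx.set_property(group=group, mode=mode)
                cnx.reset_cache()  # force a fresh lookup on the XML-RPC server
                cur = cnx.cursor()
                cur.execute("select 1")  # make sure we really reached a server
                cur.fetchall()
                cur.close()
                return cnx
            except (errors.DatabaseError, errors.InterfaceError):
                time.sleep(delay)
    raise errors.InterfaceError("No MySQL server available for group '%s'" % group)

The retry logic in the script above could then be replaced with a single call to fabric_reconnect(config, 'mycluster').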
Running without MySQL Fabric
As we have discussed in previous posts, the XML-RPC server can become a single point of failure under certain circumstances. Specifically, there are at least two problem scenarios when this server is down:
- When a node goes down
- When new connection attempts are made
The first case is obvious enough: if MySQL Fabric is not running and a node fails, no corrective action will be taken, and clients will get an error whenever they send a query to the failed node. This is worse if the PRIMARY fails, as failover won’t happen and the cluster will be unavailable for writes.

The second case means that while MySQL Fabric is not running, no new connections to the group can be established. This is because when connecting to a group, MySQL Fabric-aware clients first connect to the XML-RPC server to get a list of nodes and roles, and only then use their local cache for decisions. One way to mitigate this is to use connection pooling, which reduces the need for establishing new connections and therefore minimizes the chance of failure due to MySQL Fabric being down (see the sketch after the traceback below). This, of course, assumes that something is monitoring MySQL Fabric and ensuring some host provides the XML-RPC service. If that is not the case, failure will be delayed, but it will eventually happen anyway.

Here is an example of what happens when MySQL Fabric is down and the PRIMARY node goes down:
Traceback (most recent call last):
File "./reader_test.py", line 35, in <module>
cur.execute("select id, sleep(0.2) from test.test limit 1")
File "/Library/Python/2.7/site-packages/mysql/connector/cursor.py", line 491, in execute
self._handle_result(self._connection.cmd_query(stmt))
File "/Library/Python/2.7/site-packages/mysql/connector/fabric/connection.py", line 1144, in cmd_query
self.handle_mysql_error(exc)
File "/Library/Python/2.7/site-packages/mysql/connector/fabric/connection.py", line 1099, in handle_mysql_error
self.reset_cache()
File "/Library/Python/2.7/site-packages/mysql/connector/fabric/connection.py", line 832, in reset_cache
self._fabric.reset_cache(group=group)
File "/Library/Python/2.7/site-packages/mysql/connector/fabric/connection.py", line 369, in reset_cache
self.get_group_servers(group, use_cache=False)
File "/Library/Python/2.7/site-packages/mysql/connector/fabric/connection.py", line 478, in get_group_servers
inst = self.get_instance()
File "/Library/Python/2.7/site-packages/mysql/connector/fabric/connection.py", line 390, in get_instance
if not inst.is_connected:
File "/Library/Python/2.7/site-packages/mysql/connector/fabric/connection.py", line 772, in is_connected
self._proxy._some_nonexisting_method() # pylint: disable=W0212
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/xmlrpclib.py", line 1224, in __call__
return self.__send(self.__name, args)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/xmlrpclib.py", line 1578, in __request
verbose=self.__verbose
File "/Library/Python/2.7/site-packages/mysql/connector/fabric/connection.py", line 272, in request
raise InterfaceError("Connection with Fabric failed: " + msg)
mysql.connector.errors.InterfaceError: Connection with Fabric failed:
This happens when a new connection attempt is made after resetting the local cache.
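To illustrate the pooling idea mentioned above, here is a minimal application-level pool sketch. This is not a Connector/Python feature demonstrated in this post, just one way to keep already-established Fabric connections around so that the XML-RPC server only needs to be reachable when the pool is filled:

import Queue  # 'queue' in Python 3

import mysql.connector

class SimpleFabricPool(object):
    """Naive fixed-size pool of MySQL Fabric-aware connections."""

    def __init__(self, size, config, group, mode):
        self._pool = Queue.Queue(size)
        for _ in range(size):
            cnx = mysql.connector.connect(**config)
            cnx.set_property(group=group, mode=mode)
            cur = cnx.cursor()
            cur.execute("select 1")  # force the Fabric lookup now, while it's up
            cur.fetchall()
            cur.close()
            self._pool.put(cnx)

    def get(self):
        # Blocks until a connection is returned to the pool.
        return self._pool.get()

    def put(self, cnx):
        self._pool.put(cnx)

A connection taken from the pool can still hit a DatabaseError if its node fails, so the reconnection logic is still needed; the pool only removes the dependency on MySQL Fabric for steady-state traffic.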
Making sure MySQL Fabric stays up
As of this writing, it is the user’s responsibility to make sure MySQL Fabric is up and running. This means you can use whatever you feel comfortable with in terms of HA, like Pacemaker. While this does add some complexity to the setup, the XML-RPC server is very simple to manage, so a simple resource agent should work. For the backend, MySQL Fabric is storage engine agnostic, so an easy way to resolve this could be to use a small MySQL Cluster setup to ensure the backend is available; MySQL’s team blogged about such a setup here. We think the ndb approach is probably the simplest for providing HA at the MySQL Fabric store level, but believe that MySQL Fabric itself should provide, or make it easy to achieve, HA at the XML-RPC server level. If ndb is used as the store, any node can take a write, which in turn means multiple XML-RPC instances should be able to write to the store concurrently. In theory, then, improving this could be as easy as allowing Fabric-aware drivers to get a list of Fabric servers, instead of a single IP address and port to connect to (see the sketch below).
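For the record, here is a rough sketch of what that could look like on the client side. This is hypothetical: at the time of writing, Connector/Python accepts a single Fabric host, so we emulate the idea by trying a list of endpoints ourselves (the second IP below is made up):

import mysql.connector
from mysql.connector import fabric, errors

# Hypothetical list of Fabric XML-RPC endpoints (the second one is made up).
FABRIC_HOSTS = ['192.168.70.100', '192.168.70.200']

def connect_any(base_config, group, mode):
    """Try each Fabric endpoint in order until one serves a connection."""
    for host in FABRIC_HOSTS:
        config = dict(base_config)
        config['fabric'] = dict(base_config['fabric'], host=host)
        try:
            cnx = mysql.connector.connect(**config)
            cnx.set_property(group=group, mode=mode)
            cur = cnx.cursor()
            cur.execute("select 1")  # forces the lookup against this endpoint
            cur.fetchall()
            cur.close()
            return cnx
        except errors.InterfaceError:
            continue  # this Fabric instance is down, try the next one
    raise errors.InterfaceError("no Fabric server reachable")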
What’s next
In the past two posts, we’ve presented MySQL Fabric’s HA features, seen how it handles failures at the node level, how to use MySQL databases with a MySQL Fabric-aware driver, and what remains unresolved for now. In our next post, we’ll review MySQL Fabric’s Sharding features.