Oct 21, 2014

Percona XtraDB Cluster: How to run a 2-node cluster on a single server

I reckon there’s little sense in running 2 or more Percona XtraDB Cluster (PXC) nodes on a single physical server other than for educational and testing purposes, but doing so is still useful in those cases. The most popular way of achieving this seems to be with server virtualization, such as making use of Vagrant boxes. But in the same way you can have multiple instances of MySQL running in parallel at the OS level in the form of concurrent mysqld processes, so too can you have multiple Percona XtraDB Cluster nodes. And the way to achieve this is precisely the same: using dedicated datadirs and different ports for each node.

 

Which ports?

Four TCP ports are used by Percona XtraDB Cluster:
  • the regular MySQL port (default 3306)
  • port for group (Galera) communication (default 4567)
  • port for State Transfer (default 4444)
  • port for Incremental State Transfer (default: the group communication port + 1, i.e. 4568)
Of course, when you have multiple instances on the same server the default values won’t work for all of them, so we need to define new ports for the additional instances and make sure the local firewall is open to them, if one is active (iptables, SELinux, …).
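If iptables is active, for instance, you could open the extra ports along these lines (a sketch assuming the ports I’ll pick below for the second node: 3307 for MySQL, 5020 and 5021 for group communication and IST):

$ iptables -I INPUT -p tcp --dport 3307 -j ACCEPT
$ iptables -I INPUT -p tcp --dport 5020:5021 -j ACCEPT
$ service iptables save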


Installing Percona XtraDB Cluster, configuring and starting the first node

My test server was a fresh CentOS 6.5 configured with the Percona yum repository, from which I installed the latest Percona XtraDB Cluster (5.6.20-25.7.888.el6); note that you’ll need the EPEL repository as well to install socat, which is a dependency (see this bug). To avoid confusion, I’ve prevented the mysql service from starting automatically:
chkconfig --level 3 mysql off
chkconfig --del mysql


I could have installed PXC from the tarball, but I decided to do it from the repositories to have all dependencies covered by yum. This is what my initial /etc/my.cnf looked like (note the use of default values):

[mysqld]
datadir = /var/lib/mysql
port=3306
socket=/var/lib/mysql/mysql-node1.sock
pid-file=/var/lib/mysql/mysql-node1.pid
log-error=/var/lib/mysql/mysql-node1.err
binlog_format=ROW
innodb_autoinc_lock_mode=2
wsrep_provider=/usr/lib64/libgalera_smm.so
wsrep_cluster_name = singlebox
wsrep_node_name = node1
wsrep_cluster_address=gcomm://
I started by manually bootstrapping the cluster with this single node using the command:
$ mysqld_safe --defaults-file=/etc/my.cnf --wsrep-new-cluster


You should then be able to access this node through the local socket:

$ mysql -S /var/lib/mysql/mysql-node1.sock
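At this point the cluster consists of this single node, which you can confirm through the wsrep status variables:

$ mysql -S /var/lib/mysql/mysql-node1.sock -e "SHOW STATUS LIKE 'wsrep_cluster_size'"

It should report a cluster size of 1 for a freshly bootstrapped cluster.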

 

Configuring and starting the second node

Then I created a similar configuration file for the second instance, which I named /etc/my2.cnf, with the following modifications:
[mysqld]
datadir = /var/lib/mysql2
port=3307
socket=/var/lib/mysql2/mysql-node2.sock
pid-file=/var/lib/mysql2/mysql-node2.pid
log-error=/var/lib/mysql2/mysql-node2.err
binlog_format=ROW
innodb_autoinc_lock_mode=2
wsrep_provider=/usr/lib64/libgalera_smm.so
wsrep_cluster_name = singlebox
wsrep_node_name = node2
wsrep_cluster_address=gcomm://127.0.0.1:4567,127.0.0.1:5020
wsrep_provider_options = "base_port=5020;"


Note the use of base_port: by having it defined, port 5020 is used for group communication and the next port up (5021) is reserved for IST (it’s the same as using gmcast.listen_addr=tcp://127.0.0.1:5020, just simpler).
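For reference, the more explicit form of the same configuration would look something like this (an equivalent sketch, not something you need in addition to base_port):

wsrep_provider_options = "gmcast.listen_addr=tcp://127.0.0.1:5020; ist.recv_addr=127.0.0.1:5021;"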

You need to create the datadir for this second instance and set the right permissions on it, otherwise MySQL won’t be able to create some files (like the .pid and .err), though you don’t need to run the mysql_install_db script (the datadir will be populated by the initial state transfer from the first node):
$ mkdir /var/lib/mysql2
$ chown -R mysql:mysql /var/lib/mysql2
You can then start this second instance with the following command:
$ mysqld_safe --defaults-file=/etc/my2.cnf
While it starts, watch the log to observe how this second node starts, communicates with the primary node, and joins the cluster. In a different terminal from the one where you started the instance, execute:
$ tail -f /var/lib/mysql2/mysql-node2.err
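Once the second node reaches the Synced state, both members should report a cluster size of 2; for example:

$ mysql -S /var/lib/mysql2/mysql-node2.sock -e "SHOW STATUS LIKE 'wsrep_cluster_size'"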
Remember that at any time you can use mysqladmin to stop the nodes; you only need to provide the right socket as an argument, as follows:
$ mysqladmin -S /var/lib/mysql/mysql-node1.sock shutdown
Finally, once you have the whole cluster up you should edit the my.cnf of the first node with a complete wsrep_cluster_address, as shown in /etc/my2.cnf above.
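After that change, the wsrep_cluster_address line in node 1’s my.cnf carries the full list of members, i.e.:

wsrep_cluster_address = gcomm://127.0.0.1:4567,127.0.0.1:5020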

 

Using mysqld_multi

My last blog post was on running multiple instances of MySQL with mysqld_multi. It applies here as well; the only exception is that you need to make sure to use “wsrep_cluster_address=gcomm://” in the first node whenever you bootstrap the cluster, and to start it before the other nodes.
The only advantage I see in using mysqld_multi is that it facilitates the management (start/stop) of the nodes and concentrates all configuration in a single my.cnf file. In any case, you shouldn’t be running a PXC cluster on a single box for any purpose other than educational or testing.
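To give an idea, such a consolidated my.cnf could be structured along these lines (a minimal sketch reusing the paths and ports from above; each [mysqldN] group would hold the full set of options shown earlier for that node):

[mysqld_multi]
mysqld     = /usr/bin/mysqld_safe
mysqladmin = /usr/bin/mysqladmin

[mysqld1]
# same options as in /etc/my.cnf above
datadir = /var/lib/mysql
socket  = /var/lib/mysql/mysql-node1.sock
port    = 3306

[mysqld2]
# same options as in /etc/my2.cnf above, including base_port=5020
datadir = /var/lib/mysql2
socket  = /var/lib/mysql2/mysql-node2.sock
port    = 3307

You would then start the nodes individually with mysqld_multi start 1 followed by mysqld_multi start 2.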

 

Adding a second Percona XtraDB Cluster node to a production server

What if you have a production cluster composed of multiple physical servers and you want to add a second node to one of them? It works the same way: you’ll just need to use the server’s IP address when configuring it instead of the loopback network interface. Here’s an example of a PXC cluster composed initially of three nodes: 192.168.70.1, 192.168.70.2, and 192.168.70.3. I’ve added a 4th node running on the server that already hosts the 3rd; the wsrep_cluster_address line looks as follows after the changes:
wsrep_cluster_address = gcomm://192.168.70.1,192.168.70.2,192.168.70.3:4567,192.168.70.3:5020
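The 4th node’s own configuration follows the same pattern as /etc/my2.cnf earlier: its own datadir, its own ports and, crucially, the base_port setting (node4 below is just an illustrative name):

wsrep_node_name = node4
wsrep_provider_options = "base_port=5020;"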

 

Additional resources

We have a documentation page on “How to setup 3 node cluster on single box” that contains more details on what I’ve covered above, with a slightly different approach.

 

