Nov 20, 2013

How to add VIPs to Percona XtraDB Cluster or MHA with Pacemaker

Managing Virtual IP addresses (VIPs) with Percona XtraDB Cluster (PXC) or MySQL master HA (MHA) is a fairly common problem. To help solve it, I wrote a Pacemaker agent, mysql_monitor, a simplified version of the mysql_prm agent. The mysql_monitor agent only monitors MySQL and sets node attributes according to the state of MySQL: the read_only variable, the slave status and/or the output of the clustercheck script for PXC. The agent can operate in 3 modes, or cluster types: replication (the default), pxc and read-only.

The simplest mode is read-only: only the state of the read_only variable is looked at. If the node has read_only set to OFF, both the writer and reader attributes are set to 1; if read_only is ON, the writer attribute is set to 0 and the reader attribute to 1.

In replication mode, the writer and reader attributes are both set to 1 on nodes where read_only is OFF. On nodes where read_only is ON, the writer attribute is set to 0 and the reader attribute is set according to the replication state.

Finally, in the PXC mode, both attributes are set according to the return code of the clustercheck script.

In all cases, if MySQL is not running, the reader and writer attributes are set to 0. The following table recaps the behavior in a more visual way, using readable for the reader attribute and writable for the writer attribute:

Cluster type:         Read-only          Replication        PXC
--------------------  -----------------  -----------------  ------------------
MySQL not OK          readable = 0       readable = 0       readable = 0
                      writable = 0       writable = 0       writable = 0
Read-only = OFF       readable = 1       readable = 1       N/A
                      writable = 1       writable = 1
Read-only = ON,       readable = 1       readable = 1       N/A
Replication OK        writable = 0       writable = 0
Read-only = ON,       readable = 1       readable = 0       N/A
Replication not OK    writable = 0       writable = 0
Clustercheck OK       N/A                N/A                readable = 1
                                                            writable = 1
Clustercheck not OK   N/A                N/A                readable = 0
                                                            writable = 0
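
The decision table can be sketched as a small shell function. This is an illustrative reimplementation of the logic above, not the agent's actual code; the function name and argument layout are made up:

```shell
# Map (cluster_type, mysql_running, read_only, replication_ok, clustercheck_ok)
# to the readable/writable attribute values from the table above.
compute_attrs() {
    # $1=cluster_type (read-only|replication|pxc)  $2=mysql running (0|1)
    # $3=read_only (ON|OFF)  $4=replication OK (0|1)  $5=clustercheck OK (0|1)
    if [ "$2" -eq 0 ]; then
        # MySQL down: node is neither readable nor writable, whatever the mode
        echo "readable=0 writable=0"
        return
    fi
    case "$1" in
        read-only)
            # Only read_only matters; a read-only node still serves reads
            if [ "$3" = "OFF" ]; then echo "readable=1 writable=1"
            else echo "readable=1 writable=0"; fi ;;
        replication)
            # read_only=OFF means master; slaves depend on replication health
            if [ "$3" = "OFF" ]; then echo "readable=1 writable=1"
            elif [ "$4" -eq 1 ]; then echo "readable=1 writable=0"
            else echo "readable=0 writable=0"; fi ;;
        pxc)
            # Both attributes follow the clustercheck result
            if [ "$5" -eq 1 ]; then echo "readable=1 writable=1"
            else echo "readable=0 writable=0"; fi ;;
    esac
}
```

The real agent pushes these values into the cluster as node attributes (which the location rules below then act on); the sketch only mirrors the decision table.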

The agent can be found in the percona-pacemaker-agents GitHub repository, along with its accompanying documentation.
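
For reference, installing an OCF agent like this one amounts to dropping the script into Pacemaker's resource directory under the matching provider (the percona provider is implied by ocf:percona:mysql_monitor below). The repository layout and paths here are assumed defaults; verify them against your distribution:

```
# Assumed paths; adjust the in-repository path to where mysql_monitor lives.
git clone https://github.com/percona/percona-pacemaker-agents.git
mkdir -p /usr/lib/ocf/resource.d/percona
cp percona-pacemaker-agents/agents/mysql_monitor /usr/lib/ocf/resource.d/percona/
chmod u+x /usr/lib/ocf/resource.d/percona/mysql_monitor
```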

To get a feel for how to use it, here’s a sample Pacemaker configuration for a PXC cluster:

primitive p_mysql_monit ocf:percona:mysql_monitor \
    params user="repl_user" password="WhatAPassword" pid="/var/lib/mysql/mysqld.pid" \
      socket="/var/run/mysqld/mysqld.sock" cluster_type="pxc" \
    op monitor interval="1s" timeout="30s" OCF_CHECK_LEVEL="1"
clone cl_mysql_monitor p_mysql_monit \
    meta clone-max="3" clone-node-max="1"
primitive writer_vip ocf:heartbeat:IPaddr2 \
    params ip="172.30.212.100" nic="eth1" \
    op monitor interval="10s"
primitive reader_vip_1 ocf:heartbeat:IPaddr2 \
    params ip="172.30.212.101" nic="eth1" \
    op monitor interval="10s"
primitive reader_vip_2 ocf:heartbeat:IPaddr2 \
    params ip="172.30.212.102" nic="eth1" \
    op monitor interval="10s"
location No-reader-vip-1-loc reader_vip_1 \
    rule $id="No-reader-vip-1-rule" -inf: readable eq 0
location No-reader-vip-2-loc reader_vip_2 \
    rule $id="No-reader-vip-2-rule" -inf: readable eq 0
location No-writer-vip-loc writer_vip \
    rule $id="No-writer-vip-rule" -inf: writable eq 0
colocation col_vip_dislike_each_other -200: reader_vip_1 reader_vip_2 writer_vip

The resulting cluster status with the attributes set looks like:

root@pacemaker-1:~# crm_mon -A1
============
Last updated: Tue Nov 19 17:06:18 2013
Last change: Tue Nov 19 16:40:51 2013 via cibadmin on pacemaker-1
Stack: openais
Current DC: pacemaker-3 - partition with quorum
Version: 1.1.7-ee0730e13d124c3d58f00016c3376a1de5323cff
3 Nodes configured, 3 expected votes
6 Resources configured.
============
Online: [ pacemaker-1 pacemaker-2 pacemaker-3 ]
 Clone Set: cl_mysql_monitor [p_mysql_monit]
     Started: [ pacemaker-1 pacemaker-2 pacemaker-3 ]
 reader_vip_1   (ocf::heartbeat:IPaddr2):       Started pacemaker-1
 reader_vip_2   (ocf::heartbeat:IPaddr2):       Started pacemaker-2
 writer_vip     (ocf::heartbeat:IPaddr2):       Started pacemaker-3
Node Attributes:
* Node pacemaker-1:
    + readable                          : 1
    + writable                          : 1
* Node pacemaker-2:
    + readable                          : 1
    + writable                          : 1
* Node pacemaker-3:
    + readable                          : 1
    + writable                          : 1

Nothing too exciting so far. Let’s desync one of the nodes (pacemaker-1) by setting the variable wsrep_desync=1. This gives us the following status:

root@pacemaker-1:~# crm_mon -A1
============
Last updated: Tue Nov 19 17:10:21 2013
Last change: Tue Nov 19 16:40:51 2013 via cibadmin on pacemaker-1
Stack: openais
Current DC: pacemaker-3 - partition with quorum
Version: 1.1.7-ee0730e13d124c3d58f00016c3376a1de5323cff
3 Nodes configured, 3 expected votes
6 Resources configured.
============
Online: [ pacemaker-1 pacemaker-2 pacemaker-3 ]
 Clone Set: cl_mysql_monitor [p_mysql_monit]
     Started: [ pacemaker-1 pacemaker-2 pacemaker-3 ]
 reader_vip_1   (ocf::heartbeat:IPaddr2):       Started pacemaker-2
 reader_vip_2   (ocf::heartbeat:IPaddr2):       Started pacemaker-3
 writer_vip     (ocf::heartbeat:IPaddr2):       Started pacemaker-3
Node Attributes:
* Node pacemaker-1:
    + readable                          : 0
    + writable                          : 0
* Node pacemaker-2:
    + readable                          : 1
    + writable                          : 1
* Node pacemaker-3:
    + readable                          : 1
    + writable                          : 1
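
For reference, the desync toggled above is a single statement on the node, shown here via the mysql command-line client (assuming credentials are configured locally):

```
mysql -e "SET GLOBAL wsrep_desync=1;"   # desync: clustercheck now reports the node unhealthy
mysql -e "SET GLOBAL wsrep_desync=0;"   # resync the node when done
```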

… where, as expected, no VIPs remain on the desynced node, pacemaker-1. Using distinct VIPs with PXC is not ideal; the Pacemaker IPaddr2 agent can also create a clone set of a single IP using the CLUSTERIP target of iptables. I’ll write a quick follow-up to this post devoted to the use of CLUSTERIP.
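
As a rough sketch of that approach (parameter values are illustrative, not from this post): cloning an IPaddr2 resource with globally-unique clone instances switches it into CLUSTERIP mode, so all nodes share one address and iptables hashes clients across them:

```
primitive shared_vip ocf:heartbeat:IPaddr2 \
    params ip="172.30.212.100" nic="eth1" clusterip_hash="sourceip" \
    op monitor interval="10s"
clone cl_shared_vip shared_vip \
    meta clone-max="3" clone-node-max="1" globally-unique="true"
```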

The post How to add VIPs to Percona XtraDB Cluster or MHA with Pacemaker appeared first on MySQL Performance Blog.

May 28, 2013

Choosing a MySQL HA Solution – MySQL Webinar: June 5

Selecting the most appropriate solution for a MySQL HA infrastructure is as much a business and philosophical decision as it is a technical one, but often the choice is made without adequately considering all three perspectives.  When too much attention is paid to one of these aspects at the expense of the others, the resulting system may be over-engineered, poorly performing, or otherwise suboptimal.

On Wednesday, June 5, at 10 a.m. PDT (1700 UTC), I will be presenting a webinar entitled, Choosing a MySQL HA Solution, in which we’ll explore the topic of MySQL HA from each of these perspectives.  The goal will be to motivate your thinking about HA in a holistic fashion and help guide you towards asking the right questions when considering a new or upgraded HA deployment.

This webinar will be both technical and non-technical in nature, beginning with a discussion of some general HA principles and some common misconceptions.  We will then explore some of the more well-known MySQL HA tools and technologies available today (largely grouped into those which use traditional MySQL replication, those which use some other MySQL-level replication, and those which replicate at some other layer of the system stack) and then conclude with some typical use cases where a given approach may be well-suited or particularly contraindicated.

If this topic interests you, then register today to reserve your spot.  I look forward to speaking with all of you next week.

