Jun 24, 2024

Understanding Basic Flow Control Activity in MySQL Group Replication: Part One

Flow control is not a new term; we have already heard it many times in Percona XtraDB Cluster/Galera-based environments. In very simple terms, it means a cluster node cannot keep up with the cluster write pace: the write rate is too high, or the nodes are oversaturated. Flow control helps avoid excessive […]

Jul 25, 2014

Monitoring MySQL flow control in Percona XtraDB Cluster 5.6

Monitoring flow control in a Galera cluster is very important: if you do not, you will not understand why writes may sometimes be stalled. Percona XtraDB Cluster 5.6 provides two status variables for this kind of monitoring: wsrep_flow_control_paused and wsrep_flow_control_paused_ns. Which one should you use?

What is flow control?

Flow control does not exist with regular MySQL replication, only with Galera replication. It is simply the mechanism nodes use when they cannot keep up with the write load: to keep replication synchronous, a node that is starting to lag instructs the other nodes to pause writes for some time so that it does not fall too far behind.

If you are not familiar with this notion, you should read this blogpost.

Triggering flow control and graphing it

For this test, we’ll use a 3-node Percona XtraDB Cluster 5.6 cluster. On node 3, we will adjust gcs.fc_limit so that flow control is triggered very quickly and then we will lock the node:

pxc3> set global wsrep_provider_options="gcs.fc_limit=1";
pxc3> flush tables with read lock;

Now we will use sysbench to insert rows on node 1:

$ sysbench --test=oltp --oltp-table-size=50000 --mysql-user=root --mysql-socket=/tmp/pxc1.sock prepare

Because of flow control, writes will be stalled and sysbench will hang. So after some time, we will release the lock on node 3:

pxc3> unlock tables;

During the whole process, wsrep_flow_control_paused and wsrep_flow_control_paused_ns are recorded every second with mysqladmin ext -i1. We can then build a graph of the evolution of both variables:

[Graph: wsrep_flow_control_paused and wsrep_flow_control_paused_ns on pxc3 over time]

While we can clearly see when flow control was triggered on both graphs, it is much easier to tell when flow control stopped with wsrep_flow_control_paused_ns. The difference would be even more obvious if there had been several periods during which flow control was in effect.
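For reference, here is a minimal sketch of one way to capture both counters once per second without mysqladmin and to derive, from wsrep_flow_control_paused_ns, how long replication was paused during each interval. The socket path and user follow the test setup above and are assumptions to adjust for your own nodes:

#!/bin/bash
# Sample both flow control counters every second on node 3 and print
# how long replication was paused during each one-second interval.
# Socket path and user are assumptions from this test setup.
SOCKET=/tmp/pxc3.sock
prev_ns=0
first=1
while true; do
    read -r paused ns < <(mysql -u root -S "$SOCKET" -N -e \
        "SHOW GLOBAL STATUS WHERE Variable_name IN
         ('wsrep_flow_control_paused','wsrep_flow_control_paused_ns');" |
        awk '{v[$1]=$2} END {print v["wsrep_flow_control_paused"], v["wsrep_flow_control_paused_ns"]}')
    if [ "$first" -eq 0 ]; then
        echo "$(date +%T) paused=$paused paused_last_interval_ms=$(( (ns - prev_ns) / 1000000 ))"
    fi
    first=0
    prev_ns=$ns
    sleep 1
done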

Conclusion

Monitoring a server is obviously necessary if you want to be able to catch issues, but you need to look at the right metrics. So don’t be scared if you see that wsrep_flow_control_paused is not 0: it simply means that flow control has been triggered at some point since the server started. If you want to know what is happening right now, prefer wsrep_flow_control_paused_ns.


Oct 08, 2013

Taking backups on Percona XtraDB Cluster (without stalls!)

I occasionally see customers taking backups from their PXC clusters who complain that the cluster “stalls” during the backup.  As I wrote about in a previous blog post, these stalls are often really just Flow Control.  But why would a backup cause Flow Control?

Most backups I know of (even Percona XtraBackup) take a FLUSH TABLES WITH READ LOCK (FTWRL) at some point in the backup process.  This can be disabled in XtraBackup (in certain circumstances), but it is enabled by default.
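For completeness, the XtraBackup option that skips the lock is --no-lock. It is only safe when no DDL or writes to non-InnoDB tables happen while the backup runs, so check the XtraBackup documentation for the exact conditions. A minimal sketch, with an arbitrary target directory:

# Safe only if no DDL or non-InnoDB table writes happen during the backup
$ innobackupex --no-lock /backups/$(date +%F)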

If you go to your active cluster right now and execute a FTWRL (don’t actually do this in production!), you’ll see this message in your error log on that node:

131004 13:18:00 [Note] WSREP: Provider paused at 49548c16-2abd-11e3-a720-b26997b9b8c6:35442

This indicates that Galera is unable to apply writes on the local node.  This by itself does not indicate Flow Control, but Flow Control is likely if it lasts too long.  Once the lock is released, we get a message that Galera is at work again:

131004 13:18:09 [Note] WSREP: Provider resumed.

During this interval (9 seconds in this case), wsrep_local_recv_queue was backing up on this node and could cause Flow Control, depending on how fc_limit and the other settings are configured.  I talk about how to tune Flow Control in my other post, but what we really want is for Flow Control not to be triggered by this one specific node for the duration of our backup.

Astute Galera users know that a Donor during SST does not trigger flow control, even though it may get far behind the rest of the cluster.  What if we could manually make a node act like a donor for the purposes of a backup?  Turns out we now can.

Starting with PXC 5.5.33, a new variable called ‘wsrep_desync’ has been added.  It allows us to manually toggle a node into and out of the ‘Donor/Desynced’ state.  The Donor/Desynced state is nothing magical: it really just turns off flow control and allows the node to fall arbitrarily far behind the rest of the cluster, but only when something forces it to.  That could be a FTWRL, but also anything else that makes the node lag, such as heavy disk utilization.

So, I can set Desync like this:

node3 mysql> set global wsrep_desync=ON;

When I do that, I can see the node drop into the ‘Donor/Desynced’ state:

node3 mysql> show global status like 'wsrep%';
+----------------------------+-------------------------------------------------------+
| Variable_name              | Value                                                 |
+----------------------------+-------------------------------------------------------+
...
| wsrep_local_recv_queue     | 0                                                     |
| wsrep_local_recv_queue_avg | 0.000000                                              |
| wsrep_flow_control_paused  | 0.000000                                              |
| wsrep_flow_control_sent    | 0                                                     |
| wsrep_flow_control_recv    | 0                                                     |
...
| wsrep_local_state          | 2                                                     |
| wsrep_local_state_comment  | Donor/Desynced                                        |
...
| wsrep_cluster_size         | 3                                                     |
| wsrep_cluster_state_uuid   | 49548c16-2abd-11e3-a720-b26997b9b8c6                  |
| wsrep_cluster_status       | Primary                                               |
| wsrep_connected            | ON                                                    |
| wsrep_local_index          | 2                                                     |
| wsrep_provider_name        | Galera                                                |
| wsrep_provider_vendor      | Codership Oy <info@codership.com>                     |
| wsrep_provider_version     | 2.7(r157)                                             |
| wsrep_ready                | ON                                                    |
+----------------------------+-------------------------------------------------------+
40 rows in set (0.01 sec)

However, notice that my wsrep_local_recv_queue is still empty, and flow control is apparently not in effect.  myq_status agrees with this:

[root@node3 ~]# myq_status wsrep
mycluster / node3 / Galera 2.7(r157)
Wsrep    Cluster  Node     Queue   Ops     Bytes     Flow    Conflct PApply        Commit
    time P cnf  #  cmt sta  Up  Dn  Up  Dn   Up   Dn pau snt lcf bfa dst oooe oool wind
13:36:07 P  60  3 Dono T/T   0   0   0 15k    0  69M 0.0   0   0   0  61    0    0    1
13:36:08 P  60  3 Dono T/T   0   0   0  14    0  20K 0.0   0   0   0  65    0    0    1
13:36:09 P  60  3 Dono T/T   0   0   0  15    0  21K 0.0   0   0   0  71    0    0    1
13:36:10 P  60  3 Dono T/T   0   0   0  12    0  16K 0.0   0   0   0  75    0    0    0
13:36:11 P  60  3 Dono T/T   0   0   0  18    0  24K 0.0   0   0   0  81    0    0    0
13:36:12 P  60  3 Dono T/T   0   0   0  12    0  16K 0.0   0   0   0  84    0    0    1
13:36:13 P  60  3 Dono T/T   0   0   0   9    0  11K 0.0   0   0   0  88    0    0    1
13:36:14 P  60  3 Dono T/T   0   0   0   9    0  12K 0.0   0   0   0  91    0    0    0
13:36:16 P  60  3 Dono T/T   0   0   0  12    0  17K 0.0   0   0   0  95    0    0    1
13:36:17 P  60  3 Dono T/T   0   0   0   9    0  12K 0.0   0   0   0 100    0    0    1
13:36:18 P  60  3 Dono T/T   0   0   0   9    0  12K 0.0   0   0   0 101    0    0    1
13:36:19 P  60  3 Dono T/T   0   0   0  10    0  14K 0.0   0   0   0 106    0    0    1

Moving to the Donor/Desynced state does not force the node to fall behind; it just allows it to do so without triggering flow control.  Now, let’s take a FTWRL on node3 and observe:

node3 mysql> flush tables with read lock;
Query OK, 0 rows affected (0.02 sec)
[root@node3 ~]# myq_status wsrep
mycluster / node3 / Galera 2.7(r157)
Wsrep    Cluster  Node     Queue   Ops     Bytes     Flow    Conflct PApply        Commit
    time P cnf  #  cmt sta  Up  Dn  Up  Dn   Up   Dn pau snt lcf bfa dst oooe oool wind
13:37:32 P  60  3 Dono T/T   0   0   0 15k    0  70M 0.0   0   0   0 111    0    0    1
13:37:33 P  60  3 Dono T/T   0   0   0  16    0  22K 0.0   0   0   0 119    0    0    0
13:37:34 P  60  3 Dono T/T   0   0   0  12    0  16K 0.0   0   0   0 124    0    0    0
13:37:35 P  60  3 Dono T/T   0   0   0  11    0  13K 0.0   0   0   0 127    0    0    1
13:37:36 P  60  3 Dono T/T   0   0   0   9    0  12K 0.0   0   0   0 131    0    0    1
13:37:37 P  60  3 Dono T/T   0   0   0  15    0  20K 0.0   0   0   0 134    0    0    1
13:37:38 P  60  3 Dono T/T   0   0   0   5    0 7.3K 0.0   0   0   0 135    0    0    0
13:37:39 P  60  3 Dono T/T   0   0   0   9    0  13K 0.0   0   0   0 136    0    0    1
13:37:40 P  60  3 Dono T/T   0   0   0  12    0  16K 0.0   0   0   0 141    0    0    0
13:37:41 P  60  3 Dono T/T   0   0   0   8    0  10K 0.0   0   0   0 144    0    0    1
13:37:42 P  60  3 Dono T/T   0   9   0   4    0 4.8K 0.0   0   0   0 144    0    0    0
13:37:44 P  60  3 Dono T/T   0  18   0   0    0    0 0.0   0   0   0 144    0    0    0
13:37:45 P  60  3 Dono T/T   0  28   0   0    0    0 0.0   0   0   0 144    0    0    0
13:37:46 P  60  3 Dono T/T   0  45   0   0    0    0 0.0   0   0   0 144    0    0    0
13:37:47 P  60  3 Dono T/T   0  52   0   0    0    0 0.0   0   0   0 144    0    0    0
mycluster / node3 / Galera 2.7(r157)
Wsrep    Cluster  Node     Queue   Ops     Bytes     Flow    Conflct PApply        Commit
    time P cnf  #  cmt sta  Up  Dn  Up  Dn   Up   Dn pau snt lcf bfa dst oooe oool wind
13:37:48 P  60  3 Dono T/T   0  59   0   0    0    0 0.0   0   0   0 144    0    0    0
13:37:49 P  60  3 Dono T/T   0  66   0   0    0    0 0.0   0   0   0 144    0    0    0
13:37:50 P  60  3 Dono T/T   0  78   0   0    0    0 0.0   0   0   0 144    0    0    0
13:37:51 P  60  3 Dono T/T   0  96   0   0    0    0 0.0   0   0   0 144    0    0    0
13:37:52 P  60  3 Dono T/T   0 105   0   0    0    0 0.0   0   0   0 144    0    0    0
13:37:53 P  60  3 Dono T/T   0 115   0   0    0    0 0.0   0   0   0 144    0    0    0
13:37:54 P  60  3 Dono T/T   0 122   0   0    0    0 0.0   0   0   0 144    0    0    0
13:37:55 P  60  3 Dono T/T   0 133   0   0    0    0 0.0   0   0   0 144    0    0    0
13:37:56 P  60  3 Dono T/T   0 154   0   0    0    0 0.0   0   0   0 144    0    0    0
13:37:57 P  60  3 Dono T/T   0 165   0   0    0    0 0.0   0   0   0 144    0    0    0
13:37:58 P  60  3 Dono T/T   0 177   0   0    0    0 0.0   0   0   0 144    0    0    0
13:37:59 P  60  3 Dono T/T   0 183   0   0    0    0 0.0   0   0   0 144    0    0    0
13:38:00 P  60  3 Dono T/T   0 197   0   0    0    0 0.0   0   0   0 144    0    0    0
13:38:01 P  60  3 Dono T/T   0 206   0   0    0    0 0.0   0   0   0 144    0    0    0
13:38:03 P  60  3 Dono T/T   0 222   0   0    0    0 0.0   0   0   0 144    0    0    0

My FC settings on this node are the defaults:

node3 mysql> show global variables like 'wsrep_provider_options'\G
*************************** 1. row ***************************
Variable_name: wsrep_provider_options
        Value: ... gcs.fc_factor = 1; gcs.fc_limit = 16; gcs.fc_master_slave = NO; ...

and yet flow control has not kicked in anywhere in the cluster.

So if I am taking my backup here, I know flow control should not kick in because of this node.  FTWRL may not be the only reason replication lags on this node: the resource utilization of the backup itself could also allow the queue to grow high enough to cause FC.
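If you want to watch how far the desynced node falls behind while the backup runs, a simple loop like the following does the trick (a sketch; the connection defaults are assumed to be configured locally):

# Print the apply queue length on the desynced node every 5 seconds.
while true; do
    mysql -N -e "SHOW GLOBAL STATUS LIKE 'wsrep_local_recv_queue';"
    sleep 5
done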

Either way, once I’m done, I release the lock and I can immediately see the queue start to drop:

13:49:51 P  60  3 Dono T/T   0 30k   0   0    0    0 0.0   0   0   0  50    0    0    0
13:49:52 P  60  3 Dono T/T   0 30k   0   0    0    0 0.0   0   0   0  50    0    0    0
mycluster / node3 / Galera 2.7(r157)
Wsrep    Cluster  Node     Queue   Ops     Bytes     Flow    Conflct PApply        Commit
    time P cnf  #  cmt sta  Up  Dn  Up  Dn   Up   Dn pau snt lcf bfa dst oooe oool wind
13:49:53 P  60  3 Dono T/T   0 30k   0   0    0    0 0.0   0   0   0  50    0    0    0
13:49:54 P  60  3 Dono T/T   0 30k   0   0    0    0 0.0   0   0   0  50    0    0    0
13:49:55 P  60  3 Dono T/T   0 28k   0  2k    0 2.7M 0.0   0   0   0 394   84    2    3
13:49:57 P  60  3 Dono T/T   0 25k   0  3k    0 3.8M 0.0   0   0   0 493   94    2    4
13:49:58 P  60  3 Dono T/T   0 23k   0  3k    0 3.5M 0.0   0   0   0 529   94    3    3
13:49:59 P  60  3 Dono T/T   0 21k   0  2k    0 3.3M 0.0   0   0   0 544   93    4    3
13:50:01 P  60  3 Dono T/T   0 18k   0  3k    0 3.6M 0.0   0   0   0 569   94    2    4
13:50:02 P  60  3 Dono T/T   0 16k   0  2k    0 2.9M 0.0   0   0   0 583   90    5    3
13:50:03 P  60  3 Dono T/T   0 14k   0  2k    0 3.2M 0.0   0   0   0 621   87    3    3

But, what about wsrep_desync? Should I turn it off immediately, or wait for the queue to drop?

node3 mysql> unlock tables; set global wsrep_desync=OFF;
Query OK, 0 rows affected (0.02 sec)
Query OK, 0 rows affected, 1 warning (0.01 sec)
mycluster / node3 / Galera 2.7(r157)
Wsrep    Cluster  Node     Queue   Ops     Bytes     Flow    Conflct PApply        Commit
    time P cnf  #  cmt sta  Up  Dn  Up  Dn   Up   Dn pau snt lcf bfa dst oooe oool wind
13:50:55 P  60  3 Dono T/T   0 15k   0   0    0    0 0.0   0   0   0 647    0    0    0
13:50:56 P  60  3 Dono T/T   0 15k   0   0    0    0 0.0   0   0   0 647    0    0    0
13:50:57 P  60  3 Dono T/T   0 15k   0   0    0    0 0.0   0   0   0 647    0    0    0
13:50:58 P  60  3 Dono T/T   0 15k   0   0    0    0 0.0   0   0   0 647    0    0    0
13:50:59 P  60  3 Dono T/T   0 15k   0   0    0    0 0.0   0   0   0 647    0    0    0
13:51:00 P  60  3 Dono T/T   0 15k   0   0    0    0 0.0   0   0   0 647    0    0    0
13:51:01 P  60  3 Dono T/T   0 15k   0   0    0    0 0.0   0   0   0 647    0    0    0
13:51:02 P  60  3 Dono T/T   0 15k   0   0    0    0 0.0   0   0   0 647    0    0    0
13:51:03 P  60  3 Dono T/T   0 14k   0 997    0 1.3M 0.0   0   0   0 650   94    6    3
13:51:05 P  60  3 Dono T/T   0 12k   0  3k    0 3.7M 0.0   0   0   0 673   96    2    4
13:51:06 P  60  3 Dono T/T   0  9k   0  2k    0 3.2M 0.0   0   0   0 695  100    3    4
13:51:07 P  60  3 Dono T/T   0  7k   0  3k    0 3.4M 0.0   0   0   0 712   97    3    4
13:51:08 P  60  3 Dono T/T   0  4k   0  3k    0 3.5M 0.0   0   0   0 724   93    2    4
13:51:10 P  60  3 Dono T/T   0  2k   0  2k    0 3.0M 0.0   0   0   0 720  100    5    4
13:51:11 P  60  3 Join T/T   0  56   0  2k    0 2.9M 0.0   0   0   0 715   52    6    2
mycluster / node3 / Galera 2.7(r157)
Wsrep    Cluster  Node     Queue   Ops     Bytes     Flow    Conflct PApply        Commit
    time P cnf  #  cmt sta  Up  Dn  Up  Dn   Up   Dn pau snt lcf bfa dst oooe oool wind
13:51:12 P  60  3 Sync T/T   0   0   0 125    0 169K 0.0   0   0   0 133    0    0    1
13:51:13 P  60  3 Sync T/T   0   0   0  61    0  84K 0.0   0   0   0 124    0    0    1
13:51:14 P  60  3 Sync T/T   0   0   0  58    0  82K 0.0   0   0   0 132    0    0    1
13:51:15 P  60  3 Sync T/T   0   0   0  61    0  82K 0.0   0   0   0 149    0    0    1

No!  Turning off wsrep_desync keeps the node in the Donor/Desynced state until it drops back down below the FC limit.  This means you can turn it off right away and Galera will do the right thing: it lets the node catch up first before moving it back to ‘Sync’ and allowing FC to be active again.

EDIT: Actually, turning off wsrep_desync will move the node to the JOINED state; you can see this happen at 13:51:11 above, though in reality it happened as soon as we toggled wsrep_desync and the event simply had to wait in the queue.  In this state the node will send flow control messages in a limited fashion to help it catch up faster.  If you want to let the node catch up naturally without causing flow control, leave wsrep_desync ON until wsrep_local_recv_queue is back down to 0.

This is all well and good, but how can your HA solution deal with a node in this state?  Well, it should do the same thing you do when a node is a regular Donor: take it out of rotation!  Every SST method currently has some FTWRL, and a node in a Donor/Desync state may (or may not) be far behind, so it’s safest to ensure these nodes are taken out of rotation.  Note that the clustercheck script that ships with PXC server gives you an option to report the node as ‘down’ when it is in a Donor state.  This should allow you to easily integrate this feature with HAproxy or similar HA solutions for Percona XtraDB Cluster.
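For example, the stock clustercheck script takes an “available when donor” argument; passing 0 makes it report the node as down while it is a Donor, which is what you want during a desynced backup. The argument order and credentials below are assumptions to verify against the script shipped with your PXC version:

# clustercheck <user> <password> <available_when_donor> ...
# 0 = report the node as unavailable while it is Donor/Desynced, so
# HAproxy (checking via xinetd/mysqlchk) takes it out of rotation.
$ clustercheck monitoruser monitorpass 0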

In summary, this should make it easy to write your backup scripts: just turn on wsrep_desync at the start, turn it off as soon as you are done, and Galera will handle the rest.  Happy backups!
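Putting it together, a backup wrapper could look roughly like the sketch below. The backup command, target path, and connection details are placeholders, and the optional drain loop follows the advice in the edit above:

#!/bin/bash
# Rough backup wrapper: desync the node, take the backup, optionally
# let the apply queue drain, then re-enable flow control participation.
# Backup command, paths, and credentials are placeholders.
set -e

mysql -e "SET GLOBAL wsrep_desync=ON;"

innobackupex /backups/$(date +%F)

# Optional: wait for the node to catch up before leaving Desynced,
# so it does not send flow control messages while in the JOINED state.
until [ "$(mysql -N -e "SHOW GLOBAL STATUS LIKE 'wsrep_local_recv_queue';" | awk '{print $2}')" -eq 0 ]; do
    sleep 5
done

mysql -e "SET GLOBAL wsrep_desync=OFF;"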


Jun 18, 2013

MySQL Webinar: Percona XtraDB Cluster Operations, June 26

Percona XtraDB Cluster (PXC) was released over a year ago, and since then there has been tremendous interest and adoption.  There are plenty of talks that explain the fundamentals of PXC, but we are starting to reach a threshold where it is easier to find folks with PXC in production, and so the need for more advanced talks has arisen.

As such, I wanted to shift gears from the standard introductory talk and instead focus on some key questions, issues, and pain points for those who already run PXC in production.  I am giving a webinar entitled Percona XtraDB Cluster Operations on June 26th, 2013, from 10-11 AM PDT.  Topics will include:

  • Backups from the cluster
  • Avoiding SST
  • Flow Control
  • What and How to Monitor
  • Tuning best practices

This webinar is not necessarily meant to be exhaustive, but it will cover key topics that administrators of PXC commonly ask about.  You can register for the webinar here.

