Mar 13, 2015

Q&A: Multi-threaded Replication in MySQL 5.6 and MySQL 5.7

My webinar “Multi-threaded Replication in MySQL 5.6 and 5.7” on February 25 generated several excellent questions following the presentation (available here for playback along with the slides). I didn’t have time to answer many of the questions during the session and so in this post I answer all of them. Thanks to everyone who attended!

Q: What do you expect from MTS with logical clock? Do you think performance will be as good as with per-database parallelization?
A: MTS in 5.6 is not usable if all of your data lives in a single database, which is quite a frequent situation. I do not have numbers for the logical clock algorithm yet, but with 5.7 everyone should be able to benefit from multi-threaded replication.
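As a rough sketch of the configuration difference (the values below are only examples, not recommendations):

# MySQL 5.6: parallelization is per database only
slave_parallel_workers = 4

# MySQL 5.7: the logical clock also parallelizes transactions within a single database
slave_parallel_type    = LOGICAL_CLOCK
slave_parallel_workers = 4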

Q: When MySQL 5.6 was released, the performance of MTS was lower than in 5.5, for example. Is this addressed now?
A: I am not sure which specific issue or bug you are referring to, but if your data is spread across several databases, MTS in 5.6 should already give you a clear performance benefit over a single-threaded slave.

Q: How does Percona XtraBackup work with MTS? What are the changes in mysqldump?
A: As long as you are using GTIDs, you can safely take a backup from a slave using multi-threaded replication: with XtraBackup, add the --slave-info option as usual when taking a backup from a slave and with mysqldump, use --master-data instead of --dump-slave.
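As a rough sketch of what that looks like in practice (the backup directory, dump file name and the extra mysqldump options are only examples):

$ innobackupex --slave-info /backups
$ mysqldump --all-databases --single-transaction --master-data=2 > dump.sql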

Q: Regarding the checkpoint position: what if an MTS worker thread applies an INSERT before the table it is inserting into has been created? How does the MTS checkpoint keep track of transactions applied by different worker threads on the slave?
A: The worker threads track all execution gaps to make sure that out-of-order execution is safe and to be able to replay all events without forgetting any of them. So it is not possible that a worker thread will insert data in a table that has not been created yet.

Q: Can you use MTS with all binlog_format options?
A: Yes

Q: Is there any way to have the threads work so that no database contention happens?
A: The short answer is no: the goal of the worker threads is to execute the incoming transactions as fast as possible. If that results in database contention, you should probably decrease the number of worker threads.

Q: Why doesn’t multi-threaded replication perform well on a single DB?
A: With 5.6, parallelization is based on isolating the transactions going to each database. If you only have a single DB, no parallelization is possible. You should look at 5.7 and the logical clock algorithm.

Q: Are there any implications with regards to GTIDs and Multi-Threaded replication when running a Master-to-Master setup?
A: I cannot think of any, however I am not sure master-master replication is still very relevant when using GTIDs.

Q: Is there any inconvenience with memory or cache when using more workers than the number of databases?
A: If the number of workers is just a bit higher than the number of databases (like 5 workers for 3 databases), there should not be any issue. However with ridiculously high numbers (500 workers for 2 databases), there might be performance degradation. I have not tested such cases, so I cannot give a good answer there. However the idea is that the number of workers should be close to the number of databases and should not exceed the number of cores on the server.

Q: Is there multi-threaded replication in MySQL 5.7?
A: Yes, multi-threaded replication is available in MySQL 5.7 and offers improvements compared to MySQL 5.6 (mainly the parallelization with logical clock).

Q: Have you used DIM_STAT to create load and measure slave lag? Any interesting takeaways from that effort?
A: I used sysbench to generate load and Seconds_Behind_Master from SHOW SLAVE STATUS to measure slave lag. That mainly shows that if your workload is a good fit for MTS (multiple databases and load shared evenly across them), performance benefits can be significant.

Q: Does multi-threaded replication also work with Percona XtraDB Cluster/Percona Server?
A: Percona Server 5.6 is based on MySQL 5.6, so you can use multi-threaded replication exactly as you would use it on MySQL 5.6.

On Percona XtraDB Cluster, it is a bit different: replication inside the cluster uses Galera replication, which has nothing to do with MySQL replication. Note that Galera has offered parallel replication from the beginning (parallel applying of the replicated writesets to be accurate). However if you are using asynchronous replicas, these replicas can benefit from multi-threaded replication if they are running on MySQL/Percona Server 5.6.
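For reference, the degree of parallel applying inside the cluster is controlled by a separate Galera setting, not by slave_parallel_workers; a minimal my.cnf sketch (the values are only examples):

[mysqld]
# Galera parallel applier threads, for writesets replicated inside the cluster
wsrep_slave_threads = 8
# slave_parallel_workers only affects asynchronous MySQL replicas attached to a node
slave_parallel_workers = 4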

Q: What happens to cross db transactions? Do they not replicate?
A: These transactions will replicate, but they will have to wait until all preceding transactions have been executed. Stated differently, cross db transactions introduce serialization, so you should avoid them as much as possible if you want to benefit from parallel applying.

To be accurate, if you have db1, db2 and db3 and if you execute a transaction involving db1 and db2, transactions on db3 can still be applied in parallel. So if you have many databases, cross db transactions may not be that bad.

Q: When using MTS without GTIDs, is “Seconds_Behind_Master” from SHOW SLAVE STATUS valid?
A: Seconds_Behind_Master is based on Exec_Master_Log_Pos. And with MTS, Exec_Master_Log_Pos is not reliable as it indicates the position of the latest checkpoint and not the position of the latest executed transaction. However in practice, checkpoints will happen at least every 300ms by default, so Seconds_Behind_Master is still a good indication of the replication lag. Of course you should keep in mind the usual limitations, such as with multi-tiered replication (if the setup is A->B->C, C will report its lag against B, not against A) or when there is a replication error (then Seconds_Behind_Master is NULL).
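That 300ms default comes from the slave_checkpoint_period variable; you can inspect or adjust it as in this sketch (the new value is only an example):

mysql> SHOW GLOBAL VARIABLES LIKE 'slave_checkpoint_period';  -- 300 (milliseconds) by default
mysql> SET GLOBAL slave_checkpoint_period = 100;              -- checkpoint more frequently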

Q: How can all the servers be realistically restarted at the same time? There could be a gap of a few seconds if you have multiple servers. [That was asked when I explained how to enable GTID replication.]
A: With MySQL 5.6, the requirements are pretty strict when it comes to enabling GTIDs: all servers must be restarted at the same point in time in the replication stream. As you mention, it is difficult if you have several servers, so the only viable solution is: stop the writes on the master, wait until replication has caught up on all slaves, stop all servers, change the configuration, restart all servers.

What it means is that there is a time range when all servers are down. This is a showstopper for many people, and that’s why Percona Server 5.6 now includes a patch from Facebook that allows an online migration to GTIDs and that’s why MySQL 5.7.6 also offers this option.
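For reference, the my.cnf settings each server needs before that simultaneous restart are the standard MySQL 5.6 GTID requirements (the binlog file name is only an example):

[mysqld]
log_bin                  = mysql-bin
log_slave_updates
gtid_mode                = ON
enforce_gtid_consistency = 1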

* * *

Thanks for all of the great questions – and I hope to see you next month at OpenStack Live and the Percona Live MySQL Conference and Expo 2015 (April 13-16) – both at the Santa Clara conference center in sunny Silicon Valley. Get more info here.


Feb 20, 2015

How to setup a PXC cluster with GTIDs (and have async slaves replicating from it!)

This past week was marked by a series of personal findings related to the use of Global Transaction IDs (GTIDs) on Galera-based clusters such as Percona XtraDB Cluster (PXC). The main one is the fact that transactions touching MyISAM tables (and FLUSH PRIVILEGES!) issued on a given node of the cluster are recorded in a GTID set bearing the node’s server_uuid as “source id” and added to the binary log (if the node has binlog enabled), thus being replicated to any async replicas connected to it. However, they won’t be replicated across the cluster (all of this is by design, as long as wsrep_replicate_myisam is disabled, which it is by default).

My colleague Stéphane covered this story in one of his recent blog posts titled, “Percona XtraDB Cluster 5.6: a tale of 2 GTIDs,” explaining how those local (in reference to the node) transactions are saved in a different GTID set to the cluster’s main one and the impact this may cause when re-pointing an async slave to replicate from a different node.

GTIDs are a feature introduced in MySQL 5.6 that made replication management much easier, and considering there’s a series of advantages in having an async replica attached to a PXC cluster, why hasn’t this popped up earlier for either of us? I guess there aren’t that many people using GTIDs with Galera-based clusters yet, so here’s a post to show you how to do it.

Initializing a PXC cluster configured with GTIDs

My testing environment for a 3-node cluster is composed of node1 (192.168.70.2), node2 (.3) and node3 (.4). All of them have the same PXC binaries installed:

$ rpm -qa |grep -i percona-xtradb-cluster
Percona-XtraDB-Cluster-56-5.6.21-25.8.938.el6.x86_64
Percona-XtraDB-Cluster-galera-3-3.8-1.3390.rhel6.x86_64
Percona-XtraDB-Cluster-server-56-5.6.21-25.8.938.el6.x86_64
Percona-XtraDB-Cluster-client-56-5.6.21-25.8.938.el6.x86_64
Percona-XtraDB-Cluster-shared-56-5.6.21-25.8.938.el6.x86_64

and are configured with an almost identical /etc/my.cnf (apart from wsrep_node_address and the prompt line):

[mysql]
prompt='mysql {node1} > '
[mysqld]
datadir = /var/lib/mysql
log_warnings=2
server_id=1
log_bin=percona-bin
log_slave_updates
binlog_format = ROW
enforce_gtid_consistency=1
gtid_mode=on
wsrep_cluster_name = my-three-node-cluster
wsrep_cluster_address = gcomm://192.168.70.2,192.168.70.3,192.168.70.4
wsrep_node_address = 192.168.70.2
wsrep_provider = /usr/lib64/libgalera_smm.so
wsrep_sst_method = xtrabackup-v2
wsrep_sst_auth = sst:secret
innodb_locks_unsafe_for_binlog = 1
innodb_autoinc_lock_mode = 2

server_id, log_bin, log_slave_updates and binlog_format are not needed for the cluster to operate, but they are required to set up regular replication later on, so I’ve added them to leave the nodes ready to operate as masters.

We start with an empty, nonexistent datadir on all nodes, so I’ll use the mysql_install_db script to create a base datadir with all that is needed for MySQL to work on node1, which will be the reference node of the cluster:

[node1]$  mysql_install_db --user=mysql

We’re now ready to bootstrap the cluster from this reference node:

[node1]$ service mysql bootstrap-pxc

With that, we have an operational reference node:

mysql [node1] > select @@server_uuid;
+--------------------------------------+
| @@server_uuid                        |
+--------------------------------------+
| 834bca7f-b45e-11e4-a7b5-0800272e951c |
+--------------------------------------+
1 row in set (0.00 sec)


Note from above my.cnf settings that I’ve chosen xtrabackup-v2 as the State Snapshot Transfer (SST) method, which requires authentication (wsrep_sst_auth). For this reason, if we now try to start MySQL on node2 it will fail with its error log showing:

2015-02-14 16:58:26 24149 [ERROR] WSREP: Process completed with error: wsrep_sst_xtrabackup-v2 --role 'joiner' --addre
ss '192.168.70.3' --auth 'sst:secret' --datadir '/var/lib/mysql/' --defaults-file '/etc/my.cnf' --parent '24149' --binlog 'percona-bin' : 1 (Operation not permitted)
2015-02-14 16:58:26 24149 [ERROR] WSREP: Failed to read uuid:seqno from joiner script.
2015-02-14 16:58:26 24149 [ERROR] WSREP: SST failed: 1 (Operation not permitted)
2015-02-14 16:58:26 24149 [ERROR] Aborting


The problem here is that the SST method requires MySQL authentication: even though the credentials are declared in the wsrep_sst_auth variable of every node’s my.cnf configuration file, this only tells xtrabackup-v2 which credentials to use; it doesn’t actually create the MySQL account. That’s a step left for us:

mysql [node1] > GRANT RELOAD, LOCK TABLES, REPLICATION CLIENT ON *.* TO 'sst'@'localhost' IDENTIFIED BY 'secret';
Query OK, 0 rows affected (0.02 sec)


And this constitutes our very first committed transaction, which goes into the cluster’s GTID set:

mysql [node1] > select @@global.gtid_executed;
+----------------------------------------+
| @@global.gtid_executed                 |
+----------------------------------------+
| 7cf02e4b-4ba1-ee1b-4599-0c821ea92393:1 |
+----------------------------------------+
1 row in set (0.00 sec)


This GRANT only needs to be issued once, in the reference node. Now you start MySQL on node2 and node3: they will use XtraBackup to make a backup of node1’s dataset, which will contain this GRANT, and restore it in their own datadir:

[node2]$ service mysql start


and:

[node3]$ service mysql start

OK, that’s done. But how do I attach an async replica to the cluster?

Just to make it clear, you cannot attach an async replica to the cluster as a whole: you need to choose a node that is a member of the cluster and make it the master for the async replica. Considering all nodes should have the same data replicated, it should be easy to change the async replica’s configuration and make it replicate from any other node of the cluster – and it is, though you may need to do some adjustments (more on this later).

The initial procedure is exactly the same one used with regular replication: you start by taking a backup of the master and restoring it on the replica. We’ll use XtraBackup again to perform the backup and we’ll start by having node2 as the master of our async replica (192.168.70.7). We could stream the backup from node2 directly to the async replica and later “prepare” it (by applying the logs, which needs to be done using the same version of Percona XtraBackup that you’ve used to take the backup), but to make things simple we’ll first take the backup on node2:

[node2]$ innobackupex /tmp
(...)
innobackupex: Using server version 5.6.21-70.1-56-log
innobackupex: Created backup directory /tmp/2015-02-14_17-53-22
(...)
150214 17:53:26 innobackupex: completed OK!

then “prepare” it:

[node2]$ innobackupex --apply-log /tmp/2015-02-14_17-53-22
(...)
150214 17:56:10 innobackupex: Starting the apply-log operation
(...)
150214 17:56:14 innobackupex: completed OK!

and from our async replica we’ll copy the backup from node2 using rsync over ssh (you could use scp or any other method to copy the files from one server to the other):

[replica]$ rsync -av -e 'ssh -l root' 192.168.70.3:/tmp/2015-02-14_17-53-22/ /var/lib/mysql

We now need to change the ownership of those files to the ‘mysql‘ user:

[replica]$ chown mysql:mysql -R /var/lib/mysql

and take note of the “replication coordinates” – but related to GTID. We have those in the xtrabackup_info file:

[replica]$ cat /var/lib/mysql/xtrabackup_info
(...)
binlog_pos = GTID of the last change '7cf02e4b-4ba1-ee1b-4599-0c821ea92393:1'
(...)

In our case we took the backup very early in this database’s lifecycle (as you can tell from the GTID sequence number in there) but in practice that’s seldom the case. We can now start MySQL:

[replica]$ service mysql start

Before we can point our async replica to replicate from node2 we need to create a replication user with the right set of privileges there:

mysql [node2] > GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO repl@'192.168.70.%' IDENTIFIED BY 'p4ssword';
Query OK, 0 rows affected (0.06 sec)

Now we can use CHANGE MASTER TO on our async replica to point it to node2 using the above credentials:

mysql [replica] > CHANGE MASTER TO MASTER_HOST='192.168.70.3', MASTER_USER='repl', MASTER_PASSWORD='p4ssword', MASTER_AUTO_POSITION = 1;
Query OK, 0 rows affected, 2 warnings (0.05 sec)

If we START SLAVE now we’ll run into error 1236: we first need to set the replication coordinates from when the backup was taken. With GTID replication, this is done in a different manner: instead of providing replication coordinates in the CHANGE MASTER TO command (where we’ve used MASTER_AUTO_POSITION=1 instead) we do it by redefining the gtid_purged global variable with the GTID sets we got from the xtrabackup_info file (in this example there’s only one set):

mysql [replica] > SET @@global.gtid_purged='7cf02e4b-4ba1-ee1b-4599-0c821ea92393:1';
Query OK, 0 rows affected (0.02 sec)

We can now START SLAVE and check its status:

mysql [replica] > START SLAVE; SHOW SLAVE STATUSG
Query OK, 0 rows affected (0.00 sec)
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.70.3
Master_User: repl
(...)
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
(...)
Master_Server_Id: 2
Master_UUID: 8a157d9c-b465-11e4-aafa-0800272e951c
(...)
Retrieved_Gtid_Set: 7cf02e4b-4ba1-ee1b-4599-0c821ea92393:2
Executed_Gtid_Set: 7cf02e4b-4ba1-ee1b-4599-0c821ea92393:1-2
Auto_Position: 1
1 row in set (0.00 sec)

Note that Executed_Gtid_Set above is showing a second transaction in the cluster’s GTID set: this is related to the GRANT statement we issued on node2 to set up the replication account, and it means it is now applied on all nodes of the cluster (and on this async replica as well). With that in place we can easily point our async replica to replicate from a different node, such as node3:

mysql [replica] > STOP SLAVE; CHANGE MASTER TO MASTER_HOST='192.168.70.4'; START SLAVE;
Query OK, 0 rows affected (0.01 sec)
Query OK, 0 rows affected (0.01 sec)
Query OK, 0 rows affected (0.01 sec)

Let’s verify this is indeed the case:

mysql [replica] > SHOW SLAVE STATUSG
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.70.4
Master_User: repl
(...)
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
(...)
Master_Server_Id: 3
Master_UUID: aa9acb85-b465-11e4-ab09-0800272e951c
(...)
Executed_Gtid_Set: 7cf02e4b-4ba1-ee1b-4599-0c821ea92393:1-2
Auto_Position: 1

Nice! What about the caveats you were talking about in the other blog post?

The problem arises when you start to have other GTID sets on the gtid_executed variable of the nodes. This happens when a DML transaction involving a MyISAM table is issued on the node while having wsrep_replicate_myisam disabled: instead of having that transaction going to the cluster’s main GTID set it will go to a new one, bearing the node’s server_uuid as source id. In fact, that’s the behavior you find on regular MySQL replication configured with GTIDs. Here’s an example:

mysql [node1] > CREATE TABLE test.fernando1 (id int) ENGINE=MYISAM;
Query OK, 0 rows affected (0.03 sec)

Creating a MyISAM table per se is not an issue, because this is a DDL, so the transaction will go to the cluster’s GTID set and be replicated to the other nodes:

mysql [node1] > select @@global.gtid_executedG
*************************** 1. row ***************************
@@global.gtid_executed: 7cf02e4b-4ba1-ee1b-4599-0c821ea92393:1-3
1 row in set (0.00 sec)

But doing an INSERT into such a table is a DML, so that transaction will go to a different GTID set, as the node’s gtid_executed now shows:

mysql [node1] > select @@global.gtid_executedG
*************************** 1. row ***************************
@@global.gtid_executed: 7cf02e4b-4ba1-ee1b-4599-0c821ea92393:1-3,
834bca7f-b45e-11e4-a7b5-0800272e951c:1
1 row in set (0.00 sec)

Now if we move our async replica to node1 it just works:

mysql [replica] > STOP SLAVE; CHANGE MASTER TO MASTER_HOST='192.168.70.2'; START SLAVE;
Query OK, 0 rows affected (0.01 sec)
Query OK, 0 rows affected (0.02 sec)
Query OK, 0 rows affected (0.01 sec)
mysql [replica] > show slave statusG
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.70.2
(...)
Retrieved_Gtid_Set: 834bca7f-b45e-11e4-a7b5-0800272e951c:1
Executed_Gtid_Set: 7cf02e4b-4ba1-ee1b-4599-0c821ea92393:1-3,
834bca7f-b45e-11e4-a7b5-0800272e951c:1
Auto_Position: 1
1 row in set (0.00 sec)

and that is because transaction ‘834bca7f-b45e-11e4-a7b5-0800272e951c:1’ could still be found in the binary logs of node1. Let’s do something slightly different now, but on node2:

mysql [node2] > INSERT INTO test.fernando1 values (2);
Query OK, 1 row affected (0.01 sec)
mysql [node2] > select @@global.gtid_executedG
*************************** 1. row ***************************
@@global.gtid_executed: 7cf02e4b-4ba1-ee1b-4599-0c821ea92393:1-3,
8a157d9c-b465-11e4-aafa-0800272e951c:1
1 row in set (0.00 sec)
mysql [node2] > SHOW MASTER STATUSG
*************************** 1. row ***************************
File: percona-bin.000008
Position: 923
Binlog_Do_DB:
Binlog_Ignore_DB:
Executed_Gtid_Set: 7cf02e4b-4ba1-ee1b-4599-0c821ea92393:1-3,
8a157d9c-b465-11e4-aafa-0800272e951c:1
1 row in set (0.00 sec)
mysql [node2] > FLUSH LOGS;
Query OK, 0 rows affected (0.01 sec)
mysql [node2] > SHOW MASTER STATUSG
*************************** 1. row ***************************
File: percona-bin.000009
Position: 231
Binlog_Do_DB:
Binlog_Ignore_DB:
Executed_Gtid_Set: 7cf02e4b-4ba1-ee1b-4599-0c821ea92393:1-3,
8a157d9c-b465-11e4-aafa-0800272e951c:1
1 row in set (0.00 sec)
mysql [node2] > PURGE BINARY LOGS TO 'percona-bin.000009';
Query OK, 0 rows affected (0.01 sec)

Transaction ‘8a157d9c-b465-11e4-aafa-0800272e951c:1‘ that contains the INSERT statement we issued from node2 was recorded into binary log file percona-bin.000008 but the following PURGE BINARY LOGS TO command deleted that file. Now, if we point our async replica back to node2 we’ll run into a problem:

mysql [replica] > STOP SLAVE; CHANGE MASTER TO MASTER_HOST='192.168.70.3'; START SLAVE;
Query OK, 0 rows affected (0.01 sec)
Query OK, 0 rows affected (0.01 sec)
Query OK, 0 rows affected (0.01 sec)
mysql [replica] > show slave statusG
*************************** 1. row ***************************
Slave_IO_State:
Master_Host: 192.168.70.3
(...)
Last_IO_Error: Got fatal error 1236 from master when reading data from binary log: 'The slave is connecting using CHANGE MASTER TO MASTER_AUTO_POSITION = 1, but the master has purged binary logs containing GTIDs that the slave requires.'
(...)
Executed_Gtid_Set: 7cf02e4b-4ba1-ee1b-4599-0c821ea92393:1-3,
834bca7f-b45e-11e4-a7b5-0800272e951c:1
Auto_Position: 1
1 row in set (0.00 sec)

The master can no longer provide transaction ‘8a157d9c-b465-11e4-aafa-0800272e951c:1’ to the replica as requested, thus breaking replication.

Ouch! Is there a fix for this?

Well, kind of. You can trick MySQL into believing it has processed this transaction by injecting an empty transaction bearing the same GTID of the missing one:

mysql [replica] > SELECT GTID_SUBTRACT('7cf02e4b-4ba1-ee1b-4599-0c821ea92393:1-3,8a157d9c-b465-11e4-aafa-0800272e951c:1', '7cf02e4b-4ba1-ee1b-4599-0c821ea92393:1-3,834bca7f-b45e-11e4-a7b5-0800272e951c:1')G
*************************** 1. row ***************************
GTID_SUBTRACT('7cf02e4b-4ba1-ee1b-4599-0c821ea92393:1-3,8a157d9c-b465-11e4-aafa-0800272e951c:1', '7cf02e4b-4ba1-ee1b-4599-0c821ea92393:1-3,834bca7f-b45e-11e4-a7b5-0800272e951c:1'): 8a157d9c-b465-11e4-aafa-0800272e951c:1
1 row in set (0.00 sec)
mysql [replica] > SET GTID_NEXT='8a157d9c-b465-11e4-aafa-0800272e951c:1';
Query OK, 0 rows affected (0.00 sec)
mysql [replica] > BEGIN;COMMIT;
Query OK, 0 rows affected (0.00 sec)
Query OK, 0 rows affected (0.00 sec)
mysql [replica] > SET gtid_next = 'AUTOMATIC';
Query OK, 0 rows affected (0.00 sec)
mysql [replica] > show slave statusG
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.70.3
(...)
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
(...)
Executed_Gtid_Set: 7cf02e4b-4ba1-ee1b-4599-0c821ea92393:1-3,
834bca7f-b45e-11e4-a7b5-0800272e951c:1,
8a157d9c-b465-11e4-aafa-0800272e951c:1
Auto_Position: 1
1 row in set (0.00 sec)

The problem is that you’ll end up with an inconsistent dataset – injecting an empty transaction is the same as using SQL_SLAVE_SKIP_COUNTER on regular MySQL replication.

Take-home lesson

It is not complicated to configure a PXC cluster to use GTIDs; in fact, you may prefer to do so once you get used to it. And it makes it all much easier to have an asynchronous slave replicating from one of the nodes, even to move it around to replicate from a different node. It should all go well while the nodes record their transactions in the cluster’s main GTID set, which is shared by all nodes: this implies all of them will have the same transactions in their binlogs, as designed. The problem appears when we start seeing a deviation from this pattern, with a few nodes recording local transactions in their “own” GTID set. This won’t bother the cluster operation per se, as these other GTID sets are simply ignored and won’t be replicated to other nodes, but they may complicate things for attached async replicas if you need to point them to a different node.

In such environments, strive to maintain a unified GTID set around the cluster’s main one, and if you find that one of the nodes has started adding transactions to a different set, investigate it. You shouldn’t be using MyISAM tables on an XtraDB Cluster as they aren’t officially supported, but if you must use them you should probably consider enabling wsrep_replicate_myisam. Operations on mysql system tables should be done through DDLs instead of DMLs (GRANT instead of INSERT) and, above all, keep your distance from this bug.


Feb 19, 2015

More on MySQL 5.6 multi-threaded replication and GTIDs (and Feb. 25 webinar)

In a previous post, titled “Multi-threaded replication with MySQL 5.6: Use GTIDs,” I explained that using GTID replication is almost a requirement when using MySQL 5.6 MTS. Let’s see now how to perform the day-to-day operations when MTS and GTIDs are both enabled. (I’ll also be presenting a related webinar next week titled “Multi-threaded Replication in MySQL 5.6 and 5.7”).

Seeing the execution gaps

If you have a look at SHOW SLAVE STATUS while the slave is running, you may not be expecting such an output:

[...]
Executed_Gtid_Set: 1381aa44-9a60-11e4-b6d8-94dbc999324d:1-2520:2522:2524:2526-2528:2531-2533:2536-2538:2540-2541:2544:2546-2547:2550-2551:2555:2565-2566:2569:2575-2577:2579-2581:2584-2586:2588:2590-2591:2595-2597:2599:2602:2604-2605:2607-2610:2613:2615-2620:2622-2624:2626-2627:2629:2631:2634:2636-2639:2641-2642:2644:2646-2647:2649:2651-2653:2657-2658:2661-2662:2666-2672:2676-2678:2680:2683-2684:2686-2693:2695:2701:2704:2706-2707:2709:2711:2713-2714:2717:2720-2722:2729-2730:2735:2744:2746:2749:2751-2752:2762:2764-2765:2768-2769:2771:2774:2776:2780-2782:2784:2786-2787:2789:2791:2793:2800:2803:2805-2807:2809:2811-2814:2816-2817:2819-2820:2822-2826:2828-2834:2837-2840:2842:2844-2845:2847:2850-2851:2853:2855:2857-2859:2861-2863:2865-2868:2870-2871:2873-2874:2878:2880-2884:2886-2888:2891:2893:2895-2896:2899:2903:2906-2907:2910:2912:2915-2918:2921-2923:2925-2926:2930:2932:2934:2936:2939-2940:2943-2944:2946:2948-2957:2966:2969-2970:2974:2976:2979-2980:2982:2985:2987:2989:2994:2996-2997:2999:3001:3003:3006:3008:3011-3013
[...]

Ouch! What does that insane list of GTIDs mean?

It is actually easy to understand as long as you know that the GTIDs of all executed transactions are tracked in Executed_Gtid_Set and that execution gaps are allowed with MTS.

Then 1-2520:2522:2524 simply means that transactions #1 to #2520 have been executed, as well as transactions #2522 and #2524, but not #2521 and #2523.

You can also see that a gap at a specific position will not last for long. If you run SHOW SLAVE STATUS an instant later, you will see:

[...]
Executed_Gtid_Set: 1381aa44-9a60-11e4-b6d8-94dbc999324d:1-4095:4098:4100:4103-4105[...]

This time the first execution gap is for transaction #4096.

If you stop the writes on the master, all gaps will be filled once replication has caught up and you will simply see:

[...]
Executed_Gtid_Set: 1381aa44-9a60-11e4-b6d8-94dbc999324d:1-20599
[...]

Dealing with replication errors

Say replication has stopped with an error and you want to skip the offending event. The only option with GTID replication is to inject an empty transaction, which in turn means you must know the GTID of the transaction you want to skip.

Let’s look at SHOW SLAVE STATUS:

Executed_Gtid_Set: 1381aa44-9a60-11e4-b6d8-94dbc999324d:1-1052769:1052805-1052806:1052808:1052810-1052811:1052814:1052816:1052819:1052823:1052825:1052827-1052828:1052830-1052831:1052835:1052838:1052840:1052842:1052844-1052846:1052848-1052849:1052851-1052852:1052855-1052857:1052859:1052862-1052863:1052867-1052868:1052870:1052873-1052875:1052878-1052879:1052882-1052883:1052885:1052887:1052890-1052892:1052896:1052901:1052905:1052908-1052909:1052911:1052915:1052917-1052918:1052922-1052923:1052927-1052929:1052931-1052933:1052937-1052938:1052940:1052943:1052946:1052948-1052949:1052953:1052955-1052956:1052958:1052962-1052964:1052967-1052969:1052972:1052975-1052977:1052979:1052981-1052983:1052985:1052987:1052989:1052991:1052993-1052995:1052999:1053001:1053003:1053005-1053016:1053018:1053020:1053024-1053026:1053029:1053032-1053034:1053037-1053038:1053040:1053043:1053045-1053046

So which transaction should we skip: probably 1381aa44-9a60-11e4-b6d8-94dbc999324d:1052770, right? This is the first transaction that could not be executed.

This is confirmed by looking at the Last_SQL_Error field:

Last_SQL_Error: Worker 0 failed executing transaction '1381aa44-9a60-11e4-b6d8-94dbc999324d:1052770' [...]

Once you know the GTID to skip, it is easy to restart replication (and fix the inconsistency later):

mysql> SET gtid_next='1381aa44-9a60-11e4-b6d8-94dbc999324d:1052770';
mysql> BEGIN;COMMIT;
mysql> SET gtid_next='AUTOMATIC';
mysql> START SLAVE;

Taking backups

When using GTID replication, taking a backup from a multi-threaded slave is not difficult at all.

With Percona XtraBackup, simply add the --slave-info option as usual and you will get the list of executed GTIDs in the xtrabackup_slave_info file:

$ less xtrabackup_slave_info
SET GLOBAL gtid_purged='1381aa44-9a60-11e4-b6d8-94dbc999324d:1-1095246:1095248-1095249:1095253-1095254:1095257-1095266:1095270-1095271:1095273:1095275:1095284-1095285:1095296:1095305:1095311-1095312:1095315-1095316:1095318:1095321:1095323-1095324:1095326-1095332:1095335:1095337-1095338:1095348-1095351:1095353-1095354:1095356:1095358-1095359:1095363-1095364:1095366:1095368-1095369:1095372:1095375-1095376:1095378:1095380-1095385:1095388:1095390-1095392:1095394-1095411:1095419:1095424-1095425:1095428:1095430-1095433:1095437-1095439:1095442:1095444-1095445:1095447:1095449:1095461-1095464:1095468:1095473:1095482-1095484:1095488-1095490:1095494:1095496-1095497:1095499-1095500:1095502:1095505:1095507:1095509:1095511-1095516:1095521:1095524-1095525:1095527:1095530-1095531:1095534-1095552:1095554:1095557:1095559-1095560:1095563:1095566:1095569:1095572-1095573:1095582:1095594-1095595:1095597:1095601-1095605:1095607-1095610:1095612-1095614:1095618:1095621-1095623:1095625-1095628:1095630:1095632:1095634:1095636:1095639-1095642:1095645:1095649:1095659:1095661:1095664-1095665:1095667-1095669:1095671:1095674';
CHANGE MASTER TO MASTER_AUTO_POSITION=1

Then starting replication on a new instance bootstrapped from this backup is easy:

mysql> SET GLOBAL gtid_purged='...';
mysql> RESET SLAVE ALL;
mysql> CHANGE MASTER TO [...], MASTER_AUTO_POSITION=1;
mysql> START SLAVE;

With mysqldump, simply discard the --dump-slave option as the list of executed GTIDs will be automatically added at the top of the dump:

-- MySQL dump 10.13  Distrib 5.6.22, for linux-glibc2.5 (x86_64)
[...]
--
-- GTID state at the beginning of the backup
--
SET @@GLOBAL.GTID_PURGED='1381aa44-9a60-11e4-b6d8-94dbc999324d:1-1095246:1095248-1095249:1095253-1095254:1095257-1095266:1095270-1095271:1095273:1095275:1095284-1095285:1095296:1095305:1095311-1095312:1095315-1095316:1095318:1095321:1095323-1095324:1095326-1095332:1095335:1095337-1095338:1095348-1095351:1095353-1095354:1095356:1095358-1095359:1095363-1095364:1095366:1095368-1095369:1095372:1095375-1095376:1095378:1095380-1095385:1095388:1095390-1095392:1095394-1095411:1095419:1095424-1095425:1095428:1095430-1095433:1095437-1095439:1095442:1095444-1095445:1095447:1095449:1095461-1095464:1095468:1095473:1095482-1095484:1095488-1095490:1095494:1095496-1095497:1095499-1095500:1095502:1095505:1095507:1095509:1095511-1095516:1095521:1095524-1095525:1095527:1095530-1095531:1095534-1095552:1095554:1095557:1095559-1095560:1095563:1095566:1095569:1095572-1095573:1095582:1095594-1095595:1095597:1095601-1095605:1095607-1095610:1095612-1095614:1095618:1095621-1095623:1095625-1095628:1095630:1095632:1095634:1095636:1095639-1095642:1095645:1095649:1095659:1095661:1095664-1095665:1095667-1095669:1095671:1095674';

And then replication can be started like stated before.

Conclusion

Seeing the execution gaps in the output of SHOW SLAVE STATUS can be disturbing at first sight, and of course you may have to change a few habits, but overall there is no specific issue when using GTIDs and MTS.

Next week I will be presenting a free webinar on multi-threaded replication (Wednesday February 25th at 10 a.m. PST). If you are interested in learning more on the topic, feel free to register. It will also be recorded – you’ll be able to use that same link to watch the presentation and download my slides.


Jan 29, 2015

Multi-threaded replication with MySQL 5.6: Use GTIDs!

MySQL 5.6 allows you to execute replicated events in parallel as long as data is split across several databases. This feature is named “Multi-Threaded Slave” (MTS) and it is easy to enable by setting slave_parallel_workers to a > 1 value. However if you decide to use MTS without GTIDs, you may run into annoying issues. Let’s look at two of them.

Skipping replication errors

When replication stops with an error, a frequent approach is to “ignore now and fix later.” This means you will run SET GLOBAL sql_slave_skip_counter=1 to be able to restart replication as quickly as possible and later use pt-table-checksum/pt-table-sync to resync data on the slave.

Then came the day when I hit:

mysql> show slave status;
[...]
Last_SQL_Error: Worker 0 failed executing transaction '' at master log mysql-bin.000017, end_log_pos 1216451; Error 'Duplicate entry '1001' for key 'PRIMARY'' on query. Default database: 'db1'. Query: 'INSERT INTO sbtest1 (id, k, c, pad) VALUES (0, 5320, '49123511666-22272014664-85739796464-62261637750-57593947547-00947134109-73607171516-11063345053-55659776318-82888369235', '11400300639-05875856680-20973514928-29618434959-69429576205')'
Exec_Master_Log_Pos: 1005432

I tried to use the trick:

mysql> set global sql_slave_skip_counter=1;
mysql> start slave;

But:

mysql> show slave status;
[...]
Last_SQL_Error: Worker 0 failed executing transaction '' at master log mysql-bin.000017, end_log_pos 1216451; Error 'Duplicate entry '1001' for key 'PRIMARY'' on query. Default database: 'db1'. Query: 'INSERT INTO sbtest1 (id, k, c, pad) VALUES (0, 5320, '49123511666-22272014664-85739796464-62261637750-57593947547-00947134109-73607171516-11063345053-55659776318-82888369235', '11400300639-05875856680-20973514928-29618434959-69429576205')'
Exec_Master_Log_Pos: 1005882

Note that the position reported with Exec_Master_Log_Pos has moved forward, but I still have my duplicate key error. What’s wrong?

The issue is that the positions reported by SHOW SLAVE STATUS are misleading when using MTS. Quoting the documentation about Exec_Master_Log_Pos:

When using a multi-threaded slave (by setting slave_parallel_workers to a nonzero value in MySQL 5.6.3 and later), the value in this column actually represents a “low-water” mark, before which no uncommitted transactions remain. Because the current implementation allows execution of transactions on different databases in a different order on the slave than on the master, this is not necessarily the position of the most recently executed transaction.

So the solution to my problem is first to make sure that there is no execution gap, and only then to skip the offending event. There is a specific statement for the first part:

mysql> start slave until sql_after_mts_gaps;

And now I can finally skip the error and restart replication:

mysql> set global sql_slave_skip_counter=1;
mysql> start slave;
mysql> show slave statusG
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes

The last thing to do is of course to resync the slave.

Backups

If you cannot trust the output of SHOW SLAVE STATUS to get the current binlog position, it means that taking a backup from a slave with parallel replication is tricky.

For instance, if you run mysqldump --dump-slave=2 to get the binlog position of the master, mysqldump will first run STOP SLAVE and then SHOW SLAVE STATUS. Is stopping the slave sufficient to avoid execution gaps? Actually, no.

The only option then seems to be: run STOP SLAVE followed by START SLAVE UNTIL SQL_AFTER_MTS_GAPS, followed by mysqldump while replication is stopped. Not very handy!
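If you have to go down that road, the sequence would look roughly like this (a sketch only; the dump options are illustrative):

mysql> STOP SLAVE;
mysql> START SLAVE UNTIL SQL_AFTER_MTS_GAPS;
# once the SQL threads have stopped again (all gaps filled), take the dump:
$ mysqldump --all-databases --dump-slave=2 > backup.sql
mysql> START SLAVE;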

GTIDs to the rescue!

The solution for both issues is to use GTIDs.

They help when you want to skip an event because when using GTIDs, you must explicitly specify the transaction you will be skipping. It doesn’t matter whether there are execution holes.

They also help for backups because mysqldump takes the position from gtid_executed which is updated at each transaction commit (XtraBackup does that too).

Conclusion

If your application uses several databases and if you’re fighting with replication lag, MTS can be a great feature for you. But although GTIDs are not technically necessary, you’ll be exposed to tricky situations if you don’t use them.

Is everything rosy when using both GTIDs and MTS? Not exactly… But that will be the topic for a separate post!

By the way, if you are in the Brussels area this weekend, come see me and other great speakers at the MySQL and friends devroom at FOSDEM!


Dec 01, 2014

Faster restarts for MySQL and Percona Server 5.6.21+

By default in MySQL 5.6, each time MySQL is started (regular start or crash recovery), it iterates through all the binlog files when GTIDs are not enabled. This can take a very long time if you have a large number of binary log files. MySQL and Percona Server 5.6.21+ have a fix with the simplified-binlog-gtid-recovery option. Let’s explore this issue.

Understanding the issue

It was first reported by Yoshinori @ Facebook (bug #69097).

Let’s start by looking at a MySQL 5.6 instance where binary logging is enabled but GTIDs are disabled.

If we restart MySQL with strace, we’ll see:

# strace -e open service mysql start
[...]
open("/var/lib/mysql5621/mysql-bin.000004", O_RDONLY) = 13
open("/var/lib/mysql5621/mysql-bin.000003", O_RDONLY) = 13
open("/var/lib/mysql5621/mysql-bin.000002", O_RDONLY) = 13
open("/var/lib/mysql5621/mysql-bin.000001", O_RDONLY) = 13
[...]

MySQL opens all binary log files in reverse order. This can bite if you have lots of binlog files or if the binlog files are large.

This does not happen with MySQL 5.5, so why such a change? The reason is … GTIDs. If you look at the bug report, MySQL tries to initialize a few GTID-related settings even if gtid_mode = OFF.

The same kind of issue happens when you have binlog files with GTIDs and binlog files without GTIDs:

# strace -e open service mysql start
[...]
open("/var/lib/mysql5621/mysql-bin.000010", O_RDONLY) = 13
open("/var/lib/mysql5621/mysql-bin.000001", O_RDONLY) = 13
open("/var/lib/mysql5621/mysql-bin.000002", O_RDONLY) = 13
open("/var/lib/mysql5621/mysql-bin.000003", O_RDONLY) = 13
open("/var/lib/mysql5621/mysql-bin.000004", O_RDONLY) = 13
open("/var/lib/mysql5621/mysql-bin.000005", O_RDONLY) = 13
open("/var/lib/mysql5621/mysql-bin.000006", O_RDONLY) = 13  -> First binlog file with GTIDs
[...]

Actually you can see that MySQL will do 2 scans: a reverse scan and a forward scan. Here not all binlogs need to be opened: MySQL will stop scanning files as soon as it finds GTID information. But that can again bite if you have just turned on GTIDs (and therefore most binlog files do not contain any GTID information).

Now what happens if you set gtid_mode = ON from the start or if all the binlog files without any GTID information have been removed?

# strace -e open service mysql start
[..]
open("/var/lib/mysql5621/mysql-bin.000011", O_RDONLY) = 13
open("/var/lib/mysql5621/mysql-bin.000006", O_RDONLY) = 13
[...]

Only the newest and the oldest binlog files are opened, which is expected.

The fix

The fix is easy: simply add simplified-binlog-gtid-recovery = 1 in your configuration file. When this is set, MySQL will open at most 2 binlog files: the newest one and the oldest one. See the documentation.
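In other words, the change boils down to one extra line in my.cnf, which you can then verify at runtime (a minimal sketch):

[mysqld]
simplified-binlog-gtid-recovery = 1

mysql> SHOW GLOBAL VARIABLES LIKE 'simplified_binlog_gtid_recovery';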

Let’s see what happens with our server containing some binlog files without GTID information:

# strace -e open service mysql start
[...]
open("/var/lib/mysql5621/mysql-bin.000012", O_RDONLY) = 13
open("/var/lib/mysql5621/mysql-bin.000001", O_RDONLY) = 13
[...]

What is the performance overhead of the binlog scans? Of course YMMV, but I did a quick test on my laptop by creating 80x 100MB binlog files: by default, startup takes 6s while with simplified-binlog-gtid-recovery=1 it only takes 2s. 3x improvement with a single setting, not bad!

Conclusion

It is good to see that regressions introduced in MySQL 5.6 are being fixed over time. This one is pretty nasty as most people not using GTIDs will never think that something related to GTIDs can actually create performance issues. Is there any drawback if you enable this setting? I can’t think of any, so I’m hoping it will be enabled by default in 5.7.


Sep 05, 2014

Using MySQL 5.6 Global Transaction IDs (GTIDs) in production: Q&A

Thank you to all of you who attended my webinar last week about Global Transaction IDs (GTIDs), which were introduced in MySQL 5.6 to make the reconfiguration of replication straightforward. If you missed my webinar, you can still listen to the recording and download the slides (free). We had a lot of questions during the webinar, so let me try to answer them here. Please let me know in the comments if additional clarification is needed.

Q: Does GTID provide any benefit to master-master replication? If yes, how?
Q: Is ACTIVE ACTIVE MASTER MASTER successful in MySQL with GTID?

A: GTIDs don’t change the nature of MySQL replication: for instance it is still asynchronous and if you write on both masters in a master-master setup, there is still no write conflict detection mechanism. The main benefit of GTIDs is that any change of the replication topology is easy because you no longer need to run complex calculations to find the right binlog positions when connecting a slave to a new master.
So while master-master replication can be configured with GTIDs, it does not provide a lot of benefits compared to position-based replication, as you will never change the replication topology.
And having a setup where both masters receive writes is still not recommended with GTIDs.

Q: Will GTIDs work well with master:standby master? How quick would the failover be?
A: Yes, GTIDs work well with this kind of setup (which is one master and one slave). GTIDs do not provide failover; you will have to use an external tool. The speed of failover then depends on the tool you use.

Q: For already set up MASTER-MASTER/MASTER-SLAVE Replication, after getting GTID set up, we need to rebuild replication again using AUTO POS=1, correct?
A: Yes, using MASTER_AUTO_POSITION=1 is necessary to indicate that you will use GTID replication. So you will have to run: STOP SLAVE; CHANGE MASTER TO … MASTER_AUTO_POSITION = 1; START SLAVE;
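Spelled out, the sequence on each slave looks something like this (the host and credentials are placeholders):

mysql> STOP SLAVE;
mysql> CHANGE MASTER TO MASTER_HOST='master_host', MASTER_USER='repl', MASTER_PASSWORD='...', MASTER_AUTO_POSITION=1;
mysql> START SLAVE;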

Q: If an application has tables using different engines (InnoDB and MyISAM), how is that handled with GTIDs?
A: Transactions using both MyISAM and InnoDB tables are not allowed; please refer to the documentation.

Q: In a master-slave replication topology (with GTID enabled), how does slave get data from the master if the master’s binary logs are purged given that AUTO_POSITION=1 is used as part of the change master command?
A: This will break replication with error 1236.

Q: What is the value in SHOW SLAVE STATUS that determines if there is lag on the slave?
A: This is Seconds_Behind_Master. It’s not always reliable though. For instance, if you have a replication setup like A -> B -> C, Seconds_Behind_Master on C will show the lag relative to B, not A.

Q: What is the value of saving the history of previous master’s GTIDs executed in the show slave status -> Executed_Gtid_Set?
A: The new replication protocol makes sure that when the slave connects to its master, it sends the range of GTIDs it has already executed. Then the master sends back all other transactions. That’s why Executed_Gtid_Set contains the history of all executed transactions.

Q: We use DB Master and Slave VIPs on our servers, can the mysqlfailover tool also switch the VIP to the new master? Is it scriptable on the event of a failover?
A: Yes, you can use extension points to add your own custom scripts with mysqlfailover. See the documentation for --exec-before and --exec-after.

Q: How does mysqlfailover handle brief network instability between the Master and Slaves?
A: mysqlfailover only triggers failover when it suspects the master is no longer alive. So network instability between the master and its slaves won’t affect it for master crash detection. However it can prevent the tool from reconfiguring replication correctly during failover/switchover if one or several slaves are not reachable.

Q: Does Facebook use MySQL with GTID? if yes, which module or all together for everything?
A: I can’t speak for Facebook, but this talk at the MySQL Conference this year suggests that they’re using GTIDs in production. They have added custom code to make GTIDs easier to use.

Q: Is the GTID_SUBSET function part of MySQL Utilities, or should we set up a script to detect it regularly? Is GTID_SUBSET beneficial in the case of an ACTIVE-ACTIVE MASTER-MASTER setup?
Q: Can you please confirm how to get gtid set in order to use gtid functions?

A: GTID_SUBSET() is a built-in function in MySQL 5.6, you don’t need to install MySQL Utilities to use it. It can be used to easily know whether Executed_Gtid_Set on a given server is a subset of Executed_Gtid_Set on another server, so it can be beneficial to use it in any replication topology.
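A quick sketch of how it can be used (the UUID and intervals are arbitrary, made up for the example):

mysql> SELECT GTID_SUBSET('aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee:1-100',
    ->                    'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee:1-200') AS is_subset;
-- returns 1 because every transaction in the first set is contained in the second one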

Q: What is the difference between holes and bugs?
A: Holes are not allowed in the MySQL 5.6 implementation of GTIDs. So if you see a hole in a GTID sequence, you’re hitting a bug!

Q: Using MySQL Utilities, we can set up replication with just one command from a Python script. Does it automatically take a dump from the master to the slave and start replication? If yes, for a 300 GB data directory, will it run in the background if executed from a shell script? Or does it just start replication from the current position without taking a dump?
A: I think you are talking about mysqlreplicate. This tool only runs CHANGE MASTER TO for you so it doesn’t take a backup of any kind.

Q: Is it possible to use mysqlfailover script at any node(like slave) any time to know which is its MASTER and other SLAVE options also? If no, is this available by some other means?
A: You should probably use mysqlrplshow instead.

Q: As told during limitations of MySQL Utilities on automatic failover, so how can I achieve AUTOMATIC failover if I want this as primary option?
A: The node running mysqlfailover is not highly available so if it is down you lose the ability of doing automatic failover. There are several options if you want to achieve automatic failover: carefully monitor the monitoring node or use solutions like Percona Replication Manager which relies on Pacemaker or Percona XtraDB Cluster which relies on Galera replication.

Q: mysqlrpladmin failover/switchover: Can we execute this command on a slave or any other monitoring node?
A: Yes, as long as mysqlrpladmin is installed on a server and if it can connect to the database servers, the command can be executed from anywhere.

Q: If we set slaves to read_only…is that recommended to eliminate the errant transaction?
A: Yes, it is recommended. However it doesn’t prevent users with the SUPER privilege from accidentally writing on a slave.

Q: Can this errant transaction issue be prevented with (active-passive) master-master replication?
A: Master-master replication will make sure that any transaction written on one server automatically ends up being written on the other server as well, so in a way it prevents errant transactions. However, writing on both masters is not recommended as you can have write conflicts.

Q: Is there any practical advantage of using GTID with MHA?
A: GTIDs provide no way to perform failover, they only simplify how you can reconfigure replication. So using MHA to leverage GTIDs makes sense.

Q: What is the difference between a GUID and a GTID? When do we have to use a GUID versus a GTID?
A: I’m not sure I correctly understand the question. A GTID is made of a source id and a transaction id. The source id is the master’s server_uuid, which is a GUID that is automatically generated when MySQL is started for the first time.

Q: How can we check the slave database’s tables and records? Do we have to sync data from the master to the slave database manually, or will it happen automatically?
A: You can use pt-table-checksum and pt-table-sync from Percona Toolkit.
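A rough sketch of how those two tools are typically combined (host names and the user are placeholders; review pt-table-sync’s --print output before executing anything):

# run on the master: store per-chunk checksums in a table that replicates to the slaves
$ pt-table-checksum --replicate=percona.checksums h=master_host,u=checksum_user,p=...
# run against a slave: print the statements that would bring it back in sync with its master
$ pt-table-sync --replicate=percona.checksums --sync-to-master h=slave_host,u=checksum_user,p=... --print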

Thanks again for attending the webinar! You can replay it, download the slides – and also access Percona’s vast library of other MySQL webinar recordings here.


Aug 20, 2014

How to use MySQL Global Transaction IDs (GTIDs) in production

Reconfiguring replication has always been a challenge with MySQL. Each time the replication topology has to be changed, the process is tedious and error-prone because finding the correct binlog position is not straightforward at all. Global Transaction IDs (GTIDs) introduced in MySQL 5.6 aim at solving this annoying issue.

The idea is quite simple: each transaction is associated with a unique identifier shared by all servers in a given replication topology. Now reconfiguring replication is easy as the correct binlog position can be automatically calculated by the server.

Awesome? Yes it is! However GTIDs are also changing a lot of things in how we can perform operations on replication. For instance, skipping transactions is a bit more difficult. Or you can get bitten by errant transactions, a concept that did not exist before.

This is why I will be presenting a webinar on Aug. 27 at 10 a.m. PDT: Using MySQL Global Transaction IDs in Production.

You will learn what you need to operate a replication cluster using GTIDs: how to monitor replication status or to recover from replication errors, tools that can help you and tools that you should avoid and also the main issues that can occur with GTIDs.

This webinar is free but you can register today to reserve your seat. And a recording will be available afterwards. See you next week!

