Jan
27
2017
--

When MySQL Lies: Wrong seconds_behind_master with slave_parallel_workers > 0

In today’s blog, I will show an issue with seconds_behind_master that one of our clients faced when running slave_parallel_workers > 0. We found out that the reported seconds_behind_master from SHOW SLAVE STATUS was lying. To be more specific, I’m talking about bugs #84415 and #1654091.

The Issue

MySQL will not report the correct slave lag if you have slave_parallel_workers > 0. Let’s show it in practice.

I’ll use MySQL Sandbox to spin up one master and two slaves on MySQL version 5.7.17, and sysbench to populate the database:

# Create sandboxes
make_replication_sandbox /path/to/mysql/5.7.17
# Create table with 1.5M rows on it
sysbench --test=/usr/share/sysbench/tests/db/oltp.lua --mysql-host=localhost --mysql-user=msandbox --mysql-password=msandbox --mysql-socket=/tmp/mysql_sandbox20192.sock --mysql-db=test --oltp-table-size=1500000 prepare
# Add slave_parallel_workers=5 and slave_pending_jobs_size_max=1G on node1
echo "slave_parallel_workers=5" >> node1/my.sandbox.cnf
echo "slave_pending_jobs_size_max=1G" >> node1/my.sandbox.cnf
node1/restart

Monitor replication lag via SHOW SLAVE STATUS:

for i in {1..1000};
do
    (
        node1/use -e "SHOW SLAVE STATUS\G" | grep "Seconds_Behind_Master" | awk '{print "Node1: " $2}' &
        sleep 0.1 ;
        node2/use -e "SHOW SLAVE STATUS\G" | grep "Seconds_Behind_Master" | awk '{print "Node2: " $2}' &
    );
    sleep 1;
done

On a separate terminal, DELETE some rows in the test.sbtest1 table on the master, and monitor the above output once the master completes the delete command:

DELETE FROM test.sbtest1 WHERE id > 100;

Here is a sample output:

master [localhost] {msandbox} (test) > DELETE FROM test.sbtest1 WHERE id > 100;
Query OK, 1499900 rows affected (46.42 sec)
. . .
Node1: 0
Node2: 0
Node1: 0
Node2: 48
Node1: 0
Node2: 48
Node1: 0
Node2: 49
Node1: 0
Node2: 50
. . .
Node1: 0
Node2: 90
Node1: 0
Node2: 91
Node1: 0
Node2: 0
Node1: 0
Node2: 0

As you can see, node1 (which is running with slave_parallel_workers = 5) doesn’t report any lag.

The Workaround

We can workaround this issue by querying performance_schema.threads:

SELECT PROCESSLIST_TIME FROM performance_schema.threads WHERE NAME = 'thread/sql/slave_worker' AND (PROCESSLIST_STATE IS NULL  or PROCESSLIST_STATE != 'Waiting for an event from Coordinator') ORDER BY PROCESSLIST_TIME DESC LIMIT 1;

Let’s modify our monitoring script, and use the above query to monitor the lag on node1:

for i in {1..1000};
do
    (
        node1/use -BNe "SELECT PROCESSLIST_TIME FROM performance_schema.threads WHERE NAME = 'thread/sql/slave_worker' AND (PROCESSLIST_STATE IS NULL  or PROCESSLIST_STATE != 'Waiting for an event from Coordinator') ORDER BY PROCESSLIST_TIME DESC LIMIT 1 INTO @delay; SELECT IFNULL(@delay, 0) AS 'lag';" | awk '{print "Node1: " $1}' &
        sleep 0.1 ;
        node2/use -e "SHOW SLAVE STATUS\G" | grep "Seconds_Behind_Master" | awk '{print "Node2: " $2}' &
    );
    sleep 1;
done

master [localhost] {msandbox} (test) > DELETE FROM test.sbtest1 WHERE id > 100;
Query OK, 1499900 rows affected (45.21 sec)
Node1: 0
Node2: 0
Node1: 0
Node2: 0
Node1: 45
Node2: 45
Node1: 46
Node2: 46
. . .
Node1: 77
Node2: 77
Node1: 78
Node2: 79
Node1: 0
Node2: 80
Node1: 0
Node2: 81
Node1: 0
Node2: 0
Node1: 0
Node2: 0

Please note that in our query against performance_schema.threads, we filter on PROCESSLIST_STATE IS NULL or PROCESSLIST_STATE != 'Waiting for an event from Coordinator'. The correct state to filter on would be “Executing Event”, but it seems the worker threads don’t report that state correctly (bug #84655).

Summary

MySQL parallel replication is a nice feature, but we still need to make sure we are aware of any potential issues it might bring. Most monitoring systems use the output of SHOW SLAVE STATUS to verify whether or not the slave is lagging behind the master. As shown above, it has its caveats.
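
If your monitoring only understands a single number, here is a minimal sketch that reports both values side by side, reusing the sandbox client (node1/use) from above; the client invocation and credentials are environment-specific and would need to be adapted:

# Sketch: compare Seconds_Behind_Master with the performance_schema-based lag
SBM=$(node1/use -e "SHOW SLAVE STATUS\G" | awk '/Seconds_Behind_Master/ {print $2}')
PSLAG=$(node1/use -BNe "SELECT IFNULL(MAX(PROCESSLIST_TIME), 0) FROM performance_schema.threads WHERE NAME = 'thread/sql/slave_worker' AND (PROCESSLIST_STATE IS NULL OR PROCESSLIST_STATE != 'Waiting for an event from Coordinator');")
echo "Seconds_Behind_Master=$SBM performance_schema_lag=$PSLAG"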

As always, we should test, test and test again before implementing any change like this in production!

Mar
13
2015
--

Q&A: Multi-threaded Replication in MySQL 5.6 and MySQL 5.7

My webinar “Multi-threaded Replication in MySQL 5.6 and 5.7” on February 25 generated several excellent questions following the presentation (available here for playback along with the slides). I didn’t have time to answer many of the questions during the session, so in this post I answer all of them. Thanks to everyone who attended!

Q&A: Multi-threaded Replication in MySQL 5.6 and MySQL 5.7Q: What do you expect from MTS with logical clock? Do you think performance would be good as with per database?
A: MTS with 5.6 is not usable if you have a single database. I do not have numbers, but this is quite frequent. With 5.7 everyone should be able to benefit from multi-threaded replication.

Q: When MySQL 5.6 was released, performance of MTS was lower than in 5.5, for example. Is this addressed now?
A: I am not sure which specific issue or bug you are referring to, but if your data is spread across several databases, MTS in 5.6 should give you a performance benefit over single-threaded replication.

Q: How does Percona XtraBackup work with MTS? What are the changes in mysqldump?
A: As long as you are using GTIDs, you can safely take a backup from a slave using multi-threaded replication: with XtraBackup, add the --slave-info option as usual when taking a backup from a slave and with mysqldump, use --master-data instead of --dump-slave.
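
For illustration, those options on the command line might look like this; the paths and credentials are placeholders, and innobackupex is the XtraBackup wrapper current at the time:

# Taking a backup from an MTS slave (placeholder paths/credentials)
innobackupex --slave-info /backups/
mysqldump --master-data=2 --single-transaction --all-databases > backup.sql   # instead of --dump-slave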

Q: For the checkpoint position, what happens if an MTS thread applies an INSERT before the table it inserts into has been created? How does the MTS checkpoint keep track of transactions applied by different threads on the slave?
A: The worker threads track all execution gaps to make sure that out-of-order execution is safe and to be able to replay all events without forgetting any of them. So it is not possible for a worker thread to insert data into a table that has not been created yet.
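
If you want to see what each worker has applied, here is a quick sketch, assuming relay_log_info_repository = TABLE so that the per-worker positions are stored in a system table:

-- Per-worker applied positions, one row per worker thread
SELECT * FROM mysql.slave_worker_info;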

Q: Can you use MTS with all binlog_format options?
A: Yes.

Q: Is there any way to have the threads work so that no database contention happens?
A: The short answer is no: the goal of the worker threads is to execute the incoming transactions as fast as possible. If that results in database contention, you should probably decrease the number of worker threads.
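
Lowering the worker count is a dynamic change that takes effect once the SQL thread is restarted; a minimal sketch:

-- Reduce the number of applier threads (applies when the SQL thread restarts)
STOP SLAVE SQL_THREAD;
SET GLOBAL slave_parallel_workers = 2;
START SLAVE SQL_THREAD;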

Q: Why doesn’t multi-threaded replication perform well on a single DB?
A: With 5.6, parallelization is based on isolating the transactions going to each database. If you only have a single DB, no parallelization is possible. You should look at 5.7 and the logical clock algorithm.
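
For reference, a minimal my.cnf sketch of what enabling the logical clock looks like in 5.7; the values are illustrative, not recommendations:

# MySQL 5.7 slave: parallelize by commit group instead of by database
slave_parallel_type         = LOGICAL_CLOCK
slave_parallel_workers      = 4
slave_preserve_commit_order = 1   # optional: commit in master order (requires log-bin and log_slave_updates)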

Q: Are there any implications with regards to GTIDs and Multi-Threaded replication when running a Master-to-Master setup?
A: I cannot think of any, however I am not sure master-master replication is still very relevant when using GTIDs.

Q: Is there any inconvenience with memory or cache when using more workers than the number of databases?
A: If the number of workers is just a bit higher than the number of databases (like 5 workers for 3 databases), there should not be any issue. However with ridiculously high numbers (500 workers for 2 databases), there might be performance degradation. I have not tested such cases, so I cannot give a good answer there. However the idea is that the number of workers should be close to the number of databases and should not exceed the number of cores on the server.

Q: Is there multi-threaded replication in MySQL 5.7?
A: Yes, multi-threaded replication is available in MySQL 5.7 and offers improvements compared to MySQL 5.6 (mainly the parallelization with logical clock).

Q: Have you used DIM_STAT to create load and measure slave lag? Any interesting takeaways from that effort?
A: I used sysbench to generate load and Seconds_Behind_Master from SHOW SLAVE STATUS to measure slave lag. That mainly shows that if your workload is a good fit for MTS (multiple databases, with the load shared evenly across them), the performance benefits can be significant.

Q: Does multi-threaded replication also work with Percona XtraDB Cluster/Percona Server?
A: Percona Server 5.6 is based on MySQL 5.6, so you can use multi-threaded replication exactly as you would use it on MySQL 5.6.

On Percona XtraDB Cluster, it is a bit different: replication inside the cluster uses Galera replication, which has nothing to do with MySQL replication. Note that Galera has offered parallel replication from the beginning (parallel applying of the replicated writesets to be accurate). However if you are using asynchronous replicas, these replicas can benefit from multi-threaded replication if they are running on MySQL/Percona Server 5.6.
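
For reference, parallel applying inside the cluster is controlled on each Galera/PXC node by a wsrep option; the value below is purely illustrative:

# Galera/PXC node: number of threads applying replicated writesets in parallel
wsrep_slave_threads = 8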

Q: What happens to cross db transactions? Do they not replicate?
A: These transactions will replicate, but they will have to wait until all preceding transactions have been executed. Stated differently, cross db transactions introduce serialization, so you should avoid them as much as possible if you want to benefit from parallel applying.

To be accurate, if you have db1, db2 and db3 and if you execute a transaction involving db1 and db2, transactions on db3 can still be applied in parallel. So if you have many databases, cross db transactions may not be that bad.

Q: When using MTS without GTIDs, is “Seconds_Behind_Master” from SHOW SLAVE STATUS valid?
A: Seconds_Behind_Master is based on Exec_Master_Log_Pos. And with MTS, Exec_Master_Log_Pos is not reliable as it indicates the position of the latest checkpoint and not the position of the latest executed transaction. However in practice, checkpoints will happen at least every 300ms by default, so Seconds_Behind_Master is still a good indication of the replication lag. Of course you should keep in mind the usual limitations, such as with multi-tiered replication (if the setup is A->B->C, C will report its lag against B, not against A) or when there is a replication error (then Seconds_Behind_Master is NULL).
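
The 300ms figure is the default of slave_checkpoint_period, which you can verify on the slave:

-- Checkpoint interval in milliseconds (default 300)
SHOW GLOBAL VARIABLES LIKE 'slave_checkpoint_period';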

Q: How can all the servers be realistically restarted at the same time? There could be a few sec intervals if you have multiple servers [That was when I explained how to enable GTID replication].
A: With MySQL 5.6, the requirements are pretty strict when it comes to enabling GTIDs: all servers must be restarted at the same point in time in the replication stream. As you mention, it is difficult if you have several servers, so the only viable solution is: stop the writes on the master, wait until replication has caught up on all slaves, stop all servers, change the configuration, restart all servers.

What it means is that there is a time range when all servers are down. This is a showstopper for many people, and that’s why Percona Server 5.6 now includes a patch from Facebook that allows an online migration to GTIDs and that’s why MySQL 5.7.6 also offers this option.
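
For completeness, the configuration change applied during that coordinated restart typically looks like this; a sketch of the MySQL 5.6 requirements, not an exhaustive list:

# my.cnf additions on every server before the coordinated restart (MySQL 5.6)
gtid_mode                = ON
enforce_gtid_consistency = ON
log-bin                  = mysql-bin
log_slave_updates        = 1    # required on 5.6 slaves when GTIDs are enabled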

* * *

Thanks for all of the great questions – and I hope to see you next month at OpenStack Live and the Percona Live MySQL Conference and Expo 2015 (April 13-16) – both at the Santa Clara conference center in sunny Silicon Valley. Get more info here.

The post Q&A: Multi-threaded Replication in MySQL 5.6 and MySQL 5.7 appeared first on MySQL Performance Blog.

Jan
29
2015
--

Multi-threaded replication with MySQL 5.6: Use GTIDs!

MySQL 5.6 allows you to execute replicated events in parallel as long as data is split across several databases. This feature is named “Multi-Threaded Slave” (MTS) and it is easy to enable by setting slave_parallel_workers to a value greater than 1. However if you decide to use MTS without GTIDs, you may run into annoying issues. Let’s look at two of them.
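
For example, a minimal my.cnf sketch for the slave; the value is illustrative, with roughly one worker per active database being a reasonable starting point:

# my.cnf on the slave (MySQL 5.6)
slave_parallel_workers = 4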

Skipping replication errors

When replication stops with an error, a frequent approach is to “ignore now and fix later.” This means you will run SET GLOBAL sql_slave_skip_counter=1 to be able to restart replication as quickly as possible and later use pt-table-checksum/pt-table-sync to resync data on the slave.

Then came the day when I hit this:

mysql> show slave status;
[...]
Last_SQL_Error: Worker 0 failed executing transaction '' at master log mysql-bin.000017, end_log_pos 1216451; Error 'Duplicate entry '1001' for key 'PRIMARY'' on query. Default database: 'db1'. Query: 'INSERT INTO sbtest1 (id, k, c, pad) VALUES (0, 5320, '49123511666-22272014664-85739796464-62261637750-57593947547-00947134109-73607171516-11063345053-55659776318-82888369235', '11400300639-05875856680-20973514928-29618434959-69429576205')'
Exec_Master_Log_Pos: 1005432

I tried to use the trick:

mysql> set global sql_slave_skip_counter=1;
mysql> start slave;

But:

mysql> show slave status;
[...]
Last_SQL_Error: Worker 0 failed executing transaction '' at master log mysql-bin.000017, end_log_pos 1216451; Error 'Duplicate entry '1001' for key 'PRIMARY'' on query. Default database: 'db1'. Query: 'INSERT INTO sbtest1 (id, k, c, pad) VALUES (0, 5320, '49123511666-22272014664-85739796464-62261637750-57593947547-00947134109-73607171516-11063345053-55659776318-82888369235', '11400300639-05875856680-20973514928-29618434959-69429576205')'
Exec_Master_Log_Pos: 1005882

Note that the position reported with Exec_Master_Log_Pos has moved forward, but I still have my duplicate key error. What’s wrong?

The issue is that the positions reported by SHOW SLAVE STATUS are misleading when using MTS. Quoting the documentation about Exec_Master_Log_Pos:

When using a multi-threaded slave (by setting slave_parallel_workers to a nonzero value in MySQL 5.6.3 and later), the value in this column actually represents a “low-water” mark, before which no uncommitted transactions remain. Because the current implementation allows execution of transactions on different databases in a different order on the slave than on the master, this is not necessarily the position of the most recently executed transaction.

So the solution to my problem is first to make sure that there is no execution gap, and only then to skip the offending event. There is a specific statement for the first part:

mysql> start slave until sql_after_mts_gaps;

And now I can finally skip the error and restart replication:

mysql> set global sql_slave_skip_counter=1;
mysql> start slave;
mysql> show slave status\G
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes

The last thing to do is of course to resync the slave.
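
A resync sketch with the Percona Toolkit tools mentioned earlier; the host names, user and password are placeholders:

# Run on the master: compute checksums that replicate to the slave
pt-table-checksum h=master_host,u=checksum_user,p=secret --databases=db1
# Then fix the differences found on the slave, syncing it back to its master
pt-table-sync --replicate=percona.checksums --sync-to-master h=slave_host,u=checksum_user,p=secret --execute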

Backups

If you cannot trust the output of SHOW SLAVE STATUS to get the current binlog position, it means that taking a backup from a slave with parallel replication is tricky.

For instance, if you run mysqldump --dump-slave=2 to get the binlog position of the master, mysqldump will first run STOP SLAVE and then SHOW SLAVE STATUS. Is stopping the slave sufficient to avoid execution gaps? Actually, no.

The only option then seems to be: run STOP SLAVE followed by START SLAVE UNTIL SQL_AFTER_MTS_GAPS, followed by mysqldump while replication is stopped. Not very handy!
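
For reference, a shell sketch of that sequence; the mysql client invocation and credentials are placeholders:

# Close the execution gaps, then take the backup while the SQL thread is stopped
mysql -e "STOP SLAVE; START SLAVE UNTIL SQL_AFTER_MTS_GAPS;"
# Wait for the SQL thread to stop again, which means all gaps are filled
while mysql -e "SHOW SLAVE STATUS\G" | grep -q "Slave_SQL_Running: Yes"; do sleep 1; done
mysqldump --dump-slave=2 --single-transaction --all-databases > backup.sql
mysql -e "START SLAVE;"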

GTIDs to the rescue!

The solution for both issues is to use GTIDs.

They help when you want to skip an event because when using GTIDs, you must explicitly specify the transaction you will be skipping. It doesn’t matter whether there are execution gaps.
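
For illustration, skipping one transaction with GTIDs looks like this; the GTID value below is made up:

-- Inject an empty transaction in place of the offending one (hypothetical GTID)
STOP SLAVE;
SET gtid_next = '6a2030b7-0000-0000-0000-000000000001:4242';
BEGIN; COMMIT;
SET gtid_next = 'AUTOMATIC';
START SLAVE;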

They also help for backups because mysqldump takes the position from gtid_executed which is updated at each transaction commit (XtraBackup does that too).
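
You can look at that position at any time, with no need to stop the slave:

-- The set of transactions applied so far, updated at each commit
SELECT @@GLOBAL.gtid_executed;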

Conclusion

If your application uses several databases and if you’re fighting with replication lag, MTS can be a great feature for you. But although GTIDs are not technically necessary, you’ll be exposed to tricky situations if you don’t use them.

Is everything rosy when using both GTIDs and MTS? Not exactly… But that will be the topic for a separate post!

By the way, if you are in the Brussels area this weekend, come see me and other great speakers at the MySQL and friends devroom at FOSDEM!


Oct
21
2013
--

MySQL 5.6’s new replication features: Benefits, Limitations and Challenges

On Wednesday I’ll be leading a webinar exploring MySQL 5.6’s new replication features. And yes, as usual I’ll deliver news on the good, the bad and the ugly (that is to say the benefits, limitations and challenges).

The webinar, appropriately titled “New Replication Features in MySQL 5.6: Benefits, Limitations, and Challenges”, is scheduled for Oct. 23 at 10 a.m. Pacific Daylight Time. You can register now to reserve your spot (this webinar will also be available for playback afterward).

This session aims at exploring some of these new replication features in MySQL 5.6 including:

  • Binlog checksums
  • Crash-safe replication
  • Multi-threaded replication
  • Global transaction identifiers

You will learn which problems they are likely to solve and how they can improve the performance or the resilience of your application. We’ll also discuss their limitations and the challenges you may face while trying to implement them.
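
For orientation, here is a rough map from the features listed above to the MySQL 5.6 options that drive them; the values are illustrative, not recommendations:

# MySQL 5.6 options behind the listed features
binlog_checksum           = CRC32      # binlog checksums
master_info_repository    = TABLE      # crash-safe replication
relay_log_info_repository = TABLE
relay_log_recovery        = ON
slave_parallel_workers    = 4          # multi-threaded replication
gtid_mode                 = ON         # global transaction identifiers
enforce_gtid_consistency  = ON
log_slave_updates         = ON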

I’ll be answering questions afterward but feel free to ask yours below in the comments.

