Feb 06, 2024

MySQL 8.2.0 Community vs. Enterprise; Is There a Winner?

To be honest, the comparison between the two MySQL distributions is not something that excited me a lot, mainly because, from my MySQL memories, I knew that there is no real difference between the two distributions when talking about the code base. To my knowledge, the differences in the enterprise version are in the additional […]

Jan 11, 2024

Is MySQL Router 8.2 Any Better?

In my previous article, Comparisons of Proxies for MySQL, I showed how MySQL Router was the lesser-performing proxy in the comparison. From that time to now, we have had several MySQL releases and, of course, also some new MySQL Router ones. Most importantly, we also had MySQL Router going back to being a level 7 proxy […]

Dec 08, 2023

Keepalived for Source Failover: Percona XtraDB Cluster to Percona Server for MySQL

In this article, we will demonstrate how to achieve automatic source failover for asynchronous replication when our replica is a Percona Server for MySQL (PS) instance and the source is a Percona XtraDB Cluster (PXC) cluster, using a virtual IP (VIP) managed by Keepalived.

Let us consider our architecture below with async replication from PXC to Percona Server for MySQL:

PXC               PS-MySQL
==============    ==============
node1      +----> node4
node2      |       
node3 -----+

Our goal is to set node3 as the async replication primary source. Upon failure, the VIP can move to node2, and then to node1 if node2 also fails. Once node3 becomes available again, the virtual IP should come back to node3.

Why not use MySQL’s built-in automatic connection failover functionality instead of Keepalived?
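
For reference, here is a rough sketch of that built-in alternative (asynchronous connection failover, available from MySQL 8.0.22 on, and requiring GTID auto-positioning). The retry values and weights below are illustrative assumptions only, not part of this setup:

node4-mysql> STOP REPLICA;
node4-mysql> CHANGE REPLICATION SOURCE TO
          ->   SOURCE_CONNECTION_AUTO_FAILOVER = 1,
          ->   SOURCE_RETRY_COUNT = 3,
          ->   SOURCE_CONNECT_RETRY = 10;
node4-mysql> -- register the candidate sources (higher weight = preferred)
node4-mysql> SELECT asynchronous_connection_failover_add_source('', '192.168.122.173', 3306, '', 80);
node4-mysql> SELECT asynchronous_connection_failover_add_source('', '192.168.122.172', 3306, '', 70);
node4-mysql> SELECT asynchronous_connection_failover_add_source('', '192.168.122.171', 3306, '', 60);
node4-mysql> START REPLICA;

That feature is worth evaluating on its own; in this article, however, we focus on the Keepalived/VIP approach.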

Process

Assumptions:

  • We already have PXC-to-PS async replication working with the below IPs:
Hostname | IP              | Node  | Remarks | MySQL server_id
---------+-----------------+-------+---------+----------------
el9-171  | 192.168.122.171 | node1 | PXC     | 171
el9-172  | 192.168.122.172 | node2 | PXC     | 172
el9-173  | 192.168.122.173 | node3 | PXC     | 173
el9-174  | 192.168.122.174 | node4 | PS      | 174
         | 192.168.122.170 |       | VIP-PXC |

Prerequisites:

  • Enable log_replica_updates on all PXC nodes to ensure that transactions from all nodes are written to every node’s binary log, making any node qualified to act as the async replication source.
  • Enable GTID-based replication for automatic replication positioning and correct failover. (A sample my.cnf snippet follows this list.)
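
A minimal my.cnf sketch of these prerequisites (the values are illustrative; server_id must be unique per node, and turning GTID on in an already-running setup requires the usual staged gtid_mode procedure):

[mysqld]
server_id                = 171      # unique per node
log_bin                  = binlog
log_replica_updates      = ON
gtid_mode                = ON
enforce_gtid_consistency = ON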

Steps:

1) Edit the Keepalived configuration, then verify that the priority values run from highest to lowest across node3, node2, and node1:

node1> egrep 'priority' /etc/keepalived/keepalived.conf
  priority 101

node2> egrep 'priority' /etc/keepalived/keepalived.conf
  priority 102

node3> egrep 'priority' /etc/keepalived/keepalived.conf
  priority 103

Sample configuration from node1:

node1> cat /etc/keepalived/keepalived.conf
vrrp_script chk_pxc {
  script  "/usr/bin/clustercheck"
  interval  1
  user    mysql
}
vrrp_instance PXC_as_async_master {
  state           MASTER
  interface       ens2
  virtual_router_id 51
  priority        101
  nopreempt
  virtual_ipaddress {
      192.168.122.170
  }
  
  track_script {
      chk_pxc
  }
}

2) Restart Keepalived on all nodes and check the status.

shell> systemctl restart keepalived

Check:

node1> journalctl -u keepalived.service -b | egrep ' systemd\[1\]: Starting | systemd\[1\]: Stopping | Entering | VRRP_Script' | tail -1
Nov 13 03:23:01 el9-171 Keepalived_vrrp[966]: (PXC_as_async_master) Entering BACKUP STATE

node2> journalctl -u keepalived.service -b | egrep ' systemd\[1\]: Starting | systemd\[1\]: Stopping | Entering | VRRP_Script' | tail -1
Nov 13 03:23:03 el9-172 Keepalived_vrrp[966]: (PXC_as_async_master) Entering BACKUP STATE

node3> journalctl -u keepalived.service -b | egrep ' systemd\[1\]: Starting | systemd\[1\]: Stopping | Entering | VRRP_Script' | tail -1
Nov 13 03:23:03 el9-173 Keepalived_vrrp[963]: (PXC_as_async_master) Entering MASTER STATE

node3> ip addr show dev ens2 | grep 'inet '
inet 192.168.122.173/24 brd 192.168.122.255 scope global noprefixroute ens2
inet 192.168.122.170/32 scope global ens2

We can see that node3 is in the MASTER state and holds the VIP, while the rest are in the BACKUP state.

3) Simulate a running write application.

node1> (set -e; while true; do mysql db1 -e"INSERT INTO t(s) VALUES('a');" ; sleep 5; done)

4) Check the replica’s current state using the below bash function:

node4> f_replica_status(){
mysql -e'SHOW REPLICA STATUS\G' | sed '/,$/N;s/\n/ /' | egrep 'Source_Host|Source_Log_File|Replica_IO_Running|Replica_SQL_Running|Last_IO_Error:|Last_SQL_Error:|Auto_Position|_Gtid_Set|Source_Server_Id|Source_UUID|Seconds_Behind_Source|Exec_Source_Log_Pos|Read_Source_Log_Pos'
}

Check replica status:

node4> f_replica_status
                  Source_Host: 192.168.122.173
              Source_Log_File: s173.000036
          Read_Source_Log_Pos: 102117
        Relay_Source_Log_File: s173.000036
           Replica_IO_Running: Yes
          Replica_SQL_Running: Yes
          Exec_Source_Log_Pos: 102117
        Seconds_Behind_Source: 0
                Last_IO_Error:
               Last_SQL_Error:
             Source_Server_Id: 173
                  Source_UUID: f81e1079-7f00-11ee-849e-525400146f98
    Replica_SQL_Running_State: Replica has read all relay log; waiting for more updates
           Retrieved_Gtid_Set: 3d67d519-77d3-11ee-bcf4-8f28664ab56b:53752-53859
            Executed_Gtid_Set: 3d67d519-77d3-11ee-bcf4-8f28664ab56b:1-53859, c2982ae6-882c-ee11-430b-70d799b54a94:1-2222
                Auto_Position: 1

5) Reconfigure the replica to use the VIP (192.168.122.170):

node4-mysql> STOP REPLICA;
node4-mysql> CHANGE REPLICATION SOURCE TO SOURCE_HOST = '192.168.122.170';
node4-mysql> START REPLICA;

6) Verify the replica. Source_Host is now set to VIP:

node4> f_replica_status
                  Source_Host: 192.168.122.170
              Source_Log_File: s173.000036
          Read_Source_Log_Pos: 107494
        Relay_Source_Log_File: s173.000036
           Replica_IO_Running: Yes
          Replica_SQL_Running: Yes
          Exec_Source_Log_Pos: 107494
        Seconds_Behind_Source: 0
                Last_IO_Error:
               Last_SQL_Error:
             Source_Server_Id: 173
                  Source_UUID: f81e1079-7f00-11ee-849e-525400146f98
    Replica_SQL_Running_State: Replica has read all relay log; waiting for more updates
           Retrieved_Gtid_Set: 3d67d519-77d3-11ee-bcf4-8f28664ab56b:53875-53878
            Executed_Gtid_Set: 3d67d519-77d3-11ee-bcf4-8f28664ab56b:1-53878, c2982ae6-882c-ee11-430b-70d799b54a94:1-2222
                Auto_Position: 1

7) Test failover

7.1) Stop mysqld

node3> mysqladmin shutdown

7.2) Check the Keepalived state:

node1> journalctl -u keepalived.service -b | egrep ' systemd\[1\]: Starting | systemd\[1\]: Stopping | Entering | VRRP_Script' | tail -1
Nov 13 03:23:01 el9-171 Keepalived_vrrp[966]: (PXC_as_async_master) Entering BACKUP STATE

node2> journalctl -u keepalived.service -b | egrep ' systemd\[1\]: Starting | systemd\[1\]: Stopping | Entering | VRRP_Script' | tail -1
Nov 13 03:35:03 el9-172 Keepalived_vrrp[966]: (PXC_as_async_master) Entering MASTER STATE

node3> journalctl -u keepalived.service -b | egrep ' systemd\[1\]: Starting | systemd\[1\]: Stopping | Entering | VRRP_Script' | tail -1
Nov 13 03:35:02 el9-173 Keepalived_vrrp[963]: (PXC_as_async_master) Entering FAULT STATE

node2 is the new MASTER, while node3 changed to the FAULT state.

7.3) Check the VIP transferred to node2:

node2> ip addr show dev ens2 | grep 'inet '
inet 192.168.122.172/24 brd 192.168.122.255 scope global noprefixroute ens2
inet 192.168.122.170/32 scope global ens2

7.4) Check the replica. You may have to wait for at least the value of SOURCE_CONNECT_RETRY, which is 60 seconds by default.
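
If a one-minute gap is too long for your failover budget, the retry interval can be lowered on the replica; a quick sketch (the 10-second value is just an example):

node4-mysql> STOP REPLICA;
node4-mysql> CHANGE REPLICATION SOURCE TO SOURCE_CONNECT_RETRY = 10;
node4-mysql> START REPLICA;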

The Source_Server_Id, Source_Log_File, Relay_Source_Log_File, and Source_UUID have changed.

node4> f_replica_status
                  Source_Host: 192.168.122.170
              Source_Log_File: s172.000033
          Read_Source_Log_Pos: 198793
        Relay_Source_Log_File: s172.000033
           Replica_IO_Running: Yes
          Replica_SQL_Running: Yes
          Exec_Source_Log_Pos: 198793
        Seconds_Behind_Source: 0
                Last_IO_Error: 
               Last_SQL_Error: 
             Source_Server_Id: 172
                  Source_UUID: 6534a3eb-77d3-11ee-9870-52540028fd18
    Replica_SQL_Running_State: Replica has read all relay log; waiting for more updates
           Retrieved_Gtid_Set: 3d67d519-77d3-11ee-bcf4-8f28664ab56b:53875-53973
            Executed_Gtid_Set: 3d67d519-77d3-11ee-bcf4-8f28664ab56b:1-53973, c2982ae6-882c-ee11-430b-70d799b54a94:1-2222
                Auto_Position: 1

The replica has now successfully failed over to the new source (node2).

When node3 comes back online again, the VIP will also move back to node3 since it has the highest Keepalived priority, making it the asynchronous replication source again.
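
Once that happens, the same checks as before should confirm it, e.g., the VIP 192.168.122.170 back on node3 and Source_Server_Id back to 173:

node3> ip addr show dev ens2 | grep 'inet '
node4> f_replica_status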


Nov 01, 2023

Is ANALYZE TABLE Safe on a Busy MySQL Database Server?

Sometimes, there is a need to update the table and index statistics manually using the ANALYZE TABLE command. Without going further into the reasons for such a need, I wanted to refresh this subject in terms of the overhead related to running the command on production systems. The overhead discussed here, however, is unrelated to the usual cost of diving into table rows to gather statistics, which we can control by setting the number of sampled pages.
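
For InnoDB, that sampling cost can be tuned globally via innodb_stats_persistent_sample_pages or per table with the STATS_SAMPLE_PAGES option; for example (sbtest1 is just an illustrative table):

mysql > SET GLOBAL innodb_stats_persistent_sample_pages = 64;
mysql > ALTER TABLE sbtest1 STATS_SAMPLE_PAGES = 64;
mysql > ANALYZE TABLE sbtest1;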

Five years ago, my colleague Sveta posted a nice blog post about an improvement introduced in Percona Server for MySQL to address unnecessary stalls related to running the command:

ANALYZE TABLE Is No Longer a Blocking Operation

Historically, the problem with running the ANALYZE TABLE command in MySQL was that the query needed an exclusive lock on the table definition cache entry for the table. This makes the query wait for any long-running queries to finish, but it can also trigger cascading waits for other incoming requests. In short, ANALYZE could lead to nasty stalls in busy production environments.
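
The stall is easy to reproduce with three sessions, which is exactly what the processlist outputs below show (a sysbench-style sbtest1 table is assumed):

-- session 1: a long-running query keeps the table in use
mysql > select avg(k) from sbtest1 where pad not like '%f%' group by c;

-- session 2: ANALYZE now waits for the table flush
mysql > analyze table sbtest1;

-- session 3: even a normally instant point select queues up behind it
mysql > select * from sbtest1 where id=100;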

A lot has changed since then, but many production systems alive today still run with affected versions. Let’s recap how the situation has evolved over the years. 

MySQL Server – Community Edition

The problem applies to all versions of upstream MySQL Community up to 8.0.23. There were no improvements in the 5.7 series (by the way, EOL will be reached this month!), which means even the latest 5.7.43 is affected. Here is an example scenario you may end up with:

mysql > select @@version,@@version_comment;
+-----------+------------------------------+
| @@version | @@version_comment            |
+-----------+------------------------------+
| 5.7.43    | MySQL Community Server (GPL) |
+-----------+------------------------------+
1 row in set (0.00 sec)

mysql > show processlist;
+----+----------+-----------+------+---------+------+-------------------------+----------------------------------------------------------------+
| Id | User     | Host      | db   | Command | Time | State                   | Info                                                           |
+----+----------+-----------+------+---------+------+-------------------------+----------------------------------------------------------------+
|  4 | msandbox | localhost | db1  | Query   |   54 | Sending data            | select avg(k) from sbtest1 where pad not like '%f%' group by c |
| 13 | msandbox | localhost | db1  | Query   |   29 | Waiting for table flush | analyze table sbtest1                                          |
| 17 | msandbox | localhost | db1  | Query   |    0 | starting                | show processlist                                               |
| 18 | msandbox | localhost | db1  | Query   |   15 | Waiting for table flush | select * from sbtest1 where id=100                             |
+----+----------+-----------+------+---------+------+-------------------------+----------------------------------------------------------------+
4 rows in set (0.00 sec)

One long query made the ANALYZE wait, but another, normally very fast query, is now waiting, too.

The same situation may happen in the MySQL 8.0 series, up to and including 8.0.23. Fortunately, a fix addressing this problem arrived in version 8.0.24. We can only read a rather restrained comment in the release notes about the wait being eliminated:

https://dev.mysql.com/doc/relnotes/mysql/8.0/en/news-8-0-24.html

Indeed, since version 8.0.24, a similar test during a long-running query results in instant query execution:

mysql > select @@version,@@version_comment;
+-----------+------------------------------+
| @@version | @@version_comment            |
+-----------+------------------------------+
| 8.0.24    | MySQL Community Server - GPL |
+-----------+------------------------------+
1 row in set (0.00 sec)

mysql > analyze table sbtest1;
+-------------+---------+----------+----------+
| Table       | Op      | Msg_type | Msg_text |
+-------------+---------+----------+----------+
| db1.sbtest1 | analyze | status   | OK       |
+-------------+---------+----------+----------+
1 row in set (0.00 sec)

However, we can still find a warning in the official documentation, even for the 8.1 version, like this:

ANALYZE TABLE removes the table from the table definition cache, which requires a flush lock. If there are long running statements or transactions still using the table, subsequent statements and transactions must wait for those operations to finish before the flush lock is released. Because ANALYZE TABLE itself typically finishes quickly, it may not be apparent that delayed transactions or statements involving the same table are due to the remaining flush lock.

I requested an update of the related bug report and reported the documentation problem accordingly:

https://bugs.mysql.com/bug.php?id=87065
https://bugs.mysql.com/bug.php?id=112670

Percona Server for MySQL

As mentioned above, Percona introduced a fix that removed the unnecessary table definition cache lock as a result of solving this bug report:

https://jira.percona.com/browse/PS-2503

When using the Percona variant, running ANALYZE TABLE has been safe since versions 5.6.38 and 5.7.20, as these were the active development series at the time. You may read the announcement in the release notes here:

https://docs.percona.com/percona-server/5.7/release-notes/Percona-Server-5.7.20-18.html#bugs-fixed

Percona Server for MySQL 8.0 has been free from the issue since the very first release (I tested back to the first GA release, 8.0.13-3), as the improvement was merged from the Percona Server for MySQL 5.7 series.

MariaDB server

The locking ANALYZE TABLE problem applies to all MariaDB versions up to 10.5.3. In version 10.5.4, the solution from Percona was implemented as described in the following report:
https://jira.mariadb.org/browse/MDEV-15101

Therefore, when you run the query in 10.5.3 or lower, or in any previous series (even the latest 10.4.31), a similar situation may occur:

mysql > select @@version,@@version_comment;
+----------------+-------------------+
| @@version      | @@version_comment |
+----------------+-------------------+
| 10.5.3-MariaDB | MariaDB Server    |
+----------------+-------------------+
1 row in set (0.000 sec)

mysql > show processlist;
+----+----------+-----------+------+---------+------+-------------------------+----------------------------------------------------------------+----------+
| Id | User     | Host      | db   | Command | Time | State                   | Info                                                           | Progress |
+----+----------+-----------+------+---------+------+-------------------------+----------------------------------------------------------------+----------+
|  4 | msandbox | localhost | db1  | Query   |   18 | Sending data            | select avg(k) from sbtest1 where pad not like '%f%' group by c |    0.000 |
| 13 | msandbox | localhost | db1  | Query   |   16 | Waiting for table flush | analyze table sbtest1                                          |    0.000 |
| 14 | msandbox | localhost | db1  | Query   |   14 | Waiting for table flush | select * from sbtest1 where id=100                             |    0.000 |
| 15 | msandbox | localhost | NULL | Query   |    0 | starting                | show processlist                                               |    0.000 |
+----+----------+-----------+------+---------+------+-------------------------+----------------------------------------------------------------+----------+
4 rows in set (0.000 sec)

mysql > select @@version,@@version_comment;
+-----------------+-------------------+
| @@version       | @@version_comment |
+-----------------+-------------------+
| 10.4.31-MariaDB | MariaDB Server    |
+-----------------+-------------------+
1 row in set (0.000 sec)

mysql > show processlist;
+----+-------------+-----------+------+---------+------+--------------------------+----------------------------------------------------------------+----------+
| Id | User        | Host      | db   | Command | Time | State                    | Info                                                           | Progress |
+----+-------------+-----------+------+---------+------+--------------------------+----------------------------------------------------------------+----------+
|  1 | system user |           | NULL | Daemon  | NULL | InnoDB purge coordinator | NULL                                                           |    0.000 |
|  2 | system user |           | NULL | Daemon  | NULL | InnoDB purge worker      | NULL                                                           |    0.000 |
|  3 | system user |           | NULL | Daemon  | NULL | InnoDB purge worker      | NULL                                                           |    0.000 |
|  4 | system user |           | NULL | Daemon  | NULL | InnoDB purge worker      | NULL                                                           |    0.000 |
|  5 | system user |           | NULL | Daemon  | NULL | InnoDB shutdown handler  | NULL                                                           |    0.000 |
|  9 | msandbox    | localhost | db1  | Query   |   18 | Sending data             | select avg(k) from sbtest1 where pad not like '%f%' group by c |    0.000 |
| 18 | msandbox    | localhost | db1  | Query   |   16 | Waiting for table flush  | analyze table sbtest1                                          |    0.000 |
| 19 | msandbox    | localhost | db1  | Query   |   12 | Waiting for table flush  | select * from sbtest1 where id=100                             |    0.000 |
| 22 | msandbox    | localhost | NULL | Query   |    0 | Init                     | show processlist                                               |    0.000 |
+----+-------------+-----------+------+---------+------+--------------------------+----------------------------------------------------------------+----------+
9 rows in set (0.000 sec)

Summary

As long as your database runs on the most recent version of the MySQL or MariaDB variant, running ANALYZE TABLE should be absolutely safe and not cause any unexpected stalls.

Users of all three major Percona Server for MySQL series (5.6.38+, 5.7.20+, and 8.0.x) are safe.

When, for any reason, you are not able to upgrade the community variant to the latest MySQL 8.0.24+ yet and have to stick with the 5.6 or 5.7 series for now, you may simply swap the MySQL Community binaries for Percona Server for MySQL ones, which are 100% compatible yet free from the problem. At the same time, you may check our post-EOL support for Percona Server for MySQL 5.7.

MariaDB users must upgrade to 10.5.4 or later to avoid the locking problem.


Aug 04, 2023

Database Server Lifecycle for MySQL and MariaDB

In this blog post, we will look at the lifecycle and release management for MySQL and MariaDB servers —  where we are now and relevant historical background.

It is worth noting both MySQL and MariaDB have Community and Enterprise versions.  For MySQL, both releases are made by the same company (Oracle), follow the same version numbering, and the Enterprise version is a superset of what is available in Community. For MariaDB, the Community version is provided by MariaDB Foundation, while Enterprise is provided by MariaDB PLC, following its own lifecycle, and has a different feature set. To keep things simple, we will focus our attention on the Community versions. 

MariaDB

As you probably are well aware, MariaDB started as a MySQL fork, and in the early days, things were rather similar. Things started to diverge significantly back in 2014 when MariaDB 10 was released. This was a departure from matching MySQL versions, as had been the case with MariaDB 5.1 and MariaDB 5.5.

Getting its own version track allowed MariaDB to innovate at its own (faster) pace without confusing users who, because of the shared roots, expected some kind of compatibility between MySQL and MariaDB of the same version. (It is worth noting that MariaDB 5.2 and MariaDB 5.3 existed, too, while there were no matching MySQL versions.)

MariaDB started to move fast. MariaDB 10.1 was released the next year, in 2015, and MariaDB 10.2 in 2017; after that, major releases came every one to two years, with MariaDB 10.6 released in 2021. This fast pace of development, however, was combined with long-term support of five years for all releases, which meant the many releases to maintain created an undue burden for the engineering team.

To address this burden, a new Innovation Release Model was launched at the end of 2021, similar to how Ubuntu Linux is developed: quarterly releases are maintained for one year only, while select long-term support releases are supported for at least five years, as before. Short-term and long-term support releases follow the same version pattern, so you really need to know which is which.

Additionally, MariaDB recently changed the leading version from 10 to 11. As Kaj Arno explains, extensive changes to the Optimizer and its cost model are the main reason for this change.

The MariaDB 11 series does not yet have any long-term support (LTS) releases, which are the releases most would consider for running mission-critical databases in production. The latest LTS release is MariaDB 10.11, to be supported until February 2028.

MySQL

Under Oracle’s leadership, MySQL at first continued to follow the lifecycle it had followed for a while: every couple of years, there would be a feature release, and then there would be binary-compatible “bugfix only” minor releases. This was the case for MySQL 5.5, MySQL 5.6, and MySQL 5.7.

This release cycle had the benefit of stability; minor release upgrades were rather low risk, and if you needed to roll back, you could do it by quickly swapping out the binary without needing to do anything to your data. As with everything, though, there are tradeoffs — and the downside of this approach was the slow rollout of new features and big changes between major releases, making upgrades potentially messy and time-consuming.

With MySQL 8, this approach has drastically changed.  MySQL 8 became what seemed like a “forever release.”  While the initial GA release came out in April 2018, we have not seen a new major release for five years!  This does not mean there’s no innovation in MySQL 8; on the contrary, MySQL 8 now is very different from the one released back in 2018 because, in every minor release, new features were introduced together with bug fixes.

If you are someone who loves getting new features faster, you would love this new approach. In theory, it also means less risk with those “feature release” upgrades, which contain a few months of development work, as compared to years of work for major releases in the past. This is not how things worked out in practice, though, as some of the releases contained new features with bugs critical enough to warrant release recalls. What is worse, MySQL 8 also maintained only roll-forward binary compatibility, so once you upgrade to a new MySQL version, there is no guarantee the previous version will be able to operate on the same data.

It took a while, but the MySQL team recognized that the MySQL 8 approach is not something you just need to get used to, but rather something that does not work for some database environments; so, moving forward, a new release model was introduced. This model introduces “Innovation Releases,” which come out approximately quarterly and where only the latest Innovation Release is supported (i.e., any bug fixes are rolled up with new features and shipped as the next Innovation release, similar to how MySQL 8.0 operates now). The other kind of release will be Long Term Support (LTS) releases, which will come out every couple of years and will be supported by Oracle for eight years (five Standard + three Extended).

MySQL LTS Releases will operate similarly to how MySQL operated before MySQL 8. The Innovation Releases are somewhat similar to the “milestone releases” the MySQL team used at some point, but where Milestone Releases were not considered “Production Ready” and were intended for Development and Preview,  Innovation Releases are considered “production-grade quality.”

MySQL 8.0 has a special place in this release model. Currently, it is basically an Innovation-style release, but with MySQL 8.0.34, it becomes an LTS release, getting bug fixes only.

Differences between MySQL and MariaDB approaches

It is interesting to see that both communities seem to have come to the understanding that we need both a high pace of innovation AND stability. You can’t really have it both ways in the same release series, and you also need to keep support and maintenance costs under control; hence, you can’t have too many actively supported releases.

Both MariaDB and MySQL came to the conclusion that they need LTS releases and, at the same time, releases that focus on the speed of innovation.

The cadence of LTS releases is likely to be similar between MySQL and MariaDB, too. MySQL expects LTS releases to come out approximately every two years, which is similar to MariaDB’s “at least every other year.” The difference is that MariaDB also collaborates with major Linux distributions to align MariaDB LTS releases with Linux distribution LTS release plans, while MySQL has not stated any such goals.

Where a difference exists is in how non-LTS releases are approached. Where MariaDB goes the “Short Term Support” route, with “bugfix only” releases for a limited time, MySQL chooses a path of rolling Innovation releases where bug fixes are included only with the latest Innovation release. It will be interesting to see how those choices work out: the MariaDB approach is more user friendly, as it gives users more control over when to upgrade to the next feature release, whereas the MySQL approach reduces the effort needed to support releases.

Another important difference is what kind of upgrades are supported. Where MySQL supports upgrades to the next major version only (i.e., you can’t upgrade from MySQL 5.6 to MySQL 8 directly), MariaDB supports skipping major versions in an upgrade.

Percona plans

As you read this on the Percona Blog, you may be interested in Percona’s plan for Percona Server for MySQL regarding announced changes.  The short answer is that we will follow the newly announced Innovation Release model and will produce LTS and Innovation releases, too. For more details, check out the blog post, LTS and Innovation Releases for Percona Server for MySQL.


Jul 27, 2023

LTS and Innovation Releases for Percona Server for MySQL

On July 18th, Oracle released its first pair of MySQL LTS and Innovation releases. (For more, check out A Quick Peek at MySQL 8.0.34 and MySQL 8.1.0.) These releases were announced several months ago and gradually detailed closer to the release date. Today, we know from Oracle’s official blog post what to expect, as well as what the cadence and scope of the upcoming releases will be.

So, the next immediate question that comes to mind is: Is Percona going to follow the same release pattern as “upstream” MySQL?

The short answer is “yes.”

We are proud to say that over the last several years, Percona delivered on its promise of providing the MySQL community with a drop-in replacement that offers enterprise features without vendor lock-in.

The key word is “drop-in,” which means that over the years, we have made an effort to keep Percona Server for MySQL fully compatible with its upstream counterpart from Oracle. In simple terms, you can replace the binaries of MySQL Community Edition with Percona Server for MySQL, and, provided that you use the same version (major and minor), Percona software will smoothly pick up where the Community Edition left off.
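
In practice, on a Debian/Ubuntu host, that swap can look roughly like the sketch below; this is an outline under the assumption that the percona-release repository package is already set up (see our installation posts), and, as always, back up your data and my.cnf first:

$ sudo systemctl stop mysql
$ sudo percona-release setup ps80
$ sudo apt update
$ sudo apt install percona-server-server percona-server-client percona-server-common
$ sudo systemctl start mysql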

Our goal is to continue on that path. That means you can expect to see equivalent Percona releases following the LTS and Innovation release patterns and the same support lifecycle. You can expect these releases to be available within several weeks of the upstream releases.

That leads to another question: Will other components of Percona Distribution for MySQL follow the same release patterns?

Percona Distribution for MySQL is a compilation of several open source products, the most notable being Percona Server for MySQL, Percona XtraBackup, and Percona XtraDB Cluster. We are determined to make sure every element of Percona Distribution for MySQL is fully compatible with both LTS and Innovation releases.

At the moment, we are unclear whether it will be necessary to issue separate new versions of the packages for each Innovation release. This will greatly depend on the level of backward-compatibility-breaking changes introduced in each release. But at the very least, we will make it clear which LTS and Innovation releases Percona Distribution for MySQL is compatible with.

 


Jun 15, 2023

How to Install or Upgrade Percona Server for MySQL/MySQL 8 to a Specific Version on Debian/Ubuntu

Sometimes we might need to install or upgrade Percona Server for MySQL/MySQL 8 to a particular version in a test or production environment. The reason might be application requirements, compatibility issues, or MySQL bug fixes, or we may want the same MySQL version installed on all database instances in the cluster, regardless of what the latest available version actually is.

In this post, we’ll show you how to achieve this with the following options:

  • Install Percona Server for MySQL 8 specific version packages via repository.
  • Download specific Percona Server for MySQL 8 tarball packages and install them manually.

Let’s start and test on Debian 11.

1. Install Percona Server for MySQL 8 specific version on Debian 11 via repository

1.1 Update the package repositories

$  sudo apt update


1.2  Install the curl download utility

$  sudo apt install curl 
Reading package lists... Done
…….
Unpacking curl (7.74.0-1.3+deb11u7) ...
Setting up curl (7.74.0-1.3+deb11u7) ...
Processing triggers for man-db (2.9.4-2) …

1.3 Download the percona-release repository package, and install it

$   curl -O https://repo.percona.com/apt/percona-release_latest.generic_all.deb                    
% Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
   Dload  Upload   Total   Spent    Left  Speed
100 11804  100 11804    0     0  21619      0 --:--:-- --:--:-- --:--:-- 21619


$ sudo apt install gnupg2 lsb-release    ./percona-release_latest.generic_all.deb          
 Reading package lists... Done
…..
The following NEW packages will be installed:
…
After this operation, 501 kB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 /root/percona-release_latest.generic_all.deb percona-release all 1.0-27.generic [.....
Setting up percona-release (1.0-27.generic) ...
* Enabling the Percona Original repository
<*> All done!
==> Please run "apt-get update" to apply changes
* Enabling the Percona Release repository
<*> All done!
==> Please run "apt-get update" to apply changes
The percona-release package now contains a percona-release script that can enable additional repositories for our newer products.
For example, to enable the Percona Server 8.0 repository use:
 percona-release setup ps80
Note: To avoid conflicts with older product versions, the percona-release setup command may disable our original repository for some products.
For more information, please visit:
  https://www.percona.com/doc/percona-repo-config/percona-release.html
Processing triggers for man-db (2.9.4-2) ..

1.4 Refresh the local cache to update the package information

$  sudo apt update  
Hit:1 http://security.debian.org/debian-security bullseye-security InRelease
Hit:2 http://deb.debian.org/debian bullseye InRelease

1.5 Set up the repository for the Percona Server for MySQL 8.0

$  sudo percona-release setup ps80    
* Disabling all Percona Repositories
* Enabling the Percona Server 8.0 repository
* Enabling the Percona Tools repository
Hit:1 http://security.debian.org/debian-security bullseye-security InRelease
/etc/apt/sources.list:12 and /etc/apt/sources.list:18

$ sudo apt update
Hit:1 http://deb.debian.org/debian bullseye InRelease
….
Building dependency tree... Done

1.6  Check the available Percona Server for MySQL 8.0 version 

$ sudo apt list -a percona-server-server
Listing... Done
percona-server-server/stable 8.0.32-24-1.bullseye amd64
percona-server-server/stable 8.0.31-23-1.bullseye amd64
percona-server-server/stable 8.0.30-22-1.bullseye amd64
percona-server-server/stable 8.0.29-21-1.bullseye amd64
percona-server-server/stable 8.0.28-20-1.bullseye amd64

1.7 Let’s install Percona Server for MySQL 8.0, specifically version 8.0.28

$ sudo apt install  percona-server-server=8.0.28-20-1.bullseye  percona-server-common=8.0.28-20-1.bullseye  percona-server-client=8.0.28-20-1.bullseye

Reading package lists... Done
The following NEW packages will be installed:
  debsums libaio1 libfile-fnmatch-perl libmecab2 percona-server-client percona-server-common percona-server-server
0 upgraded, 7 newly installed, 0 to remove and 65 not upgraded.
Need to get 68.1 MB of archives.
After this operation, 491 MB of additional disk space will be used.
Do you want to continue? [Y/n] y
update-alternatives: using /etc/mysql/mysql.cnf to provide /etc/mysql/my.cnf (my.cnf) in auto mode
Choose authentication method etc
 * Percona Server is distributed with several useful UDF (User Defined Function) from Percona Toolkit.
 * Run the following commands to create these functions:

mysql -e "CREATE FUNCTION fnv1a_64 RETURNS INTEGER SONAME 'libfnv1a_udf.so'"
mysql -e "CREATE FUNCTION fnv_64 RETURNS INTEGER SONAME 'libfnv_udf.so'"
mysql -e "CREATE FUNCTION murmur_hash RETURNS INTEGER SONAME 'libmurmur_udf.so'"
 * See http://www.percona.com/doc/percona-server/8.0/management/udf_percona_toolkit.html for more details
Created symlink /etc/systemd/system/multi-user.target.wants/mysql.service → /lib/systemd/system/mysql.service.
Processing triggers for man-db (2.9.4-2) ...
Processing triggers for libc-bin (2.31-13+deb11u6) ...

Here we go; we have installed Percona Server for MySQL 8.0 version 8.0.28.

$ mysql
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 8
Server version: 8.0.28-20 Percona Server (GPL), Release '20', Revision 'fd4b5a776a6'
Copyright (c) 2009-2022 Percona LLC and/or its affiliates
Copyright (c) 2000, 2022, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> 
$ tail -100 /var/log/mysql/error.log
2023-06-05T23:31:46.511264Z 0 [Warning] [MY-010918] [Server] 'default_authentication_plugin' is deprecated and will be removed in a future release. Please use authentication_policy instead.
2023-06-05T23:31:46.511283Z 0 [System] [MY-013169] [Server] /usr/sbin/mysqld (mysqld 8.0.28-20) initializing of server in progress as process 3909
…….
2023-06-05T23:35:04.393351Z 0 [System] [MY-011323] [Server] X Plugin ready for connections. Bind-address: '::' port: 33060, socket: /var/run/mysqld/mysqlx.sock
2023-06-05T23:35:04.393685Z 0 [System] [MY-010931] [Server] /usr/sbin/mysqld: ready for connections. Version: '8.0.28-20'  socket: '/var/run/mysqld/mysqld.sock'  port: 3306  Percona Server (GPL), Release '20', Revision 'fd4b5a776a6'.

1.8 If we need to upgrade Percona Server for MySQL 8.0 from version 8.0.28 to 8.0.30

We just need to stop MySQL and install it as below.

$ systemctl stop mysql 
$ cp  /etc/mysql/my.cnf   .    # backup  the my.cnf 
$ sudo apt install  percona-server-server=8.0.30-22-1.bullseye  percona-server-common=8.0.30-22-1.bullseye  percona-server-client=8.0.30-22-1.bullseye
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following packages will be upgraded:
  percona-server-client percona-server-common percona-server-server
3 upgraded, 0 newly installed, 0 to remove and 65 not upgraded.
Need to get 133 MB of archives.
After this operation, 10.1 MB of additional disk space will be used.
Get:1 http://repo.percona.com/ps-80/apt bullseye/main amd64 percona-server-server amd64 8.0.30-22-1.bullseye [112 MB]
…….
(Reading database ... 137005 files and directories currently installed.)
Preparing to unpack .../percona-server-server_8.0.30-22-1.bullseye_amd64.deb ...
Unpacking percona-server-server (8.0.30-22-1.bullseye) over (8.0.28-20-1.bullseye) ...
Preparing to unpack .../percona-server-client_8.0.30-22-1.bullseye_amd64.deb ...
Unpacking percona-server-client (8.0.30-22-1.bullseye) over (8.0.28-20-1.bullseye) ...
Preparing to unpack .../percona-server-common_8.0.30-22-1.bullseye_amd64.deb ...
Unpacking percona-server-common (8.0.30-22-1.bullseye) over (8.0.28-20-1.bullseye) ...
Setting up percona-server-common (8.0.30-22-1.bullseye) ...
Setting up percona-server-client (8.0.30-22-1.bullseye) ...
Setting up percona-server-server (8.0.30-22-1.bullseye) ...
…….

 * See http://www.percona.com/doc/percona-server/8.0/management/udf_percona_toolkit.html for more details
Processing triggers for man-db (2.9.4-2) ...
Processing triggers for libc-bin (2.31-13+deb11u6) ...
$tail -20 /var/log/mysql/error.log
2023-06-05T23:34:27.813809Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.0.28-20) starting as process 4103
2023-06-05T23:34:27.907429Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started.
….
2023-06-06T00:01:09.725417Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.0.30-22) starting as process 4657
2023-06-06T00:01:09.751320Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started.
2023-06-06T00:01:43.835245Z 1 [System] [MY-013577] [InnoDB] InnoDB initialization has ended.
2023-06-06T00:01:53.233780Z 4 [System] [MY-013381] [Server] Server upgrade from '80028' to '80030' started.
2023-06-06T00:02:10.305075Z 4 [System] [MY-013381] [Server] Server upgrade from '80028' to '80030' completed.
2023-06-06T00:02:19.030342Z 0 [Warning] [MY-010068] [Server] CA certificate ca.pem is self signed.
2023-06-06T00:02:19.030374Z 0 [System] [MY-013602] [Server] Channel mysql_main configured to support TLS. Encrypted connections are now supported for this channel.
2023-06-06T00:02:19.076960Z 0 [System] [MY-011323] [Server] X Plugin ready for connections. Bind-address: '::' port: 33060, socket: /var/run/mysqld/mysqlx.sock
2023-06-06T00:02:19.080729Z 0 [System] [MY-010931] [Server] /usr/sbin/mysqld: ready for connections. Version: '8.0.30-22'  socket: '/var/run/mysqld/mysqld.sock'  port: 3306  Percona Server (GPL), Release '22', Revision '7e301439b65'.

From the error log, we can see that MySQL Server has been upgraded from 8.0.28 to 8.0.30.

Next, let’s install the Percona Server for MySQL 8 specific version with the tarball.

2. Download the specific Percona Server for MySQL 8 tarball packages and install them manually

2.1 Install Percona Server for MySQL 8.0.31 with tarball packages

$sudo apt-get install libaio1
$ls -al 
percona-server-client_8.0.31-23-1.buster_amd64.deb
percona-server-common_8.0.31-23-1.buster_amd64.deb
percona-server-server_8.0.31-23-1.buster_amd64.deb
$sudo dpkg -i *deb
$ tail -100  /var/log/mysql/error.log
2023-06-07T05:41:22.968218Z 0 [Warning] [MY-010918] [Server] 'default_authentication_plugin' is deprecated and will be removed in a future release. Please use authentication_policy instead.
2023-06-07T05:41:22.968297Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.0.31-23) starting as process 15337
2023-06-07T05:41:22.986337Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started.
2023-06-07T05:49:49.026256Z 1 [System] [MY-013577] [InnoDB] InnoDB initialization has ended.
2023-06-07T05:54:19.344027Z 0 [System] [MY-010931] [Server] /usr/sbin/mysqld: ready for connections. Version: '8.0.31-23'  socket: '/var/run/mysqld/mysqld.sock'  port: 3306  Percona Server (GPL), Release '23', Revision '71449379'.

2.2  Upgrade Percona Server for MySQL 8.0.31 to 8.0.32 with tarball packages 

$ ls 
percona-server-client_8.0.32-24-1.bullseye_amd64.deb
percona-server-common_8.0.32-24-1.bullseye_amd64.deb
percona-server-server_8.0.32-24-1.bullseye_amd64.deb

$dpkg -i *.deb
(Reading database ... 137007 files and directories currently installed.)
Preparing to unpack percona-server-client_8.0.32-24-1.bullseye_amd64.deb ...
Unpacking percona-server-client (8.0.32-24-1.bullseye) over (8.0.31-23-1.buster) ...
Preparing to unpack percona-server-common_8.0.32-24-1.bullseye_amd64.deb ...
Unpacking percona-server-common (8.0.32-24-1.bullseye) over (8.0.31-23-1.buster) ...
Preparing to unpack percona-server-server_8.0.32-24-1.bullseye_amd64.deb ...
Unpacking percona-server-server (8.0.32-24-1.bullseye) over (8.0.31-23-1.buster) ...
Setting up percona-server-common (8.0.32-24-1.bullseye) ...
Setting up percona-server-client (8.0.32-24-1.bullseye) ...
Setting up percona-server-server (8.0.32-24-1.bullseye) ...

mysql>
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 10
Server version: 8.0.32-24 Percona Server (GPL), Release '24', Revision 'e5c6e9d2'
mysql> \s
--------------
mysql  Ver 8.0.32-24 for Linux on x86_64 (Percona Server (GPL), Release '24', Revision 'e5c6e9d2')

tail -100  /var/log/mysql/error.log
2023-06-07T23:07:42.007827Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started.
2023-06-07T23:12:04.925749Z 1 [System] [MY-013577] [InnoDB] InnoDB initialization has ended.
2023-06-07T23:12:29.929705Z 4 [System] [MY-013381] [Server] Server upgrade from '80031' to '80032' started.
2023-06-07T23:12:42.281359Z 4 [System] [MY-013381] [Server] Server upgrade from '80031' to '80032' completed.
2023-06-07T23:12:43.321008Z 0 [Warning] [MY-010068] [Server] CA certificate ca.pem is self signed.
2023-06-07T23:12:43.321079Z 0 [System] [MY-013602] [Server] Channel mysql_main configured to support TLS. Encrypted connections are now supported for this channel.
2023-06-07T23:12:43.336283Z 0 [System] [MY-011323] [Server] X Plugin ready for connections. Bind-address: '::' port: 33060, socket: /var/run/mysqld/mysqlx.sock
2023-06-07T23:12:43.336416Z 0 [System] [MY-010931] [Server] /usr/sbin/mysqld: ready for connections. Version: '8.0.32-24'  socket: '/var/run/mysqld/mysqld.sock'  port: 3306  Percona Server (GPL), Release '24', Revision 'e5c6e9d2'.

Takeaway one:

For the MySQL Community version, it seems only the latest version, 8.0.33, is available when installing via the repository; no other 8.0.x version is offered.

$ wget https://dev.mysql.com/get/mysql-apt-config_0.8.25-1_all.deb
$sudo dpkg -i mysql-apt-config_0.8.25-1_all.deb
$sudo apt-get update
$sudo apt list -a  mysql-server
Listing... Done
mysql-server/unknown 8.0.33-1debian11 amd64
$sudo apt-get install mysql-server

We can install/upgrade to a specific MySQL 8 version with deb packages as below.

$ wget https://downloads.mysql.com/archives/get/p/23/file/mysql-server_8.0.27-1debian11_amd64.deb-bundle.tar
$tar -xvf mysql-server_8.0.27-1debian11_amd64.deb-bundle.tar
$dpkg -i libmysqlclient21_8.0.27-1debian11_amd64.deb
$dpkg -i mysql-common_8.0.27-1debian11_amd64.deb
$dpkg -i mysql-community-client-plugins_8.0.27-1debian11_amd64.deb
$dpkg -i mysql-community-client-core_8.0.27-1debian11_amd64.deb
$dpkg -i mysql-community-client_8.0.27-1debian11_amd64.deb
$dpkg -i mysql-client_8.0.27-1debian11_amd64.deb
$dpkg -i mysql-community-server-core_8.0.27-1debian11_amd64.deb
$dpkg -i mysql-community-server_8.0.27-1debian11_amd64.deb
$dpkg -i mysql-server_8.0.27-1debian11_amd64.deb

Takeaway two:

You might need to hold the MySQL 8 specific version packages to prevent apt-get upgrade or Update Manager from upgrading them to another version.

#hold the percona mysql version
sudo apt-mark  hold percona-server-client
sudo apt-mark  hold percona-server-common
sudo apt-mark  hold percona-server-server

#show hold
sudo apt-mark  showhold
percona-server-client
percona-server-common
percona-server-server

#unhold the percona mysql version, then we could upgrade to another version
sudo apt-mark  unhold percona-server-client
sudo apt-mark  unhold percona-server-common
sudo apt-mark  unhold percona-server-server

#For percona mysql, you could also replace step 1.5
$ sudo percona-release setup ps80
with
$ sudo percona-release setup pdps-8.0.28
This pins the repository to a specific version, and you will never get a newer version unless you enable another repository.


# Below is for the MySQL Community version
#show hold
$apt-mark showhold   
libdbd-mysql-perl
libmariadb3
mariadb-common
mysql-client
mysql-common
mysql-community-client
mysql-community-client-core
mysql-community-client-plugins
mysql-community-server
mysql-community-server-core
mysql-server

# hold the packages 
sudo apt-mark hold libdbd-mysql-perl
sudo apt-mark hold libmariadb3
sudo apt-mark hold mariadb-common
sudo apt-mark hold mysql-client
sudo apt-mark hold mysql-common
sudo apt-mark hold mysql-community-client
sudo apt-mark hold mysql-community-client-core
sudo apt-mark hold mysql-community-client-plugins
sudo apt-mark hold mysql-community-server
sudo apt-mark hold mysql-community-server-core
sudo apt-mark hold mysql-server
sudo apt-mark hold libmysqlclient-dev
sudo apt-mark hold libmysqlclient21

# unhold the packages 
sudo apt-mark unhold libdbd-mysql-perl
sudo apt-mark unhold libmariadb3
sudo apt-mark unhold mariadb-common
sudo apt-mark unhold mysql-client
sudo apt-mark unhold mysql-common
sudo apt-mark unhold mysql-community-client
sudo apt-mark unhold mysql-community-client-core
sudo apt-mark unhold mysql-community-client-plugins
sudo apt-mark unhold mysql-community-server
sudo apt-mark unhold mysql-community-server-core
sudo apt-mark unhold mysql-server
sudo apt-mark unhold libmysqlclient-dev
sudo apt-mark unhold libmysqlclient21

Conclusion

We can use the above methods to install/upgrade Percona Server for MySQL 8 to a specific version and hold/unhold the packages for management. Be sure to back up your database and my.cnf before you do that.
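
A minimal sketch of such a pre-change backup (the destination paths are illustrative):

$ sudo cp /etc/mysql/my.cnf ~/my.cnf.bak
$ mysqldump --all-databases --single-transaction --routines --triggers --events > ~/all_databases.sql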

Hope this is helpful for you when you start to work on MySQL 8.


Jun 01, 2023

Migrating From MariaDB to Percona Server for MySQL 8: The Security Witchcraft

Hello friends. On certain occasions, some clients, for whatever reason, ask us to migrate between supposedly “equivalent” engines within the MySQL family or ecosystem.

In this case, we will be analyzing the migration from MariaDB to Percona Server for MySQL 8 in particular. A comment worth clarifying: although Percona Server for MySQL is a drop-in replacement for MySQL, since it offers compatibility continuity (Percona Server for MySQL even offers certain features that the community version does not, for free!), the same does not happen with MariaDB, which, from a specific version on, distanced itself from MySQL and Percona Server for MySQL. Much has been said about the subject, and several interesting links on the ’net talk about it.

On this occasion, we will focus strictly on how the migration should be done, and in particular, I will “zoom in” (if you allow me the analogy) on everything related to security.

Based on our experience, the safest way to migrate from MariaDB to Percona Server for MySQL is a logical migration. For this, we recommend using mydumper, since it offers several advantages over mysqldump, such as parallelism, compression, and other features that make it really interesting.

With all this said, what would be the steps to follow at a general plan level? Basically, it would be something like:

  • Make a logical backup from MariaDB using mydumper.
  • Upload that dump to the Percona Server for MySQL using myloader.

Simple, right? No, my friend, it’s not that simple. Leaving aside certain complexities regarding compatible data types and the code created in the instance (stored procedures, functions, etc.), which we will not talk about in this blog, there is another fence that many underestimate, and it is essential: security.

Why do I bother talking about this? Because in MariaDB (as in Percona Server for MySQL 8), there are ROLES, and they may have been used. Roles are very convenient for grouping permissions and assigning them to users. That has already been discussed, and here we can see one example.

So what would the steps be like now? We would have to:

  • Make a backup of the security settings (users, roles, grants) of the “source” instance (MariaDB). The most practical/usual approach is to use pt-show-grants.
  • Once the users, roles, etc., have been created, execute the two steps mentioned above, but with the caveat that ONLY the application schemas will be exported/imported, AND NOT those of the internal data dictionary (such as sys, mysql, information_schema, and so on).

Alright, let’s go to the example to see if it’s as easy as it seems.

CentOS7_2 192.168.0.71 (CentOS 7.9 + Percona Server 8.0.32-24)
CentOS7_3 192.168.0.72 (CentOS 7.9 + MariaDB 10.11.3)

Let’s create the table and insert data into it.

MariaDB [test]> CREATE TABLE Persons (PersonID int primary key, LastName varchar(255), FirstName varchar(255), Address varchar(255), City varchar(255));
Query OK, 0 rows affected (0.116 sec)

mysql> insert into Persons values (1,'Joey','Koz','USA','N. Carolina');
Query OK, 1 row affected (0.00 sec)

mysql> insert into Persons values (2,'Wally','G','ARG','MZA');
Query OK, 1 row affected (0.02 sec)

mysql> insert into Persons values (3,'Fer','Matt','ARG','QUI');
Query OK, 1 row affected (0.00 sec)

Let’s create roles and users, and assign them:

MariaDB [(none)]> create role role_can_read;
Query OK, 0 rows affected (0.017 sec)

MariaDB [(none)]> create role role_can_write;
Query OK, 0 rows affected (0.012 sec)

MariaDB [(none)]> create role role_can_all;
Query OK, 0 rows affected (0.024 sec)

MariaDB [(none)]> grant select on test.* to role_can_read;
Query OK, 0 rows affected (0.015 sec)

MariaDB [(none)]> grant insert,delete,update on test.* to role_can_write;
Query OK, 0 rows affected (0.002 sec)

MariaDB [(none)]> create user user_want_read  identified by 'wantread';
Query OK, 0 rows affected (0.002 sec)

MariaDB [(none)]> create user user_want_write identified by 'wantwrite';
Query OK, 0 rows affected (0.019 sec)

MariaDB [(none)]> create user user_want_all   identified by 'wantnall';
Query OK, 0 rows affected (0.013 sec)

MariaDB [(none)]> grant role_can_read  to user_want_read;
Query OK, 0 rows affected (0.013 sec)

MariaDB [(none)]> grant role_can_write to user_want_write;
Query OK, 0 rows affected (0.033 sec)

MariaDB [(none)]> grant role_can_all   to user_want_all;
Query OK, 0 rows affected (0.013 sec)

We check everything:

MariaDB [(none)]> show grants for user_want_read;
+---------------------------------------------------------------------------------------------------------------+
| Grants for user_want_read@%                                                                                   |
+---------------------------------------------------------------------------------------------------------------+
| GRANT `role_can_read` TO `user_want_read`@`%`                                                                 |
| GRANT USAGE ON *.* TO `user_want_read`@`%` IDENTIFIED BY PASSWORD '*9084B8A46A36BE084E6FB06E1E750D596B15CC5D' |
+---------------------------------------------------------------------------------------------------------------+
2 rows in set (0.000 sec)

MariaDB [(none)]> show grants for user_want_write;
+----------------------------------------------------------------------------------------------------------------+
| Grants for user_want_write@%                                                                                   |
+----------------------------------------------------------------------------------------------------------------+
| GRANT `role_can_write` TO `user_want_write`@`%`                                                                |
| GRANT USAGE ON *.* TO `user_want_write`@`%` IDENTIFIED BY PASSWORD '*5C7B1B946CC5F748678A0BA197B9122AD7EA5634' |
+----------------------------------------------------------------------------------------------------------------+
2 rows in set (0.000 sec)

MariaDB [(none)]> show grants for user_want_all;
+--------------------------------------------------------------------------------------------------------------+
| Grants for user_want_all@%                                                                                   |
+--------------------------------------------------------------------------------------------------------------+
| GRANT `role_can_all` TO `user_want_all`@`%`                                                                  |
| GRANT USAGE ON *.* TO `user_want_all`@`%` IDENTIFIED BY PASSWORD '*210F0DA943585CF2372ADF07342C92824DAA9EE4' |
+--------------------------------------------------------------------------------------------------------------+
2 rows in set (0.000 sec)

And this is where all the magic begins.

On a third server, to avoid file transfers, etc., we execute the following commands:

a) The security stuff:

somewhere $ mkdir -p /home/percona/MIGRATION_SECURITY
somewhere $ cd /home/percona/MIGRATION_SECURITY
somewhere $ mysql -h 192.168.0.72 -u root -proot -Bsne "select concat('CREATE USER IF NOT EXISTS ',user,'@''',host,'''',' IDENTIFIED WITH mysql_native_password AS ''',authentication_string,''';') from mysql.user where is_role='N' and user <> 'mariadb.sys' union all select concat('CREATE ROLE IF NOT EXISTS ',user,';') from mysql.user where is_role='Y' order by 1 ;" 2>/dev/null > step1.sql
somewhere $ mysql -h 192.168.0.72 -u root -proot -Bsne "select concat('ALTER USER IF EXISTS ',user,'@''',host,'''',' IDENTIFIED WITH mysql_native_password AS ''',authentication_string,''';') from mysql.user where is_role='N' and user <> 'mariadb.sys'" 2>/dev/null > step2.sql
somewhere $ pt-show-grants -u root -p root -h 192.168.0.72 | egrep -v "Grants|IDENTIFIED|mariadb.sys|PROXY" > step3.sql
somewhere $ mysql -h 192.168.0.72 -u root -proot -Bsne "select concat('SHOW GRANTS FOR ',user,';') from mysql.user where is_role='Y' order by 1 ;" 2>/dev/null| grep -vi PUBLIC | mysql -Bsn -h 192.168.0.72 -u root -proot 2>/dev/null | sed 's/$/;/g' > step4.sql
somewhere $ for NUM in {1..4}; do mysql -h 192.168.0.71 -u root -proot -f -vve "source step$NUM.sql"; done

b) The data migration stuff (clarification: this “somewhere” must have enough disk space to hold the data we want to migrate).

somewhere $ mkdir -p /home/percona/MIGRATION_DATA
somewhere $ cd /home/percona/MIGRATION_DATA
somewhere $ mydumper -h 192.168.0.72 -u root -p root -v 3 --database=test --outputdir /home/percona/MIGRATION_DATA/
somewhere $ myloader -h 192.168.0.71 -u root -p root -v 3 --database=test --overwrite-tables -d /home/percona/MIGRATION_DATA/

And that’s it.

Now you will say: why didn’t you just use the output of MariaDB’s pt-show-grants directly and skip this whole series of witchcraft and cheap tricks? I’ll tell you, friend: unfortunately, the tool is not very friendly to MariaDB. Although it decently generates “something” as output, that output is insufficient, and the migration project would undoubtedly fail.

See the output generated by pt-show-grants (I’ve filtered headers and comments, not much) for the MariaDB instance:

somewhere $ pt-show-grants -u root -p root -h 192.168.0.72 | egrep -v "Grants|Dumped"
GRANT DELETE, SELECT ON `mysql`.`global_priv` TO `mariadb.sys`@`localhost`;
GRANT USAGE ON *.* TO `mariadb.sys`@`localhost`;
GRANT ALL PRIVILEGES ON *.* TO `root`@`%` IDENTIFIED BY PASSWORD '*81F5E21E35407D884A6CD4A731AEBFB6AF209E1B';
GRANT ALL PRIVILEGES ON *.* TO `root`@`localhost` IDENTIFIED BY PASSWORD '*81F5E21E35407D884A6CD4A731AEBFB6AF209E1B' WITH GRANT OPTION;
GRANT PROXY ON ''@'%' TO 'root'@'localhost' WITH GRANT OPTION;
GRANT `role_can_all` TO `root`@`localhost` WITH ADMIN OPTION;
GRANT `role_can_read` TO `root`@`localhost` WITH ADMIN OPTION;
GRANT `role_can_write` TO `root`@`localhost` WITH ADMIN OPTION;
GRANT USAGE ON *.* TO `user_want_all`@`%` IDENTIFIED BY PASSWORD '*210F0DA943585CF2372ADF07342C92824DAA9EE4';
GRANT `role_can_all` TO `user_want_all`@`%`;
GRANT USAGE ON *.* TO `user_want_read`@`%` IDENTIFIED BY PASSWORD '*9084B8A46A36BE084E6FB06E1E750D596B15CC5D';
GRANT `role_can_read` TO `user_want_read`@`%`;
GRANT USAGE ON *.* TO `user_want_write`@`%` IDENTIFIED BY PASSWORD '*5C7B1B946CC5F748678A0BA197B9122AD7EA5634';
GRANT `role_can_write` TO `user_want_write`@`%`;

  • Do you see the CREATE USER commands and the password authentication method needed in Percona Server for MySQL 8? I don’t (hence step1.sql).
  • Do you see any password handling that would work on Percona Server for MySQL 8 rather than only through MariaDB commands? I don’t think so (hence step2.sql).
  • The only step where we get the most out of the utility is the role assignment (step3.sql).
  • Do you see the GRANTs assigned to the schemas? I don’t (hence step4.sql).
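
To make this more concrete, here is the kind of statement the generated scripts contain. These two lines are reconstructed from the CONCAT() queries above, using the user_want_read account shown earlier; treat them as a sample, not the full script contents:

-- Sample line from step1.sql
CREATE USER IF NOT EXISTS user_want_read@'%' IDENTIFIED WITH mysql_native_password AS '*9084B8A46A36BE084E6FB06E1E750D596B15CC5D';

-- Sample line from step2.sql
ALTER USER IF EXISTS user_want_read@'%' IDENTIFIED WITH mysql_native_password AS '*9084B8A46A36BE084E6FB06E1E750D596B15CC5D';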

This is the output of pt-show-grants for the migrated instance (Percona Server for MySQL 8):

somewhere $ pt-show-grants -u root -p root -h 192.168.0.71 | egrep -v "Grants|IDENTIFIED|mariadb.sys|PROXY|Dumped|oles"
CREATE ROLE IF NOT EXISTS `role_can_all`;
CREATE ROLE IF NOT EXISTS `role_can_read`;
CREATE ROLE IF NOT EXISTS `role_can_write`;
CREATE USER IF NOT EXISTS `role_can_write`@`%`;
GRANT DELETE, INSERT, UPDATE ON `test`.* TO `role_can_write`@`%`;
GRANT USAGE ON *.* TO `role_can_write`@`%`;
CREATE USER IF NOT EXISTS `role_can_read`@`%`;
GRANT SELECT ON `test`.* TO `role_can_read`@`%`;
GRANT USAGE ON *.* TO `role_can_read`@`%`;
CREATE USER IF NOT EXISTS `role_can_all`@`%`;
GRANT USAGE ON *.* TO `role_can_all`@`%`;
GRANT `role_can_read`@`%`,`role_can_write`@`%` TO `role_can_all`@`%`;
CREATE USER IF NOT EXISTS `mysql.infoschema`@`localhost`;
GRANT AUDIT_ABORT_EXEMPT,FIREWALL_EXEMPT,SYSTEM_USER ON *.* TO `mysql.infoschema`@`localhost`;
GRANT SELECT ON *.* TO `mysql.infoschema`@`localhost`;
CREATE USER IF NOT EXISTS `mysql.session`@`localhost`;
GRANT AUDIT_ABORT_EXEMPT,AUTHENTICATION_POLICY_ADMIN,BACKUP_ADMIN,CLONE_ADMIN,CONNECTION_ADMIN,FIREWALL_EXEMPT,PERSIST_RO_VARIABLES_ADMIN,SESSION_VARIABLES_ADMIN,SYSTEM_USER,SYSTEM_VARIABLES_ADMIN ON *.* TO `mysql.session`@`localhost`;
GRANT SELECT ON `mysql`.`user` TO `mysql.session`@`localhost`;
GRANT SELECT ON `performance_schema`.* TO `mysql.session`@`localhost`;
GRANT SHUTDOWN, SUPER ON *.* TO `mysql.session`@`localhost`;
CREATE USER IF NOT EXISTS `mysql.sys`@`localhost`;
GRANT AUDIT_ABORT_EXEMPT,FIREWALL_EXEMPT,SYSTEM_USER ON *.* TO `mysql.sys`@`localhost`;
GRANT SELECT ON `sys`.`sys_config` TO `mysql.sys`@`localhost`;
GRANT TRIGGER ON `sys`.* TO `mysql.sys`@`localhost`;
GRANT USAGE ON *.* TO `mysql.sys`@`localhost`;
CREATE USER IF NOT EXISTS `root`@`%`;
GRANT ALTER, ALTER ROUTINE, CREATE, CREATE ROLE, CREATE ROUTINE, CREATE TABLESPACE, CREATE TEMPORARY TABLES, CREATE USER, CREATE VIEW, DELETE, DROP, DROP ROLE, EVENT, EXECUTE, FILE, INDEX, INSERT, LOCK TABLES, PROCESS, REFERENCES, RELOAD, REPLICATION CLIENT, REPLICATION SLAVE, SELECT, SHOW DATABASES, SHOW VIEW, SHUTDOWN, SUPER, TRIGGER, UPDATE ON *.* TO `root`@`%` WITH GRANT OPTION;
GRANT APPLICATION_PASSWORD_ADMIN,AUDIT_ABORT_EXEMPT,AUDIT_ADMIN,AUTHENTICATION_POLICY_ADMIN,BACKUP_ADMIN,BINLOG_ADMIN,BINLOG_ENCRYPTION_ADMIN,CLONE_ADMIN,CONNECTION_ADMIN,ENCRYPTION_KEY_ADMIN,FIREWALL_EXEMPT,FLUSH_OPTIMIZER_COSTS,FLUSH_STATUS,FLUSH_TABLES,FLUSH_USER_RESOURCES,GROUP_REPLICATION_ADMIN,GROUP_REPLICATION_STREAM,INNODB_REDO_LOG_ARCHIVE,INNODB_REDO_LOG_ENABLE,PASSWORDLESS_USER_ADMIN,PERSIST_RO_VARIABLES_ADMIN,REPLICATION_APPLIER,REPLICATION_SLAVE_ADMIN,RESOURCE_GROUP_ADMIN,RESOURCE_GROUP_USER,ROLE_ADMIN,SENSITIVE_VARIABLES_OBSERVER,SERVICE_CONNECTION_ADMIN,SESSION_VARIABLES_ADMIN,SET_USER_ID,SHOW_ROUTINE,SYSTEM_USER,SYSTEM_VARIABLES_ADMIN,TABLE_ENCRYPTION_ADMIN,XA_RECOVER_ADMIN ON *.* TO `root`@`%` WITH GRANT OPTION;
CREATE USER IF NOT EXISTS `root`@`localhost`;
GRANT ALTER, ALTER ROUTINE, CREATE, CREATE ROLE, CREATE ROUTINE, CREATE TABLESPACE, CREATE TEMPORARY TABLES, CREATE USER, CREATE VIEW, DELETE, DROP, DROP ROLE, EVENT, EXECUTE, FILE, INDEX, INSERT, LOCK TABLES, PROCESS, REFERENCES, RELOAD, REPLICATION CLIENT, REPLICATION SLAVE, SELECT, SHOW DATABASES, SHOW VIEW, SHUTDOWN, SUPER, TRIGGER, UPDATE ON *.* TO `root`@`localhost` WITH GRANT OPTION;
GRANT APPLICATION_PASSWORD_ADMIN,AUDIT_ABORT_EXEMPT,AUDIT_ADMIN,AUTHENTICATION_POLICY_ADMIN,BACKUP_ADMIN,BINLOG_ADMIN,BINLOG_ENCRYPTION_ADMIN,CLONE_ADMIN,CONNECTION_ADMIN,ENCRYPTION_KEY_ADMIN,FIREWALL_EXEMPT,FLUSH_OPTIMIZER_COSTS,FLUSH_STATUS,FLUSH_TABLES,FLUSH_USER_RESOURCES,GROUP_REPLICATION_ADMIN,GROUP_REPLICATION_STREAM,INNODB_REDO_LOG_ARCHIVE,INNODB_REDO_LOG_ENABLE,PASSWORDLESS_USER_ADMIN,PERSIST_RO_VARIABLES_ADMIN,REPLICATION_APPLIER,REPLICATION_SLAVE_ADMIN,RESOURCE_GROUP_ADMIN,RESOURCE_GROUP_USER,ROLE_ADMIN,SENSITIVE_VARIABLES_OBSERVER,SERVICE_CONNECTION_ADMIN,SESSION_VARIABLES_ADMIN,SET_USER_ID,SHOW_ROUTINE,SYSTEM_USER,SYSTEM_VARIABLES_ADMIN,TABLE_ENCRYPTION_ADMIN,XA_RECOVER_ADMIN ON *.* TO `root`@`localhost` WITH GRANT OPTION;
GRANT `role_can_all`@`%`,`role_can_read`@`%`,`role_can_write`@`%` TO `root`@`localhost` WITH ADMIN OPTION;
CREATE USER IF NOT EXISTS `user_want_all`@`%`;
GRANT USAGE ON *.* TO `user_want_all`@`%`;
GRANT `role_can_all`@`%` TO `user_want_all`@`%`;
CREATE USER IF NOT EXISTS `user_want_read`@`%`;
GRANT USAGE ON *.* TO `user_want_read`@`%`;
GRANT `role_can_read`@`%` TO `user_want_read`@`%`;
CREATE USER IF NOT EXISTS `user_want_write`@`%`;
GRANT USAGE ON *.* TO `user_want_write`@`%`;
GRANT `role_can_write`@`%` TO `user_want_write`@`%`;

It is what it should be.

Conclusion

Since MariaDB has drifted farther away from MySQL, migrating back to the MySQL ecosystem, and hence to Percona Server for MySQL, is not as straightforward as it could be. For the same reasons, Percona Toolkit cannot fully assist in the migration task. This blog post gives you the tricks needed for a successful migration.

Of course, you always have the chance to contact us and ask for assistance with any migration.  You can also learn how Percona experts can help you migrate to Percona Server for MySQL seamlessly here.

I hope you enjoyed the blog, and see you in the next one!

Percona Distribution for MySQL is the most complete, stable, scalable, and secure open source MySQL solution available, delivering enterprise-grade database environments for your most critical business applications… and it’s free to use!

 

Try Percona Distribution for MySQL today!

May
15
2023
--

Proof of Concept: Horizontal Write Scaling for MySQL With Kubernetes Operator

horizontal write scaling kubernetes

Historically, MySQL has been great at horizontal READ scaling. That scaling is provided by adding Replica nodes, no matter if you use standard asynchronous replication or synchronous replication.

However, those solutions do not offer the same level of scaling for write operations.

Why? Because they still rely on writing to one single node that acts as Primary. Even in the case of multi-Primary, the writes are only distributed by transaction. In both cases, when using virtually-synchronous replication, the process requires certification from each node and a local (per node) write; as such, the number of writes is NOT distributed across multiple nodes but duplicated.

The main reason behind this is that MySQL is a relational database management system (RDBMS), and any data that is going to be written into it must respect the RDBMS rules. In short, any data that is written must be consistent with the data already present. To achieve that, the new data needs to be checked against the existing data through the defined relations and constraints. This action can affect very large datasets and be very expensive. Think about updating a table with millions of rows that refers to another table with another million rows.

An image may help:

data model for ecommerce

Every time I insert an order, I must be sure that all the related elements are in place and consistent.

This operation is quite expensive, but our database can run it in a few milliseconds or less, thanks to several optimizations that allow the node to execute most of it in memory with little or no access to mass storage.

The key factor is that the whole data structure resides in the same location (node), facilitating the operations.

Once we have understood that, it also becomes clear why we cannot split relational data across multiple nodes and distribute the writes by table. If I have one node that manages only the items, another the orders, and another the payments, my solution needs to deal with distributed transactions, each of which needs to certify and verify data on the other nodes.

This level of distribution seriously affects the efficiency of the operation and increases the response time significantly. That is it. Nothing is impossible; however, performance will be so impacted that each operation may take seconds instead of milliseconds, or a fraction of one, unless we lift some of the rules, which breaks the relational model.

MySQL, like any other RDBMS, is designed to work respecting the model and cannot scale by fragmenting and distributing a schema, so what can be done to scale?

The alternative is to split a consistent set of data into fragments. What is a consistent set of data? It all depends on the kind of information we are dealing with. Keeping in mind the example above, where we have a shop online serving multiple customers, we need to identify which is the most effective way to split the data.

For instance, if we try to split the data by product type (Books, CD/DVD, etc.), we will have a huge duplication of data related to customers/orders/shipments and so on, and all that data is also quite dynamic, given customers are constantly ordering things.

Why duplicate the data? Because if I do not duplicate it, I will not know whether a customer has already bought a specific item, or I will have to ask again for the shipment address and so on. This also means that whenever a customer buys something or puts something on the wish list, I have to reconcile the data across all my nodes/clusters.

On the other hand, if I choose to split my data by the customer’s country of residence, the only data I will have to duplicate and keep in sync is the data related to the products, of which the most dynamic element is the number of items in stock. This, of course, is unless I can organize my products by country as well, which is a bit unusual nowadays but not impossible.

Another possible case: I am a health organization and I manage several hospitals. As in the example above, it is easier to split the data by hospital, given most of the data related to patients is bound to the hospital itself, as are treatments and any other element related to hospital management. In contrast, it makes no sense to split by the patient’s country of residence.

This technique of splitting the data into smaller pieces is called sharding and is currently the only way we have to scale RDBMS horizontally. 

In the MySQL open source ecosystem, we have only two consolidated ways to perform sharding — Vitess and ProxySQL. The first one is a complete solution that takes ownership of your database and manages almost any aspect of its operations in a sharded environment and includes a lot of specific features for DBAs to deal with daily operations like table modifications, backup, and more.

While this may look great, it also has some strings attached, including complexity and a proprietary environment. That makes Vitess a good fit for “complex” sharding scenarios where other solutions may not be enough.

ProxySQL does not have a sharding mechanism “per se,” but given the way it works and the features it has, it allows us to build simple sharding solutions.

It is important to note that most DBA operations will still fall on the DBA to execute, with increased complexity given the sharding environment.

There is a third option which is application-aware sharding.

This solution makes the application itself aware of the need to split the data into smaller fragments; internally, it points the data to different “connectors” connected to multiple data sources.

In this case, the application is aware of a customer’s country and redirects all the operations related to that customer to the data source responsible for the specific fragment.

Normally, this solution requires a full code redesign and can be quite difficult to achieve when injected after the initial code architecture definition.

On the other hand, if done at design time, it is probably the best solution because it allows the application to define the sharding rules and also to optimize the different data sources, using different technologies for different uses.

One example could be using an RDBMS for most of the online transaction processing (OLTP) data, sharded by country, and keeping the products in a distributed memory cache built on a different technology. At the same time, all the data related to orders, payments, and customer history can be consolidated in a data warehouse used to generate reporting.

As said, the last one is probably the most powerful and scalable, but also the most difficult to design, and unfortunately, it probably represents less than 5% of the solutions currently deployed.

Likewise, very few cases really need a full system/solution to provide scalability with sharding.

In my experience, most of the needs for horizontal scaling fall into the simple scenario, where there is a need for sharding and data separation, very often with a shared-nothing architecture. In shared-nothing, each shard can live in a totally separate logical schema instance / physical database server / data center / continent. There is no ongoing need to retain shared access (between shards) to the other unpartitioned tables in other shards.

The POC

Why this POC?

Over the years, I have faced a lot of customers talking about scaling their database solution and looking at very complex sharding solutions such as Vitess as the first and only way to go.

This, without even considering whether their needs were really driving them there.

In my experience, and in talking with several colleagues, I am not alone in this analysis of the real needs. After discussing with all the parties impacted, only a very small percentage of customers were in real need of complex solutions. Most of the others were just trying to avoid the project work of implementing a simple shared-nothing solution. Why? Because, apparently, it is simpler to migrate data to a platform that does everything for you than to accept a bit of additional work and challenge at the beginning while keeping a simple approach. Also, going for the latest shiny thing always has its magic.

On top of that, with the rise of Kubernetes and MySQL Operators, a lot of confusion started to circulate, most of which was generated by the total lack of understanding that a database and a relational database are two separate things. That lack of understanding of the difference, and of the real problems attached to an RDBMS, has brought some to talk about horizontal scaling for databases with a concerning superficiality and without clarifying whether they were talking about an RDBMS or not. As such, some clarification is long overdue, as well as putting the KISS principle back as the main focus.

Given that, I thought that refreshing how ProxySQL can help in building a simple sharding solution might help to clarify the issues, reset expectations, and show how we can do things in a simpler way. (See my old post, MySQL Sharding with ProxySQL.)

To do so, I built a simple POC that illustrates how you can use Percona Operator for MySQL (POM) and ProxySQL to build a sharded environment with a good level of automation for some standard operations like backup/restore, software upgrades, and resource scaling.

Why ProxySQL?

In the following example, we mimic a case where we need a simple sharding solution, which means we just need to redirect the data to different data containers, keeping the database maintenance operations on us. In this common case, we do not need to implement a full sharding system such as Vitess.  

As illustrated above, ProxySQL allows us to set up a common entry point for the application and then redirect the traffic on the base of identified sharding keys. It will also allow us to redirect read/write traffic to the primary and read-only traffic to all secondaries. 

The other interesting thing is that we can have ProxySQL as part of the application pod, or as an independent service. Best practices indicate that having ProxySQL closer to the application will be more efficient, especially if we decide to activate the caching feature.  
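
As a side note on the caching feature: it is driven by the same query rules mechanism we will use later for the sharding. A minimal sketch, assuming we wanted to cache a hot SELECT (the rule_id, pattern, and TTL are illustrative; cache_ttl is expressed in milliseconds):

-- On the ProxySQL admin interface: cache matching SELECTs for five seconds
INSERT INTO mysql_query_rules (rule_id, active, match_digest, cache_ttl, apply)
VALUES (70, 1, '^SELECT .* FROM windmills', 5000, 1);
LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL QUERY RULES TO DISK;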

Why POM?

Percona Operator for MySQL has three main solutions: Percona Operator for Percona XtraDB Cluster, Percona Operator for MySQL Group Replication, and Percona Operator for Percona Server for MySQL. The first two are based on virtually-synchronous replication and allow the cluster to keep the data state consistent across all pods, guaranteeing that the service will always offer consistent data. In the K8s context, we can see POM as a single service with native horizontal scalability for reads, while for writes, we will adopt the mentioned sharding approach. 

The other important aspect of using a POM-based solution is the automation it comes with. Deploying POM, you will be able to set automation for backups, software updates, monitoring (using Percona Monitoring and Management (PMM)), and last but not least, the possibility to scale UP or DOWN just by changing the needed resources. 

The elements used

kubernetes sharding

In our POC, I will use a modified version of sysbench (https://github.com/Tusamarco/sysbench) that has an additional field continent and I will use that as a sharding key. At the moment, and for the purpose of this simple POC, I will only have two shards.

As the diagram above illustrates, we have a simple deployment, but one good enough to illustrate the sharding approach.

We have:

  • The application node(s): it is really up to you whether you want to test with one application node or more; nothing will change. The same goes for the ProxySQL nodes, but keep in mind that if you use more ProxySQL nodes, it is better to activate the internal cluster support or use Consul to synchronize them.
  • Shard one is based on POM with PXC; it has the following:
    • Load balancer for service entry point
      • Entry point for r/w
      • Entry point for read-only
    • Three Pods for HAProxy
      • HAProxy container
      • PMM agent container
    • Three Pods with data nodes (PXC)
      • PXC cluster node container
      • Log streaming
      • PMM container
    • Backup/restore service
  • Shard two is based on POM for Percona Server for MySQL and Group Replication (technical preview)
    • Load balancer for service entry point
      • Entry point for r/w
      • Entry point for read-only
    • Three Pods for MySQL Router (testing)
      • MySQL Router container
    • Three Pods with data nodes (PS with GR)
      • PS-GR cluster node container
      • Log streaming
      • PMM container
    • Backup/restore on scheduler

Now, you may have noticed that the nodes are drawn in different sizes; this is not a drawing mistake. It indicates that I have allocated more resources (CPU and memory) to shard1 than to shard2. Why? Because I can, and because I am simulating a situation where shard2 gets less traffic, at least temporarily; as such, I do not want to give it the same resources as shard1. I will eventually increase them if I see the need.

The settings

Data layer

Let us start with the easy one, the data layer configuration. Configuring the environment correctly is the key, and to do so, I am using a tool that I wrote specifically to calculate the needed configuration in a K8s POM environment. You can find it here (https://github.com/Tusamarco/mysqloperatorcalculator). 

Once you have compiled it and run it, you can simply ask what “dimensions” are supported, or you can define a custom level of resources, but you will still need to indicate the expected load level. In any case, please refer to the README in the repository with all the instructions.

The full cr.yaml for PXC shard1 is here, while the one for PS-GR is here

For shard1, I asked for resources to cover traffic of type 2 (Light OLTP), configuration type 5 (2XLarge), and 1,000 connections.

For shard2, I asked for resources to cover traffic of type 2 (Light OLTP), configuration type 2 (Small), and 100 connections.

Once you have the CRs defined, you can follow the official guidelines to set the environment up.

It is time now to see the ProxySQL settings.

ProxySQL and sharding rules

As mentioned before, we will test the load sharding by continent, and we know that ProxySQL will not provide additional help to automatically manage the sharded environment. 

Given that, one way to do it is to create a DBA account per shard, or to inject the shard information into the commands while executing them. I will use the less comfortable option, separate DBA accounts, just to prove that it works.

We will have two shards: the sharding key is the continent field, and the continents will be grouped as follows:

  • Shard one:
    • Asia
    • Africa
    • Antarctica
    • Europe
    • North America
  • Shard two:
    • Oceania
    • South America

The DBAs users:

  • dba_g1
  • dba_g2

The application user:

  • app_test

The host groups will be:

  • Shard one
    • 100 Read and Write
    • 101 Read only
  • Shard two
    • 200 Read and Write
    • 201 Read only
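
For reference, this is roughly how the two shards’ entry points could be registered in the ProxySQL admin interface. The hostnames are placeholders for the load balancer endpoints, while the hostgroup IDs follow the scheme above:

-- Executed on the ProxySQL admin interface (hostnames are placeholders)
INSERT INTO mysql_servers (hostgroup_id, hostname, port)
VALUES (100, 'shard1-rw-endpoint', 3306),
       (101, 'shard1-ro-endpoint', 3306),
       (200, 'shard2-rw-endpoint', 3306),
       (201, 'shard2-ro-endpoint', 6447);
LOAD MYSQL SERVERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;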

Once that is defined, we need to identify which query rules will serve us and how. What we want is to redirect all the incoming queries for:

  • Asia, Africa, Antarctica, Europe, and North America to shard1.
  • Oceania and South America to shard2
  • Split the queries in R/W and Read only
  • Prevent the execution of any query that does not have a shard key
  • Backup data at regular intervals and store it in a safe place


Given the above, we first define the rules for the DBA accounts.

We set the hostgroup for each DBA, and then, if the query matches a sharding rule, we redirect it to the proper shard. Otherwise, the hostgroup remains as set.

This allows us to execute queries like CREATE/DROP TABLE on our shard without a problem, while still sending the data where needed.
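
A minimal sketch of those rules, as they could be entered in the ProxySQL admin interface. The rule IDs, the flagIN/flagOUT chaining, and the patterns mirror the stats output shown further below, but this is illustrative and not the complete rule set:

-- Each DBA account lands on its shard's hostgroup and enters the sharding chain
INSERT INTO mysql_query_rules (rule_id, active, username, destination_hostgroup, flagOUT, apply)
VALUES (20, 1, 'dba_g1', 100, 500, 0),
       (21, 1, 'dba_g2', 200, 600, 0);

-- If the sharding key is present, redirect to the proper shard and stop parsing
INSERT INTO mysql_query_rules (rule_id, active, flagIN, match_pattern, destination_hostgroup, apply)
VALUES (31, 1, 500, '\scontinent\s*(=|like)\s*''*(Asia|Africa|Antarctica|Europe|North America)''*', 100, 1),
       (32, 1, 500, '\scontinent\s*(=|like)\s*''*(Oceania|South America)''*', 200, 1);

LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL QUERY RULES TO DISK;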

For instance, the one below is the output of the queries that sysbench will run.

Prepare:

INSERT INTO windmills_test1 /*  continent=Asia */ (uuid,millid,kwatts_s,date,location,continent,active,strrecordtype) VALUES(UUID(), 79, 3949999,NOW(),'mr18n2L9K88eMlGn7CcctT9RwKSB1FebW397','Asia',0,'quq')

In this case, I have the application simply injecting a comment into the INSERT SQL declaring the shard key; given I am using the account dba_g1 to create/prepare the schemas, rules 31/32 will be used, and given I have set apply=1, ProxySQL will exit the query rules parsing and send the command to the relevant hostgroup.

Run:

SELECT id, millid, date,continent,active,kwatts_s FROM windmills_test1 WHERE id BETWEEN ? AND ? AND continent='South America'

SELECT SUM(kwatts_s) FROM windmills_test1 WHERE id BETWEEN ? AND ?  and active=1  AND continent='Asia'
SELECT id, millid, date,continent,active,kwatts_s  FROM windmills_test1 WHERE id BETWEEN ? AND ?  AND continent='Oceania' ORDER BY millid

SELECT DISTINCT millid,continent,active,kwatts_s   FROM windmills_test1 WHERE id BETWEEN ? AND ? AND active =1  AND continent='Oceania' ORDER BY millid

UPDATE windmills_test1 SET active=? WHERE id=?  AND continent='Asia'
UPDATE windmills_test1 SET strrecordtype=? WHERE id=?  AND continent='North America'

DELETE FROM windmills_test1 WHERE id=?  AND continent='Antarctica'

INSERT INTO windmills_test1 /* continent=Antarctica */ (id,uuid,millid,kwatts_s,date,location,continent,active,strrecordtype) VALUES (?, UUID(), ?, ?, NOW(), ?, ?, ?,?) ON DUPLICATE KEY UPDATE kwatts_s=kwatts_s+1

The above are executed during the tests. In all of them, the sharding key is present, either in the WHERE clause or as a comment.

Of course, if I execute one of them without the sharding key, the firewall rule will stop the query execution, i.e.:

mysql> SELECT id, millid, date,continent,active,kwatts_s FROM windmills_test1 WHERE id BETWEEN ? AND ?;
ERROR 1148 (42000): It is impossible to redirect this command to a defined shard. Please be sure you Have the Continent definition in your query, or that you use a defined DBA account (dba_g{1/2})
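
That custom error comes from a catch-all rule acting as a firewall: anything not matched by the sharding rules hits it and is rejected with the message above. A hedged sketch of such a rule, mirroring rule_id 2000 visible in the stats output later:

INSERT INTO mysql_query_rules (rule_id, active, match_digest, error_msg, apply)
VALUES (2000, 1, '.', 'It is impossible to redirect this command to a defined shard. Please be sure you Have the Continent definition in your query, or that you use a defined DBA account (dba_g{1/2})', 1);
LOAD MYSQL QUERY RULES TO RUNTIME;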


Check here for the full command list.

Setting up the dataset

Once the rules are set, it is time to set up the schemas and the data using sysbench (https://github.com/Tusamarco/sysbench). Remember to use windmills_sharding tests.  

The first operation is to build the schema on SHARD2 without filling it with data. This is a DBA action; as such, we will execute it using the dba_g2 account:

sysbench ./src/lua/windmills_sharding/oltp_read.lua  --mysql-host=10.0.1.96  --mysql-port=6033 --mysql-user=dba_g2 --mysql-password=xxx --mysql-db=windmills_large --mysql_storage_engine=innodb --db-driver=mysql --tables=4 --table_size=0 --table_name=windmills --mysql-ignore-errors=all --threads=1  prepare

Setting table_size=0 and pointing to the ProxySQL IP/port will do the job, and I will have the following:

mysql> select current_user(), @@hostname;
+----------------+-------------------+
| current_user() | @@hostname        |
+----------------+-------------------+
| dba_g2@%       | ps-mysql1-mysql-0 |
+----------------+-------------------+
1 row in set (0.01 sec)

mysql> use windmills_large;
Database changed
mysql> show tables;
+---------------------------+
| Tables_in_windmills_large |
+---------------------------+
| windmills1                |
| windmills2                |
| windmills3                |
| windmills4                |
+---------------------------+
4 rows in set (0.01 sec)

mysql> select count(*) from windmills1;
+----------+
| count(*) |
+----------+
|        0 |
+----------+
1 row in set (0.09 sec)

All set but empty.

Now let us do the same but with the other DBA user:

sysbench ./src/lua/windmills_sharding/oltp_read.lua  --mysql-host=10.0.1.96  --mysql-port=6033 --mysql-user=dba_g1 --mysql-password=xxx --mysql-db=windmills_large --mysql_storage_engine=innodb --db-driver=mysql --tables=4 --table_size=400 --table_name=windmills --mysql-ignore-errors=all --threads=1  prepare

If I now run the select above with user dba_g2:

mysql> select current_user(), @@hostname;select count(*) from windmills1;
+----------------+-------------------+
| current_user() | @@hostname        |
+----------------+-------------------+
| dba_g2@%       | ps-mysql1-mysql-0 |
+----------------+-------------------+
1 row in set (0.00 sec)

+----------+
| count(*) |
+----------+
|      113 |
+----------+
1 row in set (0.00 sec)

While if I reconnect and use dba_g1:

mysql> select current_user(), @@hostname;select count(*) from windmills1;
+----------------+--------------------+
| current_user() | @@hostname         |
+----------------+--------------------+
| dba_g1@%       | mt-cluster-1-pxc-0 |
+----------------+--------------------+
1 row in set (0.00 sec)

+----------+
| count(*) |
+----------+
|      287 |
+----------+
1 row in set (0.01 sec)

I can also check on ProxySQL to see which rules were utilized:

select active,hits,destination_hostgroup, mysql_query_rules.rule_id, match_digest, match_pattern, replace_pattern, cache_ttl, apply,flagIn,flagOUT FROM mysql_query_rules NATURAL JOIN stats.stats_mysql_query_rules ORDER BY mysql_query_rules.rule_id;

+------+-----------------------+---------+---------------------+----------------------------------------------------------------------------+-------+--------+---------+
| hits | destination_hostgroup | rule_id | match_digest        | match_pattern                                                              | apply | flagIN | flagOUT |
+------+-----------------------+---------+---------------------+----------------------------------------------------------------------------+-------+--------+---------+
| 3261 | 100                   | 20      | NULL                | NULL                                                                       | 0     | 0      | 500     |
| 51   | 200                   | 21      | NULL                | NULL                                                                       | 0     | 0      | 600     |
| 2320 | 100                   | 31      | NULL                | \scontinent\s*(=|like)\s*'*(Asia|Africa|Antarctica|Europe|North America)'* | 1     | 500    | 0       |
| 880  | 200                   | 32      | NULL                | \scontinent\s*(=|like)\s*'*(Oceania|South America)'*                       | 1     | 500    | 0       |
| 0    | 100                   | 34      | NULL                | \scontinent\s*(=|like)\s*'*(Asia|Africa|Antarctica|Europe|North America)'* | 1     | 600    | 0       |
| 0    | 200                   | 35      | NULL                | \scontinent\s*(=|like)\s*'*(Oceania|South America)'*                       | 1     | 600    | 0       |
| 2    | 100                   | 51      | NULL                | \scontinent\s*(=|like)\s*'*(Asia|Africa|Antarctica|Europe|North America)'* | 0     | 0      | 1001    |
| 0    | 200                   | 54      | NULL                | \scontinent\s*(=|like)\s*'*(Oceania|South America)'*                       | 0     | 0      | 1002    |
| 0    | 100                   | 60      | NULL                | NULL                                                                       | 0     | 50     | 1001    |
| 0    | 200                   | 62      | NULL                | NULL                                                                       | 0     | 60     | 1002    |
| 7    | NULL                  | 2000    | .                   | NULL                                                                       | 1     | 0      | NULL    |
| 0    | 100                   | 2040    | ^SELECT.*FOR UPDATE | NULL                                                                       | 1     | 1001   | NULL    |
| 2    | 101                   | 2041    | ^SELECT.*$          | NULL                                                                       | 1     | 1001   | NULL    |
| 0    | 200                   | 2050    | ^SELECT.*FOR UPDATE | NULL                                                                       | 1     | 1002   | NULL    |
| 0    | 201                   | 2051    | ^SELECT.*$          | NULL                                                                       | 1     | 1002   | NULL    |
+------+-----------------------+---------+---------------------+----------------------------------------------------------------------------+-------+--------+---------+

Running the application

Now that the data load test was successful, let us do the real load, following the indications above but using 80 tables and a few more records, like 20,000; nothing huge.

Once the data is loaded, we will have the two shards with different numbers of records. If all went well, shard2 should have ¼ of the total and shard1 ¾.

When the load is over, I have, as expected:

mysql> select current_user(), @@hostname;select count(*) as shard1 from windmills_large.windmills80;select /* continent=shard2 */ count(*) as shard2 from windmills_large.windmills80;
+----------------+--------------------+
| current_user() | @@hostname         |
+----------------+--------------------+
| dba_g1@%       | mt-cluster-1-pxc-0 |
+----------------+--------------------+
1 row in set (0.00 sec)

+--------+
| shard1 |
+--------+
|  14272 | ← Table windmills80 in SHARD1
+--------+
+--------+
| shard2 |
+--------+
|   5728 | ← Table windmills80 in SHARD2
+--------+

As you may have already noticed, I used a trick to query the other shard with the dba_g1 user: I just passed the shard2 definition in the query as a comment. That is all we need.

Let us execute the run command for writes in sysbench and see what happens.

The first thing we can notice while doing writes is the query distribution:

+--------+-----------+----------------------------------------------------------------------------+----------+--------+----------+----------+--------+---------+-------------+---------+
| weight | hostgroup | srv_host                                                                   | srv_port | status | ConnUsed | ConnFree | ConnOK | ConnERR | MaxConnUsed | Queries |
+--------+-----------+----------------------------------------------------------------------------+----------+--------+----------+----------+--------+---------+-------------+---------+
| 10000  | 100       | ac966f7d46c04400fb92a3603f0e2634-193113472.eu-central-1.elb.amazonaws.com  | 3306     | ONLINE | 24       | 0        | 138    | 66      | 25          | 1309353 |
| 100    | 101       | a5c8836b7c05b41928ca84f2beb48aee-1936458168.eu-central-1.elb.amazonaws.com | 3306     | ONLINE | 0        | 0        | 0      | 0       | 0           |       0 |
| 10000  | 200       | a039ab70e9f564f5e879d5e1374d9ffa-1769267689.eu-central-1.elb.amazonaws.com | 3306     | ONLINE | 24       | 1        | 129    | 66      | 25          |  516407 |
| 10000  | 201       | a039ab70e9f564f5e879d5e1374d9ffa-1769267689.eu-central-1.elb.amazonaws.com | 6447     | ONLINE | 0        | 0        | 0      | 0       | 0           |       0 |
+--------+-----------+----------------------------------------------------------------------------+----------+--------+----------+----------+--------+---------+-------------+---------+

Here we can notice that the connection load is evenly distributed, while the query load mainly goes to shard1, as we expected, given we have unbalanced sharding by design.

At the MySQL level, we had:

Charts: MySQL Questions and Com Type

The final point is, what is the gain of using this sharding approach?

Well, we still need to consider the fact we are testing on a very small set of data. However, if we can already identify some benefits here, that will be an interesting result. 

Let’s see the write operations with 24 and 64 threads:

MySQL writes

MySQL latency

We get a gain of ~33% just by using sharding, while for latency, we pay no cost. On the contrary, even with a small load increase, we can see how the sharded solution performs better. Of course, we are still talking about a low number of rows and running threads, but the gain is there.

Backup

The backup and restore operation when using POM is completely managed by the operator (see instructions in the POM documentation https://docs.percona.com/percona-operator-for-mysql/pxc/backups.html and https://docs.percona.com/percona-operator-for-mysql/ps/backups.html). 

The interesting part is that we can have multiple kinds of backup solutions, like:

  • On-demand
  • Scheduled 
  • Full Point in time recovery with log streaming

Automation will allow us to set a schedule as simple as this:

schedule:
  - name: "sat-night-backup"
    schedule: "0 0 * * 6"
    keep: 3
    storageName: s3-eu-west
  - name: "daily-backup"
    schedule: "0 3 * * *"
    keep: 7
    storageName: s3-eu-west

Or, if you want to run the on-demand:

kubectl apply -f backup.yaml

Where the backup.yaml file has very simple information:

apiVersion: ps.percona.com/v1alpha1
kind: PerconaServerMySQLBackup
metadata:
  name: ps-gr-sharding-test-2nd-of-may
#  finalizers:
#    - delete-backup
spec:
  clusterName: ps-mysql1
  storageName: s3-ondemand

Using both methods, we will soon be able to build a good set of backups, like:

POM (PXC)

cron-mt-cluster-1-s3-eu-west-20234293010-3vsve   mt-cluster-1   s3-eu-west    s3://mt-bucket-backup-tl/scheduled/mt-cluster-1-2023-04-29-03:00:10-full   Succeeded   3d9h        3d9h
cron-mt-cluster-1-s3-eu-west-20234303010-3vsve   mt-cluster-1   s3-eu-west    s3://mt-bucket-backup-tl/scheduled/mt-cluster-1-2023-04-30-03:00:10-full   Succeeded   2d9h        2d9h
cron-mt-cluster-1-s3-eu-west-2023513010-3vsve    mt-cluster-1   s3-eu-west    s3://mt-bucket-backup-tl/scheduled/mt-cluster-1-2023-05-01-03:00:10-full   Succeeded   33h         33h
cron-mt-cluster-1-s3-eu-west-2023523010-3vsve    mt-cluster-1   s3-eu-west    s3://mt-bucket-backup-tl/scheduled/mt-cluster-1-2023-05-02-03:00:10-full   Succeeded   9h          9h

POM (PS) *

NAME                             STORAGE       DESTINATION                                                                     STATE       COMPLETED   AGE
ps-gr-sharding-test              s3-ondemand   s3://mt-bucket-backup-tl/ondemand/ondemand/ps-mysql1-2023-05-01-15:10:04-full   Succeeded   21h         21h
ps-gr-sharding-test-2nd-of-may   s3-ondemand   s3://mt-bucket-backup-tl/ondemand/ondemand/ps-mysql1-2023-05-02-12:22:24-full   Succeeded   27m         27m

Note that as DBA, we still need to validate the backups with a restore procedure. That part is not automated (yet). 

*Note that Backup for POM PS is available only on demand, given the solution is still in technical preview.

When will this solution fit in?

As mentioned multiple times, this solution covers simple cases of sharding; it is better if you have a shared-nothing architecture.

It also requires work from the DBA side in case of DDL operations or resharding. 

You also need to be able to change some SQL code, to be sure the sharding key/information is present in any SQL executed.

When will this solution not fit in?

Several things could prevent you from using this solution. The most common ones are:

  • You need to query multiple shards at the same time. This is not possible with ProxySQL.
  • You do not have a DBA to perform administrative work and need to rely on an automated system.
  • You need distributed cross-shard transactions.
  • You have no access to the SQL code.

Conclusions

We do not face a Hamletic dilemma about sharding or not sharding.

When using an RDBMS like MySQL, if you need horizontal scalability, you need to shard. 

The point is that there is no magic wand or solution; moving to sharding is an expensive and impactful operation. If you choose it at the beginning, before doing any application development, the effort can be significantly less.

Doing it sooner will also allow you to test proper solutions, where proper means a KISS solution. Always go for the less complex option, because in two years you will be super happy about your decision.

If, instead, you must convert a current solution, then prepare for bloodshed, or at least for a long journey. 

In any case, we need to keep in mind a few key points:

  • Do not believe most of the articles on the internet that promise you infinite scalability for your database. If there is no distinction in the article between a simple database and an RDBMS, run away. 
  • Do not go for the latest shiny thing just because it shines. Test it and evaluate IF it makes sense for you. Better to spend a quarter testing a few solutions now than to fight for years with something that you do not fully comprehend.
  • Using containers/operators/Kubernetes does not scale per se; you must find a mechanism to make the solution scale. There is absolutely NO difference with on-premises deployments. What you may get is a good level of automation. However, that will come with a good level of complexity, and it is up to you to evaluate if it makes sense or not.

As said at the beginning, for MySQL the choice is limited. Vitess is the full, complete solution, with a lot of coding done to provide you with a complete platform to deal with your scaling needs.

However, do not be so quick to exclude ProxySQL as a possible solution. Many out there are already using it for sharding as well.

This small POC used a synthetic case, but it also shows that with just four rules, you can achieve a decent solution. A real scenario could be a bit more complex … or not. 

References

Vitess (https://vitess.io/docs/)

ProxySQL (https://proxysql.com/documentation/)

Firewalling with ProxySQL (https://www.tusacentral.com/joomla/index.php/mysql-blogs/197-proxysql-firewalling)

Sharding:

 

The Percona Kubernetes Operators automate the creation, alteration, or deletion of members in your Percona Distribution for MySQL, MongoDB, or PostgreSQL environment.

 

Learn More About Percona Kubernetes Operators

Mar
01
2023
--

MySQL 5.7 End of Life Six Months Away – Switch to Percona Server for MySQL Today!

try percona server for MySQL

Oracle’s MySQL 5.7 has had a good, long run, but the official End of Life is October of 2023. The Era of MySQL 5.x will be over, and only MySQL 8.0 will be officially supported. Yup, six months away. So if you are running MySQL, you need to consider upgrading to version 8.0 N-O-W!!

What does an upgrade provide?

MySQL 8.0 has many really cool features and improvements that are well worth the upgrade. The default character set of utf8mb4 gives Unicode 9.0 support. So you get umlauts, cedillas, and CJK language support in your data, plus emojis, and 8.0 is optimized around this character set. This gives you all the international characters you probably need to support global operations.

The Structured Query Language has been greatly enhanced. If you have trouble writing subqueries, you will rejoice in lateral derived tables and Common Table Expressions (CTEs). There is also a new INTERSECT clause to aid with sets.

EXPLAIN ANALYZE is a big boon to query tuning. EXPLAIN without ANALYZE gives you the server’s estimated analysis of your query’s performance. Adding ANALYZE causes the query to actually execute, and the numbers returned report the query’s real performance. Add in INVISIBLE indexes, and you can test the effect of dropping an index without risking a disastrous rebuild afterward.
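
If you have not tried these yet, here is a quick taste; the table and index names are hypothetical:

-- A CTE replacing a hard-to-read subquery
WITH big_spenders AS (
    SELECT customer_id, SUM(total) AS spent
    FROM orders
    GROUP BY customer_id
)
SELECT * FROM big_spenders ORDER BY spent DESC LIMIT 10;

-- EXPLAIN ANALYZE actually runs the query and reports measured costs
EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42;

-- Hide an index from the optimizer to test life without it, no rebuild needed
ALTER TABLE orders ALTER INDEX idx_customer INVISIBLE;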

There are many other features that you want to have in MySQL 8.0 that you will get in the upgrade. But before going with the MySQL Community Edition, you should look at Percona Server for MySQL.

Why Percona Server for MySQL?

You get all the MySQL 8.0 features that you expect and more. That more is enterprise server features that you would have to pay Oracle extra to obtain.

  • ThreadPool enables scaling to 10,000+ connections per server.
  • PAM (Pluggable Authentication Modules) support to authenticate MySQL users.
  • Vault keyring plug-in, encryption for InnoDB general tablespaces, and encryption for binary log files.
  • Audit Plugin enables policy-based monitoring and connection and query activity logging.
  • Data Masking to hide, mask, or obscure data like credit cards or identification numbers.
  • RocksDB support for disk space saving and performance enhancements.
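
As a quick, hedged taste of two of those features (assuming the ThreadPool is configured and the Data Masking plugin’s functions are installed; the exact masked output may vary):

-- ThreadPool is enabled in my.cnf (thread_handling = pool-of-threads); verify with:
SELECT @@thread_handling;

-- Data Masking: hide all but the last four digits of a card number
SELECT mask_pan('1234567812345678');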

And Percona Server for MySQL is free. While you are downloading, please also look at Percona XtraBackup and the Percona Toolkit. Free from locked-in licenses or commercial upsells, our products are open source, enterprise-ready, and backed by 24/7 support and on-demand services. You’ll always own access to and availability of your data.

Try Percona Server for MySQL today!
