May 10, 2022
--

Spring Cleaning: Discontinuing RHEL 6/CentOS 6 (glibc 2.12) and 32-bit Binary Builds of Percona Software

As you are probably aware, Red Hat Enterprise Linux 6 (RHEL 6 or EL 6 in short) officially reached “End of Life” (EOL) on 2020-11-30 and is now in the so-called Extended Life Phase, which basically means that Red Hat will no longer provide bug fixes or security fixes.

Even though EL 6 and its compatible derivatives like CentOS 6 reached EOL some time ago, we continued providing binary builds for selected MySQL-related products for this platform.

However, this became increasingly difficult, as the MySQL code base continued to evolve and now depends on tools and functionality that are no longer provided by the operating system out of the box. This meant we already had to perform several modifications in order to prepare binary builds for this platform, e.g. installing custom compiler versions or newer versions of various system libraries.

With MySQL 8.0.26, Oracle announced the deprecation of the TLSv1 and TLSv1.1 connection protocols, with plans to remove them in a future MySQL version in favor of the more secure TLSv1.2 and TLSv1.3 protocols. TLSv1.3 requires that both the MySQL server and the client application be compiled with OpenSSL 1.1.1 or higher. This version of OpenSSL is not available as a binary package on EL 6, and manually rebuilding it turned out to be a “yak shaving exercise” due to the countless dependencies.

Our build & release team was able to update the build environments on all of our supported platforms (EL 7, EL 8, supported Debian and Ubuntu versions) for this new requirement. However, we have not been successful in getting all the required components and their dependencies to build on EL 6, as it would have required rebuilding quite a significant amount of core OS packages and libraries to achieve this.

Moreover, switching to this new OpenSSL version would have also required us to include some additional shared libraries in our packages to satisfy the runtime dependencies, adding more complexity and potential security issues.

In general, we believe that running a production system on an OS that is no longer actively supported by a vendor is not a recommended best practice from a security perspective, and we do not want to encourage such practices.

For these reasons, and to simplify our build/release and QA processes, we decided to drop support for EL 6 for all products now. Percona Server for MySQL 8.0.27 was the last version for which we built EL 6 binaries against the previous version of OpenSSL.

Going forward, the following products will no longer be built and released on this platform:

  • Percona Server for MySQL 5.7 and 8.0
  • Percona XtraDB Cluster 5.7
  • Percona XtraBackup 2.4 and 8.0
  • Percona Toolkit 3.2

This means we will stop both building RPM packages for EL 6 and providing binary tarballs linked against glibc 2.12.

Note that this OS platform was also the last one on which we still provided 32-bit binaries.

Most Enterprise Linux distributions stopped providing 32-bit versions of their operating systems quite some time ago. For example, Red Hat Enterprise Linux 7 (released in June 2014) was the first release that no longer supported installing directly on 32-bit Intel/AMD hardware (i686/x86). Back in 2018, we decided that we would no longer offer 32-bit binaries on new platforms or new major releases of our software.

Given today’s database workloads, we also think that 32-bit systems are simply not adequate anymore, and we already stopped building newer versions of our software for this architecture.

The demand for 32-bit downloads has also been declining steadily. A recent analysis of our download statistics revealed that only 2.3% of our total binary downloads were for i386 binaries. Looking at IP addresses, these downloads originated from 0.4% of the total range of addresses.

This change affects the following products:

  • Percona Server for MySQL 5.7
  • Percona XtraDB Cluster 5.7
  • Percona XtraBackup 2.4
  • Percona Toolkit

We’ve updated the Percona Release Lifecycle Overview web page to reflect this change. Previously released binaries for these platforms and architectures will, of course, remain accessible from our repositories.

If you’re still running EL 6 or a 32-bit database or OS, we strongly recommend upgrading to a more modern platform. Our Percona Services team would be happy to help you with that!

Mar 10, 2022
--

Using Percona Server for MySQL 8.0 and Percona XtraBackup 8.0 with HashiCorp Vault Enterprise KMIP Secrets Engine

KMIP (Key Management Interoperability Protocol) is an open standard developed by OASIS (Organization for the Advancement of Structured Information Standards) for the encryption of stored data and cryptographic key management.

Percona Server for MySQL 8.0.27 and Percona XtraBackup 8.0.27 now include a KMIP keyring plugin to enable the exchange of cryptographic keys between a key management server and the database for encryption purposes. The procedure to use them with HashiCorp Vault Enterprise is described below.

Install HashiCorp Vault Enterprise

We will first install HashiCorp Vault Enterprise on Ubuntu Linux “Bionic” and then enable the KMIP secrets engine. The KMIP secrets engine is only available in the Enterprise version of HashiCorp Vault, so a valid license is required.

Add the HashiCorp repository and install the enterprise vault package:

curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo apt-key add -
sudo apt-add-repository "deb [arch=amd64] https://apt.releases.hashicorp.com $(lsb_release -cs) main"
sudo apt-get update && sudo apt-get install vault-enterprise

Export the license as an environment variable:

export VAULT_LICENSE=XXXX

Create a configuration file to be used with the vault, vault_config.hcl:

disable_mlock = true
default_lease_ttl = "24h"
max_lease_ttl = "24h"
storage "file" {
  path    = "/home/manish.chawla/vault/data"
}

listener "tcp" {
  address     = "127.0.0.1:8200"
  tls_cert_file = "/home/manish.chawla/test_mode/certificates/vault.crt"
  tls_key_file = "/home/manish.chawla/test_mode/certificates/vault.key"
}

Note: Vault root certificates and key need to be created separately and are not covered here.
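
If you just need certificates for testing, a self-signed CA and server certificate can be generated with OpenSSL along these lines (a minimal sketch; the file names match the configuration above, and a production setup would need proper subject names and SANs):

openssl req -x509 -newkey rsa:2048 -nodes -days 365 -keyout root.key -out root.cer -subj "/CN=Test Root CA"
openssl req -newkey rsa:2048 -nodes -keyout vault.key -out vault.csr -subj "/CN=localhost"
openssl x509 -req -in vault.csr -CA root.cer -CAkey root.key -CAcreateserial -days 365 -out vault.crt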

Start vault server with the configuration file:

vault server -config=$HOME/vault_config.hcl 2>&1 &

Note: To configure and start the vault using systemd, refer to the instructions here.

Initialize the vault:

vault operator init -address=https://127.0.0.1:8200

This will generate five unseal keys and the initial root token.

Unseal Key 1: rf4C6gY87tN0UkJbJY96Aq4+Zext1YqgwaDnm+0gBBAH
Unseal Key 2: f9KA7EdlF391cIUzDqaS1N21JjML/36sZFl1x/OfCPn4
Unseal Key 3: 8Nh3EFYF7S0S9qqzdUIilPPRZRmWaDG+3El4rr4FmZNX
Unseal Key 4: iJaCKBIzxulLvA/6vbF3fRK1RXVZ0zLEZoVdlv/s13Sc
Unseal Key 5: rA4pwT6EZLwmVXPQJfU9fjgeGwPaHl260qM9CVNiUw13

Initial Root Token: s.WXEZ26Yb3MtvzbvNMMIG8bve

Unseal the vault using any three of the five unseal keys (the threshold is three):

vault operator unseal -address=https://127.0.0.1:8200 rf4C6gY87tN0UkJbJY96Aq4+Zext1YqgwaDnm+0gBBAH
Key                Value
---                -----
Seal Type          shamir
Initialized        true
Sealed             true
Total Shares       5
Threshold          3
Unseal Progress    1/3
Unseal Nonce       aa1a25c8-aa90-49b3-b127-875524de38f8
Version            1.9.3+ent
Storage Type       file
HA Enabled         false

vault operator unseal -address=https://127.0.0.1:8200 f9KA7EdlF391cIUzDqaS1N21JjML/36sZFl1x/OfCPn4
Key                Value
---                -----
Seal Type          shamir
Initialized        true
Sealed             true
Total Shares       5
Threshold          3
Unseal Progress    2/3
Unseal Nonce       aa1a25c8-aa90-49b3-b127-875524de38f8
Version            1.9.3+ent
Storage Type       file
HA Enabled         false

vault operator unseal -address=https://127.0.0.1:8200 8Nh3EFYF7S0S9qqzdUIilPPRZRmWaDG+3El4rr4FmZNX
Key             Value
---             -----
Seal Type       shamir
Initialized     true
Sealed          false
Total Shares    5
Threshold       3
Version         1.9.3+ent
Storage Type    file
Cluster Name    vault-cluster-7d9b43de
Cluster ID      c047dcb5-5038-5a29-f4af-47c1ad560f9c
HA Enabled      false

The vault is unsealed.

To use the vault in any terminal, run:

export VAULT_ADDR=https://127.0.0.1:8200
export VAULT_TOKEN=s.WXEZ26Yb3MtvzbvNMMIG8bve
export VAULT_CACERT=/home/manish.chawla/test_mode/certificates/root.cer

Configure KMIP Secrets Engine in Vault

Enable KMIP secrets engine:

vault secrets enable kmip
Success! Enabled the kmip secrets engine at: kmip/

View the secrets list:

vault secrets list
Path          Type         Accessor              Description
----          ----         --------              -----------
cubbyhole/    cubbyhole    cubbyhole_ec12856f    per-token private secret storage
identity/     identity     identity_12d9670d     identity store
kmip/         kmip         kmip_5fb3d4c6         n/a
sys/          system       system_1733eece       system endpoints used for control, policy and debugging

Change the kmip server listening address and port:

vault write kmip/config listen_addrs=0.0.0.0:5696
Success! Data written to: kmip/config

Note: Here, kmip is the default mount path of the secrets engine, not the type of the engine.

By default, the KMIP secrets engine generates certificates with EC (Elliptic Curve) keys. MySQL requires RSA, so specify the key type (tls_ca_key_type) and key size (tls_ca_key_bits) when configuring the KMIP server.

vault write kmip/config tls_ca_key_type="rsa" tls_ca_key_bits=2048
Success! Data written to: kmip/config

vault read kmip/config
Key                            Value
---                            -----
default_tls_client_key_bits    256
default_tls_client_key_type    ec
default_tls_client_ttl         336h
listen_addrs                   [0.0.0.0:5696]
server_hostnames               [localhost]
server_ips                     [127.0.0.1 ::1]
tls_ca_key_bits                2048
tls_ca_key_type                rsa
tls_min_version                tls12

The KMIP secrets engine uses scopes to partition object storage into multiple named buckets. Within a scope, roles can be created with a set of allowed operations that the particular role can perform.

Create a scope:

vault write -f kmip/scope/my-service
Success! Data written to: kmip/scope/my-service

Create a role within the scope, specifying the set of operations to allow or deny.

vault write kmip/scope/my-service/role/admin operation_all=true
Success! Data written to: kmip/scope/my-service/role/admin

Next, generate a client certificate for the scope and role created above.

Retrieve the generated CA certificate:

vault read kmip/ca

Copy and save the CA certificate as ca.pem.
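
Alternatively, the certificate can be written to the file directly with the -field option (assuming the field is named ca_pem, as reported by vault read kmip/ca):

vault read -field=ca_pem kmip/ca > ca.pem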

Generate a certificate in PEM format, and save it in a JSON file named credential.json.

vault write -format=json \
    kmip/scope/my-service/role/admin/credential/generate \
    format=pem > credential.json

Extract the certificate from credential.json using the jq tool and save it in a file named cert.pem.

jq -r .data.certificate < credential.json > cert.pem

Extract the private key from credential.json using the jq tool and save it in a file named key.pem.

jq -r .data.private_key < credential.json > key.pem

The KMIP configuration is now complete.
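
Optionally, you can sanity-check that the KMIP listener accepts the generated client certificate with a plain TLS handshake (this only verifies the TLS layer, not a full KMIP exchange):

openssl s_client -connect 127.0.0.1:5696 -cert cert.pem -key key.pem -CAfile ca.pem </dev/null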

Percona Server for MySQL 8.0.27 Configuration for KMIP

This section describes the KMIP configuration in Percona Server for MySQL. KMIP is configured as a component in Percona Server for MySQL.

Create the global manifest file (mysqld.my) in the mysqld installation directory.

{
  "components": "file://component_keyring_kmip"
}

Create the global configuration file, component_keyring_kmip.cnf, in the directory where the component_keyring_kmip library resides.

{
  "path": "/home/manish.chawla/keyring_kmip",
  "server_addr": "0.0.0.0",
  "server_port": "5696",
  "client_ca": "/home/manish.chawla/cert.pem",
  "client_key": "/home/manish.chawla/key.pem",
  "server_ca": "/home/manish.chawla/ca.pem"
}

Note: SELinux/AppArmor rules may have to be adjusted so that Percona Server for MySQL and Percona XtraBackup can access the certificates.

Initialize and start mysqld with the following encryption options (add them to my.cnf):

--innodb-undo-log-encrypt --innodb-redo-log-encrypt --binlog-encryption --default-table-encryption=ON --log-replica-updates --gtid-mode=ON --enforce-gtid-consistency --binlog-format=row --source-verify-checksum=ON --binlog-checksum=CRC32 --table-encryption-privilege-check=ON
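
Expressed as a my.cnf snippet, the same options would look like this (a direct translation of the flags above):

[mysqld]
innodb_undo_log_encrypt = ON
innodb_redo_log_encrypt = ON
binlog_encryption = ON
default_table_encryption = ON
log_replica_updates = ON
gtid_mode = ON
enforce_gtid_consistency = ON
binlog_format = ROW
source_verify_checksum = ON
binlog_checksum = CRC32
table_encryption_privilege_check = ON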

Check the KMIP component status:

8.0.27>SELECT * FROM performance_schema.keyring_component_status;
+---------------------+------------------------------+
| STATUS_KEY          | STATUS_VALUE                 |
+---------------------+------------------------------+
| Component_name      | component_keyring_kmip       |
| Author              | Percona Corporation          |
| License             | GPL                          |
| Implementation_name | component_keyring_kmip       |
| Version             | 1.0                          |
| Component_status    | Active                       |
| Server_addr         | 0.0.0.0                      |
| Server_port         | 5696                         |
| Client_ca           | /home/manish.chawla/cert.pem |
| Client_key          | /home/manish.chawla/key.pem  |
| Server_ca           | /home/manish.chawla/ca.pem   |
| Object_group        | <NONE>                       |
+---------------------+------------------------------+

Create some encrypted tables and add data in Percona Server for MySQL.
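
For example, a minimal check that encryption works end to end (the schema and table names are illustrative):

CREATE DATABASE test_encryption;
CREATE TABLE test_encryption.t1 (
  id INT PRIMARY KEY AUTO_INCREMENT,
  payload VARCHAR(255)
) ENCRYPTION='Y';
INSERT INTO test_encryption.t1 (payload) VALUES ('secret data');
SELECT name, encryption FROM information_schema.innodb_tablespaces WHERE name LIKE 'test_encryption%';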

Backup and Restore of Percona Server for MySQL 8.0.27 Using Percona XtraBackup 8.0.27

This section describes the procedure for backing up and restoring Percona Server for MySQL 8.0.27 when the KMIP component is enabled and the KMIP vault server is running. Percona XtraBackup reads the KMIP configuration from Percona Server for MySQL automatically, so it is not necessary to pass this information separately.

Take full backup:

xtrabackup --user=backup --password=* --backup --target-dir=backup_directory

Prepare full backup:

xtrabackup --prepare --target-dir=backup_directory

Stop Percona Server for MySQL and move the data directory to another location. Disable SELinux/AppArmor before restoring the backup.

Restore full backup:

xtrabackup --copy-back --target-dir=backup_directory

Change the ownership of the copied files in the Percona Server for MySQL data directory to the MySQL user.
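
For example, with the default data directory location (the path and the mysql user/group are the usual defaults; adjust as needed):

sudo chown -R mysql:mysql /var/lib/mysql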

Start Percona Server for MySQL and check the data. Re-enable SELinux/AppArmor if it was disabled previously.

Jan 27, 2022
--

Percona Distribution for MySQL Operator Based on Percona Server for MySQL – Alpha Release

Operators are a software framework that extends the Kubernetes API and enables application deployment and management through the control plane. For complex technologies such as databases, Operators play a crucial role by automating deployment and day-to-day operations. At Percona, we have production-ready and enterprise-grade Kubernetes Operators for databases.

Today we are glad to announce an alpha version of our new Operator for MySQL. In this blog post, we are going to answer some frequently asked questions.

Why the New Operator?

As mentioned above, our existing Operator for MySQL is based on Percona XtraDB Cluster (PXC). It is feature-rich and provides virtually synchronous replication by utilizing Galera write-sets. Synchronous replication ensures data consistency and has proven itself useful for critical applications, especially on Kubernetes.

But there are two things that we want to address:

  1. Our community and customers let us know that there are numerous use cases where asynchronous replication would be a more suitable solution for MySQL on Kubernetes.
  2. Support for Group Replication (GR) – a native way to provide synchronous replication in MySQL without the need to use Galera.

We heard you! That is why our new Operator is going to run Percona Server for MySQL (PS) and provide both regular asynchronous (with semi-sync support) and virtually-synchronous replication based on GR.

What Is the Name of the New Operator?

We will have two Operators for MySQL and follow the same naming as we have for Distributions:

  • Percona Distribution for MySQL Operator – PXC (Existing Operator)
  • Percona Distribution for MySQL Operator – PS (Percona Server for MySQL)

Is It the Successor of the Existing Operator for MySQL?

Not in the short run. We want to provide our users with MySQL clusters on Kubernetes that offer three replication capabilities:

  • Percona XtraDB Cluster
  • Group Replication
  • Regular asynchronous replication with semi-sync support

Will I Be Able to Switch From One Operator to Another?

We are going to provide instructions and tools for migration through replication or backup and restore. As the underlying implementations are totally different, there is no direct, automated switching path available.

Can I Use the New Operator Now?

Yes, but please remember that it is an alpha version and we do not recommend it for production workloads. 

Our Operator is licensed under Apache 2.0 and can be found in the percona-server-mysql-operator repository on GitHub.

To learn more about our operator please see the documentation.

Quick Deploy

Run these two commands to spin up a MySQL cluster with three nodes using asynchronous replication:

$ kubectl apply -f https://raw.githubusercontent.com/percona/percona-server-mysql-operator/main/deploy/bundle.yaml
$ kubectl apply -f https://raw.githubusercontent.com/percona/percona-server-mysql-operator/main/deploy/cr.yaml

In a couple of minutes, the cluster is going to be up and running. Verify:

$ kubectl get ps
NAME       MYSQL   ORCHESTRATOR   AGE
cluster1   ready   ready          6m16s

$ kubectl get pods
NAME                                                READY   STATUS    RESTARTS   AGE
cluster1-mysql-0                                    1/1     Running   0          6m27s
cluster1-mysql-1                                    1/1     Running   1          5m11s
cluster1-mysql-2                                    1/1     Running   1          3m36s
cluster1-orc-0                                      2/2     Running   0          6m27s
percona-server-for-mysql-operator-c8f8dbccb-q7lbr   1/1     Running   0          9m31s

Connect to the Cluster

First, you need to get the root user password, which was automatically generated by the Operator. By default, system users’ passwords are stored in the cluster1-secrets Secret resource:

$ kubectl get secrets cluster1-secrets -o yaml | grep root | awk '{print $2}' | base64 --decode

Start another container with a MySQL client in it:

$ kubectl run -i --rm --tty percona-client --image=percona:8.0 --restart=Never -- bash -il

Connect to a primary node of our MySQL cluster from this container:

$ mysql -h cluster1-mysql-primary -u root -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 138
Server version: 8.0.25-15 Percona Server (GPL), Release 15, Revision a558ec2

Copyright (c) 2009-2021 Percona LLC and/or its affiliates
Copyright (c) 2000, 2021, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql>

Consult the documentation to learn more about other operational capabilities and options.

What Is Currently Supported?

The following functionality is available in the Operator:

  • Deploy MySQL clusters with asynchronous or semi-sync replication, with Orchestrator on top
  • Expose clusters with regular Kubernetes Services
  • Monitor the cluster with Percona Monitoring and Management
  • Customize MySQL configuration

When Will It Be GA? What Is Going to Be Included?

Our goal is to release the GA version late in Q2 2022. We plan to include the following:

  • Support for both sync and async replication
  • Backups and restores, proxies integration
  • Certifications on various Kubernetes platforms and flavors
  • Deploy and manage MySQL Clusters with PMM DBaaS

Call to Action

Percona Distribution for MySQL Operator – PS has just hatched, and your feedback is highly appreciated.

Open a bug or create a Pull Request for a chance to get awesome Percona Swag!

Jan 18, 2022
--

Attack No-PK Replication Lag with MySQL/Percona Server 8 Invisible Columns!

The most common issue when using row-based replication (RBR) is replication lag due to the lack of primary keys.

The problem is that any replicated DML will do a full table scan for each modified row on the replica. This bug report explains it more in-depth: https://bugs.mysql.com/bug.php?id=53375

For example, if a delete is executed on the following table definition:

CREATE TABLE `joinit` (
  `i` int NOT NULL,
  `s` varchar(64) DEFAULT NULL,
  `t` time NOT NULL,
  `g` int NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1


With this amount of rows:

mysql> select count(*) from joinit;
+----------+
| count(*) |
+----------+
|  1048576 |
+----------+


The delete being:

mysql> flush status ;

mysql> delete from joinit where i > 5 and i < 150;
Query OK, 88 rows affected (0.04 sec)

mysql> show status like '%handler%';
+----------------------------+---------+
| Variable_name              | Value   |
+----------------------------+---------+
| Handler_commit             | 2       |
| Handler_delete             | 1       |
…
| Handler_read_rnd_next      | 1048577 |
…

It can be seen that the delete on the Primary requires a full table scan (Handler_read_rnd_next equals the row count + 1) to delete 88 rows.

The additional problem is that each of the rows being deleted will be recorded in the binary log individually, like this:

#220112 18:29:05 server id 1  end_log_pos 3248339 CRC32 0xdd9d1cb2 Delete_rows: table id 106 flags: STMT_END_F
### DELETE FROM `test2`.`joinit`
### WHERE
###   @1=6
###   @2='764d302b-73d5-11ec-afc8-00163ef3b519'
###   @3='18:28:39'
###   @4=27
### DELETE FROM `test2`.`joinit`
### WHERE
###   @1=7
###   @2='764d30bc-73d5-11ec-afc8-00163ef3b519'
###   @3='18:28:39'
###   @4=5
…
{88 items}

This will result in 88 full table scans on the replica, and hence the performance degradation.

For these cases, the recommendation is to add a primary key to the table, but sometimes adding a PK might not be easy because:

  • There are no existing columns that could be considered a PK.
  • Or adding a new column (as the PK) is not possible, as it might impact queries from a 3rd party tool that we have no control over (or be too complex to fix with the query rewrite plugin).
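
To gauge how widespread the problem is, InnoDB tables without an explicit primary key can be listed with a query along these lines (a helper sketch against information_schema):

SELECT t.table_schema, t.table_name
FROM information_schema.tables t
LEFT JOIN information_schema.table_constraints c
       ON c.table_schema = t.table_schema
      AND c.table_name = t.table_name
      AND c.constraint_type = 'PRIMARY KEY'
WHERE t.table_type = 'BASE TABLE'
  AND c.constraint_name IS NULL
  AND t.table_schema NOT IN ('mysql', 'sys', 'information_schema', 'performance_schema');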

The solution is to use MySQL/Percona Server for MySQL 8 and add an invisible column as the primary key.

Adding a new invisible column (named “newc”) as a primary key can be done with the following statement:

ALTER TABLE joinit ADD COLUMN newc INT UNSIGNED NOT NULL AUTO_INCREMENT INVISIBLE PRIMARY KEY FIRST;

Note: adding a PK is an expensive operation that requires a table rebuild, as shown here.

After adding an invisible PK, the table will look like this:

CREATE TABLE `joinit` (
  `newc` int NOT NULL AUTO_INCREMENT /*!80023 INVISIBLE */,
  `i` int NOT NULL,
  `s` varchar(64) DEFAULT NULL,
  `t` time NOT NULL,
  `g` int NOT NULL,
  PRIMARY KEY (`newc`)
) ENGINE=InnoDB AUTO_INCREMENT=1048576 DEFAULT CHARSET=latin1


Deleting a row now will be recorded in the binary log like this:

### DELETE FROM `test`.`joinit`
### WHERE
###   @1=1048577
###   @2=1
###   @3='string'
###   @4='17:23:04'
###   @5=5
# at 430
#220112 17:24:56 server id 1  end_log_pos 461 CRC32 0x826f3af6 Xid = 71
COMMIT/*!*/;

Here, @1 is the first column (the PK in this case), which the replica can use to find the matching row without having to do a full table scan.

The operation executed on the replica would be similar to the following, which requires only one scan to find the matching row:

mysql> flush status ;
Query OK, 0 rows affected (0.01 sec)

mysql> delete from joinit where newc = 1048578;
Query OK, 1 row affected (0.00 sec)

mysql> show status like '%handler%';
+----------------------------+-------+
| Variable_name              | Value |
+----------------------------+-------+
| Handler_commit             | 2     |
…
| Handler_read_key           | 1     |
…
| Handler_read_rnd_next      | 0     |
…


Also, as the name suggests, an invisible column isn’t shown, nor does it need to be referenced, when doing operations on the table, e.g.:

mysql> select * from joinit limit 2; 
+---+--------------------------------------+----------+----+
| i | s                                    | t        | g  |
+---+--------------------------------------+----------+----+
| 2 | ecc6cbed-73c9-11ec-afc8-00163ef3b519 | 17:06:03 | 58 |
| 3 | ecc7d9bb-73c9-11ec-afc8-00163ef3b519 | 17:06:03 | 56 |
+---+--------------------------------------+----------+----+
2 rows in set (0.00 sec)

mysql> insert into joinit values (4, "string", now(), 5);
Query OK, 1 row affected (0.01 sec)


But if needed, the new column (newc) can be fetched by explicitly querying it:

mysql> select newc, i, s, t, g from joinit limit 2; 
+------+---+--------------------------------------+----------+----+
| newc | i | s                                    | t        | g  |
+------+---+--------------------------------------+----------+----+
|    1 | 2 | ecc6cbed-73c9-11ec-afc8-00163ef3b519 | 17:06:03 | 58 |
|    2 | 3 | ecc7d9bb-73c9-11ec-afc8-00163ef3b519 | 17:06:03 | 56 |
+------+---+--------------------------------------+----------+----+
2 rows in set (0.00 sec)


What If…?

What if MySQL automatically detected that the PK is missing for an InnoDB table and added the invisible PK itself?

Taking into account that an internal six-byte PK is already added when the PK is missing, it might be a good idea to also allow the possibility of making that PK visible if you need to.

This means that when you execute this CREATE TABLE statement:

CREATE TABLE `joinit` (
  `i` int NOT NULL,
  `s` varchar(64) DEFAULT NULL,
  `t` time NOT NULL,
  `g` int NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1


It will be automatically translated to:

CREATE TABLE `joinit` (
  `newc` int NOT NULL AUTO_INCREMENT /*!80023 INVISIBLE */,
  `i` int NOT NULL,
  `s` varchar(64) DEFAULT NULL,
  `t` time NOT NULL,
  `g` int NOT NULL,
  PRIMARY KEY (`newc`)
) ENGINE=InnoDB AUTO_INCREMENT=1048576 DEFAULT CHARSET=latin1


And then we can execute this command to make the column visible:

ALTER TABLE joinit ALTER COLUMN newc SET VISIBLE;

Conclusion

Missing primary keys are a problem for scaling databases, as replication will require a full table scan for each updated/deleted row, and the more data there is, the more lag there will be.

Adding a PK might not always be possible because of 3rd party tools or restrictions, but adding an invisible primary key will do the trick and provide the benefits of a PK without impacting syntax and operations from 3rd party clients/tools. What would be awesome is for MySQL to detect the missing PK, add it automatically, and let you change it to visible if needed.

Oct 06, 2021
--

Reminder: TokuDB Storage Engine Will Be Disabled by Default in Percona Server for MySQL 8.0.26

As we’ve communicated in our blog post in May, the TokuDB Storage Engine has been marked as “deprecated” in Percona Server for MySQL 8.0. It will be removed in a future version (Percona Server for MySQL 8.0.28, expected to ship in Q1 2022).

With the release of Percona Server for MySQL 8.0.26, the storage engine will still be included in the binary builds and packages but will be disabled by default. If you are upgrading from a previous version, the TokuDB Storage Engine plugin will fail with an error message at server startup if it is installed.

You will still be able to re-enable it manually so that you can perform the necessary migration steps.

Re-enabling the TokuDB engine in Percona Server for MySQL 8.0.26 is fairly straightforward. You need to add the

tokudb_enabled=TRUE

and

tokudb_backup_enabled=TRUE

options to your my.cnf file and restart your server instance.

Once the server is up and running again, you can migrate your data to an alternative storage engine like RocksDB or InnoDB before disabling and removing TokuDB. See the chapter Removing the TokuDB storage engine in our documentation for details.
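
The migration itself is typically a per-table rewrite; a minimal sketch (the table name is illustrative, and large tables will need time and disk space for the rebuild):

-- Find tables still using TokuDB
SELECT table_schema, table_name FROM information_schema.tables WHERE engine = 'TokuDB';
-- Rewrite each one to an alternative engine
ALTER TABLE my_tokudb_table ENGINE=InnoDB;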

Let us know if you have any questions or comments about this process!

If you need any assistance with migrating your data or have any questions or concerns about this, don’t hesitate to reach out to us – our experts are here to help!


Sep 16, 2021
--

Repoint Replica Servers in MySQL/Percona Server for MySQL 8.0

When doing migrations or failovers in MySQL, there is usually a need to do a topology change and repoint replica servers to obtain replication data from a different server.

For example, given servers {A, B, and C} and the following topology:

[Diagram: A is the primary, with B and C replicating directly from A]

If you need to repoint C to be a replica of B, i.e:

[Diagram: A replicates to B, and C now replicates from B (A -> B -> C)]

You can follow these steps:

Note: log_replica_updates should be enabled on the soon-to-be intermediate primary (B), as it is a prerequisite for chain replication.

Note: It is assumed that both replicas only stream from Server A and there are no conflicting replication filters in place that might break replication later on.
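
To verify the prerequisite from the first note, you can check the variable on B (log_replica_updates is the current name of log_slave_updates as of MySQL 8.0.26, and it defaults to ON in 8.0):

SELECT @@global.log_replica_updates;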

If Using File/Position-Based Replication:

1) Stop replication on B and C:

STOP REPLICA;

2) If replicas are multi-threaded, correct MTS gaps and make them single-threaded until all changes are applied. To do so, execute the following commands on BOTH nodes:

START REPLICA UNTIL SQL_AFTER_MTS_GAPS;
SHOW REPLICA STATUS\G -- repeat this until you see "Replica_SQL_Running: No"
STOP REPLICA;
SELECT @@global.replica_parallel_workers; -- take note to restore later
SET GLOBAL replica_parallel_workers=0; -- disable MTS during the operations

3) Then check which node is more up to date by looking at Relay_Source_Log_File and Exec_Source_Log_Pos. Run on BOTH nodes:

SHOW REPLICA STATUS\G
# Take note of Relay_Source_Log_File/Exec_Source_Log_Pos from the most up to date node.

4) Sync the replicas with UNTIL. Run on the most delayed node, using the values noted above:

START REPLICA UNTIL SOURCE_LOG_FILE='<Relay_Source_Log_File>', SOURCE_LOG_POS=<Exec_Source_Log_Pos>;
SHOW REPLICA STATUS\G -- repeat this until you see "Replica_SQL_Running: No"

5) If you followed the above steps, at this point both replicas should have the exact same data set and be in sync at the same point in time. Double-check that both replicas are stopped and have the same coordinates, as doing topology changes while replication is ongoing, or with differing coordinates, can cause inconsistencies:

SHOW REPLICA STATUS\G
# Replica_IO_Running must be “NO” in both replicas
# Replica_SQL_Running must be “NO” in both replicas
# Relay_Source_Log_File must match in both replicas
# Exec_Source_Log_Pos must match in both replicas

6) Get current coordinates from B (new intermediate primary). Execute on B:

SHOW MASTER STATUS \G
# Take note of File and Position

7) Repoint C to B. Execute on C with the coordinates from the previous step:

CHANGE REPLICATION SOURCE TO SOURCE_HOST='<ip-address-of-B>', SOURCE_LOG_FILE='<File>', SOURCE_LOG_POS=<Position>;

8) If you had disabled MTS, re-enable it here for both B and C:

SET GLOBAL replica_parallel_workers=X; -- see output of step 2 for correct value

9) Restart replication normally. Run on both nodes:

START REPLICA;

If Using GTID-Based Replication:

1) Stop replication on B and C:
STOP REPLICA;

2) If replicas are multi-threaded, correct MTS gaps and make them single-threaded until all changes are applied. Run on BOTH nodes:

START REPLICA UNTIL SQL_AFTER_MTS_GAPS;
SHOW REPLICA STATUS\G -- repeat this until you see "Replica_SQL_Running: No"
STOP REPLICA;
SELECT @@global.replica_parallel_workers; -- take note to restore later
SET GLOBAL replica_parallel_workers=0; -- disable MTS during the operations

3) Then check which node is more up to date by looking at the sequence numbers in Executed_Gtid_Set. Run on BOTH nodes:

SHOW REPLICA STATUS\G
# Take note of the Executed_Gtid_Set with the largest sequence number. If there is a mismatch in the GTID sets, it means there were either local writes or writes coming from some other server. In that case, you should check data consistency between the servers, for example with pt-table-checksum. Then you need to fix the GTID differences by either restoring the replica from scratch or fixing errant transactions as explained in this other blog post.
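
One way to spot errant transactions is GTID set arithmetic with the GTID_SUBTRACT() function (a sketch; substitute the Executed_Gtid_Set values collected from each server):

-- Transactions present on C but missing on B; a non-empty result needs investigation
SELECT GTID_SUBTRACT('<Executed_Gtid_Set_of_C>', '<Executed_Gtid_Set_of_B>');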

4) Bring all nodes up to the same point in time. Run on the node with the smallest GTID sequence number:

START REPLICA UNTIL SQL_AFTER_GTIDS='<Executed_Gtid_Set>';
SHOW REPLICA STATUS\G -- repeat this until you see "Replica_SQL_Running: No"

5) If you followed the above steps, at this point both replicas should have the exact same data set and be in sync at the same point in time. Double-check that both replicas are stopped and have the same coordinates, as doing topology changes while replication is ongoing, or with differing coordinates, can cause inconsistencies:

SHOW REPLICA STATUS\G
# Replica_IO_Running must be “NO” in both replicas
# Replica_SQL_Running must be “NO” in both replicas
# Executed_Gtid_Set must match in both replicas

6) Now both replicas have identical data, so you can repoint C to replicate from B. Run on C:

CHANGE REPLICATION SOURCE TO SOURCE_HOST='<ip-address-of-B>';

7) If you had disabled MTS, re-enable it here for both B and C:

SET GLOBAL replica_parallel_workers=X; -- see output of step 2 for correct value

8) Restart replication normally. Run on both nodes:

START REPLICA;

Doing the opposite replication change from chain replication (A->B->C) into one primary with two replicas should be simpler:

If Using File/Position-Based Replication:

1) Stop replication on B and make sure B is not receiving any write activity:

STOP REPLICA;

2) Check current binary log position on B:

SHOW MASTER STATUS \G

3) On C, check replication until C catches up with B. On C:

SHOW REPLICA STATUS \G

# For C to have caught up with B, the following conditions should be met:
# “File” from B in step 2) should match Relay_Source_Log_File from 3)
# “Position” from B in step 2) should match Exec_Source_Log_Pos from 3)

# After catching up, both servers will be in sync with the same data set.

4) Check the current replication coordinates on B:

SHOW REPLICA STATUS \G
# Write down Relay_Source_Log_File and Exec_Source_Log_Pos from B, as we will use these coordinates on C

5) Repoint C to replicate from A. The file and position used should be the ones taken from B in the last step:

CHANGE REPLICATION SOURCE TO SOURCE_HOST='<ip-address-of-A>', SOURCE_LOG_FILE='<File>', SOURCE_LOG_POS=<Position>;

6) Restart replication normally. Run on both nodes:

START REPLICA;

If Using GTID-Based Replication:

1) Stop replication on B and make sure B is not receiving any write activity:

STOP REPLICA;

2) Check current binary log position on B:

SHOW MASTER STATUS \G

3) On C, check replication until C catches up with B. On C:

SHOW REPLICA STATUS \G

# For C to have caught up with B, the following condition should be met:
# Executed_Gtid_Set from B in step 2) should match Executed_Gtid_Set from 3)
# After catching up, both servers will be in sync with the same data set.

4) Repoint C to replicate from A:

CHANGE REPLICATION SOURCE TO SOURCE_HOST='<ip-address-of-A>';

5) Restart replication normally. Run on both nodes:

START REPLICA;

Conclusion:

Doing topology changes might seem hard at first, but with the above procedures it should be easy and error-free! If you do not want to take the manual approach, you can consider using tools like Orchestrator, which allows for automatic failover and promotions.


Aug 13, 2021
--

Get an Amazon Gift Card for a Review!

The next 10 user reviews for Percona Server for MySQL will receive a $10 Amazon gift card from SourceForge. Add yours now!

Why does Percona value honest user reviews? They provide valuable information for engineers to guide product development. They help more users become aware of Percona software.  They help inspire our development process!

If you are uncertain what to write about, here are some ideas. Remember to describe your environment and use case if relevant.

  1. What features are most useful for you?
  2. What missing features matter most to you?
  3. How useful do you find Percona Server for MySQL in your circumstances?

To grab your $10 bonus, submit a review on SourceForge for Percona Server for MySQL using this referral link. Hurry; only the next 10 reviewers will receive the gift! This offer is managed by SourceForge and is wholly independent of Percona.

SourceForge, Capterra, and G2 also allow you to leave reviews on other Percona products, but those are not eligible for the Amazon gift card giveaway. See these links to all major review sites.

Percona Product                        Review Sites
Percona Monitoring and Management      Capterra, G2, SourceForge
Percona Server for MySQL               Capterra, G2, SourceForge
Percona XtraDB Cluster                 Capterra, G2, SourceForge
Percona XtraBackup                     Capterra, G2, SourceForge
Percona Distribution for PostgreSQL    G2, SourceForge
Percona Backup for MongoDB             G2, SourceForge
Percona Server for MongoDB             G2, SourceForge
Percona Kubernetes Operator            G2, SourceForge


Percona gratefully reads all received reviews and greatly appreciates them. If you have any questions or concerns, don’t hesitate to get in touch with us at community-team@percona.com.


Jun 01, 2021
--

Skipping Percona Server for MySQL Version 8.0.24 and Releasing 8.0.25 Next

After Oracle released MySQL version 8.0.24 on April 20, 2021, our engineering team got started right away with merging our enhancements to prepare the corresponding 8.0.24 version of Percona Server for MySQL.

However, Oracle released MySQL version 8.0.25 shortly afterward on May 11, 2021, to fix a critical bug that we also observed during our initial testing and reported back to them.

Therefore, we have decided to skip releasing Percona Server for MySQL 8.0.24, both as a standalone product and as a distribution, as well as the matching Percona XtraDB Cluster release.

Our next public release after Percona Server for MySQL version 8.0.23 will be version 8.0.25, so don’t be surprised if you’re looking for version 8.0.24 and can’t find it. Thank you!

Apr 15, 2021
--

Online DDL with Group Replication In Percona Server for MySQL 8.0.22 (and MySQL 8.0.23)

While I was working on my grFailOver POC, I also did some additional parallel testing. One test was to see how online DDL is executed inside a Group Replication cluster.

The online DDL feature provides support for instant and in-place table alterations and concurrent DML. Checking the official Group Replication (GR) documentation, I tried to identify whether any limitations exist, but the only thing I found was this:

“Concurrent DDL versus DML Operations.  Concurrent data definition statements and data manipulation statements executing against the same object but on different servers is not supported when using multi-primary mode. During execution of Data Definition Language (DDL) statements on an object, executing concurrent Data Manipulation Language (DML) on the same object but on a different server instance has the risk of conflicting DDL executing on different instances not being detected.”

This applies only when you have a multi-primary scenario, which is NOT recommended and is not my case. So, in theory, GR should be able to handle online DDL without problems.

My scenario:

[Diagram: two data centers, with the DC1 Group Replication cluster replicating to DC2]

I have two DCs and I am going to do actions on my DC1 and see how it propagates all over, and what impact it will have.

The Test

To do the test, I will run an insert from a select.

insert into windmills_test  select null,uuid,millid,kwatts_s,date,location,active,time,strrecordtype from windmills7 limit 1;

And a select, on my Primary node gr1, while executing the ALTER on another connection:

ALTER TABLE windmills_test ADD INDEX idx_1 (`uuid`,`active`), ALGORITHM=INPLACE, LOCK=NONE;

As you may have noticed, I am EXPLICITLY asking for INPLACE and LOCK=NONE. So in this case, if MySQL cannot satisfy these requirements, it should exit and not execute the command.

In the meantime, on all other nodes, I will run a check command to see WHEN my ALTER is taking place. Let us roll the ball:

On my Primary, the command to insert the data:

[root@gr1 grtest]# while [ 1 = 1 ];do da=$(date +'%s.%3N');/opt/mysql_templates/mysql-8P/bin/mysql --defaults-file=./my.cnf -uroot -D windmills_s -e "insert into windmills_test  select null,uuid,millid,kwatts_s,date,location,active,time,strrecordtype from windmills7 limit 1;" -e "select count(*) from windmills_s.windmills_test;" > /dev/null;db=$(date +'%s.%3N'); echo "$(echo "($db - $da)"|bc)";sleep 1;done

Again on the Primary, another session executes the ALTER:

DC1-1(root@localhost) [windmills_s]>ALTER TABLE windmills_test ADD INDEX idx_1 (`uuid`,`active`), ALGORITHM=INPLACE, LOCK=NONE;

On the other nodes, to monitor when the ALTER starts:

while [ 1 = 1 ];do echo "$(date +'%T.%3N')";/opt/mysql_templates/mysql-8P/bin/mysql --defaults-file=./my.cnf -uroot -D windmills_s -e "show processlist;"|grep -i "alter";sleep 1;done

What Happens

Data is inserted by the loop.

[Screenshot: insert loop timings while the ALTER runs]

ALTER starts, but I can still insert data in my table, and most importantly, the data is propagated to all nodes of the DC1 cluster.

No ALTER action on the other nodes.

.559
.502
.446
.529
.543
.553
.533
.602
.458  <---- end of the alter locally

Once the ALTER is complete on the local node (Primary), it is then executed (broadcast) on all the nodes participating in the cluster.

ERROR 1205 (HY000) at line 1: Lock wait timeout exceeded; try restarting transaction <--- waiting for handler commit; no INSERTs are allowed

But writes are suspended, waiting for:

37411 | root            | localhost          | windmills_s | Query            |    19 | Waiting for table metadata lock                                 | insert into windmills_test  select null,uuid,millid,kwatts_s,date,location,active,time,strrecordtype

And eventually, it will time out.

The other point is that any write hangs until the slowest node has applied the ALTER. It is important to note that all nodes, not only the PRIMARY, remain pending, waiting for the slow node: the slowest drives them all.

GR3:

11:01:28.649  48 system user windmills_s Query 171 altering table ALTER TABLE windmills_test ADD INDEX idx_1 (`uuid`,`active`), ALGORITHM=INPLACE, LOCK=NONE
11:01:29.674  48 system user windmills_s Query 172 waiting for handler commit ALTER TABLE windmills_test ADD INDEX idx_1 (`uuid`,`active`), ALGORITHM=INPLACE, LOCK=NONE

GR2:

Start 11:00:14.438  18 system user windmills_s Query 97 altering table ALTER TABLE windmills_test ADD INDEX idx_1 (`uuid`,`active`), ALGORITHM=INPLACE, LOCK=NONE
Ends 11:02:00.107  18 system user windmills_s Query 203 altering table ALTER TABLE windmills_test ADD INDEX idx_1 (`uuid`,`active`), ALGORITHM=INPLACE, LOCK=NONE

Finally, when the last node in the GR cluster has applied the ALTER, writes will resume, and the Replica node on DC2 will start its own ALTER operation on its PRIMARY first, then on the other nodes.

Summarizing:

  1. Writes are executed on Primary
  2. ALTER is executed on the Primary
    • DDL does not impact the write operation and respects the non-blocking directive.
  3. ALTER is completed on Primary and passed to all nodes
    • Meta lock is raised on nodes
  4. The whole cluster waits for the slowest node to complete
  5. When all is done in DC1, the action is replicated to DC2
    • Goto point 2

Conclusion

It seems that at the moment we have partial coverage of the online DDL feature when using Group Replication. Of course, having to wait for the SECONDARY nodes is better and less impactful than waiting for the PRIMARY first and then the SECONDARIES.

But it is confusing, given that I was expecting either full online coverage (I had explicitly asked for that in the DDL command) or a message telling me it cannot be executed online. Of course, I would prefer to have FULL online coverage. ;0)

Keep in mind that my setup was pretty standard and that changing group_replication_consistency does not affect the outcome. I am not sure I can classify this as a bug; it is more an unexpected, undesirable behavior.

Apr 05, 2021
--

Which Version of MySQL Should I Use for MyRocks?

As database footprints continue to explode, many companies are looking for ways to deal with such rapid growth.  One approach is to refactor traditional relational databases to fit into a NoSQL engine, where horizontal scalability is easier.  However, in many cases, this is in no way a trivial undertaking.

Another approach that has been gaining interest is the use of MyRocks as an alternative storage engine to the traditional InnoDB.  While not for everyone, in certain use cases it could be a potential solution.  As with so many things open source, the next standard questions are: Which version should I use?  Are there any differences in the engine if I use MyRocks with MySQL 5.7 vs. 8.0?

In this post, I wanted to touch on this and give some high-level thoughts on MyRocks when it comes to the version of MySQL.

Engine Overview

At a high level, RocksDB is an embedded key-value database with data stored in a log-structured merge tree (LSM).  MyRocks is an abstraction layer that allows RocksDB to serve as a MySQL storage engine.  With RocksDB as the underlying storage layer, there are numerous advantages including faster replication, better compression, and faster data loading.

In contrast, InnoDB is a B-tree-based structure.  MySQL was designed using a plugin architecture that separates the storage engine logic from the main server functionality.  This allows users to choose which storage engine they want based on their use case.  Historically, this was the MyISAM storage engine.  In recent years, InnoDB replaced MyISAM as the de facto standard and later the default engine.
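
Thanks to this plugin architecture, trying MyRocks does not require a separate server.  In Percona Server for MySQL, for instance, the engine can be enabled with the ps-admin helper script and then selected per table (a sketch; the table definition is illustrative):

sudo ps-admin --enable-rocksdb -u root -p
mysql -u root -p -e "CREATE TABLE test.t1 (id INT PRIMARY KEY, val VARCHAR(64)) ENGINE=ROCKSDB;"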

Which Engine is Better?

Choosing which engine to use is very use case-specific.  Even at Facebook, where MyRocks was first developed, it was not meant to be a universal replacement for InnoDB.  Features like gap locks remain absent from MyRocks.  If a workload is dependent on gap locking, it will be a non-starter with the MyRocks engine.

Also, data access patterns should dictate the engine choice.  If the workload is write-intensive with limited range scans, it may be a good fit for MyRocks.  If you have a traditional OLTP workload with several reporting access patterns, InnoDB would remain a better option.  As mentioned in MyRocks Engine: Things to Know Before You Start:

“MyRocks is not an enhanced InnoDB, nor a one-size-fits-all replacement for InnoDB. It has its own pros/cons just like InnoDB. You need to decide which engine to use based on your application’s data access patterns.”

Which Version of MySQL Should I Use?

Assuming you have done your research and found MyRocks would be a good fit, the next choice is which version to use.  Facebook runs MyRocks on a heavily patched, internal version of MySQL 5.6.  Percona Server for MySQL includes MyRocks in both 5.7 and 8.0.  So what are the differences?

At a high level, the MyRocks/RocksDB code is essentially the same in all three versions.  Percona Server for MySQL uses the upstream version of RocksDB and only changes the linked version when Facebook MySQL updates it.  This is because it inherits the integration testing between RocksDB and MyRocks from Facebook.

As such, the biggest differences are based solely on the server version and not the server version and MyRocks combination.  If your infrastructure is already running with MySQL 5.7 in production and not ready to finalize a move to 8.0, there would be no discernible difference running the MyRocks engine with your current version.  MyRocks running on Percona Server for MySQL 5.7 is stable and shouldn’t be discarded as an option.  You should still plan to upgrade to 8.0, but there shouldn’t be any unique challenges in the process compared to using InnoDB.  It would just require the standard upgrade process and significant testing.

Moving forward (as 5.7 approaches EOL in 2023), you can expect to see fewer enhancements with MyRocks on Percona Server for MySQL 5.7.  Most new active development will be done against 8.0 while 5.7 will remain primarily in extended support, which includes critical bug fixes and security fixes only.

Conclusion

With all the version combinations and options, it can be overwhelming to pick the best option for your organization.  For a new deployment, I would recommend starting with MyRocks on MySQL 8.0.  With an existing MySQL 5.7 deployment (while you should be actively working towards an upgrade to 8.0), MyRocks is a viable and stable option as well.  The biggest driver for MyRocks should be space concerns combined with the workload.  As nice as it would be to say MyRocks is the silver bullet that works for everything, that just isn’t the case.  You can always reach out to the Professional Services team at Percona and let us help you determine if MyRocks would be a good fit for your team!
