Jul 11, 2022

Percona Operator for MySQL Supports Group Replication

There are two Operators at Percona to deploy MySQL on Kubernetes: one based on Percona XtraDB Cluster and another based on Percona Server for MySQL.

We wrote a blog post in the past explaining the thought process and reasoning behind creating the new Operator for MySQL. The goal for us is to provide production-grade solutions to run MySQL in Kubernetes and support various replication configurations:

  • Synchronous replication
    • with Percona XtraDB Cluster
    • with Group Replication
  • Asynchronous replication

With the latest 0.2.0 release of Percona Operator for MySQL (based on Percona Server for MySQL), we have added Group Replication support. In this blog post, we will briefly review the design of our implementation and see how to set it up. 

Design

This is a high-level design of running MySQL cluster with Group Replication:

MySQL cluster with Group Replication

MySQL Router acts as an entry point for all requests and routes the traffic to the nodes. 

This is a deeper look at how the Operator deploys these components in Kubernetes:

Kubernetes deployment

Going from right to left:

  1. StatefulSet to deploy a cluster of MySQL nodes with Group Replication configured. Each node has its storage attached to it.
  2. Deployment object for stateless MySQL Router. 
  3. The Deployment is exposed with a Service. We use various TCP ports here:
    1. MySQL Protocol ports
      1. 6446 – read/write, routing traffic to Primary node
      2. 6447 – read-only, load-balancing the traffic across Replicas 
    2. MySQL X Protocol ports, which can be useful for CRUD operations (e.g., asynchronous calls). They follow the same logic:
      1. 6448 – read/write
      2. 6449 – read-only 
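To make the port semantics concrete, here is a quick check you can run once the cluster from the demo below is up (ROUTER_HOST is a placeholder; substitute the endpoint of your router Service). A connection to 6446 always lands on the Primary, while new connections to 6447 are typically balanced across the Replicas:

mysql -u root -p -h ROUTER_HOST -P 6446 -e 'SELECT @@hostname;'   # always the Primary
mysql -u root -p -h ROUTER_HOST -P 6447 -e 'SELECT @@hostname;'   # run a few times: different Replicas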

Action

Prerequisites: you need a Kubernetes cluster. Minikube would do.
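If you go the Minikube route, something like this should give the cluster enough headroom for three MySQL Pods plus the router (the sizing is only a rough suggestion; adjust it to what your machine can spare):

minikube start --cpus=4 --memory=8g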

The files used in this blog post can be found in this GitHub repo.

Deploy the Operator

kubectl apply --server-side -f https://raw.githubusercontent.com/spron-in/blog-data/master/ps-operator-gr-demo/bundle.yaml

Note the --server-side flag; without it you will get the following error:

The CustomResourceDefinition "perconaservermysqls.ps.percona.com" is invalid: metadata.annotations: Too long: must have at most 262144 bytes

Our Operator follows the OpenAPIv3 schema to provide proper validation. This unfortunately increases the size of our Custom Resource Definition manifest and, as a result, requires the --server-side flag.
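Before moving on, it is worth checking that the Operator came up; you should see a single Operator Pod from bundle.yaml in the Running state:

$ kubectl get pods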

Deploy the Cluster

We are ready to deploy the cluster now:

kubectl apply -f https://raw.githubusercontent.com/spron-in/blog-data/master/ps-operator-gr-demo/cr.yaml

I created this Custom Resource manifest specifically for this demo. The important variables to note are:

  1. Line 10: clusterType: group-replication instructs the Operator that this is going to be a cluster with Group Replication.

  2. Lines 31-47 are all about MySQL Router. Once Group Replication is enabled, the Operator will automatically deploy the router.
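You can double-check that the manifest you applied really requests Group Replication by reading the field back from the API server (the cluster is named my-cluster in the demo manifest):

$ kubectl get ps my-cluster -o yaml | grep -i clusterType
    clusterType: group-replication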

Get the status

The best way to see if the cluster is ready is to check the Custom Resource state:

$ kubectl get ps
NAME         REPLICATION         ENDPOINT        STATE   AGE
my-cluster   group-replication   35.223.42.238   ready   18m

As you can see, it is ready. You can also see initializing if the cluster is not ready yet, or error if something went wrong.
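If you want to follow the progress while the cluster is still initializing, the same command with the watch flag will print state changes as they happen:

$ kubectl get ps my-cluster -w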

In the same output you can also see the endpoint you can connect to. In our case, it is the public IP address of the load balancer. As described in the design section above, there are multiple ports exposed:

$ kubectl get service my-cluster-router
NAME                TYPE           CLUSTER-IP    EXTERNAL-IP     PORT(S)                                                       AGE
my-cluster-router   LoadBalancer   10.20.22.90   35.223.42.238   6446:30852/TCP,6447:31694/TCP,6448:31515/TCP,6449:31686/TCP   18h
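If your environment has no external load balancer (on Minikube, for example, the EXTERNAL-IP will stay pending by default), you can reach the router through a port-forward instead and then connect to 127.0.0.1 in the commands below:

$ kubectl port-forward svc/my-cluster-router 6446:6446 6447:6447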

Connect to the Cluster

To connect, we first need a user. By default, there is a root user with a randomly generated password. The password is stored in a Secret object. You can always fetch the password with the following command:

$ kubectl get secrets my-cluster-secrets -ojson | jq -r .data.root | base64 -d
SomeRandomPassword
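If you plan to script the next steps, it can be handy to capture the password in a shell variable instead of typing it at each prompt (purely a convenience; the interactive -p prompt works just as well):

$ ROOT_PASS=$(kubectl get secrets my-cluster-secrets -ojson | jq -r .data.root | base64 -d)
$ mysql -u root -p"$ROOT_PASS" -h 35.223.42.238 --port 6446 -e 'SELECT 1;'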

I’m going to use port 6446, which would grant me read/write access and lead me directly to the Primary node through MySQL Router:

mysql -u root -p -h 35.223.42.238 --port 6446
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 156329
Server version: 8.0.28-19 Percona Server (GPL), Release 19, Revision 31e88966cd3

Copyright (c) 2000, 2022, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

Group Replication in action

Let’s see if Group Replication really works. 

List the members of the cluster by running the following command:

$ mysql -u root -p -P 6446 -h 35.223.42.238 -e 'SELECT member_host, member_state, member_role FROM performance_schema.replication_group_members;'
+-------------------------------------------+--------------+-------------+
| member_host                               | member_state | member_role |
+-------------------------------------------+--------------+-------------+
| my-cluster-mysql-0.my-cluster-mysql.mysql | ONLINE       | PRIMARY     |
| my-cluster-mysql-1.my-cluster-mysql.mysql | ONLINE       | SECONDARY   |
| my-cluster-mysql-2.my-cluster-mysql.mysql | ONLINE       | SECONDARY   |
+-------------------------------------------+--------------+-------------+

Now we will delete one Pod (MySQL node), which also happens to have a Primary role, and see what happens:

$ kubectl delete pod my-cluster-mysql-0
pod "my-cluster-mysql-0" deleted


$ mysql -u root -p -P 6446 -h 35.223.42.238 -e 'SELECT member_host, member_state, member_role FROM performance_schema.replication_group_members;'
+-------------------------------------------+--------------+-------------+
| member_host                               | member_state | member_role |
+-------------------------------------------+--------------+-------------+
| my-cluster-mysql-1.my-cluster-mysql.mysql | ONLINE       | PRIMARY     |
| my-cluster-mysql-2.my-cluster-mysql.mysql | ONLINE       | SECONDARY   |
+-------------------------------------------+--------------+-------------+

One node is gone, as expected, and the my-cluster-mysql-1 node was promoted to the Primary role. I’m still using port 6446 and the same host to connect to the database, which indicates that MySQL Router is doing its job.

After some time Kubernetes will recreate the Pod and the node will join the cluster again automatically:

$ mysql -u root -p -P 6446 -h 35.223.42.238 -e 'SELECT member_host, member_state, member_role FROM performance_schema.replication_group_members;'
+-------------------------------------------+--------------+-------------+
| member_host                               | member_state | member_role |
+-------------------------------------------+--------------+-------------+
| my-cluster-mysql-0.my-cluster-mysql.mysql | RECOVERING   | SECONDARY   |
| my-cluster-mysql-1.my-cluster-mysql.mysql | ONLINE       | PRIMARY     |
| my-cluster-mysql-2.my-cluster-mysql.mysql | ONLINE       | SECONDARY   |
+-------------------------------------------+--------------+-------------+

The recovery phase might take some time, depending on the data size and the amount of changes, but eventually it will come back ONLINE:

$ mysql -u root -p -P 6446 -h 35.223.42.238 -e 'SELECT member_host, member_state, member_role FROM performance_schema.replication_group_members;'
+-------------------------------------------+--------------+-------------+
| member_host                               | member_state | member_role |
+-------------------------------------------+--------------+-------------+
| my-cluster-mysql-0.my-cluster-mysql.mysql | ONLINE       | SECONDARY   |
| my-cluster-mysql-1.my-cluster-mysql.mysql | ONLINE       | PRIMARY     |
| my-cluster-mysql-2.my-cluster-mysql.mysql | ONLINE       | SECONDARY   |
+-------------------------------------------+--------------+-------------+
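As a final check, you can verify that a write made through the read/write port becomes visible through the read-only port, confirming that changes on the Primary reach the Replicas. This creates a small throwaway database (gr_demo is just a name picked for this test), so only run it against a demo cluster:

$ mysql -u root -p -P 6446 -h 35.223.42.238 -e 'CREATE DATABASE gr_demo; CREATE TABLE gr_demo.t (id INT PRIMARY KEY); INSERT INTO gr_demo.t VALUES (1);'
$ mysql -u root -p -P 6447 -h 35.223.42.238 -e 'SELECT id FROM gr_demo.t;'
+----+
| id |
+----+
|  1 |
+----+

On an idle demo cluster the row shows up right away; under load the Replicas can lag behind the Primary for a moment.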

What’s coming up next?

Some exciting capabilities and features that we are going to ship pretty soon:

  • Backup and restore support for clusters with Group Replication
    • We have backups and restores in the Operator, but they currently do not work with Group Replication.
  • Monitoring of MySQL Router in Percona Monitoring and Management (PMM)
    • Even though the Operator integrates nicely with PMM, it can currently monitor only the MySQL nodes, not MySQL Router.
  • Automated Upgrades of MySQL and database components in the Operator
    • We have this in all our other Operators, so it is only logical to add it here.

Percona is an open source company and we value our community and contributors. You are greatly encouraged to contribute to Percona Software. Please read our Contributions guide and visit our community webpage.

Aug 08, 2016

Docker Images for MySQL Group Replication 5.7.14

In this post, I will point you to Docker images for MySQL Group Replication testing.

There is a new release of the MySQL Group Replication plugin for MySQL 5.7.14. It’s a “beta” plugin, and it is probably the last (or at least one of the final) pre-release packages before Group Replication goes GA (our best guess is during Oracle OpenWorld 2016).

Since it is close to GA, it would be great to get a better understanding of this new technology. Unfortunately, the MySQL Group Replication installation process isn’t very user-friendly.

Or, to put it another way, it is totally un-user-friendly! It consists of a mere “50 easy steps”, by which I think they mean “easy” to mess up.

Matt Lord, in his post http://mysqlhighavailability.com/mysql-group-replication-a-quick-start-guide/, acknowledges: “getting a working MySQL service consisting of 3 Group Replication members is not an easy “point and click” or automated single command style operation.”

I’m not providing a review of MySQL Group Replication 5.7.14 yet – I need to play around with it a lot more. To make this process easier for myself, and hopefully more helpful to you, I’ve prepared Docker images for the testing of MySQL Group Replication.

Docker Images

To start the first node, run:

docker run -d --net=cluster1 --name=node1  perconalab/mysql-group-replication --group_replication_bootstrap_group=ON

To join each subsequent node:

docker run -d --net=cluster1 --name=node2  perconalab/mysql-group-replication --group_replication_group_seeds='node1:6606'

Of course, you first need to create the Docker network:

docker network create cluster1
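Putting the commands in the order you would actually run them, and adding a third node plus a quick membership check, the whole test sequence looks roughly like this (the last line assumes the image lets root in from inside the container without a password; adjust the credentials if your setup differs):

docker network create cluster1
docker run -d --net=cluster1 --name=node1 perconalab/mysql-group-replication --group_replication_bootstrap_group=ON
docker run -d --net=cluster1 --name=node2 perconalab/mysql-group-replication --group_replication_group_seeds='node1:6606'
docker run -d --net=cluster1 --name=node3 perconalab/mysql-group-replication --group_replication_group_seeds='node1:6606'
docker exec -it node1 mysql -uroot -e "SELECT member_host, member_state FROM performance_schema.replication_group_members;"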

I hope this will make the testing process easier!
