In this blog post, I will show how to install the MySQL-compatible Percona XtraDB Cluster (PXC) Operator on Minikube as well as perform some basic actions. I am by no means a Kubernetes expert and this blog post is the result of my explorations preparing for a local MySQL Meetup, so if you have some comments or suggestions on how to do things better, please comment away!
For my experiments, I used Minikube version 1.26 with the docker driver in the most basic installation on Ubuntu 22.04 LTS, though it should work with other combinations, too. You can also find the official “Running Percona XtraDB Cluster on Minikube” documentation here.
You will also need kubectl installed for this tutorial to work. Alternatively, you can use “minikube kubectl” instead of “kubectl” in the examples.
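If you go the “minikube kubectl” route, a shell alias saves some typing; a minimal sketch, assuming bash:

# make plain "kubectl" call the kubectl bundled with Minikube
alias kubectl="minikube kubectl --"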
You will also need the MySQL client, “jq”, and sysbench utilities installed.
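On Ubuntu 22.04, one way to install these is via apt; a sketch (the package names are my assumption and may differ on other distributions):

# install the MySQL command line client, jq, and sysbench
sudo apt-get install -y mysql-client jq sysbench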
I also made the commands and YAML configurations I’m using throughout this tutorial available in the minikube-pxc-tutorial GitHub repository.
Enabling Metrics Server in Minikube
As we may want to look into resource usage in some of our experiments, we can enable the Metrics Server add-on, which is done by running:
minikube addons enable metrics-server
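Once the add-on is up, a quick way to confirm that metrics collection works is to ask for per-node usage; if the Metrics Server is running, this should return CPU and memory figures rather than an error:

kubectl top nodes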
Getting basic MySQL up and running on Kubernetes
If you use Percona XtraDB Cluster Operator for your MySQL deployment on Kubernetes, the first thing you need to do is install the operator:
kubectl apply -f https://raw.githubusercontent.com/percona/percona-xtradb-cluster-operator/v1.11.0/deploy/bundle.yaml
Next, we can create our cluster which we’ll call “minimal-cluster”:
kubectl apply -f 0-cr-minimal.yaml
Note that the completion of this command does not mean the cluster is provisioned and ready for operation, but rather that the process has started; it may take a bit of time to complete. You can verify that the cluster is provisioned and ready by running:
# watch -n1 -d kubectl get pods
NAME                                               READY   STATUS    RESTARTS   AGE
minimal-cluster-haproxy-0                          2/2     Running   0          3m1s
minimal-cluster-pxc-0                              3/3     Running   0          3m1s
percona-xtradb-cluster-operator-79949dc46d-tdslj   1/1     Running   0          3m55s
You can see three pods running: the operator pod, the Percona XtraDB Cluster node (pxc-0), and the HAProxy node (haproxy-0), which is used to provide high availability for “real” clusters with more than one cluster node in operation.
If you do not want to experiment with high availability options, such a single node deployment is all you need for basic development tasks.
Percona Operator for MySQL Custom Resource Manifest explained
Before we go further, let’s look at the YAML file we just deployed to check what’s in it. For a complete list of supported options, check out the documentation:
apiVersion: pxc.percona.com/v1-11-0
kind: PerconaXtraDBCluster
metadata:
  name: minimal-cluster
This resource definition corresponds to Percona XtraDB Cluster Operator version 1.11. Our cluster will be named minimal-cluster.
spec:
  crVersion: 1.11.0
  secretsName: minimal-cluster-secrets
Once again, we specify the version of the custom resource definition and the name of the “secrets” resource this cluster will use. It is derived from the cluster name for convenience but could actually be anything.
allowUnsafeConfigurations: true
We are deploying a single-node cluster (at first), which is considered an unsafe configuration, so one needs to allow unsafe configurations for such a deployment to succeed. For production deployments, you should not allow unsafe configurations unless you really know what you’re doing.
upgradeOptions:
  apply: 8.0-recommended
  schedule: "0 4 * * *"
This section configures the operator to check for updates every day at 4 AM and to apply an upgrade to the recommended version if one is available.
pxc:
  size: 1
  image: percona/percona-xtradb-cluster:8.0.27-18.1
  volumeSpec:
    persistentVolumeClaim:
      resources:
        requests:
          storage: 6G
The pxc section describes the main container of this pod, the cluster node itself. We need to store data on a persistent volume, so we define a persistentVolumeClaim here, using the default storage class and asking for 6GB of storage. The size parameter defines how many nodes to provision, and image specifies which particular image of Percona XtraDB Cluster to deploy.
haproxy:
  enabled: true
  size: 1
  image: percona/percona-xtradb-cluster-operator:1.11.0-haproxy
Deploy HAProxy to manage the high availability of the cluster. We only deploy one for testing purposes. In production, though, you need at least two to ensure you do not have a single point of failure.
logcollector:
  enabled: true
  image: percona/percona-xtradb-cluster-operator:1.11.0-logcollector
Collect logs from the Percona XtraDB Cluster container, so they are persisted even if that container is destroyed.
Accessing the MySQL server you provisioned
But how do we access our newly provisioned MySQL/Percona XtraDB Cluster instance?
By default, for security purposes, the instance is not accessible outside of the Kubernetes cluster, which in our case means outside of the Minikube environment. If we want to reach it from the outside, we need to expose it:
kubectl expose service minimal-cluster-haproxy --type=NodePort --port=3306 --name=mysql-test
We can see the result of this command by running:
# kubectl get service
NAME                               TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                                 AGE
kubernetes                         ClusterIP   10.96.0.1        <none>        443/TCP                                 34m
minimal-cluster-haproxy            ClusterIP   10.110.6.3       <none>        3306/TCP,3309/TCP,33062/TCP,33060/TCP   11m
minimal-cluster-haproxy-replicas   ClusterIP   10.100.229.205   <none>        3306/TCP                                11m
minimal-cluster-pxc                ClusterIP   None             <none>        3306/TCP,33062/TCP,33060/TCP            11m
minimal-cluster-pxc-unready        ClusterIP   None             <none>        3306/TCP,33062/TCP,33060/TCP            11m
mysql-test                         NodePort    10.108.103.151   <none>        3306:30504/TCP                          24s
percona-xtradb-cluster-operator    ClusterIP   10.101.38.244    <none>        443/TCP                                 12m
Unlike the rest of the services, which are only accessible inside the Kubernetes cluster (hence the ClusterIP type), our “mysql-test” service has the NodePort type, which means it is exposed on a port on the local node. In our case, port 3306, the standard port MySQL listens on, is mapped to port 30504.
Note that the IP 10.108.103.151 mentioned here is an internal cluster IP and is not accessible from outside. To find out which IP you should use, you can run:
# minikube service list
|-------------|----------------------------------|--------------|---------------------------|
|  NAMESPACE  |               NAME               | TARGET PORT  |            URL            |
|-------------|----------------------------------|--------------|---------------------------|
| default     | kubernetes                       | No node port |                           |
| default     | minimal-cluster-haproxy          | No node port |                           |
| default     | minimal-cluster-haproxy-replicas | No node port |                           |
| default     | minimal-cluster-pxc              | No node port |                           |
| default     | minimal-cluster-pxc-unready      | No node port |                           |
| default     | mysql-test                       | 3306         | http://192.168.49.2:30504 |
| default     | percona-xtradb-cluster-operator  | No node port |                           |
| kube-system | kube-dns                         | No node port |                           |
| kube-system | metrics-server                   | No node port |                           |
|-------------|----------------------------------|--------------|---------------------------|
Even though this is not an HTTP protocol service, the URL is quite helpful in showing us the IP and port we can use to access MySQL.
As an alternative to running the “kubectl expose service” command, you can set haproxy.serviceType: NodePort in the manifest, as shown below.
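A sketch of what that looks like in the manifest (the same haproxy section we saw earlier, with the extra field):

haproxy:
  enabled: true
  size: 1
  image: percona/percona-xtradb-cluster-operator:1.11.0-haproxy
  serviceType: NodePort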
Next, we need the MySQL password! To keep things secure, Percona Operator for MySQL does not have a default password but instead generates a unique secure password which is stored in Kubernetes Secrets. In fact, there are two Secrets created: one for internal use and another intended to be accessible by the user – this is the one we will use:
# kubectl get secrets
NAME                       TYPE     DATA   AGE
internal-minimal-cluster   Opaque   7      94m
minimal-cluster-secrets    Opaque   7      94m
Let’s look deeper into the minimal-cluster-secrets Secret:
# kubectl get secret minimal-cluster-secrets -o yaml
apiVersion: v1
data:
  clustercheck: a3VMZktDUUhLd1JjM3BJaw==
  monitor: ZTdoREJoQXNhdDRaWlB4eg==
  operator: SnZySWttRWFNNDRreWlvVlR5aA==
  proxyadmin: ZVcxTGpIQXNLT0NjemhxVEV3
  replication: OTEzY09mUnN2MjM1d3RlNUdOUA==
  root: bm1oemtOaDVRakxBcG05OUdQTw==
  xtrabackup: RHNTNkExYlhqNDkxZjdvY3k=
kind: Secret
metadata:
  creationTimestamp: "2022-07-07T18:33:03Z"
  name: minimal-cluster-secrets
  namespace: default
  resourceVersion: "1538"
  uid: 3ab006f1-01e2-41a7-b9f5-465aecb38a69
type: Opaque
We can see Percona Operator for MySQL has a number of users created, and it stores their passwords in Kubernetes Secrets. Note that those values are base64 encoded.
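If you want to decode all of them at once, a sketch using jq (the @base64d filter requires jq 1.6 or newer):

# print every user's password in clear text
kubectl get secret minimal-cluster-secrets -o json | jq '.data | map_values(@base64d)'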
Let’s now get all the MySQL connection information we discussed and store it in various variables:
export MYSQL_HOST=$(kubectl get nodes -o jsonpath="{.items[0].status.addresses[0].address}")
export MYSQL_TCP_PORT=$(kubectl get services/mysql-test -o go-template='{{(index .spec.ports 0).nodePort}}')
export MYSQL_PWD=$(kubectl get secrets minimal-cluster-secrets -ojson | jq -r .data.root | base64 -d)
We’re using these specific variable names because the “mysql” command line client (but not mysqladmin) will read them by default, making it quite convenient.
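For tools that do not read these variables, you can always pass the connection details explicitly; for example, a sketch with mysqladmin:

# pass host, port, and password explicitly instead of relying on the environment
mysqladmin -h "$MYSQL_HOST" -P "$MYSQL_TCP_PORT" -u root -p"$MYSQL_PWD" status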
Let’s check if MySQL is accessible using those credentials:
# mysql -e "select 1"
+---+
| 1 |
+---+
| 1 |
+---+
Works like a charm!
Running Sysbench on MySQL on Kubernetes
Now that we know how to access MySQL, let’s create a test database and load some data:
# mysql -e "create database sbtest"
# sysbench --db-driver=mysql --threads=4 --mysql-host=$MYSQL_HOST --mysql-port=$MYSQL_TCP_PORT --mysql-user=root --mysql-password=$MYSQL_PWD --mysql-db=sbtest /usr/share/sysbench/oltp_point_select.lua --report-interval=1 --table-size=1000000 prepare
sysbench 1.0.20 (using system LuaJIT 2.1.0-beta3)

Initializing worker threads...

Creating table 'sbtest1'...
Inserting 1000000 records into 'sbtest1'
Creating a secondary index on 'sbtest1'...
And now we can run a test:
# sysbench --db-driver=mysql --threads=8 --mysql-host=$MYSQL_HOST --mysql-port=$MYSQL_TCP_PORT --mysql-user=root --mysql-password=$MYSQL_PWD --mysql-db=sbtest /usr/share/sysbench/oltp_point_select.lua --report-interval=1 --table-size=1000000 --time=60 run
Running the test with following options:
Number of threads: 8
Report intermediate results every 1 second(s)
Initializing random number generator from current time

Initializing worker threads...

Threads started!

[ 1s ] thds: 8 tps: 27299.11 qps: 27299.11 (r/w/o: 27299.11/0.00/0.00) lat (ms,95%): 0.52 err/s: 0.00 reconn/s: 0.00
[ 2s ] thds: 8 tps: 29983.13 qps: 29983.13 (r/w/o: 29983.13/0.00/0.00) lat (ms,95%): 0.46 err/s: 0.00 reconn/s: 0.00
…
[ 60s ] thds: 8 tps: 30375.00 qps: 30375.00 (r/w/o: 30375.00/0.00/0.00) lat (ms,95%): 0.42 err/s: 0.00 reconn/s: 0.00

SQL statistics:
    queries performed:
        read:                            1807737
        write:                           0
        other:                           0
        total:                           1807737
    transactions:                        1807737 (30127.61 per sec.)
    queries:                             1807737 (30127.61 per sec.)
    ignored errors:                      0      (0.00 per sec.)
    reconnects:                          0      (0.00 per sec.)

General statistics:
    total time:                          60.0020s
    total number of events:              1807737

Latency (ms):
         min:                                    0.10
         avg:                                    0.26
         max:                                   30.91
         95th percentile:                        0.42
         sum:                               478882.64

Threads fairness:
    events (avg/stddev):           225967.1250/1916.69
    execution time (avg/stddev):   59.8603/0.00
Great, we can see our little test cluster can run around 30,000 queries per second!
Let’s make MySQL on Kubernetes highly available!
Let’s see if we can convert our single-node Percona XtraDB Cluster into a “fake” highly available cluster (fake, because with Minikube running on a single node, we’re not going to be protected from actual hardware failures).
To achieve this we can simply scale our cluster to three nodes:
# kubectl scale --replicas=3 pxc/minimal-cluster
perconaxtradbcluster.pxc.percona.com/minimal-cluster scaled
As usual with Kubernetes, “scaled” here does not mean the cluster actually got scaled, but rather that a new desired state was configured and the operator started working to scale the cluster.
Wait a few minutes, though, and you will notice that one of the pods is stuck in a Pending state and does not look like it’s progressing:
# kubectl get pods
NAME                                               READY   STATUS    RESTARTS   AGE
minimal-cluster-haproxy-0                          2/2     Running   0          127m
minimal-cluster-pxc-0                              3/3     Running   0          127m
minimal-cluster-pxc-1                              0/3     Pending   0          3m31s
percona-xtradb-cluster-operator-79949dc46d-tdslj   1/1     Running   0          128m
To find out what’s going on we can describe the pod:
# kubectl describe pod minimal-cluster-pxc-1
…
Events:
  Type     Reason            Age    From               Message
  ----     ------            ----   ----               -------
  Warning  FailedScheduling  5m26s  default-scheduler  0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
  Warning  FailedScheduling  5m25s  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
  Warning  FailedScheduling  21s    default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
There is a lot of information in the output but for our purpose, it is the events section that is most important.
The warning message tells us that only one node is available, while anti-affinity rules prevent scheduling more pods on that node. This makes sense: in a real production environment, it would not be acceptable to schedule more than one Percona XtraDB Cluster pod on the same physical server, as this would end up being a single point of failure.
That’s for production, though; for testing, we can allow running multiple pods on the same physical machine, as one physical machine is all we have. So we modify the resource definition, disabling the anti-affinity protection:
pxc:
  size: 1
  image: percona/percona-xtradb-cluster:8.0.27-18.1
  affinity:
    antiAffinityTopologyKey: "none"
Let’s apply this modified configuration to the cluster. (You can find this and the other YAML files in the GitHub repository.)
# kubectl apply -f 1-affinity.yaml
perconaxtradbcluster.pxc.percona.com/minimal-cluster configured
And try scaling it again:
# kubectl scale --replicas=3 pxc/minimal-cluster
perconaxtradbcluster.pxc.percona.com/minimal-cluster scaled
Instead of repeatedly checking whether provisioning of all new nodes is complete, we can watch the pods progress through initialization and check if there are any problems:
# kubectl get po -l app.kubernetes.io/component=pxc --watch
NAME                    READY   STATUS            RESTARTS   AGE
minimal-cluster-pxc-0   3/3     Running           0          64s
minimal-cluster-pxc-1   0/3     Pending           0          0s
minimal-cluster-pxc-1   0/3     Pending           0          0s
minimal-cluster-pxc-1   0/3     Init:0/1          0          0s
minimal-cluster-pxc-1   0/3     PodInitializing   0          2s
minimal-cluster-pxc-1   2/3     Running           0          4s
minimal-cluster-pxc-1   3/3     Running           0          60s
minimal-cluster-pxc-2   0/3     Pending           0          0s
minimal-cluster-pxc-2   0/3     Pending           0          0s
minimal-cluster-pxc-2   0/3     Pending           0          2s
minimal-cluster-pxc-2   0/3     Init:0/1          0          2s
minimal-cluster-pxc-2   0/3     PodInitializing   0          3s
minimal-cluster-pxc-2   2/3     Running           0          6s
minimal-cluster-pxc-2   3/3     Running           0          62s
Great! We see two additional “pxc” pods were provisioned and running!
Setting up resource limits for your MySQL on Kubernetes
As of now, we have not specified any resource limits (besides persistent volume size) for our MySQL deployment. This means it will use all resources currently available on the host. While this might be the preferred way for a testing environment, in production you’re likely to be looking for both more predictable performance and more resource isolation, so that a single cluster can’t oversaturate the node and degrade the performance of other pods running on it.
To set resource limits, add an appropriate section to the custom resource definition:
resources:
  requests:
    memory: 1G
    cpu: 200m
  limits:
    memory: 1G
    cpu: 500m
This resource definition means we’re requesting at least 1GB of memory and 0.2 of a CPU core, and limiting the resources this instance can use to 0.5 of a CPU core and the same 1GB of memory.
Note that if the “requests” conditions can’t be satisfied, i.e. if the required amount of CPU or memory is not available in the cluster, the pod will not be scheduled, waiting for resources to become available (possibly forever).
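Before picking numbers, you may want to check how much CPU and memory has already been claimed on the node; a sketch (the node name “minikube” is the Minikube default):

# show the node's resource requests/limits summary
kubectl describe node minikube | grep -A 8 "Allocated resources"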
Let’s apply those limits to our cluster:
kubectl apply -f 2-small-limits.yaml
perconaxtradbcluster.pxc.percona.com/minimal-cluster configured
If we look at what happens to the pods after we apply this command, we see:
# kubectl get po -l app.kubernetes.io/component=pxc --watch
NAME                    READY   STATUS        RESTARTS   AGE
minimal-cluster-pxc-0   3/3     Running       0          3h52m
minimal-cluster-pxc-1   3/3     Running       0          3h49m
minimal-cluster-pxc-2   3/3     Terminating   0          3h48m
…
minimal-cluster-pxc-0   0/3     Init:0/1          0          1s
minimal-cluster-pxc-0   0/3     PodInitializing   0          2s
minimal-cluster-pxc-0   2/3     Running           0          4s
minimal-cluster-pxc-0   3/3     Running           0          30s
So changing resource limits requires a restart of the cluster pods.
If you set resource limits, the operator will automatically set some MySQL configuration parameters (for example, max_connections) according to the requests (the resources guaranteed to be available). If no resource constraints are specified, the operator does not perform any automatic configuration.
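You can observe this auto-tuning by checking max_connections after applying the limits; a quick check, using the connection variables we exported earlier:

mysql -e "SELECT @@max_connections"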
If we run the same Sysbench test again, we will see:
queries: 274076 (4563.18 per sec.)
A much more modest result than running with no limits.
We can validate actual CPU usage by the pods as we run benchmarks by running:
# watch kubectl top pods
NAME                                               CPU(cores)   MEMORY(bytes)
minimal-cluster-haproxy-0                          277m         15Mi
minimal-cluster-pxc-0                              500m         634Mi
minimal-cluster-pxc-1                              8m           438Mi
minimal-cluster-pxc-2                              8m           443Mi
percona-xtradb-cluster-operator-79949dc46d-tdslj   6m           23Mi
We can observe a few interesting things here:
- The limit for Percona XtraDB Cluster looks well respected.
- We can see only one of the Percona XtraDB Cluster nodes getting the load. This is expected, as HAProxy does not load balance queries among cluster nodes; it just ensures high availability. If you want to see queries load balanced, you can either use the separate port for load-balanced reads or deploy ProxySQL for intelligent query routing (see the sketch after this list).
- The HAProxy pod required about 50% of the resources of the Percona XtraDB Cluster pod (and this is a worst-case scenario of very simple in-memory queries), so do not underestimate its resource needs!
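For load-balanced reads, the minimal-cluster-haproxy-replicas service we saw in the “kubectl get service” output distributes connections among the cluster nodes; a sketch of exposing it the same way we exposed the main service (the mysql-test-replicas name is just an example):

kubectl expose service minimal-cluster-haproxy-replicas --type=NodePort --port=3306 --name=mysql-test-replicas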
Pausing and resuming MySQL on Kubernetes
If you have a MySQL instance that you use for testing/development, or otherwise do not need running all the time, on Linux you can just stop the MySQL process, or stop the MySQL container if you’re running Docker. How do you achieve the same in Kubernetes?
First, you need to be careful: if you delete the cluster instance, it will destroy both compute and storage resources, and you will lose all the data. Instead of deleting, you need to pause the cluster.
To pause the cluster, you can just add a pause option to the cluster custom resource definition:
pause: true
Let’s apply the CR with this option enabled and see what happens:
# kubectl apply -f 4-paused.yaml
perconaxtradbcluster.pxc.percona.com/minimal-cluster configured
After the pause process is complete, we will not see any cluster pods running, just the operator:
# kubectl get pod
NAME                                               READY   STATUS    RESTARTS   AGE
percona-xtradb-cluster-operator-79949dc46d-tdslj   1/1     Running   0          20h
However, you will still see the persistent volume claims, which hold the cluster data:
# kubectl get pvc
NAME                            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
datadir-minimal-cluster-pxc-0   Bound    pvc-866c58c9-65b8-4cb4-ba2c-7b904c0eecf0   6G         RWO            standard       95m
datadir-minimal-cluster-pxc-1   Bound    pvc-11f1ac22-caf0-4eb9-99a8-772167cf62bd   6G         RWO            standard       94m
datadir-minimal-cluster-pxc-2   Bound    pvc-c949cc2f-9b54-403d-ba32-a739abd5256d   6G         RWO            standard       93m
We can also see that the cluster is in a paused state with this command:
# kubectl get pxc
NAME              ENDPOINT                          STATUS   PXC   PROXYSQL   HAPROXY   AGE
minimal-cluster   minimal-cluster-haproxy.default   paused                              97m
We can unpause the cluster by applying the same CR with the value pause: false, and the cluster will be back online in a couple of minutes:
# kubectl apply -f 5-resumed.yaml
# kubectl get pxc
NAME              ENDPOINT                          STATUS   PXC   PROXYSQL   HAPROXY   AGE
minimal-cluster   minimal-cluster-haproxy.default   ready    3                1         100m
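As an alternative to keeping separate YAML files for the paused and resumed states, you could toggle the flag in place with kubectl patch; a sketch:

# flip spec.pause without editing any files
kubectl patch pxc minimal-cluster --type=merge -p '{"spec":{"pause":false}}'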
MySQL on Kubernetes backup (and restore)
If you care about your data you need backups, and Percona Operator for MySQL provides quite a few backup features including scheduled backups, point-in-time recovery, backing up to S3 compatible storage, and many others. In this walkthrough, though, we will look at the most simple backup and restore – taking a backup to persistent volume.
First, we need the cluster configured for backups: we need to configure the storage to which the backup will be performed, as well as provide an image for the container that will be responsible for managing backups:
backup:
  image: percona/percona-xtradb-cluster-operator:1.11.0-pxc8.0-backup
  storages:
    fs-pvc:
      type: filesystem
      volume:
        persistentVolumeClaim:
          accessModes: [ "ReadWriteOnce" ]
          resources:
            requests:
              storage: 6Gi
As before, we can apply a new configuration that includes this section:
kubectl apply -f 6-1-backup-config.yaml
While we can schedule backups to take place automatically, here we will focus on running backups manually. To do this, you can apply the following backup CR:
apiVersion: pxc.percona.com/v1
kind: PerconaXtraDBClusterBackup
metadata:
  name: backup1
spec:
  pxcCluster: minimal-cluster
  storageName: fs-pvc
It basically defines creating a backup named “backup1” for the cluster named “minimal-cluster” and storing it on the “fs-pvc” storage volume:
# kubectl apply -f 6-2-0-backup-exec.yaml
# kubectl get pxc-backups
NAME      CLUSTER           STORAGE   DESTINATION      STATUS      COMPLETED   AGE
backup1   minimal-cluster   fs-pvc    pvc/xb-backup1   Succeeded   2m14s       2m42s
Once the backup job has completed successfully, we can see its Succeeded status in the output above.
Let’s now mess up our database before attempting to restore:
mysql -e "drop database sbtest";
The restore happens similarly to the backup, by running a restore job on the cluster with the following configuration:
apiVersion: "pxc.percona.com/v1" kind: "PerconaXtraDBClusterRestore" metadata: name: "restore1" spec: pxcCluster: "minimal-cluster" backupName: "backup1"
We can do this by applying the configuration and checking its status:
# kubectl apply -f 6-3-1-restore-exec.yaml
# kubectl get pxc-restore
NAME       CLUSTER           STATUS      COMPLETED   AGE
restore1   minimal-cluster   Restoring               61s
What if you mess up the database again (not uncommon in a development environment) and want to restore the same backup again?
If you just run the restore again, it will not work:
# kubectl apply -f 6-3-1-restore-exec.yaml
perconaxtradbclusterrestore.pxc.percona.com/restore1 unchanged
Because a job with the same name already exists in the system, you can either create a restore job with a different name or delete the restore object and run it again:
# kubectl delete pxc-restore restore1
perconaxtradbclusterrestore.pxc.percona.com "restore1" deleted
# kubectl apply -f 6-3-1-restore-exec.yaml
perconaxtradbclusterrestore.pxc.percona.com/restore1 created
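If you would rather keep the original restore object around, a sketch of the different-name route (the sed substitution and the restore2 name are purely illustrative):

# create a copy of the restore CR under a new name and apply it
sed 's/restore1/restore2/' 6-3-1-restore-exec.yaml | kubectl apply -f -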
Note that while deleting the restore job does not affect backup data or a running cluster, removing a “backup” object removes the data stored in that backup, too:
# kubectl delete pxc-backup backup1
perconaxtradbclusterbackup.pxc.percona.com "backup1" deleted
Monitoring your deployment with Percona Monitoring and Management (PMM)
Your MySQL deployment would not be complete without setting it up with monitoring, and in Percona Operator for MySQL, Percona Monitoring and Management (PMM) support is built in and can be enabled quite easily.
According to Kubernetes best practices, credentials to access PMM are not stored in the resource definition, but rather in secrets.
As recommended, we’re going to use a PMM API Key instead of a user name and password. We can generate one in the API Keys configuration section of the PMM web interface.
After we’ve created the API Key we can store it in our cluster’s secret:
# export PMM_API_KEY="<KEY>"
# kubectl patch secret/minimal-cluster-secrets -p '{"data":{"pmmserverkey": "'$(echo -n $PMM_API_KEY | base64 -w0)'"}}'
secret/minimal-cluster-secrets patched
Next, we need to add the PMM configuration section to the cluster custom resource, specifying the server where our PMM server is deployed:
pmm:
  enabled: true
  image: percona/pmm-client:2.28.0
  serverHost: pmm.server.host
  resources:
    requests:
      memory: 150M
      cpu: 300m
Apply the complete configuration:
kubectl apply -f 7-2-pmm.yaml
Once the deployment is complete, you should see the MySQL/PXC and HAProxy instance statistics in the Percona Monitoring and Management instance you specified.
Note, as of this writing (Percona Monitoring and Management 2.28), PMM is not fully Kubernetes aware, so it will report CPU and memory usage in the context of the node, not what has been made available for a specific pod.
Configuring MySQL on Kubernetes
So far we’ve been running MySQL (Percona XtraDB Cluster) with the default settings, and even though Percona Operator for MySQL automatically configures some of the options for optimal performance, chances are you will want to change some of the configuration options.
Percona Operator for MySQL provides multiple ways to change options, and we will focus on what I consider the most simple and practical – including them in the CR definition.
Simply add a section like this:
configuration: |
  [mysqld]
  max_connections=100
  innodb_log_file_size=128M
Its contents will be added to your MySQL configuration file, overriding any default values or any adjustments the operator made during deployment.
As usual, you can apply a new configuration to see those applied by running kubectl apply -f 8-config.yaml.
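To confirm the overrides took effect, a quick check (innodb_log_file_size is reported in bytes):

mysql -e "SELECT @@max_connections, @@innodb_log_file_size"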
You might wonder what happens if you misspell a configuration option name. The operator will try applying the new configuration to one of the nodes, which will fail, with that node repeatedly crashing and restarting; the cluster will remain available, but in a degraded state:
# kubectl get pods
NAME                                               READY   STATUS             RESTARTS      AGE
minimal-cluster-haproxy-0                          3/3     Running            0             56m
minimal-cluster-pxc-0                              4/4     Running            0             17m
minimal-cluster-pxc-1                              4/4     Running            0             28m
minimal-cluster-pxc-2                              3/4     CrashLoopBackOff   6 (85s ago)   11m
percona-xtradb-cluster-operator-79949dc46d-tdslj   1/1     Running            0             24h
restore-job-restore1-minimal-cluster-lp4jg         0/1     Completed          0             165m
To fix the issue, just correct the configuration error and reapply.
Accessing MySQL logs
To troubleshoot your MySQL on Kubernetes deployment, you may need to access the MySQL logs. Logs are provided in the “logs” container of each PXC node pod (more info in the documentation). To access them, you can use something like:
# kubectl logs minimal-cluster-pxc-0 -c logs --tail 5
{"log":"2022-07-08T19:08:36.664908Z 0 [Note] [MY-000000] [Galera] (70d46e6b-9729, 'tcp://0.0.0.0:4567') turning message relay requesting off\n","file":"/var/lib/mysql/mysqld-error.log"}
{"log":"2022-07-08T19:08:36.696000Z 0 [Note] [MY-000000] [Galera] Member 0.0 (minimal-cluster-pxc-2) requested state transfer from 'minimal-cluster-pxc-1,'. Selected 2.0 (minimal-cluster-pxc-1)(SYNCED) as donor.\n","file":"/var/lib/mysql/mysqld-error.log"}
{"log":"2022-07-08T19:08:37.000623Z 0 [Note] [MY-000000] [Galera] 2.0 (minimal-cluster-pxc-1): State transfer to 0.0 (minimal-cluster-pxc-2) complete.\n","file":"/var/lib/mysql/mysqld-error.log"}
{"log":"2022-07-08T19:08:37.000773Z 0 [Note] [MY-000000] [Galera] Member 2.0 (minimal-cluster-pxc-1) synced with group.\n","file":"/var/lib/mysql/mysqld-error.log"}
{"log":"2022-07-08T19:08:37.881614Z 0 [Note] [MY-000000] [Galera] 0.0 (minimal-cluster-pxc-2): State transfer from 2.0 (minimal-cluster-pxc-1) complete.\n","file":"/var/lib/mysql/mysqld-error.log"}
Note: as you can see, logs are provided in JSON format and can be processed with “jq” command, if needed.
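For example, a sketch of extracting just the message text from the JSON wrapper:

kubectl logs minimal-cluster-pxc-0 -c logs --tail 5 | jq -r .log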
Deleting the MySQL deployment
Deleting your deployed cluster is very easy, so remember that with great power comes great responsibility. Just one command will destroy the entire cluster, with all its data:
# kubectl delete pxc minimal-cluster
perconaxtradbcluster.pxc.percona.com "minimal-cluster" deleted
Note: this only applies to the PXC cluster; any backups taken from the cluster are considered separate objects and will not be deleted by this operation.
Protecting PVCs from deletion
This might NOT be good enough for your production environment, where you may want more protection for your data against accidental loss, and Percona Operator for MySQL allows for that. If the finalizers.delete-pxc-pvc finalizer is NOT configured, persistent volume claims (PVCs) are not deleted after cluster deletion, so re-creating a cluster with the same name will allow you to recover your data. If this is the route you take, remember to have your own process to manage PVCs; otherwise, you may see massive space usage from old clusters.
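For reference, a sketch of where this finalizer lives in the custom resource metadata (my reading of the operator defaults; including the delete-pxc-pvc line means PVCs are deleted along with the cluster, while omitting it preserves them):

metadata:
  name: minimal-cluster
  finalizers:
    - delete-pxc-pvc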
Summary
I hope this walkthrough was helpful in introducing you to the power of Percona Operator for MySQL and what it looks like to run MySQL on Kubernetes. I would encourage you not to stop at reading, but to deploy your own Minikube and play around with the scenarios mentioned in this blog. I’ve included all the scripts (and more) in the GitHub repository, and by practicing rather than reading alone, you can learn much faster!