Percona Toolkit 3.6.0 was released on June 12, 2024. The most important updates in this version are: the possibility to resume pt-online-schema-change if it is interrupted; eu-stack support in pt-pmp, which significantly improves the tool’s performance and decreases the load it causes on production servers; the new tool pt-eustack-resolver; and packages for Ubuntu 24.04 (Noble Numbat) […]
Testing LDAP Authentication and Authorization on Percona Operator for MongoDB
As of Percona Operator for MongoDB 1.12.0, the documentation includes instructions on how to configure LDAP authentication and authorization. It already contains an example of how to configure the operator if OpenLDAP is your LDAP server. Here is another example of setting it up, but using Samba as your LDAP server.
To simplify the installation and configuration, I will use Ubuntu Jammy 22.04 LTS since the distribution repository contains the packages to install Samba and Kubernetes.
This is the current configuration of the test server:
OS: Ubuntu Jammy 22.04 LTS
Hostname: samba.percona.local
IP Address: 192.168.0.101
Setting up Samba
Let’s install the packages required to set up Samba as a PDC, along with some troubleshooting tools:
$ sudo apt update
$ sudo apt -y upgrade
$ sudo apt -y install samba net-tools winbind ldap-utils
Stop and disable the smbd, winbind, and systemd-resolved services because we will reconfigure Samba as a PDC and DNS resolver. Also remove the current Samba configuration, /etc/samba/smb.conf.
$ sudo systemctl stop smbd
$ sudo systemctl stop systemd-resolved
$ sudo systemctl stop winbind
$ sudo systemctl disable smbd
$ sudo systemctl disable systemd-resolved
$ sudo systemctl disable winbind
$ sudo rm /etc/samba/smb.conf
Delete the /etc/resolv.conf symlink and replace its content with “nameserver 127.0.0.1” to use Samba’s DNS service:
$ sudo rm -f /etc/resolv.conf
$ echo -e "nameserver 127.0.0.1" | sudo tee /etc/resolv.conf
Create a domain environment with the following settings:
Realm: PERCONA.LOCAL
Domain: PERCONA
Administrator Password: PerconaLDAPTest2022
$ sudo samba-tool domain provision --realm percona.local --domain percona --adminpass=PerconaLDAPTest2022
Edit /etc/samba/smb.conf and set DNS forwarder to 8.8.8.8 to resolve other zones. We will also disable mandatory TLS authentication since Percona Operator does not support LDAP with TLS at the time of writing this article.
$ cat /etc/samba/smb.conf
# Global parameters
[global]
        dns forwarder = 8.8.8.8
        netbios name = SAMBA
        realm = PERCONA.LOCAL
        server role = active directory domain controller
        workgroup = PERCONA
        ldap server require strong auth = No

[sysvol]
        path = /var/lib/samba/sysvol
        read only = No

[netlogon]
        path = /var/lib/samba/sysvol/percona.local/scripts
        read only = No
Symlink the generated krb5.conf configuration into /etc:
$ sudo ln -s /var/lib/samba/private/krb5.conf /etc
Unmask samba-ad-dc service and start it. Ensure it will start at boot time.
$ sudo systemctl unmask samba-ad-dc
$ sudo systemctl start samba-ad-dc
$ sudo systemctl enable samba-ad-dc
Check that the Samba services are up and running and that DNS resolution works:
$ sudo netstat -tapn|grep samba
tcp   0  0 0.0.0.0:389   0.0.0.0:*   LISTEN   4376/samba: task[ld
tcp   0  0 0.0.0.0:53    0.0.0.0:*   LISTEN   4406/samba: task[dn
tcp   0  0 0.0.0.0:636   0.0.0.0:*   LISTEN   4376/samba: task[ld
tcp   0  0 0.0.0.0:135   0.0.0.0:*   LISTEN   4371/samba: task[rp
tcp6  0  0 :::389        :::*        LISTEN   4376/samba: task[ld
tcp6  0  0 :::53         :::*        LISTEN   4406/samba: task[dn
tcp6  0  0 :::636        :::*        LISTEN   4376/samba: task[ld
tcp6  0  0 :::135        :::*        LISTEN   4371/samba: task[rp

$ host google.com
google.com has address 172.217.194.101

$ host samba.percona.local
samba.percona.local has address 192.168.0.101
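Optionally, you can also confirm that the Active Directory service (SRV) records created during provisioning resolve correctly; both lookups below should point at samba.percona.local. This is just a quick sanity check using the same host utility shown above:

$ host -t SRV _ldap._tcp.percona.local
$ host -t SRV _kerberos._udp.percona.local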
Adding users and groups
Now that Samba is up and running, we can perform user and group management. We will create Samba users and groups and assign users to groups with samba-tool.
$ sudo samba-tool user add dbauser01 --surname=User01 --given-name=Dba --mail-address=dbauser01@percona.local DbaPassword1
$ sudo samba-tool user add devuser01 --surname=User01 --given-name=Dev --mail-address=devuser01@percona.local DevPassword1
$ sudo samba-tool user add searchuser01 --surname=User01 --given-name=Search --mail-address=searchuser01@percona.local SearchPassword1
$ sudo samba-tool group add developers
$ sudo samba-tool group add dbadmins
$ sudo samba-tool group addmembers developers devuser01
$ sudo samba-tool group addmembers dbadmins dbauser01
Use samba-tool again to view the details of the users and groups:
$ sudo samba-tool user show devuser01
dn: CN=Dev User01,CN=Users,DC=percona,DC=local
objectClass: person
objectClass: user
cn: Dev User01
sn: User01
givenName: Dev
name: Dev User01
sAMAccountName: devuser01
mail: devuser01@percona.local
memberOf: CN=developers,CN=Users,DC=percona,DC=local

$ sudo samba-tool group show dbadmins
dn: CN=dbadmins,CN=Users,DC=percona,DC=local
objectClass: group
cn: dbadmins
name: dbadmins
sAMAccountName: dbadmins
member: CN=Dba User01,CN=Users,DC=percona,DC=local
Searching with ldapsearch
Troubleshooting LDAP starts with being able to use the ldapsearch tool to specify the credentials and filters. Once you are successful with authentication and searching, it’s easier to plug the same or similar parameters used in ldapsearch in the configuration of the Percona operator. Here are some examples of useful ldapsearch commands:
1. Logging in as “CN=Dev User01,CN=Users,DC=percona,DC=local”. If authenticated, return the DN, First Name, Last Name, email and sAMAccountName for that record.
$ ldapsearch -LLL -W -x -H ldap://samba.percona.local -b "CN=Dev User01,CN=Users,DC=percona,DC=local" -D "CN=Dev User01,CN=Users,DC=percona,DC=local" "givenName" "sn" "mail" "sAMAccountName"
Enter LDAP Password:
dn: CN=Dev User01,CN=Users,DC=percona,DC=local
sn: User01
givenName: Dev
sAMAccountName: devuser01
mail: devuser01@percona.local
Essentially, without mapping, you will need to supply the username as the full DN to log in to MongoDB, e.g., mongo -u “CN=Dev User01,CN=Users,DC=percona,DC=local”.
2. Logging in as “CN=Search User01,CN=Users,DC=percona,DC=local” and looking for users in “DC=percona,dc=local” where sAMAccountName is “dbauser01”. If there’s a match, it will return the DN, First Name, Last Name, mail and sAMAccountName for that record.
$ ldapsearch -LLL -W -x -H ldap://samba.percona.local -b "DC=percona,dc=local" -D "CN=Search User01,CN=Users,DC=percona,DC=local" "(&(objectClass=person)(sAMAccountName=dbauser01))" "givenName" "sn" "mail" "sAMAccountName"
Enter LDAP Password:
dn: CN=Dba User01,CN=Users,DC=percona,DC=local
sn: User01
givenName: Dba
sAMAccountName: dbauser01
mail: dbauser01@percona.local
With mapping, you can authenticate by specifying the sAMAccountName or mail, depending on how the mapping is defined, e.g., mongo -u dbauser01 or mongo -u “dbauser01@percona.local”.
3. Logging in as “CN=Search User01,CN=Users,DC=percona,DC=local”, looking for groups in “DC=percona,dc=local” where “CN=Dev User01,CN=Users,DC=percona,DC=local” is a member. If there’s a match, it will return the DN and common name of the group.
$ ldapsearch -LLL -W -x -H ldap://samba.percona.local -b "DC=percona,dc=local" -D "CN=Search User01,CN=Users,DC=percona,DC=local" "(&(objectClass=group)(member=CN=Dev User01,CN=Users,DC=percona,DC=local))" "cn"
Enter LDAP Password:
dn: CN=developers,CN=Users,DC=percona,DC=local
cn: developers
This type of search is important for enumerating the groups of a user so that we can define the user’s privileges based on group membership.
Kubernetes installation and configuration
Now that LDAP authentication and the search filters are working, we are ready to test this with the Percona Operator. Since this is just for testing, we might as well use the same server to deploy Kubernetes. In this example, we will use MicroK8s.
$ sudo snap install microk8s --classic
$ sudo usermod -a -G microk8s $USER
$ sudo chown -f -R $USER ~/.kube
$ newgrp microk8s
$ microk8s status --wait-ready
$ microk8s enable dns
$ microk8s enable hostpath-storage
$ alias kubectl='microk8s kubectl'
Once installed, check the system pods and make sure all of them are running before continuing to the next step:
$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-node-bj9c4                          1/1     Running   0          3m12s
kube-system   coredns-66bcf65bb8-l9hwb                   1/1     Running   0          65s
kube-system   calico-kube-controllers-644d5c79cb-fhhkc   1/1     Running   0          3m11s
kube-system   hostpath-provisioner-85ccc46f96-qmjrq      1/1     Running   0          3m
Deploying the Percona Operator for MongoDB
Now that Kubernetes is running, we can download the Percona Operator for MongoDB. Let’s download version 1.13.0 with git:
$ git clone -b v1.13.0 https://github.com/percona/percona-server-mongodb-operator
Then let’s go to the deploy directory and apply bundle.yaml to install the Percona operator:
$ cd percona-server-mongodb-operator/deploy
$ kubectl apply -f bundle.yaml
customresourcedefinition.apiextensions.k8s.io/perconaservermongodbs.psmdb.percona.com created
customresourcedefinition.apiextensions.k8s.io/perconaservermongodbbackups.psmdb.percona.com created
customresourcedefinition.apiextensions.k8s.io/perconaservermongodbrestores.psmdb.percona.com created
role.rbac.authorization.k8s.io/percona-server-mongodb-operator created
serviceaccount/percona-server-mongodb-operator created
rolebinding.rbac.authorization.k8s.io/service-account-percona-server-mongodb-operator created
deployment.apps/percona-server-mongodb-operator created
Check if the operator is up and running:
$ kubectl get pods
NAME                                                READY   STATUS    RESTARTS   AGE
percona-server-mongodb-operator-547c499bd8-p8k74    1/1     Running   0          41s
Now that it is running, we need to apply cr.yaml to create the MongoDB instances and services. We will just use the minimal deployment in cr-minimal.yaml, which is provided in the deploy directory.
$ kubectl apply -f cr-minimal.yaml
perconaservermongodb.psmdb.percona.com/my-cluster-name created
Wait until all pods are created:
$ kubectl get pods
NAME                                                READY   STATUS    RESTARTS   AGE
percona-server-mongodb-operator-547c499bd8-p8k74    1/1     Running   0          5m16s
minimal-cluster-cfg-0                               1/1     Running   0          3m25s
minimal-cluster-rs0-0                               1/1     Running   0          3m24s
minimal-cluster-mongos-0                            1/1     Running   0          3m24s
Setting up roles on the Percona Operator
Now that the MongoDB pods are running, let’s add the groups for role-based mapping. This configuration needs to be added on the primary config server; it is used by mongos and the replica set for authorization when logging in.
First, let’s get the username and password of the admin user:
$ kubectl get secrets
NAME                                     TYPE     DATA   AGE
minimal-cluster                          Opaque   10     4m3s
internal-minimal-cluster-users           Opaque   10     4m3s
minimal-cluster-mongodb-keyfile          Opaque   1      4m3s
minimal-cluster-mongodb-encryption-key   Opaque   1      4m3s

$ kubectl get secrets minimal-cluster -o yaml
apiVersion: v1
data:
  MONGODB_BACKUP_PASSWORD: b2NNNkFjOHdEUU42OUpmYnE=
  MONGODB_BACKUP_USER: YmFja3Vw
  MONGODB_CLUSTER_ADMIN_PASSWORD: aElBWlVyajFkZWF0eEhWSzI=
  MONGODB_CLUSTER_ADMIN_USER: Y2x1c3RlckFkbWlu
  MONGODB_CLUSTER_MONITOR_PASSWORD: V1p6YkFhN1o3T2RkSm5Gbg==
  MONGODB_CLUSTER_MONITOR_USER: Y2x1c3Rlck1vbml0b3I=
  MONGODB_DATABASE_ADMIN_PASSWORD: U0hMR3Y3WlF2SVpxZ1dhcUFh
  MONGODB_DATABASE_ADMIN_USER: ZGF0YWJhc2VBZG1pbg==
  MONGODB_USER_ADMIN_PASSWORD: eW5TZjRzQjkybm5UdjdVdXduTQ==
  MONGODB_USER_ADMIN_USER: dXNlckFkbWlu
kind: Secret
metadata:
  creationTimestamp: "2022-09-15T15:57:42Z"
  name: minimal-cluster
  namespace: default
  resourceVersion: "5673"
  uid: d3f4f678-a3db-4578-b10c-69e8c4410b00
type: Opaque

$ echo `echo "dXNlckFkbWlu"|base64 --decode`
userAdmin

$ echo `echo "eW5TZjRzQjkybm5UdjdVdXduTQ=="|base64 --decode`
ynSf4sB92nnTv7UuwnM
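If you have jq available on your workstation, a convenient way to decode every key of the secret in one go is shown below (a small convenience sketch; the @base64d filter requires jq 1.6 or newer):

$ kubectl get secret minimal-cluster -o json | jq '.data | map_values(@base64d)'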
Next, let’s connect to the primary config server:
$ kubectl get services
NAME                     TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)     AGE
kubernetes               ClusterIP   10.152.183.1     <none>        443/TCP     22m
minimal-cluster-cfg      ClusterIP   None             <none>        27017/TCP   7m27s
minimal-cluster-rs0      ClusterIP   None             <none>        27017/TCP   7m27s
minimal-cluster-mongos   ClusterIP   10.152.183.220   <none>        27017/TCP   7m27s

$ kubectl run -i --rm --tty percona-client --image=percona/percona-server-mongodb:5.0.11-10 --restart=Never -- bash -il
[mongodb@percona-client /]$ mongo --host minimal-cluster-cfg -u userAdmin -p ynSf4sB92nnTv7UuwnM
Percona Server for MongoDB shell version v5.0.11-10
connecting to: mongodb://minimal-cluster-cfg:27017/?compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("5f1f7db8-d75f-4658-a579-86b9bbf22471") }
Percona Server for MongoDB server version: v5.0.11-10
cfg:PRIMARY>
From the console, we can create two roles “CN=dbadmins,CN=Users,DC=percona,DC=local” and “CN=developers,CN=Users,DC=percona,DC=local” with their corresponding privileges:
use admin
db.createRole(
  {
    role: "CN=dbadmins,CN=Users,DC=percona,DC=local",
    roles: [ "root" ],
    privileges: []
  }
)
db.createRole(
  {
    role: "CN=developers,CN=Users,DC=percona,DC=local",
    roles: [ "readWriteAnyDatabase" ],
    privileges: []
  }
)
Note that the role names defined here correspond to the Samba groups I created with samba-tool. Also, you will need to add the same roles on the replica set endpoint if you want your LDAP users to have these privileges when connecting to the replica set directly.
Finally, exit the mongo console by typing exit and pressing Enter. Do the same to exit the pod as well.
Applying the LDAP configuration to the replicaset, mongos, and config servers
Now, we can add the LDAP configuration to the config server. Our first test configuration is to supply the full DN when logging in so the configuration will be:
$ cat fulldn-config.yaml
security:
  authorization: "enabled"
  ldap:
    authz:
      queryTemplate: 'DC=percona,DC=local??sub?(&(objectClass=group)(member:={PROVIDED_USER}))'
    servers: "192.168.0.101"
    transportSecurity: none
    bind:
      queryUser: "CN=Search User01,CN=Users,DC=percona,DC=local"
      queryPassword: "SearchPassword1"
setParameter:
  authenticationMechanisms: 'PLAIN,SCRAM-SHA-1,SCRAM-SHA-256'
Next, apply the configuration to the config servers:
$ kubectl create secret generic minimal-cluster-cfg-mongod --from-file=mongod.conf=fulldn-config.yaml
Additionally, if you want to log in to the replica set with LDAP, you can apply the same configuration as well:
$ kubectl create secret generic minimal-cluster-rs0-mongod --from-file=mongod.conf=fulldn-config.yaml
As for mongos, omit the authorization settings because these come from the config servers:
$ cat fulldn-mongos-config.yaml
security:
  ldap:
    servers: "192.168.0.101"
    transportSecurity: none
    bind:
      queryUser: "CN=Search User01,CN=Users,DC=percona,DC=local"
      queryPassword: "SearchPassword1"
setParameter:
  authenticationMechanisms: 'PLAIN,SCRAM-SHA-1,SCRAM-SHA-256'
Then apply the configuration for mongos:
$ kubectl create secret generic minimal-cluster-mongos --from-file=mongos.conf=fulldn-mongos-config.yaml
The pods will be recreated one by one. Wait until all of them are recreated:
$ kubectl get pods
NAME                                                READY   STATUS    RESTARTS   AGE
percona-server-mongodb-operator-547c499bd8-p8k74    1/1     Running   0          24m
minimal-cluster-cfg-0                               1/1     Running   0          4m27s
minimal-cluster-rs0-0                               1/1     Running   0          3m34s
minimal-cluster-mongos-0                            1/1     Running   0          65s
Now you can test authentication in one of the endpoints:
$ kubectl run -i --rm --tty percona-client --image=percona/percona-server-mongodb:5.0.11-10 --restart=Never -- mongo --host minimal-cluster-mongos -u "CN=Dba User01,CN=Users,DC=percona,DC=local" -p DbaPassword1 --authenticationDatabase '$external' --authenticationMechanism 'PLAIN' --eval "db.runCommand({connectionStatus:1})"
+ exec mongo --host minimal-cluster-mongos -u 'CN=Dba User01,CN=Users,DC=percona,DC=local' -p DbaPassword1 --authenticationDatabase '$external' --authenticationMechanism PLAIN --eval 'db.runCommand({connectionStatus:1})'
Percona Server for MongoDB shell version v5.0.11-10
connecting to: mongodb://minimal-cluster-mongos:27017/?authMechanism=PLAIN&authSource=%24external&compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("7eca812d-ad04-4ae2-8484-3b55dee1a673") }
Percona Server for MongoDB server version: v5.0.11-10
{
        "authInfo" : {
                "authenticatedUsers" : [
                        {
                                "user" : "CN=Dba User01,CN=Users,DC=percona,DC=local",
                                "db" : "$external"
                        }
                ],
                "authenticatedUserRoles" : [
                        {
                                "role" : "CN=dbadmins,CN=Users,DC=percona,DC=local",
                                "db" : "admin"
                        },
                        {
                                "role" : "root",
                                "db" : "admin"
                        }
                ]
        }
}
pod "percona-client" deleted
As you can see above, the user “CN=Dba User01,CN=Users,DC=percona,DC=local” has assumed the root role. You can test the other endpoints using these commands:
$ kubectl run -i --rm --tty percona-client --image=percona/percona-server-mongodb:5.0.11-10 --restart=Never -- mongo --host minimal-cluster-rs0 -u "CN=Dba User01,CN=Users,DC=percona,DC=local" -p DbaPassword1 --authenticationDatabase '$external' --authenticationMechanism 'PLAIN' --eval "db.runCommand({connectionStatus:1})"
$ kubectl run -i --rm --tty percona-client --image=percona/percona-server-mongodb:5.0.11-10 --restart=Never -- mongo --host minimal-cluster-cfg -u "CN=Dba User01,CN=Users,DC=percona,DC=local" -p DbaPassword1 --authenticationDatabase '$external' --authenticationMechanism 'PLAIN' --eval "db.runCommand({connectionStatus:1})"
Using userToDNMapping to simplify usernames
Obviously, you may not want users to authenticate with the full DN. Perhaps you want them to specify just the first CN. You can use match and substitution mapping for this:
$ cat mapping1-config.yaml
security:
  authorization: "enabled"
  ldap:
    authz:
      queryTemplate: 'DC=percona,DC=local??sub?(&(objectClass=group)(member:={USER}))'
    servers: "192.168.0.101"
    transportSecurity: none
    bind:
      queryUser: "CN=Search User01,CN=Users,DC=percona,DC=local"
      queryPassword: "SearchPassword1"
    userToDNMapping: >-
      [
        {
          match: "(.+)",
          substitution: "CN={0},CN=users,DC=percona,DC=local"
        }
      ]
setParameter:
  authenticationMechanisms: 'PLAIN,SCRAM-SHA-1,SCRAM-SHA-256'

$ cat mapping1-mongos-config.yaml
security:
  ldap:
    servers: "192.168.0.101"
    transportSecurity: none
    bind:
      queryUser: "CN=Search User01,CN=Users,DC=percona,DC=local"
      queryPassword: "SearchPassword1"
    userToDNMapping: >-
      [
        {
          match: "(.+)",
          substitution: "CN={0},CN=users,DC=percona,DC=local"
        }
      ]
setParameter:
  authenticationMechanisms: 'PLAIN,SCRAM-SHA-1,SCRAM-SHA-256'
You will need to delete the old configuration and apply the new ones:
$ kubectl delete secret minimal-cluster-cfg-mongod
$ kubectl delete secret minimal-cluster-rs0-mongod
$ kubectl delete secret minimal-cluster-mongos
$ kubectl create secret generic minimal-cluster-cfg-mongod --from-file=mongod.conf=mapping1-config.yaml
$ kubectl create secret generic minimal-cluster-rs0-mongod --from-file=mongod.conf=mapping1-config.yaml
$ kubectl create secret generic minimal-cluster-mongos --from-file=mongos.conf=mapping1-mongos-config.yaml
With userToDNMapping’s match and substitution, you can now specify just the first CN. Once all of the pods are restarted, try logging in with the shorter username:
$ kubectl run -i --rm --tty percona-client --image=percona/percona-server-mongodb:5.0.11-10 --restart=Never -- mongo --host minimal-cluster-mongos -u "Dba User01" -p DbaPassword1 --authenticationDatabase '$external' --authenticationMechanism 'PLAIN' --eval "db.runCommand({connectionStatus:1})"
Perhaps it still seems awkward to have usernames with spaces, and you would like to log in based on other attributes such as sAMAccountName or mail. You can use an additional LDAP query in userToDNMapping to search for the record based on these attributes. Once the record is found, its DN is extracted and used for authentication. In the example below, we will use sAMAccountName as the username:
$ cat mapping2-config.yaml
security:
  authorization: "enabled"
  ldap:
    authz:
      queryTemplate: 'DC=percona,DC=local??sub?(&(objectClass=group)(member:={USER}))'
    servers: "192.168.0.101"
    transportSecurity: none
    bind:
      queryUser: "CN=Search User01,CN=Users,DC=percona,DC=local"
      queryPassword: "SearchPassword1"
    userToDNMapping: >-
      [
        {
          match: "(.+)",
          ldapQuery: "dc=percona,dc=local??sub?(&(sAMAccountName={0})(objectClass=person))"
        }
      ]
setParameter:
  authenticationMechanisms: 'PLAIN,SCRAM-SHA-1,SCRAM-SHA-256'

$ cat mapping2-mongos-config.yaml
security:
  ldap:
    servers: "192.168.0.101"
    transportSecurity: none
    bind:
      queryUser: "CN=Search User01,CN=Users,DC=percona,DC=local"
      queryPassword: "SearchPassword1"
    userToDNMapping: >-
      [
        {
          match: "(.+)",
          ldapQuery: "dc=percona,dc=local??sub?(&(sAMAccountName={0})(objectClass=person))"
        }
      ]
setParameter:
  authenticationMechanisms: 'PLAIN,SCRAM-SHA-1,SCRAM-SHA-256'
Again, we will need to delete the old configuration and apply new ones:
$ kubectl delete secret minimal-cluster-cfg-mongod
$ kubectl delete secret minimal-cluster-rs0-mongod
$ kubectl delete secret minimal-cluster-mongos
$ kubectl create secret generic minimal-cluster-cfg-mongod --from-file=mongod.conf=mapping2-config.yaml
$ kubectl create secret generic minimal-cluster-rs0-mongod --from-file=mongod.conf=mapping2-config.yaml
$ kubectl create secret generic minimal-cluster-mongos --from-file=mongos.conf=mapping2-mongos-config.yaml
Once the pods are recreated, we can now authenticate with regular usernames.
$ kubectl run -i --rm --tty percona-client --image=percona/percona-server-mongodb:5.0.11-10 --restart=Never -- mongo --host minimal-cluster-mongos -u devuser01 -p DevPassword1 --authenticationDatabase '$external' --authenticationMechanism 'PLAIN' --eval "db.runCommand({connectionStatus:1})"
$ kubectl run -i --rm --tty percona-client --image=percona/percona-server-mongodb:5.0.11-10 --restart=Never -- mongo --host minimal-cluster-mongos -u dbauser01 -p DbaPassword1 --authenticationDatabase '$external' --authenticationMechanism 'PLAIN' --eval "db.runCommand({connectionStatus:1})"
Summary
I hope this article gets you up to speed on setting up LDAP authentication and authorization with Percona Operator for MongoDB.
How to Run MongoDB on Kubernetes: Solutions, Pros and Cons
This blog was originally published in August 2022 and was updated in January 2024. In this blog, we’ll examine the increasingly popular practice of deploying MongoDB on Kubernetes and explore various approaches to this setup. From direct deployments as a stateful application to utilizing specialized operators and considering cloud-based solutions, we’ll guide you through the key […]
Run PostgreSQL on Kubernetes with Percona Operator & Pulumi
Avoid vendor lock-in, provide a private Database-as-a-Service for internal teams, quickly deploy-test-destroy databases with CI/CD pipeline – these are some of the most common use cases for running databases on Kubernetes with operators. Percona Distribution for PostgreSQL Operator enables users to do exactly that and more.
Pulumi is an infrastructure-as-code tool, which enables developers to write code in their favorite language (Python, Golang, JavaScript, etc.) to deploy infrastructure and applications easily to public clouds and platforms such as Kubernetes.
This blog post is a step-by-step guide on how to deploy a highly-available PostgreSQL cluster on Kubernetes with our Percona Operator and Pulumi.
Desired State
We are going to provision the following resources with Pulumi:
- Google Kubernetes Engine cluster with three nodes. It can be any Kubernetes flavor.
- Percona Operator for PostgreSQL
- Highly available PostgreSQL cluster with one primary and two hot standby nodes
- Highly available pgBouncer deployment with the Load Balancer in front of it
- pgBackRest for local backups
Pulumi code can be found in this git repository.
Prepare
I will use an Ubuntu box to run Pulumi, but almost the same steps would work on macOS.
Pre-install Packages
gcloud and kubectl
echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg] https://packages.cloud.google.com/apt cloud-sdk main" | sudo tee -a /etc/apt/sources.list.d/google-cloud-sdk.list curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key --keyring /usr/share/keyrings/cloud.google.gpg add - sudo apt-get update sudo apt-get install -y google-cloud-sdk docker.io kubectl jq unzip
python3
Pulumi allows developers to use the language of their choice to describe infrastructure and applications. I’m going to use Python. We will also need pip (the Python package-management system) and venv (the virtual environment module).
sudo apt-get install python3 python3-pip python3-venv
Pulumi
Install Pulumi:
curl -sSL https://get.pulumi.com | sh
On macOS, this can be installed via Homebrew with:
brew install pulumi
You will need to add .pulumi/bin to the $PATH:
export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/home/percona/.pulumi/bin
Authentication
gcloud
You will need to provide access to Google Cloud to provision Google Kubernetes Engine.
gcloud config set project your-project
gcloud auth application-default login
gcloud auth login
Pulumi
Generate a Pulumi token at app.pulumi.com. You will need it later to initialize the Pulumi stack.
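If you prefer to authenticate non-interactively, you can export the token before running any stack commands (a small sketch; the placeholder below is your own token):

export PULUMI_ACCESS_TOKEN=<your-pulumi-token>
pulumi login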
Action
This repo has the following files:
- Pulumi.yaml – identifies that it is a folder with a Pulumi project
- __main__.py – Python code used by Pulumi to provision everything we need
- requirements.txt – to install the required Python packages

Clone the repo and go to the pg-k8s-pulumi folder:
git clone https://github.com/spron-in/blog-data
cd blog-data/pg-k8s-pulumi
Init the stack with:
pulumi stack init pg
You will need the token generated earlier on app.pulumi.com here.
__main__.py
The Python code that Pulumi is going to process is in the __main__.py file.
Lines 1-6: importing python packages
Lines 8-31: configuration parameters for this Pulumi stack. It consists of two parts:
- Kubernetes cluster configuration. For example, the number of nodes.
- Operator and PostgreSQL cluster configuration – namespace to be deployed to, service type to expose pgBouncer, etc.
Lines 33-80: deploy GKE cluster and export its configuration
Lines 82-88: create the namespace for Operator and PostgreSQL cluster
Lines 91-426: deploy the Operator. In reality, it just mirrors the operator.yaml from our Operator.
Lines 429-444: create the secret object that allows you to set the password for pguser to connect to the database
Lines 445-557: deploy PostgreSQL cluster. It is a JSON version of cr.yaml from our Operator repository
Line 560: exports Kubernetes configuration so that it can be reused later
Deploy
At first, we will set the configuration for this stack. Execute the following commands:
pulumi config set gcp:project YOUR_PROJECT
pulumi config set gcp:zone us-central1-a
pulumi config set node_count 3
pulumi config set master_version 1.21
pulumi config set namespace percona-pg
pulumi config set pg_cluster_name pulumi-pg
pulumi config set service_type LoadBalancer
pulumi config set pg_user_password mySuperPass
These commands set the following:
- GCP project where GKE is going to be deployed
- GCP zone
- Number of nodes in a GKE cluster
- Kubernetes version
- Namespace to run PostgreSQL cluster
- The name of the cluster
- Expose pgBouncer with LoadBalancer object
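Before deploying, you can optionally review the values you have just set:

pulumi config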
Deploy with the following command:
$ pulumi up
Previewing update (pg)

View Live: https://app.pulumi.com/spron-in/percona-pg-k8s/pg/previews/d335d117-b2ce-463b-867d-ad34cf456cb3

     Type                                                         Name                                 Plan     Info
 +   pulumi:pulumi:Stack                                          percona-pg-k8s-pg                    create   1 message
 +   random:index:RandomPassword                                  pguser_password                      create
 +   random:index:RandomPassword                                  password                             create
 +   gcp:container:Cluster                                        gke-cluster                          create
 +   pulumi:providers:kubernetes                                  gke_k8s                              create
 +   kubernetes:core/v1:ServiceAccount                            pgoPgo_deployer_saServiceAccount     create
 +   kubernetes:core/v1:Namespace                                 pgNamespace                          create
 +   kubernetes:batch/v1:Job                                      pgoPgo_deployJob                     create
 +   kubernetes:core/v1:ConfigMap                                 pgoPgo_deployer_cmConfigMap          create
 +   kubernetes:core/v1:Secret                                    percona_pguser_secretSecret          create
 +   kubernetes:rbac.authorization.k8s.io/v1:ClusterRoleBinding   pgo_deployer_crbClusterRoleBinding   create
 +   kubernetes:rbac.authorization.k8s.io/v1:ClusterRole          pgo_deployer_crClusterRole           create
 +   kubernetes:pg.percona.com/v1:PerconaPGCluster                my_cluster_name                      create

Diagnostics:
  pulumi:pulumi:Stack (percona-pg-k8s-pg):
    E0225 14:19:49.739366105 53802 fork_posix.cc:70] Fork support is only compatible with the epoll1 and poll polling strategies

Do you want to perform this update? yes
Updating (pg)

View Live: https://app.pulumi.com/spron-in/percona-pg-k8s/pg/updates/5

     Type                                                         Name                                 Status    Info
 +   pulumi:pulumi:Stack                                          percona-pg-k8s-pg                    created   1 message
 +   random:index:RandomPassword                                  pguser_password                      created
 +   random:index:RandomPassword                                  password                             created
 +   gcp:container:Cluster                                        gke-cluster                          created
 +   pulumi:providers:kubernetes                                  gke_k8s                              created
 +   kubernetes:core/v1:ServiceAccount                            pgoPgo_deployer_saServiceAccount     created
 +   kubernetes:core/v1:Namespace                                 pgNamespace                          created
 +   kubernetes:core/v1:ConfigMap                                 pgoPgo_deployer_cmConfigMap          created
 +   kubernetes:batch/v1:Job                                      pgoPgo_deployJob                     created
 +   kubernetes:core/v1:Secret                                    percona_pguser_secretSecret          created
 +   kubernetes:rbac.authorization.k8s.io/v1:ClusterRole          pgo_deployer_crClusterRole           created
 +   kubernetes:rbac.authorization.k8s.io/v1:ClusterRoleBinding   pgo_deployer_crbClusterRoleBinding   created
 +   kubernetes:pg.percona.com/v1:PerconaPGCluster                my_cluster_name                      created

Diagnostics:
  pulumi:pulumi:Stack (percona-pg-k8s-pg):
    E0225 14:20:00.211695433 53839 fork_posix.cc:70] Fork support is only compatible with the epoll1 and poll polling strategies

Outputs:
    kubeconfig: "[secret]"

Resources:
    + 13 created

Duration: 5m30s
Verify
Get kubeconfig first:
pulumi stack output kubeconfig --show-secrets > ~/.kube/config
Check if Pods of your PG cluster are up and running:
$ kubectl -n percona-pg get pods
NAME                                             READY   STATUS      RESTARTS   AGE
backrest-backup-pulumi-pg-dbgsp                  0/1     Completed   0          64s
pgo-deploy-8h86n                                 0/1     Completed   0          4m9s
postgres-operator-5966f884d4-zknbx               4/4     Running     1          3m27s
pulumi-pg-787fdbd8d9-d4nvv                       1/1     Running     0          2m12s
pulumi-pg-backrest-shared-repo-f58bc7657-2swvn   1/1     Running     0          2m38s
pulumi-pg-pgbouncer-6b6dc4564b-bh56z             1/1     Running     0          81s
pulumi-pg-pgbouncer-6b6dc4564b-vpppx             1/1     Running     0          81s
pulumi-pg-pgbouncer-6b6dc4564b-zkdwj             1/1     Running     0          81s
pulumi-pg-repl1-58d578cf49-czm54                 0/1     Running     0          46s
pulumi-pg-repl2-7888fbfd47-h98f4                 0/1     Running     0          46s
pulumi-pg-repl3-cdd958bd9-tf87k                  1/1     Running     0          46s
Get the IP-address of pgBouncer LoadBalancer:
$ kubectl -n percona-pg get services
NAME                  TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)          AGE
…
pulumi-pg-pgbouncer   LoadBalancer   10.20.33.122   35.188.81.20   5432:32042/TCP   3m17s
You can connect to your PostgreSQL cluster through this IP address. Use the pguser password that was set earlier with pulumi config set pg_user_password:
psql -h 35.188.81.20 -p 5432 -U pguser pgdb
Clean up
To delete everything it is enough to run the following commands:
pulumi destroy
pulumi stack rm
Tricks and Quirks
Pulumi Converter
kube2pulumi is a huge help if you already have YAML manifests. You don’t need to rewrite all the code; just convert the YAMLs to Pulumi code. This is what I did for operator.yaml.
apiextensions.CustomResource
There are two ways for Custom Resource management in Pulumi:
- apiextensions.CustomResource
- crd2pulumi
crd2pulumi generates libraries/classes out of Custom Resource Definitions and allows you to create custom resources later using these. I found it a bit complicated and it also lacks documentation.
apiextensions.CustomResource on the other hand allows you to create Custom Resources by specifying them as JSON. It is much easier and requires less manipulation. See lines 446-557 in my __main__.py.
True/False in JSON
I have the following in my Custom Resource definition in Pulumi code:
perconapg = kubernetes.apiextensions.CustomResource(
    ...
    spec={
        ...
        "disableAutofail": False,
        "tlsOnly": False,
        "standby": False,
        "pause": False,
        "keepData": True,
Be sure to use the booleans of the language of your choice and not the “true”/“false” strings. In my case, using the strings resulted in a failure because the Operator was expecting booleans, not strings.
Depends On…
Pulumi makes its own decisions on the ordering of provisioning resources. You can enforce the order by specifying dependencies.
For example, I’m ensuring that Operator and Secret are created before the Custom Resource:
    },
    opts=ResourceOptions(
        provider=k8s_provider,
        depends_on=[pgo_pgo_deploy_job, percona_pg_cluster1_pguser_secret_secret]
    )
High Availability and Disaster Recovery Recipes for PostgreSQL on Kubernetes
Percona Distribution for PostgreSQL Operator allows you to deploy and manage highly available and production-grade PostgreSQL clusters on Kubernetes with minimal manual effort. In this blog post, we are going to look deeper into High Availability, Disaster Recovery, and Scaling of PostgreSQL clusters.
High Availability
Our default custom resource manifest deploys a highly available (HA) PostgreSQL cluster. Key components of HA setup are:
- Kubernetes Services that point to pgBouncer and replica nodes
- pgBouncer – a lightweight connection pooler for PostgreSQL
- Patroni – HA orchestrator for PostgreSQL
- PostgreSQL nodes – we have one primary and 2 replica nodes in hot standby by default
Kubernetes Service is the way to expose your PostgreSQL cluster to applications or users. We have two services:
- clusterName-pgbouncer – Exposes your PostgreSQL cluster through the pgBouncer connection pooler. Both reads and writes are sent to the Primary node.
- clusterName-replica – Exposes replica nodes directly. It should be used for reads only. Also, keep in mind that connections to this service are not pooled. We are working on a better solution, where the user would be able to leverage both connection pooling and read-scaling through a single service.

By default we use the ClusterIP service type, but you can change it in pgBouncer.expose.serviceType or pgReplicas.hotStandby.expose.serviceType, respectively.
Every PostgreSQL container has Patroni running. Patroni monitors the state of the cluster and, in case of Primary node failure, switches the Primary role to one of the Replica nodes. pgBouncer always knows where the Primary is.
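If you want to see the cluster topology that Patroni maintains, you can run patronictl inside one of the PostgreSQL containers. This is a hedged sketch: the pod name is a placeholder, and both the pgo namespace and the availability of the patronictl utility inside the database container are assumptions here, so adjust them to your deployment:

$ kubectl -n pgo exec -it <postgresql-pod> -- patronictl list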
As you can see, we distribute PostgreSQL cluster components across different Kubernetes nodes. This is done with Affinity rules, which are applied by default to ensure that a single node failure does not cause database downtime.
Multi-Datacenter with Multi-AZ
Good architecture design is to run your Kubernetes cluster across multiple datacenters. Public clouds have a concept of availability zones (AZ) which are data centers within one region with a low-latency network connection between them. Usually, these data centers are at least 100 kilometers away from each other to minimize the probability of regional outage. You can leverage multi-AZ Kubernetes deployment to run cluster components in different data centers for better availability.
To ensure that PostgreSQL components are distributed across availability zones, you need to tweak affinity rules. Now it is only possible through editing Deployment resources directly:
$ kubectl edit deploy cluster1-repl2
…
-       topologyKey: kubernetes.io/hostname
+       topologyKey: topology.kubernetes.io/zone
Scaling
Scaling PostgreSQL to meet the demand at peak hours is crucial for high availability. Our Operator provides you with tools to scale PostgreSQL components both horizontally and vertically.
Vertical Scaling
Scaling vertically is all about adding more power to a PostgreSQL node. The recommended way is to change resources in the Custom Resource (instead of changing them in Deployment objects directly). For example, change the following in the cr.yaml to get 256 MBytes of RAM for all PostgreSQL Replica nodes:
pgReplicas:
  hotStandby:
    resources:
      requests:
-       memory: "128Mi"
+       memory: "256Mi"
Apply cr.yaml:
$ kubectl apply -f cr.yaml
Use the same approach to tune other components in their corresponding sections.
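To confirm that the new requests were propagated, you can inspect the replica Deployment with jsonpath; the Deployment name cluster1-repl1 here is just the example used elsewhere in this post:

$ kubectl get deploy cluster1-repl1 -o jsonpath='{.spec.template.spec.containers[0].resources}'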
You can also leverage Vertical Pod Autoscaler (VPA) to react to load spikes automatically. We create a Deployment resource for the Primary and each Replica node. VPA objects should target these Deployments. The following example will track one of the replica Deployment resources of cluster1 and scale it automatically:
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: pxc-vpa
spec:
  targetRef:
    apiVersion: "apps/v1"
    kind: Deployment
    name: cluster1-repl1
    namespace: pgo
  updatePolicy:
    updateMode: "Auto"
Please read more about VPA and its capabilities in its documentation.
Horizontal Scaling
Adding more replica nodes or pgBouncers can be done by changing the size parameters in the Custom Resource. Do the following change in the default cr.yaml:
pgReplicas:
  hotStandby:
-   size: 2
+   size: 3
Apply the change to get one more PostgreSQL Replica node:
$ kubectl apply -f cr.yaml
Starting from release 1.1.0, it is also possible to scale the cluster using the kubectl scale command. Execute the following to have two PostgreSQL replica nodes in cluster1:
$ kubectl scale --replicas=2 perconapgcluster/cluster1
perconapgcluster.pg.percona.com/cluster1 scaled
In the latest release, it is not yet possible to use the Horizontal Pod Autoscaler (HPA); we will support it in the next one. Stay tuned.
Disaster Recovery
It is important to understand that Disaster Recovery (DR) is not High Availability. DR’s goal is to ensure business continuity in the case of a massive disaster, such as a full region outage. Recovery in such cases can be of course automated, but not necessarily – it strictly depends on the business requirements.
Backup and Restore
I think this is the most common Disaster Recovery protocol – take the backup, store it in some third-party premises, and restore it to another data center if needed.
This approach is simple but comes with a long recovery time, especially if the database is big. Use this method only if it meets your Recovery Time Objective (RTO).
Our Operator handles backup and restore for PostgreSQL clusters. The disaster recovery is built around pgBackrest and looks like the following:
- Configure pgBackrest to upload backups to S3 or GCS (see our documentation for details).
- Create the backup manually (through pgTask) or ensure that a scheduled backup was created.
- Once the Main cluster fails, create the new cluster in the Disaster Recovery data center. The cluster must be running in standby mode and pgBackrest must be pointing to the same repository as the main cluster:
spec:
  standby: true
  backup:
    # same config as on original cluster
Once the data is recovered, the user can turn off standby mode and switch the application to the DR cluster.
Continuous Restoration
This approach is quite similar to the above: pgBackrest instances continuously synchronize data between two clusters through object storage. This approach minimizes RTO and allows you to switch the application traffic to the DR site almost immediately.
Configuration here is similar to the previous case, but we always run a second PostgreSQL cluster in the Disaster Recovery data center. In case of main site failure just turn off the standby mode:
spec:
  standby: false
You can use a similar setup to migrate the data to and from Kubernetes. Read more about it in the Migrating PostgreSQL to Kubernetes blog post.
Conclusion
Kubernetes Operators provide ready-to-use service, and in the case of Percona Distribution for PostgreSQL Operator, the user gets a production-grade, highly available database cluster. In addition, the Operator provides day-2 operation capabilities and automates day-to-day routine.
We encourage you to try out our operator. See our GitHub repository and check out the documentation.
Found a bug or have a feature idea? Feel free to submit it in JIRA.
For general questions please raise the topic in the community forum.
Are you a developer looking to contribute? Please read our CONTRIBUTING.md and send the Pull Request.
Percona Distribution for PostgreSQL Operator 1.1.0 – Notable Features
Percona in 2021 is heavily invested in making the PostgreSQL ecosystem better and contributing to it from different angles:
- We have created pg_stat_monitor – query performance monitoring tool for PostgreSQL
- Percona Distribution for PostgreSQL Operator was released in October
- Greatly improved PostgreSQL monitoring with Percona Monitoring and Management
- At PGConf NYC 2021 we were a Platinum sponsor and had 5 awesome speakers sharing their wisdom with the community
With this in mind let me introduce to you Percona Distribution for PostgreSQL Operator version 1.1.0 and its notable features:
- Smart Update – forget about manual and error-prone database upgrades
- System Users management – add and modify system users with ease with a single Kubernetes Secret resource
- PostgreSQL 14 support – leverage the latest and greatest by running Percona Distribution for PostgreSQL on Kubernetes
Full release notes can be found here.
Smart Update Feature
Updating databases and their components is always a challenge. In our Operators for MySQL and MongoDB we have simplified and automated upgrade procedures, and now it’s time for PostgreSQL. In the 1.1.0 version, we ship this feature as Technical Preview with a plan to promote it to GA in the next release.
This feature consists of two parts:
- Version Service – get the latest or recommended version of the database or other component (PMM for example)
- Smart Update – apply new version without downtime
Version Service
This feature answers the question: which PostgreSQL/pgBackRest/pgBouncer version should I be running with this Operator? It is important to note that Version Service and Smart Update can only perform minor version upgrades (e.g., from 13.1 to 13.4). Major version upgrades are manual for now and will be automated in the Operator soon.
The way it works is well depicted in the following diagram:
Version Service is an open source tool; see the source code on GitHub. Percona hosts check.percona.com, and Operators use it by default, but users can run their own self-hosted Version Service.
Users who worked with our Operators for MySQL and MongoDB will find the configuration of Version Service and Smart Update quite familiar:
upgradeOptions:
  versionServiceEndpoint: https://check.percona.com
  apply: recommended
  schedule: "0 2 * * *"
- Define Version Service endpoint
- Define PostgreSQL version – Operator will automatically figure out components versions
- Schedule defines the time when the rollout of newer versions is going to take place. It is good practice to set this time outside of peak hours.
Smart Update
Okay, now Operator knows the versions that should be used. It is time to apply them and do it with minimal downtime. Here is where the Smart Update feature kicks in.
The heart of Smart Update is smartUpdateCluster function. The goal here is to switch container images versions for database components in a specific order and minimize downtime. Once the image is changed, Kubernetes does the magic. For Deployment resources, which we use in our Operator, Kubernetes first spins up the Pod with a new image and then terminates the old one. This provides minimal downtime. The update itself looks like this:
- Upgrade pgBackRest image in Deployment object in Kubernetes
- Start upgrading PostgreSQL itself
- Percona Monitoring and Management which runs as a sidecar gets the new version here as well
- Same for pgBadger
- We must upgrade replica nodes first here. If we upgrade the primary node first, the cluster will not recover. The tricky part is that, in the event of a failover, the Primary node can be somewhere in the pgReplicas Deployment. So we need to verify where the Primary is first and only after that change the image. See the Smart Update sequence diagram for more details.
- Last, but not least – change the image for pgBouncer. To minimize the downtime here, we recommend running at least two pgBouncer nodes. By default pgBouncer.size is set to 3.
As a result, the user gets the latest, most secure, and performant PostgreSQL and its components automatically with minimal downtime.
System Users Management
Our Operator has multiple system users to manage the cluster and ensure its health. Our users raised two main concerns:
- it is not possible to change system user password with the Operator after cluster deployment
- it is confusing that there is a Secret object per user
In this release, we are moving all system users to a single Secret. The change in the Secret resource is going to trigger the update of the passwords in PostgreSQL automatically.
If the cluster is created from scratch, the Secret with system users is going to be created automatically and the passwords will be randomly generated. By default, the Secret name is <clusterName>-users; it can be changed under the spec.secretUsers variable in the Custom Resource.
spec:
  secretsName: my-custom-secret
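Rotating a single system user password can then be done by patching the corresponding key in that Secret. This is a hedged sketch: it assumes a cluster named cluster1 (so a Secret named cluster1-users) and a key named pguser, so check the keys actually present in your Secret first:

$ kubectl patch secret cluster1-users -p '{"data":{"pguser":"'$(echo -n "newSuperPass" | base64)'"}}'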
When upgrading from 1.0.0 to 1.1.0, if you want to keep old passwords, please create the Secret resource manually. Otherwise, the passwords for system users are going to be generated randomly and updated by the Operator.
PostgreSQL 14 Support
PostgreSQL 14 provides an extensive set of new features and enhancements to security, performance, usability for client applications, and more.
Most notable of them include the following:
- Expired B-tree index entries can now be detected and removed between vacuum runs. This results in fewer page splits and reduces index bloat.
- The vacuum process now deletes dead tuples in a single cycle, as opposed to the previous 2-step approach of first marking tuples as deleted and then actually freeing up space in the next run. This speeds up free space cleanup.
- Support for subscripts in JSON is added to simplify data retrieval using a commonly recognized syntax (see the quick example right after this list).
- Stored procedures can accept OUT parameters.
- The libpq library now supports the pipeline mode. Previously, the client applications waited for a transaction to be completed before sending the next one. The pipeline mode allows the applications to send multiple transactions at the same time thus boosting performance.
- Large transactions are now streamed to subscribers in-progress, thus increasing the performance. This improvement applies to logical replication.
- LZ4 compression is added for TOAST operations. This speeds up large data processing and also improves the compression ratio.
- SCRAM is made the default authentication mechanism. This mechanism improves security and simplifies regulatory compliance for data security.
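The JSON subscripting mentioned above is easy to try once you are connected to a PostgreSQL 14 instance; here is a quick illustration (the JSON document is, of course, just an example):

$ psql -c "SELECT ('{\"a\": {\"b\": 42}}'::jsonb)['a']['b'] AS b;"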
In the 1.1.0 version of Percona Distribution for PostgreSQL Operator, we enable our users to run the latest and greatest PostgreSQL 14. PostgreSQL 14 is the default version since this release, but you can still use versions 12 and 13.
Conclusion
Kubernetes Operators are mainly seen as the tool to automate deployment and management of the applications. With this Percona Distribution for PostgreSQL Operator release, we simplify PostgreSQL management even more and enable users to leverage the latest version 14.
We encourage you to try out our operator. See our GitHub repository and check out the documentation.
Found a bug or have a feature idea? Feel free to submit it in JIRA.
For general questions please raise the topic in the community forum.
Are you a developer looking to contribute? Please read our CONTRIBUTING.md and send the Pull Request.
Multi-Tenant Kubernetes Cluster with Percona Operators
There are cases where multiple teams, customers, or applications run in the same Kubernetes cluster. Such an environment is called multi-tenant and requires some preparation and management. Multi-tenant Kubernetes deployment allows you to utilize the economy of scale model on various levels:
- Smaller compute footprint – one control plane, dense container deployments
- Ease of management – one cluster, not hundreds
In this blog post, we are going to review multi-tenancy best practices, recommendations and see how Percona Kubernetes Operators can be deployed and managed in such Kubernetes clusters.
Multi-Tenancy
Generic
Multi-tenancy usually means a lot of Pods and workloads in a single cluster. You should always remember that there are certain limits when designing your infrastructure. For vanilla Kubernetes, these limits are quite high and hard to reach:
- 5000 nodes
- 10 000 namespaces
- 150 000 pods
Managed Kubernetes services have their own limits that you should keep in mind. For example, GKE allows a maximum of 110 Pods per node on a standard cluster and only 32 on GKE Autopilot nodes.
The older AWS EKS CNI plugin was limiting the number of Pods per node to the number of IP addresses EC2 can have. With the prefix assignment enabled in CNI, you are still going to hit a limit of 110 pods per node.
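You can check how many Pods each node in your own cluster can actually run by looking at its allocatable pod capacity:

$ kubectl get nodes -o custom-columns=NAME:.metadata.name,PODS:.status.allocatable.pods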
Namespaces
Kubernetes Namespaces provide a mechanism for isolating groups of resources within a single cluster. The scope of k8s objects can either be cluster scoped or namespace scoped. Objects which are accessible across all the namespaces, like ClusterRole, are cluster scoped, and those which are accessible only in a single namespace, like Deployments, are namespace scoped.
Deploying a database with Percona Operators creates pods that are namespace scoped. This provides interesting opportunities to run workloads on different namespaces for different teams, projects, and potentially, customers too.
Example: Percona Distribution for MongoDB Operator and Percona Server for MongoDB can be run on two different namespaces by adding namespace metadata fields. Snippets are as follows:
# Team 1 DB running in team1-db namespace
apiVersion: psmdb.percona.com/v1-11-0
kind: PerconaServerMongoDB
metadata:
  name: team1-server
  namespace: team1-db

# Team 1 deployment running in team1-db namespace
apiVersion: apps/v1
kind: Deployment
metadata:
  name: percona-server-mongodb-operator-team1
  namespace: team1-db

# Team 2 DB running in team2-db namespace
apiVersion: psmdb.percona.com/v1-11-0
kind: PerconaServerMongoDB
metadata:
  name: team2-server
  namespace: team2-db

# Team 2 deployment running in team2-db namespace
apiVersion: apps/v1
kind: Deployment
metadata:
  name: percona-server-mongodb-operator-team2
  namespace: team2-db
Suggestions:
- Avoid using the standard namespaces like kube-system or default.
- It’s always better to run independent workloads on different namespaces unless there is a specific requirement to do it in a shared namespace.
Namespaces can be used per team, per application environment, or any other logical structure that fits the use case.
Resources
The biggest problem in any multi-tenant environment is this – how can we ensure that a single bad apple doesn’t spoil the whole bunch of apples?
ResourceQuotas
Thanks to Resource Quotas, we can restrict the resource utilization of namespaces. ResourceQuotas also allow you to restrict the number of k8s objects which can be created in a namespace.
Example of the YAML manifest with resource quotas:
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team1-quota
  namespace: team1-db                   # Namespace where operator is deployed
spec:
  hard:
    requests.cpu: "10"                  # Cumulative CPU requests of all k8s objects in the namespace cannot exceed 10 vCPU
    limits.cpu: "20"                    # Cumulative CPU limits of all k8s objects in the namespace cannot exceed 20 vCPU
    requests.memory: 10Gi               # Cumulative memory requests of all k8s objects in the namespace cannot exceed 10Gi
    limits.memory: 20Gi                 # Cumulative memory limits of all k8s objects in the namespace cannot exceed 20Gi
    requests.ephemeral-storage: 100Gi   # Cumulative ephemeral storage requests of all k8s objects in the namespace cannot exceed 100Gi
    limits.ephemeral-storage: 200Gi     # Cumulative ephemeral storage limits of all k8s objects in the namespace cannot exceed 200Gi
    requests.storage: 300Gi             # Cumulative storage requests of all PVCs in the namespace cannot exceed 300Gi
    persistentvolumeclaims: 5           # Maximum number of PVCs in the namespace is 5
    count/statefulsets.apps: 2          # Maximum number of statefulsets in the namespace is 2
    # count/psmdb: 2                    # Maximum number of PSMDB objects in the namespace is 2, replace the name with the proper Custom Resource
Please refer to the Resource Quotas documentation and apply quotas that are required for your use case.
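Once a quota is applied, you can track consumption against it at any time:

$ kubectl -n team1-db describe resourcequota team1-quota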
If resource quotas are applied to a namespace, it is required to set containers’ requests and limits, otherwise, you are going to have an error similar to the following:
Error creating: pods "my-cluster-name-rs0-0" is forbidden: failed quota: my-cpu-memory-quota: must specify limits.cpu,requests.cpu
All Percona Operators provide the capability to fine-tune the requests and limits. The following example sets CPU and memory requests for Percona XtraDB Cluster containers:
spec:
  pxc:
    resources:
      requests:
        memory: 4G
        cpu: 2
LimitRange
With ResourceQuotas we can control the cumulative resources in the namespaces, but if we want to enforce constraints on individual Kubernetes objects, LimitRange is a useful option.

For example, if Teams 1, 2, and 3 are each provided a namespace to run workloads, ResourceQuota will ensure that none of the teams can exceed the quotas allocated and over-utilize the cluster… but what if a badly configured workload (say an operator run by team 1 with a higher priority class) is utilizing all the resources allocated to the team?

LimitRange can be used to enforce resources like compute, memory, ephemeral storage, and storage with PVCs. The example below highlights some of the possibilities.
apiVersion: v1
kind: LimitRange
metadata:
  name: lr-team1
  namespace: team1-db
spec:
  limits:
  - type: Pod
    max:                        # Maximum resource limit of all containers combined. Consider setting default limits
      ephemeral-storage: 100Gi  # Maximum ephemeral storage cannot exceed 100GB
      cpu: "800m"               # Maximum CPU limit of the Pod is 800m vCPU
      memory: 4Gi               # Maximum memory limit of the Pod is 4 GB
    min:                        # Minimum resource request of all containers combined. Consider setting default requests
      ephemeral-storage: 50Gi   # Minimum ephemeral storage should be 50GB
      cpu: "200m"               # Minimum CPU request is 200m vCPU
      memory: 2Gi               # Minimum memory request is 2 GB
  - type: PersistentVolumeClaim
    max:
      storage: 2Gi              # Maximum PVC storage limit
    min:
      storage: 1Gi              # Minimum PVC storage request
Suggestions:
- When it’s feasible, apply ResourceQuotas and LimitRanges to the namespaces where the Percona operator is running. This ensures that tenants are not overutilizing the cluster.
- Set alerts to monitor objects and usage of resources in namespaces. Automation of ResourceQuotas changes may also be useful in some scenarios.
- It is advisable to use a buffer on maximum expected utilization before setting the ResourceQuotas.
- Set LimitRanges to ensure workloads are not overutilizing resources in individual namespaces.
Roles and Security
Kubernetes provides several modes to authorize an API request. Role-Based Access Control (RBAC) is a popular way to handle authorization. There are four important objects for providing access:
ClusterRole        | Represents a set of permissions across the cluster (cluster scope)
Role               | Represents a set of permissions within a namespace (namespace scope)
ClusterRoleBinding | Granting permission to subjects across the cluster (cluster scope)
RoleBinding        | Granting permissions to subjects within a namespace (namespace scope)
Subjects in the RoleBinding/ClusterRoleBinding can be users, groups, or service accounts. Every pod running in the cluster will have an identity and a service account attached (the “default” service account in the same namespace will be attached if not explicitly specified). Permissions granted to the service account with RoleBinding/ClusterRoleBinding dictate the access that pods will have.
Going by the policy of least privilege, it’s always advisable to use Roles with the smallest set of permissions and bind them to a service account with a RoleBinding. This service account can be used to run the operator or custom resource to ensure proper access and also restrict the blast radius.
Avoid granting cluster-level access unless there is a strong use case to do it.
Example: RBAC in the MongoDB Operator uses a Role and RoleBinding, restricting access to a single namespace for the service account. The same service account is used for both the CustomResource and the Operator.
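You can verify what a given service account is allowed to do with kubectl auth can-i. The service account and namespace names below are examples, so adjust them to your deployment; the first check should be allowed and the second denied if the Role is namespace scoped:

$ kubectl auth can-i create perconaservermongodbs --as=system:serviceaccount:team1-db:percona-server-mongodb-operator -n team1-db
$ kubectl auth can-i create perconaservermongodbs --as=system:serviceaccount:team1-db:percona-server-mongodb-operator -n team2-db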
Network Policies
Network isolation provides additional security to applications and customers in a multi-tenant environment. Network policies are Kubernetes resources that allow you to control the traffic between Pods, CIDR blocks, and network endpoints, but the most common approach is to control the traffic between namespaces:
Most Container Network Interface (CNI) plugins support the implementation of network policies; however, if they don’t and a NetworkPolicy is created, the resource is silently ignored. For example, AWS CNI does not support network policies, but AWS EKS can run Calico CNI, which does.
It is good practice to follow the least-privilege approach, whereby traffic is denied by default and access is granted granularly:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: app1-db
spec:
  podSelector: {}
  policyTypes:
  - Ingress
Allow traffic from Pods in namespace app1 to namespace app1-db:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-from-app1
  namespace: app1-db
spec:
  podSelector: {}
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: app1
  policyTypes:
  - Ingress
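Note that the namespaceSelector above matches on a namespace label, so the source namespace must actually carry it. Assuming the namespace is named app1 and is not labeled yet, a quick way to add the label:

kubectl label namespace app1 name=app1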
Policy Enforcement
In a multi-tenant environment, policy enforcement plays a key role. Policy enforcement ensures that k8s objects pass the required quality gates set by administrators/teams. Some examples of policy enforcement could be:
- All the workloads have proper labels
- Proper network policies are set for DB
- Unsafe configurations are not allowed (Example)
- Backups are always enabled (Example)
The K8s ecosystem offers a wide range of options to achieve this. Some of them are listed below:
- Open Policy Agent (OPA) is a CNCF-graduated project which provides a high-level declarative language to author and enforce policies across k8s objects. (Examples from Google and the OPA repo can be helpful; a minimal Gatekeeper sketch follows this list.)
- Mutating Webhooks can be used to modify objects in API requests before they are persisted. This can be used to set required properties on k8s objects. (Example: a mutating webhook that adds a NetworkPolicy for Pods created in production namespaces.)
- Validating Webhooks can be used to check whether a k8s API request follows the required policy; any request that doesn’t follow the policy will be rejected. (Example: a validating webhook that ensures 1GB huge pages are not used in a Pod.)
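As an illustration of the OPA approach mentioned above, here is a minimal OPA Gatekeeper ConstraintTemplate sketch, adapted from the standard required-labels example in the Gatekeeper documentation; treat it as a starting point rather than a production policy:

apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8srequiredlabels
spec:
  crd:
    spec:
      names:
        kind: K8sRequiredLabels
      validation:
        openAPIV3Schema:
          properties:
            labels:
              type: array
              items:
                type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8srequiredlabels
        # Reject objects that are missing any of the labels listed in the constraint parameters
        violation[{"msg": msg}] {
          provided := {label | input.review.object.metadata.labels[label]}
          required := {label | label := input.parameters.labels[_]}
          missing := required - provided
          count(missing) > 0
          msg := sprintf("you must provide labels: %v", [missing])
        }

A corresponding K8sRequiredLabels constraint then lists which labels (for example, a team or app label) are mandatory and which namespaces the rule applies to.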
Cluster-Wide
Percona Distribution for MySQL Operator and Percona Distribution for PostgreSQL Operator both support cluster-wide mode, which allows a single Operator to deploy and manage databases across multiple namespaces (support for cluster-wide mode in Percona Operator for MongoDB is on the roadmap). It is also possible to have an Operator per namespace:
For example, a single deployment of Percona Distribution for MySQL Operator can monitor multiple namespaces in cluster-wide mode. The user can specify them in the WATCH_NAMESPACE environment variable in the cw-bundle.yaml file:
spec:
  containers:
  - command:
    - percona-xtradb-cluster-operator
    env:
    - name: WATCH_NAMESPACE
      value: "namespace-a, namespace-b"
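The edited bundle can then be applied as usual; the path below assumes cw-bundle.yaml sits in the Operator repository’s deploy directory and that the Operator namespace already exists, so adjust both to your layout:

kubectl apply -f deploy/cw-bundle.yaml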
In a multi-tenant environment, the choice between cluster-wide and namespace-scoped deployments depends on the amount of freedom you want to give to the tenants. Usually, when the tenants are highly trusted (for instance, internal teams), it is fine to choose a namespace-scoped deployment, where each team can deploy and manage the Operator themselves.
Conclusion
It is important to remember that Kubernetes is not a multi-tenant system out of the box. This blog post described various levels of isolation that help you run your applications and databases securely and ensure operational stability.
We encourage you to try out our Operators:
CONTRIBUTING.md in every repository is there for those of you who want to contribute your ideas, code, and docs.
For general questions, please raise the topic in the community forum.
07
2021
Getting Started with ProxySQL in Kubernetes
There are plenty of ways to run ProxySQL in Kubernetes (K8S). For example, we can deploy sidecar containers on the application pods, or run a dedicated ProxySQL service with its own pods.
We are going to discuss the latter approach, which is more likely to be used when dealing with a large number of application pods. Remember that each ProxySQL instance runs a number of checks against the database backends. These checks monitor things like server status and replication lag. Having too many proxies can cause significant overhead.
Creating a Cluster
For the purpose of this example, I am going to deploy a test cluster in GKE. We need to follow these steps:
1. Create a cluster
gcloud container clusters create ivan-cluster --preemptible --project my-project --zone us-central1-c --machine-type n2-standard-4 --num-nodes=3
2. Configure command-line access
gcloud container clusters get-credentials ivan-cluster --zone us-central1-c --project my-project
3. Create a Namespace
kubectl create namespace ivantest-ns
4. Set the context to use our new Namespace
kubectl config set-context $(kubectl config current-context) --namespace=ivantest-ns
Dedicated Service Using a StatefulSet
One way to implement this approach is to have ProxySQL pods use persistent volumes to store the configuration. We can rely on ProxySQL Cluster mode to make sure the configuration is kept in sync.
For simplicity, we are going to use a ConfigMap with the initial config for bootstrapping the ProxySQL service for the first time.
Exposing the passwords in the ConfigMap is far from ideal, and so far the K8S community hasn’t made up its mind about how to implement referencing Secrets from a ConfigMap.
1. Prepare a file for the ConfigMap
tee proxysql.cnf <<EOF
datadir="/var/lib/proxysql"

admin_variables=
{
    admin_credentials="admin:admin;cluster:secret"
    mysql_ifaces="0.0.0.0:6032"
    refresh_interval=2000
    cluster_username="cluster"
    cluster_password="secret"
}

mysql_variables=
{
    threads=4
    max_connections=2048
    default_query_delay=0
    default_query_timeout=36000000
    have_compress=true
    poll_timeout=2000
    interfaces="0.0.0.0:6033;/tmp/proxysql.sock"
    default_schema="information_schema"
    stacksize=1048576
    server_version="8.0.23"
    connect_timeout_server=3000
    monitor_username="monitor"
    monitor_password="monitor"
    monitor_history=600000
    monitor_connect_interval=60000
    monitor_ping_interval=10000
    monitor_read_only_interval=1500
    monitor_read_only_timeout=500
    ping_interval_server_msec=120000
    ping_timeout_server=500
    commands_stats=true
    sessions_sort=true
    connect_retries_on_failure=10
}

mysql_servers =
(
    { address="mysql1" , port=3306 , hostgroup=10, max_connections=100 },
    { address="mysql2" , port=3306 , hostgroup=20, max_connections=100 }
)

mysql_users =
(
    { username = "myuser", password = "password", default_hostgroup = 10, active = 1 }
)

proxysql_servers =
(
    { hostname = "proxysql-0.proxysqlcluster", port = 6032, weight = 1 },
    { hostname = "proxysql-1.proxysqlcluster", port = 6032, weight = 1 },
    { hostname = "proxysql-2.proxysqlcluster", port = 6032, weight = 1 }
)
EOF
2. Create the ConfigMap
kubectl create configmap proxysql-configmap --from-file=proxysql.cnf
3. Prepare a file with the StatefulSet
tee proxysql-ss-svc.yml <<EOF
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: proxysql
  labels:
    app: proxysql
spec:
  replicas: 3
  serviceName: proxysqlcluster
  selector:
    matchLabels:
      app: proxysql
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: proxysql
    spec:
      restartPolicy: Always
      containers:
      - image: proxysql/proxysql:2.3.1
        name: proxysql
        volumeMounts:
        - name: proxysql-config
          mountPath: /etc/proxysql.cnf
          subPath: proxysql.cnf
        - name: proxysql-data
          mountPath: /var/lib/proxysql
          subPath: data
        ports:
        - containerPort: 6033
          name: proxysql-mysql
        - containerPort: 6032
          name: proxysql-admin
      volumes:
      - name: proxysql-config
        configMap:
          name: proxysql-configmap
  volumeClaimTemplates:
  - metadata:
      name: proxysql-data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 2Gi
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: proxysql
  name: proxysql
spec:
  ports:
  - name: proxysql-mysql
    nodePort: 30033
    port: 6033
    protocol: TCP
    targetPort: 6033
  - name: proxysql-admin
    nodePort: 30032
    port: 6032
    protocol: TCP
    targetPort: 6032
  selector:
    app: proxysql
  type: NodePort
EOF
4. Create the StatefulSet
kubectl create -f proxysql-ss-svc.yml
5. Prepare the definition of the headless Service (more on this later)
tee proxysql-headless-svc.yml <<EOF
apiVersion: v1
kind: Service
metadata:
  name: proxysqlcluster
  labels:
    app: proxysql
spec:
  clusterIP: None
  ports:
  - port: 6032
    name: proxysql-admin
  selector:
    app: proxysql
EOF
6. Create the headless Service
kubectl create -f proxysql-headless-svc.yml
7. Verify the Services
kubectl get svc

NAME              TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                         AGE
proxysql          NodePort    10.3.249.158   <none>        6033:30033/TCP,6032:30032/TCP   12m
proxysqlcluster   ClusterIP   None           <none>        6032/TCP                        8m53s
Pod Name Resolution
By default, each pod has a DNS name associated in the form pod-ip-address.my-namespace.pod.cluster-domain.example.
The headless Service causes K8S to auto-create a DNS record with each pod’s FQDN as well. The result is we will have the following entries available:
proxysql-0.proxysqlcluster
proxysql-1.proxysqlcluster
proxysql-2.proxysqlcluster
We can then use these to set up the ProxySQL cluster (the proxysql_servers part of the configuration file).
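To confirm the records exist, you can resolve one of the names from a throwaway pod; busybox’s nslookup applet is enough for a quick check (the pod name dns-test is arbitrary):

kubectl run -i --rm --tty dns-test --image=busybox --restart=Never -- nslookup proxysql-0.proxysqlcluster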
Connecting to the Service
To test the service, we can run a container that includes a MySQL client and connect its console output to our terminal. For example, use the following command (which also removes the container/pod after we exit the shell):
kubectl run -i --rm --tty percona-client --image=percona/percona-server:latest --restart=Never -- bash -il
The connections from other pods should be sent to the Cluster-IP and port 6033 and will be load balanced. We can also use the DNS name proxysql.ivantest-ns.svc.cluster.local that got auto-created.
mysql -umyuser -ppassword -h10.3.249.158 -P6033
If the client is connecting from outside the cluster, use the NodePort 30033 together with the IP address of one of the worker nodes instead (the Cluster-IP above is only reachable from inside the cluster):
mysql -umyuser -ppassword -h<node-ip> -P30033
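Once connectivity works, you can also verify that ProxySQL Cluster mode is keeping the nodes in sync by querying the admin interface on port 6032. This sketch assumes the cluster credentials defined in the ConfigMap above (cluster/secret) and is run from the same client pod:

mysql -ucluster -psecret -hproxysql-0.proxysqlcluster -P6032 -e "SELECT hostname, port FROM proxysql_servers;"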
Cleanup Steps
In order to remove all the resources we created, run the following commands:
kubectl delete statefulsets proxysql
kubectl delete service proxysql
kubectl delete service proxysqlcluster
Final Words
We have seen one of the possible ways to deploy ProxySQL in Kubernetes. The approach presented here has a few shortcomings but is good enough for illustrative purposes. For a production setup, consider looking at the Percona Kubernetes Operators instead.