Percona Operator for MongoDB version 1.13 was recently released, and it comes with a number of notable new features. In this blog post, we are going to look under the hood and see what the practical use cases for these improvements are.
Cluster-wide deployment
There are two modes that Percona Operators support:
- Namespace scope
- Cluster-wide
Namespace scope limits the Operator operations to a single namespace, whereas in cluster-wide mode Operator can deploy and manage databases in multiple namespaces of a Kubernetes cluster. Our Operators for PostgreSQL and MySQL already support cluster-wide mode. With the 1.13 release, we are closing the gap for Percona Operator for MongoDB.
Multi-tenant clusters are the most common use case for cluster-wide mode. As a cluster administrator, you manage a single deployment of the Operator and give your teams a way to deploy and manage MongoDB in their isolated namespaces. Read more about multi-tenancy and best practices in our Multi-Tenant Kubernetes Cluster with Percona Operators blog post.
How does it work?
To deploy in cluster-wide mode, we introduce cw-*.yaml manifests. The quickest way is to use cw-bundle.yaml, which deploys the following:
- Custom Resource Definition
- Service Account and Cluster Role that allow the Operator to create and manage Kubernetes objects in various namespaces
- Operator Deployment itself
By default, the Operator monitors all the namespaces in the cluster. The WATCH_NAMESPACE environment variable in the Operator Deployment limits the scope. It can be a comma-separated list that instructs the Operator on which namespaces to monitor for Custom Resource objects:
- name: WATCH_NAMESPACE
  value: "app-dev1,app-dev2"
This is useful if you want to limit the blast radius by running multiple Operators, each monitoring its own set of namespaces. For example, you can run one Operator per environment – development, staging, production – as sketched below.
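As a rough sketch (the namespace names here are purely illustrative), each environment gets its own Operator Deployment with its own WATCH_NAMESPACE value:

# Operator Deployment for the development environments
- name: WATCH_NAMESPACE
  value: "app-dev1,app-dev2"
# Operator Deployment for production
- name: WATCH_NAMESPACE
  value: "app-prod"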
Deploy the bundle:
kubectl apply -f deploy/cw-bundle.yaml -n psmdb-operator
Now you can start deploying databases in the namespaces you need:
kubectl apply -f deploy/cr.yaml -n app-dev1
kubectl apply -f deploy/cr.yaml -n app-dev2
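If everything went well, each namespace now runs its own cluster managed by the single Operator. You can verify this by listing the Custom Resources in each namespace (psmdb is the short name for the PerconaServerMongoDB Custom Resource):

kubectl get psmdb -n app-dev1
kubectl get psmdb -n app-dev2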
See the demo below where I deploy two clusters in different namespaces with a single Operator.
HashiCorp Vault integration for encryption-at-rest
We take security seriously at Percona. Data-at-rest encryption prevents data visibility in the event of unauthorized access or theft, and it is supported by all our Operators. With this release, we introduce support for integration with HashiCorp Vault, where users can keep the encryption keys in Vault and instruct Percona Operator for MongoDB to use them. This feature is in a technical preview stage.
There is a good blog post that describes how Percona Server for MongoDB works with Vault. In the Operator, we implement the same functionality and follow the same parameter structure.
How does it work?
We are going to assume that you already have HashiCorp Vault installed – either on the HashiCorp Cloud Platform or as a self-hosted deployment. We will focus on the configuration of the Operator.
To instruct the Operator to use Vault you need to specify two things in the Custom Resource:
- secrets.vault – a Secret resource with a Vault token in it
- Custom configuration for mongod for a replica set and config servers
secrets.vault
Example of cr.yaml:
spec:
  secrets:
    vault: my-vault-secret
The secret object itself should contain the token that has access to create, read, update and delete the secrets in the desired path in the Vault. Please refer to the Vault documentation to understand policies better.
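For illustration only, a minimal sketch of such a policy and token created with the Vault CLI might look like this (the policy name and key path are hypothetical, and the example assumes the KV version 2 secrets engine is mounted at secret/):

# Allow the cluster to manage its encryption keys under its own path
vault policy write psmdb-keys - <<EOF
path "secret/data/dc/cluster1/*" {
  capabilities = ["create", "read", "update", "delete"]
}
EOF

# Create a token bound to that policy; put it into the Secret shown below
vault token create -policy=psmdb-keys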
Example of a Secret:
apiVersion: v1
kind: Secret
metadata:
  name: my-vault-secret
  namespace: default
type: Opaque
data:
  token: aHZzLnhrVVRPTEVOM2dLQmZuV0I5WTF0RmtOaA==
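If you prefer not to write the YAML by hand, an equivalent Secret can be created directly with kubectl (the token value here is a placeholder):

kubectl create secret generic my-vault-secret -n default --from-literal=token=<your-vault-token>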
Custom configuration
The Operator allows users to fine-tune mongod and mongos configurations. For encryption to work, you must specify the vault configuration for replica sets – both data and config servers.
Example of cr.yaml:
replsets:
  - name: rs0
    size: 3
    configuration: |
      security:
        enableEncryption: true
        vault:
          serverName: vault
          port: 8200
          tokenFile: /etc/mongodb-vault/token
          secret: secret/data/dc/cluster1/rs0
          disableTLSForTesting: true
…
sharding:
  enabled: true
  configsvrReplSet:
    size: 3
    configuration: |
      security:
        enableEncryption: true
        vault:
          serverName: vault
          port: 8200
          tokenFile: /etc/mongodb-vault/token
          secret: secret/data/dc/cluster1/cfg
          disableTLSForTesting: true
What to note here:
- tokenFile: /etc/mongodb-vault/token – this is where the Operator is going to mount the Secret with the Vault token you created before. This is the default path and in most cases should not be changed.
- secret: secret/data/dc/cluster1/rs0 – this is the path where the keys are going to be stored in the Vault.
You can read more about Percona Server for MongoDB and Hashicorp Vault parameters in our documentation.
Once you are done with the configuration, apply the Custom Resource as usual. If everything is set up correctly, you will see the following message in the mongod log:
$ kubectl logs cluster1-rs0-0 -c mongod
…
{"t":{"$date":"2022-09-13T19:40:20.342+00:00"},"s":"I", "c":"STORAGE", "id":29039, "ctx":"initandlisten","msg":"Encryption keys DB is initialized successfully"}
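As an additional, optional check, you can look for the generated key in Vault itself; assuming the KV version 2 engine and the path from the configuration above, a command along these lines should return an entry:

$ vault kv get secret/dc/cluster1/rs0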
Azure Kubernetes Service support
All Percona Operators are going through rigorous QA testing throughout the development lifecycle. Hours of QA engineers’ work are put into automating the test suites for specific Kubernetes flavors.
AKS, or Azure Kubernetes Service, is the second most popular managed Kubernetes offering according to Flexera 2022 State of the Cloud report. After adding the support for Azure Blob Storage in version 1.11.0, it was just a matter of time before we started supporting AKS in full.
Starting with the 1.13.0 release, Percona Operator for MongoDB supports AKS in Technical Preview. You can see more details in our documentation.
The installation process of the Operator is no different from any other Kubernetes flavor: you can use a helm chart or apply YAML manifests with kubectl, as sketched below. I ran the cluster-wide demo above with AKS.
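For example, a minimal helm-based installation might look like the following (the chart comes from the Percona helm-charts repository; the release name and namespace are illustrative):

helm repo add percona https://percona.github.io/percona-helm-charts/
helm install my-operator percona/psmdb-operator --namespace psmdb-operator --create-namespace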
Admin user
This is a minor change, but frankly, it is my favorite, as it greatly improves the user experience. Our Operator comes with system users that are used to manage and track the health of the database. There are also userAdmin and clusterAdmin users that let you control the database, create users, and so on.
The problem here is that neither userAdmin nor clusterAdmin allows you to start playing with the database right away. First, you need to create a user that has permission to create databases and collections, and only after that can you start using your fresh MongoDB cluster.
With release 1.13, we say no more to this: we add a databaseAdmin user that acts like a database administrator, enabling users to start innovating right away.
databaseAdmin credentials are added to the same Secret object as the other users:
$ kubectl get secret my-cluster-name-secrets -o yaml | grep MONGODB_DATABASE_ADMIN_
  MONGODB_DATABASE_ADMIN_PASSWORD: Rm9NUnJ3UDJDazB5cW9WWU8=
  MONGODB_DATABASE_ADMIN_USER: ZGF0YWJhc2VBZG1pbg==
Get your password like this:
$ kubectl get secret my-cluster-name-secrets -o jsonpath='{.data.MONGODB_DATABASE_ADMIN_PASSWORD}' | base64 --decode
FoMRrwP2Ck0yqoVYO
Connect to the database as usual and start innovating:
mongo "mongodb://databaseAdmin:FoMRrwP2Ck0yqoVYO@20.31.226.164/admin"
What’s next
Percona is committed to running databases anywhere. Kubernetes adoption grows year over year, and Kubernetes is turning from a container orchestrator into a cloud operating system. Our Operators support the community and our customers on their infrastructure transformation journey by automating the deployment and management of databases in Kubernetes.
The following links will help you get familiar with Percona Operator for MongoDB:
- Quickstart guides
- Free Kubernetes cluster for easier and quicker testing
- Percona Operator for MongoDB community forum if you have general questions or need assistance
Read about Percona Monitoring and Management DBaaS, an open source solution that simplifies the deployment and management of MongoDB even more.