Run MongoDB in Kubernetes: Solutions, Pros and Cons


Running MongoDB in Kubernetes is becoming the norm. The most common use cases for running databases in Kubernetes are:

  • Create a private Database as a Service (DBaaS).
  • Maintain coherent infrastructure, where databases and applications run on the same platform.
  • Avoid vendor lock-in and run databases anywhere, utilizing the power of Kubernetes and containers.

There are multiple solutions that allow you to run MongoDB in Kubernetes, and in this blog post, we are going to compare these solutions and review the pros and cons of each of them.

Solutions that we are going to review are:

  • Bitnami Helm chart
  • KubeDB by AppsCode
  • MongoDB Community Operator
  • Percona Operator for MongoDB

The summary and comparison table can be found in our documentation.

Bitnami Helm chart

Bitnami was acquired by VMware in 2019 and is known for its Helm charts for deploying various applications in Kubernetes. The key word here is deploy, as there are no management capabilities in this solution.

All Bitnami Helm charts are under Apache 2.0 license, pure open source. The community around Bitnami is quite big and active.


Installation is the usual two-step process: add the Helm repository and deploy the chart.

Add the repository:

$ helm repo add bitnami https://charts.bitnami.com/bitnami

You can tweak the chart installation through values. I decided to experiment with it and deploy a replica set instead of the default standalone setup:

$ helm install my-mongo bitnami/mongodb --set architecture="replicaset" --set replicaCount=2

This command deploys a MongoDB replica set with two nodes and one arbiter. 

It is worth noting that the GitHub README is not the only piece of documentation; there are also official Bitnami docs with more examples and details.


Management capabilities are not the strong suit of this solution, but it is possible to scale the replica set horizontally (add more nodes to it) and vertically (add more resources to the nodes).

Monitoring is provided by a Prometheus metrics exporter that runs as a sidecar container; it is described in the Metrics parameters section. It’s a straightforward approach that is enough for most use cases.

What I love about all Bitnami Helm charts is that they provide lots of flexibility for tuning Kubernetes primitives and components: labels, annotations, images, resources, and more.

If we look into the MongoDB-related functionality in this solution, a few things are worth noting:

  1. Create users and databases during database provisioning. It is a bit odd that you cannot create roles and assign them right away, but it is a nice touch nonetheless.
  2. Hidden node – you will not find it in any other solution in this blog post.
  3. TLS – you can encrypt the traffic flowing between the replica set nodes. Good move, but for some reason it is disabled by default.

I have not found any information about supported MongoDB versions. The latest Helm chart version was deploying MongoDB 5.0.

You can deploy sharded clusters, but through a separate Helm chart – MongoDB Sharded. Still, you cannot easily integrate this solution with enterprise systems such as LDAP or HashiCorp Vault – at least not in a straightforward way.


KubeDB

KubeDB, created by AppsCode, is a Swiss-army-knife Operator that deploys various databases in Kubernetes, including MongoDB.

This solution follows an open-core model: limited functionality is available for free, while the good stuff comes after you purchase a license. Here you can find a comparison of the Community vs. Enterprise versions.

The focus of our review is the Community version, but we will mention which features are available in the Enterprise version.


I found deployment a bit cumbersome, as it requires downloading the license first, even for the Community version.

Add the Helm repo:

$ helm repo add appscode https://charts.appscode.com/stable/

Install the Operator:

$ helm install kubedb appscode/kubedb \
 --version v2022.05.24 \
 --namespace kubedb --create-namespace \
 --set-file global.license=/path/to/the/license.txt

Deploy the database with the command below. It is worth mentioning that with the Community version you can deploy the database only in the demo namespace.

$ kubectl create -f

This deploys a sharded cluster. You can find a bunch of examples on GitHub.


The Community version comes with almost no management capabilities. You can add and remove resources and nodes to perform basic scaling, but nothing else.

Backups, restores, upgrades, encryption, and much more are available in the Enterprise version only.

For monitoring, KubeDB follows the same approach as Bitnami and allows you to deploy a Prometheus metrics exporter container as a sidecar. Read more here.

The Community version definitely comes with limited functionality, which is not enough to run it in production. At the same time, there are some interesting features:

  • One Operator, many databases. I like this approach as it can simplify your operational landscape if you run various database flavors.
  • Pick your MongoDB version. You can easily see the supported MongoDB versions and pick the one you need through a custom resource:
$ kubectl get mongodbversions
NAME             VERSION   DISTRIBUTION   DB_IMAGE                                 DEPRECATED   AGE
3.4.17-v1        3.4.17    Official       mongo:3.4.17                                          20m
3.4.22-v1        3.4.22    Official       mongo:3.4.22                                          20m
3.6.13-v1        3.6.13    Official       mongo:3.6.13                                          20m
3.6.8-v1         3.6.8     Official       mongo:3.6.8                                           20m
4.0.11-v1        4.0.11    Official       mongo:4.0.11                                          20m
4.0.3-v1         4.0.3     Official       mongo:4.0.3                                           20m
4.0.5-v3         4.0.5     Official       mongo:4.0.5                                           20m
4.1.13-v1        4.1.13    Official       mongo:4.1.13                                          20m
4.1.4-v1         4.1.4     Official       mongo:4.1.4                                           20m
4.1.7-v3         4.1.7     Official       mongo:4.1.7                                           20m
4.2.3            4.2.3     Official       mongo:4.2.3                                           20m
4.4.6            4.4.6     Official       mongo:4.4.6                                           20m
5.0.2            5.0.2     Official       mongo:5.0.2                                           20m
5.0.3            5.0.3     Official       mongo:5.0.3                                           20m
percona-3.6.18   3.6.18    Percona        percona/percona-server-mongodb:3.6.18                 20m
percona-4.0.10   4.0.10    Percona        percona/percona-server-mongodb:4.0.10                 20m
percona-4.2.7    4.2.7     Percona        percona/percona-server-mongodb:4.2.7-7                20m
percona-4.4.10   4.4.10    Percona        percona/percona-server-mongodb:4.4.10                 20m

As you can see, it even supports Percona Server for MongoDB.

  • Sharding. I like that the Community version comes with sharding support as it helps to experiment and understand the future fit of this solution for your environment. 

MongoDB Community Operator

Similar to KubeDB, MongoDB Corp follows the open-core model: the Community version of the Operator is free, and there is a more feature-rich Enterprise version available (MongoDB Enterprise Kubernetes Operator).


A Helm chart is available for this Operator, but there is no Helm chart to deploy the Custom Resource itself. This makes onboarding a bit more complicated.

Add the repository: 

$ helm repo add mongodb https://mongodb.github.io/helm-charts

Install the Operator:

$ helm install community-operator mongodb/community-operator

To deploy the database, you need to modify this example and set the password. Once done, apply it to deploy a replica set of three nodes:

$ kubectl apply -f mongodb.com_v1_mongodbcommunity_cr.yaml


There are no expectations that anyone would run the MongoDB Community Operator in production, as it lacks basic features: backups and restores, sharding, upgrades, etc. But, as always, there are interesting ideas worth mentioning:

  • User provisioning, described here. You can create a MongoDB database user for SCRAM authentication to your replica set: you create a Secret object and list the users with their corresponding roles.
  • Connection string in the secret. Once a cluster is created, you can get the connection string, without figuring out the service endpoints or fetching the user credentials.
  "connectionString.standard": "mongodb://my-user:mySupaPass@example-mongodb-0.example-mongodb-svc.default.svc.cluster.local:27017,example-mongodb-1.example-mongodb-svc.default.svc.cluster.local:27017,example-mongodb-2.example-mongodb-svc.default.svc.cluster.local:27017/admin?replicaSet=example-mongodb&ssl=false",
  "connectionString.standardSrv": "mongodb+srv://my-user:mySupaPass@example-mongodb-svc.default.svc.cluster.local/admin?replicaSet=example-mongodb&ssl=false",
  "password": "mySupaPass",
  "username": "my-user"
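As with any Kubernetes Secret, these values arrive base64-encoded when read through the API. A standalone Python sketch of the decoding step (field names follow the example above; the data is hard-coded here rather than fetched from a cluster):

```python
import base64

# Simulated Secret data as returned by the Kubernetes API --
# every value arrives base64-encoded (values copied from the example above).
secret_data = {
    "username": base64.b64encode(b"my-user").decode(),
    "password": base64.b64encode(b"mySupaPass").decode(),
}

def decode_secret(data):
    """Decode every base64-encoded value of a Kubernetes Secret."""
    return {key: base64.b64decode(value).decode() for key, value in data.items()}

creds = decode_secret(secret_data)
print(creds["username"])  # my-user
```

This is the same transformation `kubectl get secret … -o jsonpath` piped through `base64 -d` performs on the command line.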

Percona Operator for MongoDB

Last but not least in our solution list is the fully open source Percona Operator for MongoDB. It was created in 2018, went through various stages of improvement, and reached GA in 2020. The latest release came out in May 2022.

If you are a Percona customer, you will get 24/7 support for the Operator and clusters deployed with it. 


There are various ways to deploy Percona Operators, and one of them is through Helm charts: one chart for the Operator, another for the database.

Add repository:

$ helm repo add percona https://percona.github.io/percona-helm-charts/

Deploy the Operator:

$ helm install my-operator percona/psmdb-operator

Deploy the cluster:

$ helm install my-db percona/psmdb-db


The Operator leverages Percona Distribution for MongoDB, which unblocks various enterprise capabilities right away. The Operator itself is feature rich and provides various features that simplify the migration from a regular MongoDB deployment to Kubernetes: sharding, arbiters, backups and restores, and many more.

Some key features that I would like to mention:

  • Automated upgrades. You can set up the schedule of upgrades and the Operator will automatically upgrade your MongoDB cluster’s minor version with no downtime to operations.
  • Point-in-time recovery. Percona Backup for MongoDB is part of our Distribution and provides backup and restore functionality in our Operator, and this includes the ability to store oplogs on an object storage of your choice. Read more about it in our documentation.
  • Multi-cluster deployment. Percona Operator supports complex topologies with cross-cluster replication capability. The most common use cases are disaster recovery and migrations.


Choosing a solution to deploy and manage MongoDB is an important technical decision, which might impact various business metrics in the future. In this blog post, we summarized and highlighted the pros and cons of various open source solutions to run MongoDB in Kubernetes. 

An important thing to remember is that the solution you choose should not only provide a way to deploy the database but also enable your teams to execute various management and maintenance tasks without drowning in MongoDB complexity. 


Run PostgreSQL on Kubernetes with Percona Operator & Pulumi


Avoid vendor lock-in, provide a private Database-as-a-Service for internal teams, quickly deploy-test-destroy databases with CI/CD pipeline – these are some of the most common use cases for running databases on Kubernetes with operators. Percona Distribution for PostgreSQL Operator enables users to do exactly that and more.

Pulumi is an infrastructure-as-code tool that enables developers to write code in their favorite language (Python, Golang, JavaScript, etc.) to easily deploy infrastructure and applications to public clouds and platforms such as Kubernetes.

This blog post is a step-by-step guide on how to deploy a highly-available PostgreSQL cluster on Kubernetes with our Percona Operator and Pulumi.

Desired State

We are going to provision the following resources with Pulumi:

  • Google Kubernetes Engine cluster with three nodes. It can be any Kubernetes flavor.
  • Percona Operator for PostgreSQL
  • Highly available PostgreSQL cluster with one primary and two hot standby nodes
  • Highly available pgBouncer deployment with the Load Balancer in front of it
  • pgBackRest for local backups

Pulumi code can be found in this git repository.


I will use an Ubuntu box to run Pulumi, but almost the same steps work on macOS.

Pre-install Packages

gcloud and kubectl

echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg] https://packages.cloud.google.com/apt cloud-sdk main" | sudo tee -a /etc/apt/sources.list.d/google-cloud-sdk.list
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key --keyring /usr/share/keyrings/cloud.google.gpg add -
sudo apt-get update
sudo apt-get install -y google-cloud-sdk kubectl jq unzip


Pulumi allows developers to use the language of their choice to describe infrastructure and applications. I’m going to use Python. We will also need pip (the Python package manager) and venv (the virtual environment module).

sudo apt-get install python3 python3-pip python3-venv


Install Pulumi:

curl -sSL https://get.pulumi.com | sh

On macOS, Pulumi can be installed via Homebrew with:

brew install pulumi


You will need to add .pulumi/bin to your $PATH:

export PATH=$PATH:/home/percona/.pulumi/bin



You will need to provide access to Google Cloud to provision Google Kubernetes Engine.

gcloud config set project your-project
gcloud auth application-default login
gcloud auth login


Generate a Pulumi token at app.pulumi.com; you will need it later to init the Pulumi stack.


This repo has the following files:

  • Pulumi.yaml

    – identifies the folder as a Pulumi project

  • __main__.py

    – Python code used by Pulumi to provision everything we need

  • requirements.txt

    – lists the required Python packages

Clone the repo and go to the pg-k8s-pulumi directory:

git clone
cd blog-data/pg-k8s-pulumi

Init the stack with:

pulumi stack init pg

You will need the token generated earlier on app.pulumi.com.

The Python code that Pulumi is going to process is in the __main__.py file.

Lines 1-6: importing python packages

Lines 8-31: configuration parameters for this Pulumi stack. It consists of two parts:

  • Kubernetes cluster configuration. For example, the number of nodes.
  • Operator and PostgreSQL cluster configuration – namespace to be deployed to, service type to expose pgBouncer, etc.

Lines 33-80: deploy GKE cluster and export its configuration

Lines 82-88: create the namespace for Operator and PostgreSQL cluster

Lines 91-426: deploy the Operator. In reality, it just mirrors the operator.yaml from our Operator.

Lines 429-444: create the secret object that allows you to set the password for pguser to connect to the database

Lines 445-557: deploy PostgreSQL cluster. It is a JSON version of cr.yaml from our Operator repository

Line 560: exports Kubernetes configuration so that it can be reused later 


First, we will set the configuration for this stack. Execute the following commands:

pulumi config set gcp:project YOUR_PROJECT
pulumi config set gcp:zone us-central1-a
pulumi config set node_count 3
pulumi config set master_version 1.21

pulumi config set namespace percona-pg
pulumi config set pg_cluster_name pulumi-pg
pulumi config set service_type LoadBalancer
pulumi config set pg_user_password mySuperPass

These commands set the following:

  • GCP project where GKE is going to be deployed
  • GCP zone 
  • Number of nodes in a GKE cluster
  • Kubernetes version
  • Namespace to run PostgreSQL cluster
  • The name of the cluster
  • Expose pgBouncer with LoadBalancer object

Deploy with the following command:

$ pulumi up
Previewing update (pg)

View Live:

     Type                                                         Name                                Plan       Info
 +   pulumi:pulumi:Stack                                          percona-pg-k8s-pg                   create     1 message
 +   random:index:RandomPassword                                  pguser_password                     create
 +   random:index:RandomPassword                                  password                            create
 +   gcp:container:Cluster                                        gke-cluster                         create
 +   pulumi:providers:kubernetes                                  gke_k8s                             create
 +   kubernetes:core/v1:ServiceAccount                            pgoPgo_deployer_saServiceAccount    create
 +   kubernetes:core/v1:Namespace                                 pgNamespace                         create
 +   kubernetes:batch/v1:Job                                      pgoPgo_deployJob                    create
 +   kubernetes:core/v1:ConfigMap                                 pgoPgo_deployer_cmConfigMap         create
 +   kubernetes:core/v1:Secret                                    percona_pguser_secretSecret         create
 +   kubernetes:rbac.authorization.k8s.io/v1:ClusterRoleBinding   pgo_deployer_crbClusterRoleBinding  create
 +   kubernetes:rbac.authorization.k8s.io/v1:ClusterRole          pgo_deployer_crClusterRole          create
 +   kubernetes:pg.percona.com/v1:PerconaPGCluster                my_cluster_name                     create

  pulumi:pulumi:Stack (percona-pg-k8s-pg):
    E0225 14:19:49.739366105   53802]           Fork support is only compatible with the epoll1 and poll polling strategies

Do you want to perform this update? yes

Updating (pg)
View Live:
     Type                                                         Name                                Status      Info
 +   pulumi:pulumi:Stack                                          percona-pg-k8s-pg                   created     1 message
 +   random:index:RandomPassword                                  pguser_password                     created
 +   random:index:RandomPassword                                  password                            created
 +   gcp:container:Cluster                                        gke-cluster                         created
 +   pulumi:providers:kubernetes                                  gke_k8s                             created
 +   kubernetes:core/v1:ServiceAccount                            pgoPgo_deployer_saServiceAccount    created
 +   kubernetes:core/v1:Namespace                                 pgNamespace                         created
 +   kubernetes:core/v1:ConfigMap                                 pgoPgo_deployer_cmConfigMap         created
 +   kubernetes:batch/v1:Job                                      pgoPgo_deployJob                    created
 +   kubernetes:core/v1:Secret                                    percona_pguser_secretSecret         created
 +   kubernetes:rbac.authorization.k8s.io/v1:ClusterRole          pgo_deployer_crClusterRole          created
 +   kubernetes:rbac.authorization.k8s.io/v1:ClusterRoleBinding   pgo_deployer_crbClusterRoleBinding  created
 +   kubernetes:pg.percona.com/v1:PerconaPGCluster                my_cluster_name                     created

  pulumi:pulumi:Stack (percona-pg-k8s-pg):
    E0225 14:20:00.211695433   53839]           Fork support is only compatible with the epoll1 and poll polling strategies

    kubeconfig: "[secret]"

    + 13 created

Duration: 5m30s


Get the kubeconfig first:

pulumi stack output kubeconfig --show-secrets > ~/.kube/config

Check if Pods of your PG cluster are up and running:

$ kubectl -n percona-pg get pods
NAME                                             READY   STATUS      RESTARTS   AGE
backrest-backup-pulumi-pg-dbgsp                  0/1     Completed   0          64s
pgo-deploy-8h86n                                 0/1     Completed   0          4m9s
postgres-operator-5966f884d4-zknbx               4/4     Running     1          3m27s
pulumi-pg-787fdbd8d9-d4nvv                       1/1     Running     0          2m12s
pulumi-pg-backrest-shared-repo-f58bc7657-2swvn   1/1     Running     0          2m38s
pulumi-pg-pgbouncer-6b6dc4564b-bh56z             1/1     Running     0          81s
pulumi-pg-pgbouncer-6b6dc4564b-vpppx             1/1     Running     0          81s
pulumi-pg-pgbouncer-6b6dc4564b-zkdwj             1/1     Running     0          81s
pulumi-pg-repl1-58d578cf49-czm54                 0/1     Running     0          46s
pulumi-pg-repl2-7888fbfd47-h98f4                 0/1     Running     0          46s
pulumi-pg-repl3-cdd958bd9-tf87k                  1/1     Running     0          46s

Get the IP address of the pgBouncer LoadBalancer:

$ kubectl -n percona-pg get services
NAME                             TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)                      AGE
pulumi-pg-pgbouncer              LoadBalancer   5432:32042/TCP               3m17s

You can connect to your PostgreSQL cluster through this IP address, using the pguser password that was set earlier with:

pulumi config set pg_user_password


psql -h <external-ip> -p 5432 -U pguser pgdb

Clean up

To delete everything, it is enough to run the following commands:

pulumi destroy
pulumi stack rm

Tricks and Quirks

Pulumi Converter

kube2pulumi is a huge help if you already have YAML manifests. You don’t need to rewrite the whole code, but just convert YAMLs to Pulumi code. This is what I did for operator.yaml.


There are two ways to manage Custom Resources in Pulumi:

crd2pulumi generates libraries/classes out of Custom Resource Definitions and allows you to create custom resources using them later. I found it a bit complicated, and it also lacks documentation.

apiextensions.CustomResource, on the other hand, allows you to create Custom Resources by specifying them as JSON. It is much easier and requires less manipulation. See lines 446-557 in my __main__.py.

True/False in JSON

I have the following in my Custom Resource definition in Pulumi code:

perconapg = kubernetes.apiextensions.CustomResource(
    spec={
        "disableAutofail": False,
        "tlsOnly": False,
        "standby": False,
        "pause": False,
        "keepData": True,

Be sure to use the booleans of the language of your choice and not the “true”/”false” strings. For me, using strings resulted in a failure, as the Operator expects booleans, not strings.
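The pitfall can be demonstrated in plain Python, independent of Pulumi: language booleans serialize to real JSON booleans, while the strings stay quoted strings (and any non-empty string, including "false", is truthy):

```python
import json

spec_ok = {"disableAutofail": False, "keepData": True}
spec_bad = {"disableAutofail": "false", "keepData": "true"}

# Python booleans become real JSON booleans...
print(json.dumps(spec_ok))   # {"disableAutofail": false, "keepData": true}
# ...while the strings remain quoted strings in the serialized spec:
print(json.dumps(spec_bad))  # {"disableAutofail": "false", "keepData": "true"}
# And on the Python side, the string "false" is truthy:
print(bool("false"))         # True
```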

Depends On…

Pulumi makes its own decisions on the order in which resources are provisioned. You can enforce the order by specifying dependencies.

For example, I’m ensuring that Operator and Secret are created before the Custom Resource:



High Availability and Disaster Recovery Recipes for PostgreSQL on Kubernetes


Percona Distribution for PostgreSQL Operator allows you to deploy and manage highly available and production-grade PostgreSQL clusters on Kubernetes with minimal manual effort. In this blog post, we are going to look deeper into High Availability, Disaster Recovery, and Scaling of PostgreSQL clusters.

High Availability

Our default custom resource manifest deploys a highly available (HA) PostgreSQL cluster. Key components of HA setup are:

  • Kubernetes Services that point to pgBouncer and replica nodes
  • pgBouncer – a lightweight connection pooler for PostgreSQL
  • Patroni – HA orchestrator for PostgreSQL
  • PostgreSQL nodes – we have one primary and two replica nodes in hot standby by default

high availability postgresql

Kubernetes Service is the way to expose your PostgreSQL cluster to applications or users. We have two services:

  • clusterName-pgbouncer

    – Exposing your PostgreSQL cluster through pgBouncer connection pooler. Both reads and writes are sent to the Primary node. 

  • clusterName-replica

    – Exposes replica nodes directly. It should be used for reads only. Also, keep in mind that connections to this service are not pooled. We are working on a better solution, where the user would be able to leverage both connection pooling and read-scaling through a single service.

By default, we use the ClusterIP service type, but you can change it in the cr.yaml manifest.

Every PostgreSQL container has Patroni running. Patroni monitors the state of the cluster, and in case of Primary node failure it switches the Primary role to one of the Replica nodes. PgBouncer always knows where the Primary is.

As you can see, we distribute PostgreSQL cluster components across different Kubernetes nodes. This is done with Affinity rules, which are applied by default to ensure that a single node failure does not cause database downtime.

Multi-Datacenter with Multi-AZ

Good architecture design is to run your Kubernetes cluster across multiple datacenters. Public clouds have a concept of availability zones (AZ) which are data centers within one region with a low-latency network connection between them. Usually, these data centers are at least 100 kilometers away from each other to minimize the probability of regional outage. You can leverage multi-AZ Kubernetes deployment to run cluster components in different data centers for better availability.
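The goal of such placement can be illustrated with a tiny standalone sketch (zone names are illustrative; in reality the Kubernetes scheduler does this, guided by affinity rules):

```python
def spread_across_zones(pods, zones):
    """Round-robin pods over availability zones so no zone holds more than
    its share -- the effect that anti-affinity rules aim for."""
    return {pod: zones[i % len(zones)] for i, pod in enumerate(pods)}

placement = spread_across_zones(
    ["cluster1", "cluster1-repl1", "cluster1-repl2"],
    ["us-central1-a", "us-central1-b", "us-central1-c"],
)
print(placement["cluster1-repl2"])  # us-central1-c
```

With three zones and three PostgreSQL nodes, losing any single zone leaves two nodes running.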

Multi-Datacenter with Multi-AZ

To ensure that PostgreSQL components are distributed across availability zones, you need to tweak the affinity rules. For now, this is only possible through editing Deployment resources directly:

$ kubectl edit deploy cluster1-repl2
-            topologyKey: kubernetes.io/hostname
+            topologyKey: topology.kubernetes.io/zone


Scaling

Scaling PostgreSQL to meet demand at peak hours is crucial for high availability. Our Operator provides you with tools to scale PostgreSQL components both horizontally and vertically.

Vertical Scaling

Scaling vertically is all about adding more power to a PostgreSQL node. The recommended way is to change resources in the Custom Resource (instead of changing them in Deployment objects directly). For example, change the following in the cr.yaml manifest to get 256 MB of RAM for all PostgreSQL Replica nodes:

-         memory: "128Mi"
+         memory: "256Mi"




Apply the change:

$ kubectl apply -f cr.yaml

Use the same approach to tune other components in their corresponding sections.

You can also leverage the Vertical Pod Autoscaler (VPA) to react to load spikes automatically. We create a Deployment resource for the Primary and each Replica node, and VPA objects should target these Deployments. The following example tracks one of the replica Deployment resources of cluster1 and scales it automatically:

apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: pxc-vpa
spec:
  targetRef:
    apiVersion: "apps/v1"
    kind:       Deployment
    name:       cluster1-repl1
    namespace:  pgo
  updatePolicy:
    updateMode: "Auto"

Please read more about VPA and its capabilities in its documentation.

Horizontal Scaling

Adding more replica nodes or pgBouncers can be done by changing the size parameters in the Custom Resource. Make the following change in the default cr.yaml:
-      size: 2
+      size: 3

Apply the change to get one more PostgreSQL Replica node:

$ kubectl apply -f cr.yaml

Starting from release 1.1.0, it is also possible to scale the cluster using the kubectl scale command. Execute the following to have two PostgreSQL replica nodes in cluster1:

$ kubectl scale --replicas=2 perconapgcluster/cluster1
perconapgcluster.pg.percona.com/cluster1 scaled

It is not yet possible to use the Horizontal Pod Autoscaler (HPA) in the latest release; we will support it in the next one. Stay tuned.

Disaster Recovery

It is important to understand that Disaster Recovery (DR) is not High Availability. DR’s goal is to ensure business continuity in the case of a massive disaster, such as a full region outage. Recovery in such cases can, of course, be automated, but not necessarily – it strictly depends on the business requirements.

Disaster Recovery postgresql

Backup and Restore

It is probably the most common Disaster Recovery protocol: take a backup, store it on third-party premises, and restore it to another data center if needed.

This approach is simple but comes with a long recovery time, especially if the database is big. Use this method only if it meets your Recovery Time Objective (RTO).
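Whether plain backup-and-restore fits your RTO can be estimated with simple arithmetic (the numbers below are illustrative, not a benchmark):

```python
def restore_minutes(backup_size_gb, restore_mb_per_s):
    """Rough wall-clock estimate for a full restore at a sustained rate."""
    return backup_size_gb * 1024 / restore_mb_per_s / 60

# A 500 GB database restored at a sustained 100 MB/s takes about 85 minutes;
# compare that against your RTO before settling on this DR protocol.
print(round(restore_minutes(500, 100)))  # 85
```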


Our Operator handles backup and restore for PostgreSQL clusters. The disaster recovery flow is built around pgBackRest and looks like the following:

  1. Configure pgBackRest to upload backups to S3 or GCS (see our documentation for details).
  2. Create the backup manually (through pgTask) or ensure that a scheduled backup was created.
  3. Once the main cluster fails, create a new cluster in the Disaster Recovery data center. The cluster must be running in standby mode, and pgBackRest must point to the same repository as the main cluster:
  standby: true
  # same config as on original cluster

Once data is recovered, the user can turn off standby mode and switch the application to the DR cluster.

Continuous Restoration

This approach is quite similar to the above: pgBackRest instances continuously synchronize data between two clusters through object storage. It minimizes RTO and allows you to switch application traffic to the DR site almost immediately.

Continuous Restoration postgresql

The configuration here is similar to the previous case, but we always run a second PostgreSQL cluster in the Disaster Recovery data center. In case of a main site failure, just turn off standby mode:

  standby: false

You can use a similar setup to migrate the data to and from Kubernetes. Read more about it in the Migrating PostgreSQL to Kubernetes blog post.


Kubernetes Operators provide a ready-to-use service, and in the case of Percona Distribution for PostgreSQL Operator, the user gets a production-grade, highly available database cluster. In addition, the Operator provides day-2 operation capabilities and automates the day-to-day routine.

We encourage you to try out our operator. See our GitHub repository and check out the documentation.

Found a bug or have a feature idea? Feel free to submit it in JIRA.

For general questions, please raise the topic in the community forum.

Are you a developer looking to contribute? Please read our contribution guidelines and send us a pull request.


Percona Distribution for PostgreSQL Operator 1.1.0 – Notable Features


Percona in 2021 is heavily invested in making the PostgreSQL ecosystem better and contributing to it from different angles.

With this in mind, let me introduce Percona Distribution for PostgreSQL Operator version 1.1.0 and its notable features:

  • Smart Update – forget about manual and error-prone database upgrades
  • System Users management – add and modify system users easily with a single Kubernetes Secret resource
  • PostgreSQL 14 support – leverage the latest and greatest by running Percona Distribution for PostgreSQL on Kubernetes

Full release notes can be found here.

Smart Update Feature

Updating databases and their components is always a challenge. In our Operators for MySQL and MongoDB, we have simplified and automated upgrade procedures, and now it’s time for PostgreSQL. In version 1.1.0, we ship this feature as a Technical Preview, with a plan to promote it to GA in the next release.

This feature consists of two parts:

  • Version Service – get the latest or recommended version of the database or other component (PMM for example)
  • Smart Update – apply new version without downtime

Version Service

This feature answers the question: which PostgreSQL/pgBackRest/pgBouncer version should I run with this Operator? It is important to note that Version Service and Smart Update can only perform minor version upgrades (e.g., from 13.1 to 13.4). Major version upgrades are manual for now and will be automated in the Operator soon.
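The constraint can be illustrated with a small standalone sketch of "newest version within the same major" selection (the version list and function are illustrative, not the actual Version Service code):

```python
def pick_minor_upgrade(current, available):
    """Return the newest available version sharing current's major version,
    or None if there is nothing newer within that major."""
    as_tuple = lambda v: tuple(int(part) for part in v.split("."))
    candidates = [
        v for v in available
        if v.split(".")[0] == current.split(".")[0] and as_tuple(v) > as_tuple(current)
    ]
    return max(candidates, key=as_tuple) if candidates else None

print(pick_minor_upgrade("13.1", ["13.2", "13.4", "14.0"]))  # 13.4
print(pick_minor_upgrade("13.4", ["14.0"]))                  # None: major upgrades stay manual
```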

The way it works is well depicted in the following diagram:

Percona Distribution for PostgreSQL Operator

Version Service is an open source tool; see the source code on GitHub. Percona hosts it, and Operators use it by default, but users can run their own self-hosted Version Service.

Users who worked with our Operators for MySQL and MongoDB will find the configuration of Version Service and Smart Update quite familiar:

    apply: recommended
    schedule: "0 2 * * *"

  • Define Version Service endpoint
  • Define PostgreSQL version – Operator will automatically figure out components versions
  • Schedule defines the time when the rollout of newer versions takes place. It is good practice to set this time outside of peak hours.
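Putting the options above together, the relevant Custom Resource section might look like the following sketch. The exact field layout is an assumption based on Percona's other Operators; consult the Operator documentation for the authoritative structure:

```yaml
spec:
  upgradeOptions:
    # Version Service endpoint (Percona-hosted by default; can be pointed
    # to a self-hosted instance instead)
    versionServiceEndpoint: https://check.percona.com
    # Which release stream to follow: recommended, latest, or a pinned version
    apply: recommended
    # Cron-style schedule for the rollout; pick a time outside peak hours
    schedule: "0 2 * * *"
```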

Smart Update

Okay, now the Operator knows the versions that should be used. It is time to apply them with minimal downtime. Here is where the Smart Update feature kicks in.

The heart of Smart Update is the smartUpdateCluster function. The goal here is to switch container image versions for database components in a specific order and minimize downtime. Once the image is changed, Kubernetes does the magic. For the Deployment resources we use in our Operator, Kubernetes first spins up a Pod with the new image and only then terminates the old one, which keeps downtime minimal. The update itself looks like this:

  1. Upgrade pgBackRest image in Deployment object in Kubernetes
  2. Start upgrading PostgreSQL itself
    1. Percona Monitoring and Management which runs as a sidecar gets the new version here as well
    2. Same for pgBadger
    3. We must upgrade replica nodes first here. If we upgrade the primary node first, the cluster will not recover. The tricky part is that after a failover, the primary node can be in any of the pgReplicas Deployments, so we need to verify where the primary is and only after that change the image. See the Smart Update sequence diagram for more details.
  3. Last, but not least – change the image for pgBouncer. To minimize downtime here, we recommend running at least two pgBouncer nodes. By default, pgBouncer.size is set to 3.

As a result, the user gets the latest, most secure, and performant PostgreSQL and its components automatically with minimal downtime.

System Users Management

Our Operator has multiple system users to manage the cluster and ensure its health. Our users raised two main concerns:

  • it is not possible to change system user password with the Operator after cluster deployment
  • it is confusing that there is a Secret object per user

In this release, we are moving all system users to a single Secret. The change in the Secret resource is going to trigger the update of the passwords in PostgreSQL automatically.

If the cluster is created from scratch, the Secret with the system users is created automatically and the passwords are randomly generated. By default, the Secret name is derived from the cluster name; it can be changed with the secretsName variable in the Custom Resource:

  secretsName: my-custom-secret

When upgrading from 1.0.0 to 1.1.0, if you want to keep old passwords, please create the Secret resource manually. Otherwise, the passwords for system users are going to be generated randomly and updated by the Operator.
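As a sketch, a manually created system-users Secret could look like the following. The user key names here are illustrative placeholders; check the Operator documentation for the exact set of system users:

```yaml
apiVersion: v1
kind: Secret
metadata:
  # Must match spec.secretsName in the Custom Resource
  name: my-custom-secret
type: Opaque
stringData:
  # One entry per system user; key names below are placeholders
  postgres: mySuperSecretPassword
  pguser: myAppUserPassword
```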

PostgreSQL 14 Support

PostgreSQL 14 provides an extensive set of new features and enhancements to security, performance, usability for client applications, and more.

Most notable of them include the following:

  • Expired B-tree index entries can now be detected and removed between vacuum runs. This results in fewer page splits and reduces index bloat.
  • The vacuum process now deletes dead tuples in a single cycle, as opposed to the previous 2-step approach of first marking tuples as deleted and then actually freeing up space in the next run. This speeds up free space cleanup.
  • Support for subscripts in JSON is added to simplify data retrieval using a commonly recognized syntax.
  • Stored procedures can accept OUT parameters.
  • The libpq library now supports the pipeline mode. Previously, the client applications waited for a transaction to be completed before sending the next one. The pipeline mode allows the applications to send multiple transactions at the same time thus boosting performance.
  • Large transactions are now streamed to subscribers in-progress, thus increasing the performance. This improvement applies to logical replication.
  • LZ4 compression is added for TOAST operations. This speeds up large data processing and also improves the compression ratio.
  • SCRAM is made the default authentication mechanism. This mechanism improves security and simplifies regulatory compliance for data security.

In the 1.1.0 version of Percona Distribution for PostgreSQL Operator, we enable our users to run the latest and greatest PostgreSQL 14. PostgreSQL 14 is the default version since this release, but you can still use versions 12 and 13.


Kubernetes Operators are mainly seen as the tool to automate deployment and management of the applications. With this Percona Distribution for PostgreSQL Operator release, we simplify PostgreSQL management even more and enable users to leverage the latest version 14. 

We encourage you to try out our operator. See our GitHub repository and check out the documentation.

Found a bug or have a feature idea? Feel free to submit it in JIRA.

For general questions please raise the topic in the community forum

You are a developer and looking to contribute? Please read our contribution guidelines and send a Pull Request.


Multi-Tenant Kubernetes Cluster with Percona Operators

There are cases where multiple teams, customers, or applications run in the same Kubernetes cluster. Such an environment is called multi-tenant and requires some preparation and management. Multi-tenant Kubernetes deployment allows you to utilize the economy of scale model on various levels:

  • Smaller compute footprint – one control plane, dense container deployments
  • Ease of management – one cluster, not hundreds

In this blog post, we are going to review multi-tenancy best practices and recommendations, and see how Percona Kubernetes Operators can be deployed and managed in such Kubernetes clusters.



Multi-tenancy usually means a lot of Pods and workloads in a single cluster. You should always remember that there are certain limits when designing your infrastructure. For vanilla Kubernetes, these limits are quite high and hard to reach:

  • 5000 nodes
  • 10 000 namespaces
  • 150 000 pods

Managed Kubernetes services have their own limits that you should keep in mind. For example, GKE allows a maximum of 110 Pods per node on a standard cluster and only 32 on GKE Autopilot nodes.

The older AWS EKS CNI plugin was limiting the number of Pods per node to the number of IP addresses EC2 can have. With the prefix assignment enabled in CNI, you are still going to hit a limit of 110 pods per node.


Kubernetes Namespaces provide a mechanism for isolating groups of resources within a single cluster. Kubernetes objects are either cluster scoped or namespace scoped. Objects that are accessible across all namespaces, such as StorageClasses, are cluster scoped, and those that are accessible only in a single namespace, like Deployments, are namespace scoped.

kubernetes namespaces

Deploying a database with Percona Operators creates pods that are namespace scoped. This provides interesting opportunities to run workloads on different namespaces for different teams, projects, and potentially, customers too. 

Example: Percona Distribution for MongoDB Operator and Percona Server for MongoDB can be run on two different namespaces by adding namespace metadata fields. Snippets are as follows:

# Team 1 DB running in team1-db namespace
kind: PerconaServerMongoDB
metadata:
  name: team1-server
  namespace: team1-db

# Team 1 deployment running in team1-db namespace
apiVersion: apps/v1
kind: Deployment
metadata:
  name: percona-server-mongodb-operator-team1
  namespace: team1-db

# Team 2 DB running in team2-db namespace
kind: PerconaServerMongoDB
metadata:
  name: team2-server
  namespace: team2-db

# Team 2 deployment running in team2-db namespace
apiVersion: apps/v1
kind: Deployment
metadata:
  name: percona-server-mongodb-operator-team2
  namespace: team2-db


  1. Avoid using the standard namespaces like kube-system or default.

  2. It’s always better to run independent workloads in different namespaces unless there is a specific requirement to share a namespace.

Namespaces can be used per team, per application environment, or any other logical structure that fits the use case.


The biggest problem in any multi-tenant environment is this – how can we ensure that a single bad apple doesn’t spoil the whole bunch of apples?


Thanks to Resource Quotas, we can restrict the resource utilization of namespaces. A ResourceQuota also allows you to restrict the number of k8s objects that can be created in a namespace.

Example of the YAML manifest with resource quotas:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: team1-quota
  namespace: team1-db    # Namespace where operator is deployed
spec:
  hard:
    requests.cpu: "10"     # Cumulative CPU requests of all k8s objects in the namespace cannot exceed 10 vCPU
    limits.cpu: "20"       # Cumulative CPU limits of all k8s objects in the namespace cannot exceed 20 vCPU
    requests.memory: 10Gi  # Cumulative memory requests of all k8s objects in the namespace cannot exceed 10Gi
    limits.memory: 20Gi    # Cumulative memory limits of all k8s objects in the namespace cannot exceed 20Gi
    requests.ephemeral-storage: 100Gi  # Cumulative ephemeral storage requests of all k8s objects in the namespace cannot exceed 100Gi
    limits.ephemeral-storage: 200Gi    # Cumulative ephemeral storage limits of all k8s objects in the namespace cannot exceed 200Gi
    requests.storage: 300Gi            # Cumulative storage requests of all PVCs in the namespace cannot exceed 300Gi
    persistentvolumeclaims: 5          # Maximum number of PVCs in the namespace is 5
    count/statefulsets.apps: 2         # Maximum number of StatefulSets in the namespace is 2
    # count/psmdb: 2                   # Maximum number of PSMDB objects in the namespace is 2; replace the name with the proper Custom Resource

Please refer to the Resource Quotas documentation and apply quotas that are required for your use case.

If resource quotas are applied to a namespace, it is required to set the containers’ requests and limits; otherwise, you are going to get an error similar to the following:

Error creating: pods "my-cluster-name-rs0-0" is forbidden: failed quota: my-cpu-memory-quota: must specify limits.cpu,requests.cpu

All Percona Operators provide the capability to fine-tune the requests and limits. The following example sets CPU and memory requests for Percona XtraDB Cluster containers:

    resources:
      requests:
        memory: 4G
        cpu: 2

Resource Quotas let us control the cumulative resources in a namespace, but if we want to enforce constraints on individual Kubernetes objects, LimitRange is a useful option.

For example, if Teams 1, 2, and 3 are each provided a namespace to run workloads, a ResourceQuota will ensure that none of the teams can exceed the quotas allocated and over-utilize the cluster… but what if a badly configured workload (say, an operator run by Team 1 with a higher priority class) is utilizing all the resources allocated to the team?

LimitRange can be used to enforce constraints on compute, memory, ephemeral storage, and storage with PVCs. The example below highlights some of the possibilities.

apiVersion: v1
kind: LimitRange
metadata:
  name: lr-team1
  namespace: team1-db
spec:
  limits:
  - type: Pod
    max:                            # Maximum resource limits of all containers combined. Consider setting default limits
      ephemeral-storage: 100Gi      # Maximum ephemeral storage cannot exceed 100Gi
      cpu: "800m"                   # Maximum CPU limit of the Pod is 800m vCPU
      memory: 4Gi                   # Maximum memory limit of the Pod is 4Gi
    min:                            # Minimum resource requests of all containers combined. Consider setting default requests
      ephemeral-storage: 50Gi       # Minimum ephemeral storage request is 50Gi
      cpu: "200m"                   # Minimum CPU request is 200m vCPU
      memory: 2Gi                   # Minimum memory request is 2Gi
  - type: PersistentVolumeClaim
    max:
      storage: 2Gi                  # Maximum PVC storage limit
    min:
      storage: 1Gi                  # Minimum PVC storage request


  1. When it’s feasible, apply ResourceQuotas to the namespaces where the Percona Operator is running. This ensures that tenants are not overutilizing the cluster.

  2. Set alerts to monitor objects and usage of resources in namespaces. Automation of ResourceQuota changes may also be useful in some scenarios.

  3. It is advisable to leave a buffer over the maximum expected utilization before setting the ResourceQuota.

  4. Set LimitRanges to ensure workloads are not overutilizing resources in individual namespaces.

Roles and Security

Kubernetes provides several modes to authorize an API request. Role-Based Access Control (RBAC) is a popular way to do authorization. There are four important objects to provide access:

  • ClusterRole – represents a set of permissions across the cluster (cluster scoped)
  • Role – represents a set of permissions within a namespace (namespace scoped)
  • ClusterRoleBinding – grants permissions to subjects across the cluster (cluster scoped)
  • RoleBinding – grants permissions to subjects within a namespace (namespace scoped)

Subjects in a RoleBinding or ClusterRoleBinding can be users, groups, or service accounts. Every pod running in the cluster has an identity and a service account attached (the “default” service account in the same namespace is attached if not explicitly specified). The permissions granted to the service account through its bindings dictate the access that the pods will have.

Going by the policy of least privilege, it’s always advisable to use Roles with the smallest set of permissions and bind them to a service account with a RoleBinding. This service account can be used to run the operator or custom resource to ensure proper access and also restrict the blast radius.

Avoid granting cluster-level access unless there is a strong use case to do it.

Example: RBAC in the MongoDB Operator uses a Role and a RoleBinding, restricting access to a single namespace for the service account. The same service account is used for both the CustomResource and the Operator.
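As an illustration of this least-privilege pattern, a namespace-scoped Role and RoleBinding for a hypothetical operator service account might look like the sketch below. All names and the exact rule list are illustrative, not the Operator's actual RBAC manifest:

```yaml
# Illustrative Role granting only the access an operator might need
# within a single namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: operator-role
  namespace: team1-db
rules:
- apiGroups: [""]
  resources: ["pods", "services", "secrets"]
  verbs: ["get", "list", "watch", "create", "update", "delete"]
---
# Bind the Role to the service account the operator Pods run under
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: operator-rolebinding
  namespace: team1-db
subjects:
- kind: ServiceAccount
  name: operator-sa
  namespace: team1-db
roleRef:
  kind: Role
  name: operator-role
  apiGroup: rbac.authorization.k8s.io
```

Because both objects are namespace scoped, a compromise of this service account is contained to the team1-db namespace.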

Network Policies

Network isolation provides additional security to applications and customers in a multi-tenant environment. Network policies are Kubernetes resources that allow you to control the traffic between Pods, CIDR blocks, and network endpoints, but the most common approach is to control the traffic between namespaces:

kubernetes network policies

Most Container Network Interface (CNI) plugins support the implementation of network policies; however, if they don’t, a NetworkPolicy resource is silently ignored. For example, the AWS VPC CNI does not support network policies, but AWS EKS can run the Calico CNI, which does.

It is a good approach to follow the least privilege principle, whereby default traffic is denied and access is granted granularly:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: app1-db
spec:
  podSelector: {}
  policyTypes:
  - Ingress

Allow traffic from Pods in the app1 namespace to the app1-db namespace:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-app1-ingress
  namespace: app1-db
spec:
  podSelector: {}
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: app1
  policyTypes:
  - Ingress

Policy Enforcement

In a multi-tenant environment, policy enforcement plays a key role. Policy enforcement ensures that k8s objects pass the required quality gates set by administrators/teams. Some examples of policy enforcement could be:

  1. All the workloads have proper labels 
  2. Proper network policies are set for DB
  3. Unsafe configurations are not allowed (Example)
  4. Backups are always enabled (Example)

The K8s ecosystem offers a wide range of options to achieve this. Some of them are listed below:

  1. Open Policy Agent (OPA) is a CNCF graduated project which gives a high-level declarative language to author and enforce policies across k8s objects. (Examples from Google and the OPA repo can be helpful.)
  2. Mutating Webhooks can be used to modify API calls before they reach the API server. This can be used to set required properties for k8s objects. (Example: a mutating webhook that adds required settings for Pods created in production namespaces.)

  3. Validating Webhooks can be used to check whether a k8s API call follows the required policy; any call which doesn’t follow the policy will be rejected. (Example: a validating webhook to ensure huge pages of 1GB are not used in the Pod.)


Percona Distribution for MySQL Operator and Percona Distribution for PostgreSQL Operator both support cluster-wide mode, which allows a single Operator to deploy and manage databases across multiple namespaces (support for cluster-wide mode in Percona Operator for MongoDB is on the roadmap). It is also possible to have an Operator per namespace:

Operator per namespace

For example, a single deployment of Percona Distribution for MySQL Operator can monitor multiple namespaces in cluster-wide mode. The user can specify them in the WATCH_NAMESPACE environment variable in the Operator Deployment:

      containers:
      - command:
        - percona-xtradb-cluster-operator
        env:
        - name: WATCH_NAMESPACE
          value: "namespace-a,namespace-b"

In a multi-tenant environment, it depends on the amount of freedom you want to give to the tenants. Usually, when the tenants are highly trusted (for instance, internal teams), it is fine to choose a namespace-scoped deployment, where each team can deploy and manage the Operator themselves.


It is important to remember that Kubernetes is not a multi-tenant system out of the box. This blog post described various levels of isolation that help you run your applications and databases securely and ensure operational stability.

We encourage you to try out our Operators: a contribution guide in every repository is there for those of you who want to contribute your ideas, code, and docs.

For general questions please raise the topic in the community forum.


Getting Started with ProxySQL in Kubernetes

There are plenty of ways to run ProxySQL in Kubernetes (K8S). For example, we can deploy sidecar containers on the application pods, or run a dedicated ProxySQL service with its own pods.

We are going to discuss the latter approach, which is more likely to be used when dealing with a large number of application pods. Remember that each ProxySQL instance runs a number of checks against the database backends, monitoring things like server status and replication lag, so having too many proxies can cause significant overhead.

Creating a Cluster

For the purpose of this example, I am going to deploy a test cluster in GKE. We need to follow these steps:

1. Create a cluster

gcloud container clusters create ivan-cluster --preemptible --project my-project --zone us-central1-c --machine-type n2-standard-4 --num-nodes=3

2. Configure command-line access

gcloud container clusters get-credentials ivan-cluster --zone us-central1-c --project my-project

3. Create a Namespace

kubectl create namespace ivantest-ns

4. Set the context to use our new Namespace

kubectl config set-context $(kubectl config current-context) --namespace=ivantest-ns

Dedicated Service Using a StatefulSet

One way to implement this approach is to have ProxySQL pods use persistent volumes to store the configuration. We can rely on ProxySQL Cluster mode to make sure the configuration is kept in sync.

For simplicity, we are going to use a ConfigMap with the initial config for bootstrapping the ProxySQL service for the first time.

Exposing the passwords in the ConfigMap is far from ideal, and so far the K8S community hasn’t settled on a way to reference Secrets from a ConfigMap.
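One possible workaround, sketched here purely for illustration, is to keep the whole proxysql.cnf in a Secret instead of a ConfigMap, so the credentials are at least not stored in plain sight in the ConfigMap API object (the resource name is an assumption):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: proxysql-secret
type: Opaque
stringData:
  # Same contents as the proxysql.cnf prepared in step 1 below
  proxysql.cnf: |
    # ...
```

In the StatefulSet, the configMap volume would then be replaced with a secret volume referencing proxysql-secret. Note that Secrets are only base64-encoded by default, so encryption at rest and RBAC restrictions are still advisable.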

1. Prepare a file for the ConfigMap

tee proxysql.cnf <<EOF
mysql_servers =
(
    { address="mysql1" , port=3306 , hostgroup=10, max_connections=100 },
    { address="mysql2" , port=3306 , hostgroup=20, max_connections=100 }
)
mysql_users =
(
    { username = "myuser", password = "password", default_hostgroup = 10, active = 1 }
)
proxysql_servers =
(
    { hostname = "proxysql-0.proxysqlcluster", port = 6032, weight = 1 },
    { hostname = "proxysql-1.proxysqlcluster", port = 6032, weight = 1 },
    { hostname = "proxysql-2.proxysqlcluster", port = 6032, weight = 1 }
)
EOF

2. Create the ConfigMap

kubectl create configmap proxysql-configmap --from-file=proxysql.cnf

3. Prepare a file with the StatefulSet

tee proxysql-ss-svc.yml <<EOF
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: proxysql
  labels:
    app: proxysql
spec:
  replicas: 3
  serviceName: proxysqlcluster
  selector:
    matchLabels:
      app: proxysql
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: proxysql
    spec:
      restartPolicy: Always
      containers:
      - image: proxysql/proxysql:2.3.1
        name: proxysql
        volumeMounts:
        - name: proxysql-config
          mountPath: /etc/proxysql.cnf
          subPath: proxysql.cnf
        - name: proxysql-data
          mountPath: /var/lib/proxysql
          subPath: data
        ports:
        - containerPort: 6033
          name: proxysql-mysql
        - containerPort: 6032
          name: proxysql-admin
      volumes:
      - name: proxysql-config
        configMap:
          name: proxysql-configmap
  volumeClaimTemplates:
  - metadata:
      name: proxysql-data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 2Gi
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: proxysql
  name: proxysql
spec:
  ports:
  - name: proxysql-mysql
    nodePort: 30033
    port: 6033
    protocol: TCP
    targetPort: 6033
  - name: proxysql-admin
    nodePort: 30032
    port: 6032
    protocol: TCP
    targetPort: 6032
  selector:
    app: proxysql
  type: NodePort
EOF

4. Create the StatefulSet

kubectl create -f proxysql-ss-svc.yml

5. Prepare the definition of the headless Service (more on this later)

tee proxysql-headless-svc.yml <<EOF
apiVersion: v1
kind: Service
metadata:
  name: proxysqlcluster
  labels:
    app: proxysql
spec:
  clusterIP: None
  ports:
  - port: 6032
    name: proxysql-admin
  selector:
    app: proxysql
EOF
6. Create the headless Service

kubectl create -f proxysql-headless-svc.yml

7. Verify the Services

kubectl get svc

NAME              TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                         AGE
proxysql          NodePort    10.3.249.158   <none>        6033:30033/TCP,6032:30032/TCP   12m
proxysqlcluster   ClusterIP   None           <none>        6032/TCP                        8m53s

Pod Name Resolution

By default, each pod has a DNS name associated in the form pod-ip-address.namespace-name.pod.cluster-domain.

The headless Service causes K8S to auto-create a DNS record with each pod’s FQDN as well, in the form pod-name.service-name.namespace-name.svc.cluster-domain. The result is we will have the following entries available:

proxysql-0.proxysqlcluster.ivantest-ns.svc.cluster.local
proxysql-1.proxysqlcluster.ivantest-ns.svc.cluster.local
proxysql-2.proxysqlcluster.ivantest-ns.svc.cluster.local

We can then use these names to set up the ProxySQL cluster (the proxysql_servers part of the configuration file); within the same namespace, the short form proxysql-0.proxysqlcluster also resolves.

Connecting to the Service

To test the service, we can run a container that includes a MySQL client and connect its console output to our terminal. For example, use the following command (which also removes the container/pod after we exit the shell):

kubectl run -i --rm --tty percona-client --image=percona/percona-server:latest --restart=Never -- bash -il

The connections from other pods should be sent to the Cluster-IP on port 6033 and will be load balanced across the ProxySQL pods. We can also use the DNS name proxysql.ivantest-ns.svc.cluster.local that is auto-created for the Service.

mysql -umyuser -ppassword -h10.3.249.158 -P6033

If the client is connecting from an external network, use a node’s address with port 30033 instead:

mysql -umyuser -ppassword -h<node-ip> -P30033

Cleanup Steps

In order to remove all the resources we created, run the following steps:

kubectl delete statefulset proxysql
kubectl delete service proxysql
kubectl delete service proxysqlcluster
kubectl delete configmap proxysql-configmap

Note that the persistent volume claims created by the StatefulSet are not deleted automatically; remove them with kubectl delete pvc if they are no longer needed.

Final Words

We have seen one of the possible ways to deploy ProxySQL in Kubernetes. The approach presented here has a few shortcomings but is good enough for illustrative purposes. For a production setup, consider looking at the Percona Kubernetes Operators instead.
