Jul 19, 2023

Deploy PostgreSQL on Kubernetes Using GitOps and ArgoCD

In the world of modern DevOps, deployment automation tools have become essential for streamlining processes and ensuring consistent, reliable deployments. GitOps and ArgoCD are at the cutting edge of deployment automation, making it easy to deploy complex applications and reducing the risk of human error in the deployment process. In this blog post, we will explore how to deploy the Percona Operator for PostgreSQL v2 using GitOps and ArgoCD.

The setup we are looking for is the following:

  1. Teams or CI/CD pipelines push the manifests to GitHub
  2. ArgoCD detects the changes and compares them to what is currently running in Kubernetes
  3. ArgoCD creates/modifies the Percona Operator and PostgreSQL custom resources
  4. The Percona Operator takes care of day-1 and day-2 operations based on the changes pushed by ArgoCD to the custom resources

Prerequisites:

  1. Kubernetes cluster
  2. GitHub repository. You can find my manifests here.

Start it up

Deploy and prepare ArgoCD

ArgoCD has quite detailed documentation explaining the installation process. I did the following:

kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

Expose the ArgoCD server. You might want to use ingress or some other approach. I’m using a Load Balancer in a public cloud:

kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "LoadBalancer"}}'

Get the Load Balancer endpoint; we will use it later:

kubectl -n argocd get svc argocd-server
NAME            TYPE           CLUSTER-IP    EXTERNAL-IP      PORT(S)                      AGE
argocd-server   LoadBalancer   10.88.1.239   123.21.23.21   80:30480/TCP,443:32623/TCP   6h28m

I’m not a big fan of Web User Interfaces, so I took the path of using the argocd CLI. Install it by following the CLI installation documentation.

Retrieve the admin password to log in using the CLI:

argocd admin initial-password -n argocd

Log in to the server. Use the Load Balancer endpoint from above:

argocd login 123.21.23.21

PostgreSQL

GitHub and manifests

Put the YAML manifests into the GitHub repository. I have two:

  1. bundle.yaml – deploys Custom Resource Definitions, Service Account, and the Deployment of the Operator
  2. argo-test.yaml – deploys the PostgreSQL Custom Resource manifest that the Operator will process

There are some changes you need to make to ensure that ArgoCD works with the Percona Operator.

Server Side sync

Percona relies on OpenAPI v3.0 schema validation for Custom Resources. Done thoroughly, this validation increases the size of the Custom Resource Definition (CRD) manifests in some cases. As a result, you might see the following error when you apply the bundle:

kubectl apply -f deploy/bundle.yaml
...
Error from server (Invalid): error when creating "deploy/bundle.yaml": CustomResourceDefinition.apiextensions.k8s.io "perconapgclusters.pg.percona.com" is invalid: metadata.annotations: Too long: must have at most 262144 bytes

To avoid it, use server-side apply (the --server-side flag in kubectl). ArgoCD supports Server-Side Apply as well; in the manifests, I enabled it through annotations on the CustomResourceDefinition objects:

kind: CustomResourceDefinition
metadata:
  annotations:
    ...
    argocd.argoproj.io/sync-options: ServerSideApply=true
  name: perconapgclusters.pgv2.percona.com

Phases and waves

ArgoCD comes with Phases and Waves that allow you to apply manifests, and the resources within them, in a specific order. You should use Waves for two reasons:

  1. To deploy the Operator before the Percona PostgreSQL Custom Resource
  2. To delete the Custom Resource first when tearing down (operations are performed in reverse order on deletion)

I added the waves through annotations to the resources.

  • All resources in bundle.yaml are assigned to wave “1”:
kind: CustomResourceDefinition
metadata:
  annotations:
    ...
    argocd.argoproj.io/sync-wave: "1"
  name: perconapgclusters.pgv2.percona.com

  • PostgreSQL Custom Resource in argo-test.yaml has wave “5”:
apiVersion: pgv2.percona.com/v2
kind: PerconaPGCluster
metadata:
  annotations:
    argocd.argoproj.io/sync-wave: "5"
  name: argo-test

The higher the wave number, the later the resource is created. In our case, the PerconaPGCluster resource will be created after the Custom Resource Definitions from bundle.yaml.

Deploy with ArgoCD

Create the application in ArgoCD:

argocd app create percona-pg --repo https://github.com/spron-in/blog-data.git --path gitops-argocd-postgresql --dest-server https://kubernetes.default.svc --dest-namespace default

The command does the following:

  • Creates an application called percona-pg
  • Uses the GitHub repo and a folder in it as a source of YAML manifests
  • Uses the local Kubernetes API (it is possible to target multiple Kubernetes clusters)
  • Deploys into the default namespace (a declarative equivalent is sketched below)
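If you prefer to keep everything in git, the same application can be defined declaratively instead of via the CLI. A minimal sketch of the equivalent Application object, assuming ArgoCD lives in the argocd namespace and the default branch should be tracked:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: percona-pg
  namespace: argocd         # assumption: ArgoCD installed in the argocd namespace
spec:
  project: default
  source:
    repoURL: https://github.com/spron-in/blog-data.git
    path: gitops-argocd-postgresql
    targetRevision: HEAD    # assumption: track the default branch
  destination:
    server: https://kubernetes.default.svc
    namespace: default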

Now perform a manual sync. The command will roll out manifests:

argocd app sync percona-pg

Check for pg object in the default namespace:

kubectl get pg
NAME        ENDPOINT          STATUS   POSTGRES   PGBOUNCER   AGE
argo-test   33.22.11.44       ready    3          3           80s

Operational tasks

Now let’s say we want to change something. The change should be merged into git, and ArgoCD will detect it. The sync interval is 180 seconds by default. You can change it in the argocd-cm ConfigMap if needed.
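If you do want to tune it, here is a sketch of how the interval could be changed through the timeout.reconciliation key of that ConfigMap (the 300s value is only an illustration; the application controller may need a restart to pick the change up):

kubectl patch configmap argocd-cm -n argocd --type merge -p '{"data": {"timeout.reconciliation": "300s"}}'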

When ArgoCD detects the change, it only marks the application as OutOfSync; nothing is applied until you sync. For example, I reduced the number of CPUs for pgBouncer, and ArgoCD detected the change:

argocd app get percona-pg
...
2023-07-07T10:11:22+03:00  pgv2.percona.com      PerconaPGCluster             default             argo-test                                OutOfSync                       perconapgcluster.pgv2.percona.com/argo-test configured
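For reference, the change itself is only a small edit to the Custom Resource in git. A sketch of the relevant fragment of argo-test.yaml (the exact field path and the values shown are assumptions and may differ in your CR version):

spec:
  proxy:
    pgBouncer:
      resources:
        limits:
          cpu: 500m   # reduced CPU limit for pgBouncer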

Now I can manually sync the change again. To automate the whole flow, just set the sync policy:

argocd app set percona-pg --sync-policy automated

Now all the changes in git will be automatically synced with Kubernetes once ArgoCD detects them.
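Optionally, the policy can be tightened so that ArgoCD also prunes resources removed from git and reverts manual edits in the cluster (flags I did not use in this walkthrough):

argocd app set percona-pg --sync-policy automated --auto-prune --self-heal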

Conclusion

GitOps, combined with Kubernetes and the Percona Operator for PostgreSQL, provides a powerful toolset for rapidly deploying and managing complex database infrastructures. By leveraging automation and deployment best practices, teams can reduce the risk of human error, increase deployment velocity, and focus on delivering business value. Additionally, the ability to declaratively manage infrastructure and application state enables teams to quickly recover from outages and respond to changes with confidence.

Try out the Operator by following the quickstart guide here.   

You can get early access to new product features, invite-only "ask me anything" sessions with Percona Kubernetes experts, and monthly swag raffles. Interested? Fill in the form at percona.com/k8s.

Percona Distribution for PostgreSQL provides the best and most critical enterprise components from the open-source community in a single distribution, designed and tested to work together.

 

Download Percona Distribution for PostgreSQL Today!

Jun 23, 2021

MySQL on Kubernetes with GitOps

The GitOps workflow was introduced by Weaveworks as a way to implement Continuous Deployment for cloud-native applications. This technique quickly found its way into DevOps engineers’ and developers’ hearts, as it greatly simplifies the application delivery pipeline: a change in the manifests in the git repository is reflected in Kubernetes right away. With GitOps, there is no need to give developers access to the cluster, as all the actions are executed by the Operator.

This blog post is a guide on how to deploy Percona Distribution for MySQL on Kubernetes with Flux – a GitOps Operator that keeps your cluster state in sync with the Git repository.

In a nutshell, the flow is the following:

  1. A developer pushes a change to the GitHub repository
  2. The Flux Operator:
    1. detects the change
    2. deploys the Percona Distribution for MySQL Operator
    3. creates the Custom Resource, which triggers the creation of Percona XtraDB Cluster and HAProxy pods

The result is a fully working MySQL service deployed without talking to the Kubernetes API directly.

Preparation

Prerequisites:

  • Kubernetes cluster
  • GitHub account
    • For this blog post, I used the manifests from this repository

It is a good practice to create a separate namespace for Flux:

$ kubectl create namespace gitops

Installing and managing Flux is easier with fluxctl. In Ubuntu, I use snap to install tools; for other operating systems, please refer to the manual here.

$ sudo snap install fluxctl --classic

Install the Flux Operator into your Kubernetes cluster:

$ fluxctl install --git-email=your@email.com --git-url=git@github.com:spron-in/blog-data.git --git-path=gitops-mysql --manifest-generation=true --git-branch=master --namespace=gitops | kubectl apply -f -

GitHub Sync

As configured, Flux will continuously monitor the spron-in/blog-data repository for changes and sync the state. You need to grant Flux access to the repo.

Get the public key that was generated during the installation:

$ fluxctl identity --k8s-fwd-ns gitops

Copy the key and add it as a Deploy key with write access in GitHub. Go to Settings -> Deploy keys -> Add deploy key:

Action

All set. The Flux reconcile loop checks the state for changes every five minutes. To trigger synchronization right away, run:

$ fluxctl sync --k8s-fwd-ns gitops

In my case, I have two YAMLs in the repo:

  • bundle.yaml – installs the Operator and creates the Custom Resource Definitions (CRDs)
  • cr.yaml – deploys PXC and HAProxy pods

Flux is going to deploy them both.

$ kubectl get pods
NAME                                               READY   STATUS    RESTARTS   AGE
cluster1-haproxy-0                                 2/2     Running   0          26m
cluster1-haproxy-1                                 2/2     Running   0          25m
cluster1-pxc-0                                     1/1     Running   0          26m
cluster1-pxc-1                                     1/1     Running   0          25m
cluster1-pxc-2                                     1/1     Running   0          23m
percona-xtradb-cluster-operator-79966668bd-95plv   1/1     Running   0          26m

Now let’s add one more HAProxy Pod by changing spec.haproxy.size from 2 to 3 in cr.yaml. After that, commit and push the changes. In a production-grade scenario, the Pull Request would go through a thorough review; in my case, I push directly to the main branch.
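The edited fragment of cr.yaml looks roughly like this (a sketch; the other fields of the haproxy section are omitted):

spec:
  haproxy:
    enabled: true
    size: 3   # was 2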

$ git commit cr.yaml -m 'increase haproxy size from 2 to 3'
$ git push
Enumerating objects: 7, done.
Counting objects: 100% (7/7), done.
Delta compression using up to 2 threads
Compressing objects: 100% (4/4), done.
Writing objects: 100% (4/4), 385 bytes | 385.00 KiB/s, done.
Total 4 (delta 2), reused 0 (delta 0), pack-reused 0
remote: Resolving deltas: 100% (2/2), completed with 2 local objects.
To https://github.com/spron-in/blog-data
   e1a27b8..d555c77  master -> master

Either trigger the sync with the fluxctl sync command or wait for approximately five minutes for the Flux reconcile loop to detect the changes. In the logs of the Flux Operator, you will see the event:

ts=2021-06-15T12:59:08.267469963Z caller=loop.go:134 component=sync-loop event=refreshed url=ssh://git@github.com/spron-in/blog-data.git branch=master HEAD=d555c77c19ea9d1685392680186e1491905401cc
ts=2021-06-15T12:59:08.270678093Z caller=sync.go:61 component=daemon info="trying to sync git changes to the cluster" old=e1a27b8a81e640d3bee9bc2e2c31f9c4189e898a new=d555c77c19ea9d1685392680186e1491905401cc
ts=2021-06-15T12:59:08.844068322Z caller=sync.go:540 method=Sync cmd=apply args= count=9
ts=2021-06-15T12:59:09.097835721Z caller=sync.go:606 method=Sync cmd="kubectl apply -f -" took=253.684342ms err=null output="serviceaccount/percona-xtradb-cluster-operator unchanged\nrole.rbac.authorization.k8s.io/percona-xtradb-cluster-operator unchanged\ncustomresourcedefinition.apiextensions.k8s.io/perconaxtradbbackups.pxc.percona.com configured\ncustomresourcedefinition.apiextensions.k8s.io/perconaxtradbclusterbackups.pxc.percona.com unchanged\ncustomresourcedefinition.apiextensions.k8s.io/perconaxtradbclusterrestores.pxc.percona.com unchanged\ncustomresourcedefinition.apiextensions.k8s.io/perconaxtradbclusters.pxc.percona.com unchanged\nrolebinding.rbac.authorization.k8s.io/service-account-percona-xtradb-cluster-operator unchanged\ndeployment.apps/percona-xtradb-cluster-operator unchanged\nperconaxtradbcluster.pxc.percona.com/cluster1 configured"
ts=2021-06-15T12:59:09.099258988Z caller=daemon.go:701 component=daemon event="Sync: d555c77, default:perconaxtradbcluster/cluster1" logupstream=false
ts=2021-06-15T12:59:11.387525662Z caller=loop.go:236 component=sync-loop state="tag flux" old=e1a27b8a81e640d3bee9bc2e2c31f9c4189e898a new=d555c77c19ea9d1685392680186e1491905401cc
ts=2021-06-15T12:59:12.122386802Z caller=loop.go:134 component=sync-loop event=refreshed url=ssh://git@github.com/spron-in/blog-data.git branch=master HEAD=d555c77c19ea9d1685392680186e1491905401cc

The log indicates that the main CR was configured:

perconaxtradbcluster.pxc.percona.com/cluster1 configured

 

Now we have three HAProxy Pods:

$ kubectl get pods
NAME                                               READY   STATUS    RESTARTS   AGE
cluster1-haproxy-0                                 2/2     Running   1          50m
cluster1-haproxy-1                                 2/2     Running   0          48m
cluster1-haproxy-2                                 2/2     Running   0          4m45s

It is important to note that GitOps maintains the sync between Kubernetes and GitHub. This means that if a user manually changes an object in Kubernetes, Flux (or any other GitOps Operator) will revert the change to match what is in GitHub.

GitOps also comes in handy when users want to take a backup or perform a restore. To do that, the user just creates a YAML manifest in the GitHub repo, and Flux creates the corresponding Kubernetes object. The Database Operator does the rest.
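For example, an on-demand backup could be requested by committing a manifest along these lines (a sketch; the storageName must match a backup storage defined in the cluster Custom Resource, and the names here are placeholders):

apiVersion: pxc.percona.com/v1
kind: PerconaXtraDBClusterBackup
metadata:
  name: backup1
spec:
  pxcCluster: cluster1      # the PXC cluster created from cr.yaml
  storageName: s3-us-west   # assumption: an S3 storage configured in the CR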

Conclusion

GitOps is a simple approach to deploy and manage applications on Kubernetes:

  • Change Management is provided by git version control and code reviews
  • Direct access to the Kubernetes API is limited, which increases security
  • Infrastructure-as-Code is here; there is no need to integrate Terraform, Ansible, or any other tool

All Percona Operators can be deployed and managed with GitOps. As a result, you will get a production-grade MySQL, MongoDB, or PostgreSQL cluster that just works.

Mar 22, 2021

Storing Kubernetes Operator for Percona Server for MongoDB Secrets in GitHub

More and more companies are adopting GitOps as the way of implementing Continuous Deployment. Its elegant approach, built upon a well-known tool, wins the hearts of engineers. But even if your git repository is private, it is strongly discouraged to store keys and passwords in unencrypted form.

This blog post will show how easy it is to use GitOps and keep Kubernetes secrets for Percona Kubernetes Operator for Percona Server for MongoDB securely in the repository with Sealed Secrets or Vault Secrets Operator.

Sealed Secrets

Prerequisites:

  • Kubernetes cluster up and running
  • Github repository (optional)

Install Sealed Secrets Controller

Sealed Secrets relies on asymmetric cryptography (also used in TLS), where the private key (in our case stored in Kubernetes) can decrypt a message encrypted with the public key (which can safely be stored in a public git repository). To make this task easier, Sealed Secrets provides the kubeseal tool, which helps with encrypting the secrets.

Install the Sealed Secrets controller into your Kubernetes cluster:

kubectl apply -f https://github.com/bitnami-labs/sealed-secrets/releases/download/v0.15.0/controller.yaml

It will install the controller into the kube-system namespace and provide the Custom Resource Definition sealedsecrets.bitnami.com. All resources in Kubernetes with kind: SealedSecret will be handled by this Operator.

Download the kubeseal binary:

wget https://github.com/bitnami-labs/sealed-secrets/releases/download/v0.15.0/kubeseal-linux-amd64 -O kubeseal
sudo install -m 755 kubeseal /usr/local/bin/kubeseal

Encrypt the Keys

In this example, I intend to store important secrets of the Percona Kubernetes Operator for Percona Server for MongoDB in git along with my manifests that are used to deploy the database.

First, I will seal the secret file with system users, which is used by the MongoDB Operator to manage the database. Normally it is stored in deploy/secrets.yaml.

kubeseal --format yaml < secrets.yaml  > blog-data/sealed-secrets/mongod-secrets.yaml

This command creates a file with encrypted contents; you can see it in the blog-data/sealed-secrets repository here. It is safe to store it publicly, as it can only be decrypted with the private key.
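The generated file is a regular Kubernetes manifest of kind SealedSecret whose values are ciphertext rather than base64. A trimmed sketch of its shape (the key name matches my secrets file, the ciphertext is shortened, and the namespace is assumed to be default):

apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: my-secure-secret
  namespace: default
spec:
  encryptedData:
    MONGODB_USER_ADMIN_PASSWORD: AgBy8hC...   # encrypted value, safe to commit
  template:
    metadata:
      name: my-secure-secret
      namespace: default
    type: Opaque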

Executing kubectl apply -f blog-data/sealed-secrets/mongod-secrets.yaml does the following:

  1. A sealedsecrets custom resource (CR) is created. You can see it by executing kubectl get sealedsecrets.
  2. The Sealed Secrets Operator receives the event that a new sealedsecrets CR is there and decrypts it with the private key.
  3. Once decrypted, a regular Secret object is created, which can be used as usual.

$ kubectl get sealedsecrets
NAME               AGE
my-secure-secret   20m

$ kubectl get secrets my-secure-secret
NAME               TYPE     DATA   AGE
my-secure-secret   Opaque   10     20m

Next, I will also seal the keys for my S3 bucket that I plan to use to store backups of my MongoDB database:

kubeseal --format yaml < backup-s3.yaml  > blog-data/sealed-secrets/s3-secrets.yaml
kubectl apply -f blog-data/sealed-secrets/s3-secrets.yaml

Vault Secrets Operator

Sealed Secrets is the simplest approach, but it is possible to achieve the same result with HashiCorp Vault and the Vault Secrets Operator. This is a more advanced, mature, and feature-rich approach.

Prerequisites:

The Vault Secrets Operator also relies on a Custom Resource, but all the keys are stored in HashiCorp Vault:

Preparation

Create a policy in Vault for the Operator:

cat <<EOF | vault policy write vault-secrets-operator -
path "kvv2/data/*" {
  capabilities = ["read"]
}
EOF

The policy might look a bit different, depending on where your secrets are stored.

Create and fetch the token for the policy:

$ vault token create -period=24h -policy=vault-secrets-operator

Key                  Value                                                                                                                                                                                        
---                  -----                                                                                               
token                s.0yJZfCsjFq75GiVyKiZgYVOm
...

Write down the token, as you will need it in the next step.

Create the Kubernetes Secret so that the Operator can authenticate with the Vault:

export VAULT_TOKEN=s.0yJZfCsjFq75GiVyKiZgYVOm
export VAULT_TOKEN_LEASE_DURATION=86400

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: vault-secrets-operator
type: Opaque
data:
  VAULT_TOKEN: $(echo -n "$VAULT_TOKEN" | base64)
  VAULT_TOKEN_LEASE_DURATION: $(echo -n "$VAULT_TOKEN_LEASE_DURATION" | base64)
EOF

Deploy Vault Secrets Operator

It is recommended to deploy the Operator with Helm, but first we need to create a values.yaml file to configure the Operator.

environmentVars:
  - name: VAULT_TOKEN
    valueFrom:
      secretKeyRef:
        name: vault-secrets-operator
        key: VAULT_TOKEN
  - name: VAULT_TOKEN_LEASE_DURATION
    valueFrom:
      secretKeyRef:
        name: vault-secrets-operator
        key: VAULT_TOKEN_LEASE_DURATION
vault:
  address: "http://vault.vault.svc:8200"

The environment variables point to the Secret created in the previous section so that the Operator can authenticate with Vault. We also need to provide the Vault address for the Operator to retrieve the secrets.

Now we can deploy the Vault Secrets Operator:

helm repo add ricoberger https://ricoberger.github.io/helm-charts
helm repo update

helm upgrade --install vault-secrets-operator ricoberger/vault-secrets-operator -f blog-data/sealed-secrets/values.yaml

Give me the Secret

I have a key created in my HashiCorp Vault:

$ vault kv get kvv2/mongod-secret
…
Key                                 Value
---                                 -----                                                                                                                                                                         
MONGODB_BACKUP_PASSWORD             <>
MONGODB_CLUSTER_ADMIN_PASSWORD      <>
MONGODB_CLUSTER_ADMIN_USER          <>
MONGODB_CLUSTER_MONITOR_PASSWORD    <>
MONGODB_CLUSTER_MONITOR_USER        <>                                                                                                                                                               
MONGODB_USER_ADMIN_PASSWORD         <>
MONGODB_USER_ADMIN_USER             <>

It is time to create the secret out of it. First, we will create the Custom Resource object of kind: VaultSecret.

$ cat blog-data/sealed-secrets/vs.yaml
apiVersion: ricoberger.de/v1alpha1
kind: VaultSecret
metadata:
  name: my-secure-secret
spec:
  path: kvv2/mongod-secret
  type: Opaque

$ kubectl apply -f blog-data/sealed-secrets/vs.yaml

The Operator will connect to HashiCorp Vault and create a regular Secret object automatically:

$ kubectl get vaultsecret
NAME               SUCCEEDED   REASON    MESSAGE              LAST TRANSITION   AGE
my-secure-secret   True        Created   Secret was created   47m               47m

$ kubectl get secret  my-secure-secret
NAME               TYPE     DATA   AGE
my-secure-secret   Opaque   7      47m

Deploy MongoDB Cluster

Now that the secrets are in place, it is time to deploy the Operator and the DB cluster:

kubectl apply -f blog-data/sealed-secrets/bundle.yaml
kubectl apply -f blog-data/sealed-secrets/cr.yaml

The cluster will be up in a minute or two and will use the secrets we deployed.

By the way, my cr.yaml deploys a MongoDB cluster with two shards. Support for multiple shards was added in version 1.7.0 of the Operator – I encourage you to try it out. Learn more about it here: Percona Server for MongoDB Sharding.
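A sketch of the relevant sharding fragment of such a cr.yaml (the sizes and replica set names are example values, not requirements):

spec:
  replsets:
  - name: rs0
    size: 3
  - name: rs1
    size: 3
  sharding:
    enabled: true
    configsvrReplSet:
      size: 3
    mongos:
      size: 2

Committing a change like this to the repository is all it takes to adjust the topology.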
