Nov 23, 2021

Multi-Tenant Kubernetes Cluster with Percona Operators

There are cases where multiple teams, customers, or applications run in the same Kubernetes cluster. Such an environment is called multi-tenant and requires some preparation and management. A multi-tenant Kubernetes deployment allows you to utilize the economy-of-scale model on various levels:

  • Smaller compute footprint – one control plane, dense container deployments
  • Ease of management – one cluster, not hundreds

In this blog post, we are going to review multi-tenancy best practices and recommendations, and see how Percona Kubernetes Operators can be deployed and managed in such Kubernetes clusters.

Multi-Tenancy

Generic

Multi-tenancy usually means a lot of Pods and workloads in a single cluster. You should always remember that there are certain limits when designing your infrastructure. For vanilla Kubernetes, these limits are quite high and hard to reach:

  • 5000 nodes
  • 10 000 namespaces
  • 150 000 pods

Managed Kubernetes services have their own limits that you should keep in mind. For example, GKE allows a maximum of 110 Pods per node on a standard cluster and only 32 on GKE Autopilot nodes.

The older AWS EKS CNI plugin was limiting the number of Pods per node to the number of IP addresses EC2 can have. With the prefix assignment enabled in CNI, you are still going to hit a limit of 110 pods per node.

Namespaces

Kubernetes Namespaces provide a mechanism for isolating groups of resources within a single cluster. Kubernetes objects are either cluster-scoped or namespace-scoped: objects that are accessible across all namespaces, like ClusterRole, are cluster-scoped, while objects that are accessible only within a single namespace, like Deployments, are namespace-scoped.


Deploying a database with Percona Operators creates Pods that are namespace-scoped. This provides interesting opportunities to run workloads in different namespaces for different teams, projects, and potentially customers, too.

Example: Percona Distribution for MongoDB Operator and Percona Server for MongoDB can run in two different namespaces by adding the namespace metadata field. Snippets are as follows:

# Team 1 DB running in team1-db namespace
apiVersion: psmdb.percona.com/v1-11-0
kind: PerconaServerMongoDB
metadata:
 name: team1-server
 namespace: team1-db

# Team 1 deployment running in team1-db namespace
apiVersion: apps/v1
kind: Deployment
metadata:
 name: percona-server-mongodb-operator-team1
 namespace: team1-db


# Team 2 DB running in team2-db namespace
apiVersion: psmdb.percona.com/v1-11-0
kind: PerconaServerMongoDB
metadata:
 name: team2-server
 namespace: team2-db

# Team 2 deployment running in team2-db namespace
apiVersion: apps/v1
kind: Deployment
metadata:
 name: percona-server-mongodb-operator-team2
 namespace: team2-db
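
Before applying these manifests, the namespaces themselves need to exist, and each team's Operator has to be deployed into its own namespace. A minimal sketch, assuming the snippets above are saved into per-team files (the file names are illustrative):

kubectl create namespace team1-db
kubectl create namespace team2-db

kubectl apply -f team1-psmdb.yaml -n team1-db
kubectl apply -f team2-psmdb.yaml -n team2-db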

Suggestions:

  1. Avoid using the standard namespaces like kube-system or default.

  2. It’s always better to run independent workloads on different namespaces unless there is a specific requirement to do it in a shared namespace.

Namespaces can be used per team, per application environment, or any other logical structure that fits the use case.

Resources

The biggest problem in any multi-tenant environment is ensuring that a single bad apple does not spoil the whole bunch of apples.

ResourceQuotas

Thanks to ResourceQuotas, we can restrict the resource utilization of namespaces. ResourceQuotas also allow you to restrict the number of k8s objects that can be created in a namespace.

Example of the YAML manifest with resource quotas:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: team1-quota         
  namespace: team1-db    # Namespace where operator is deployed
spec:
  hard:
    requests.cpu: "10"     # Cumulative CPU requests of all k8s objects in the namespace cannot exceed 10 vCPU
    limits.cpu: "20"       # Cumulative CPU limits of all k8s objects in the namespace cannot exceed 20 vCPU
    requests.memory: 10Gi  # Cumulative memory requests of all k8s objects in the namespace cannot exceed 10Gi
    limits.memory: 20Gi    # Cumulative memory limits of all k8s objects in the namespace cannot exceed 20Gi
    requests.ephemeral-storage: 100Gi  # Cumulative ephemeral storage request of all k8s objects in the namespace cannot exceed 100Gi
    limits.ephemeral-storage: 200Gi    # Cumulative ephemeral storage limits of all k8s objects in the namespace cannot exceed 200Gi
    requests.storage: 300Gi            # Cumulative storage requests of all PVC in the namespace cannot exceed 300Gi
    persistentvolumeclaims: 5          # Maximum number of PVC in the namespace is 5
    count/statefulsets.apps: 2         # Maximum number of statefulsets in the namespace is 2
    # count/psmdb: 2                   # Maximum number of PSMDB objects in the namespace is 2, replace the name with proper Custom Resource

Please refer to the Resource Quotas documentation and apply quotas that are required for your use case.
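
For instance, once the manifest above is saved to a file, the quota can be applied and its current consumption checked at any time (a small sketch; the file name is illustrative):

kubectl apply -f team1-quota.yaml
kubectl describe resourcequota team1-quota -n team1-db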

If resource quotas are applied to a namespace, containers are required to set their requests and limits; otherwise, you are going to get an error similar to the following:

Error creating: pods "my-cluster-name-rs0-0" is forbidden: failed quota: my-cpu-memory-quota: must specify limits.cpu,requests.cpu

All Percona Operators provide the capability to fine-tune the requests and limits. The following example sets CPU and memory requests for Percona XtraDB Cluster containers:

spec:
  pxc:
    resources:
      requests:
        memory: 4G
        cpu: 2
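
Since the quota above also constrains limits, containers must specify those as well; limits can be set in the same resources block (a sketch, with illustrative values):

spec:
  pxc:
    resources:
      requests:
        memory: 4G
        cpu: 2
      limits:
        memory: 6G
        cpu: 3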

LimitRange

With ResourceQuotas we can control the cumulative resources in a namespace, but if we want to enforce constraints on individual Kubernetes objects, LimitRange is a useful option.

For example, if Teams 1, 2, and 3 are each given a namespace to run workloads, a ResourceQuota ensures that no team can exceed its allocated quota and over-utilize the cluster... but what if a badly configured workload (say, an operator run by Team 1 with a higher priority class) is using up all the resources allocated to the team?

LimitRange can be used to enforce constraints on resources such as compute, memory, ephemeral storage, and storage requested through PVCs. The example below highlights some of the possibilities.

apiVersion: v1
kind: LimitRange
metadata:
  name: lr-team1
  namespace: team1-db
spec:
  limits:
  - type: Pod                      
    max:                            # Maximum resource limit of all containers combined. Consider setting default limits
      ephemeral-storage: 100Gi      # Maximum ephemeral storage cannot exceed 100GB
      cpu: "800m"                   # Maximum CPU limits of the Pod is 800mVCPU
      memory: 4Gi                   # Maximum memory limits of the Pod is 4 GB
    min:                            # Minimum resource request of all containers combined. Consider setting default requests
      ephemeral-storage: 50Gi       # Minimum ephemeral storage should be 50GB
      cpu: "200m"                   # Minimum CPU request is  200mVCPU
      memory: 2Gi                   # Minimum memory request is 2 GB
  - type: PersistentVolumeClaim
    max:
      storage: 2Gi                  # Maximum PVC storage limit
    min:
      storage: 1Gi                  # Minimum PVC storage request
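
The comments in the example above suggest setting defaults. A LimitRange can also inject default requests and limits into containers that do not declare any; a minimal sketch (the values are illustrative):

apiVersion: v1
kind: LimitRange
metadata:
  name: lr-team1-defaults
  namespace: team1-db
spec:
  limits:
  - type: Container
    default:               # applied as the container's limits when none are specified
      cpu: "400m"
      memory: 1Gi
    defaultRequest:        # applied as the container's requests when none are specified
      cpu: "200m"
      memory: 512Mi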

Suggestions:

  1. When it’s feasible, apply ResourceQuotas and LimitRanges to the namespaces where the Percona Operator is running. This ensures that tenants are not over-utilizing the cluster.

  2. Set alerts to monitor objects and resource usage in namespaces. Automation of ResourceQuota changes may also be useful in some scenarios.

  3. It is advisable to add a buffer on top of the maximum expected utilization before setting the ResourceQuotas.

  4. Set LimitRanges to ensure workloads are not over-utilizing resources in individual namespaces.

Roles and Security

Kubernetes provides several modes to authorize an API request. Role-Based Access Control (RBAC) is a popular way to handle authorization. There are four important objects for granting access:

  • ClusterRole – represents a set of permissions across the cluster (cluster-scoped)
  • Role – represents a set of permissions within a namespace (namespace-scoped)
  • ClusterRoleBinding – grants permissions to subjects across the cluster (cluster-scoped)
  • RoleBinding – grants permissions to subjects within a namespace (namespace-scoped)

Subjects in a RoleBinding/ClusterRoleBinding can be users, groups, or service accounts. Every Pod running in the cluster has an identity and a service account attached (the “default” service account in the same namespace is attached if none is explicitly specified). The permissions granted to that service account with a RoleBinding/ClusterRoleBinding dictate the access the Pods have.

Following the principle of least privilege, it is always advisable to use Roles with the smallest set of permissions and bind them to a service account with a RoleBinding. This service account can be used to run the Operator or Custom Resource, ensuring proper access while restricting the blast radius.

Avoid granting cluster-level access unless there is a strong use case to do it.

Example: the RBAC configuration in the MongoDB Operator uses a Role and a RoleBinding, restricting the service account's access to a single namespace. The same service account is used for both the Custom Resource and the Operator.
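
For illustration only (this is not the Operator's shipped RBAC manifest), a namespace-scoped service account bound to a minimal Role could look like the sketch below; the names and the rule list are assumptions, and in practice they come from the Operator's deploy bundle:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: team1-operator-sa          # illustrative name
  namespace: team1-db
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: team1-operator-role        # illustrative name
  namespace: team1-db
rules:
- apiGroups: ["", "apps"]
  resources: ["pods", "services", "configmaps", "secrets", "statefulsets"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team1-operator-rb          # illustrative name
  namespace: team1-db
subjects:
- kind: ServiceAccount
  name: team1-operator-sa
  namespace: team1-db
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: team1-operator-role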

Network Policies

Network isolation provides additional security to applications and customers in a multi-tenant environment. Network policies are Kubernetes resources that allow you to control the traffic between Pods, CIDR blocks, and network endpoints, but the most common approach is to control the traffic between namespaces:


Most Container Network Interface (CNI) plugins support the implementation of network policies; however, if a plugin does not and a NetworkPolicy is created, the resource is silently ignored. For example, the AWS VPC CNI does not support network policies, but AWS EKS can run the Calico CNI, which does.

It is good practice to follow the principle of least privilege, whereby traffic is denied by default and access is granted granularly:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: app1-db
spec:
  podSelector: {}
  policyTypes:
  - Ingress

Allow traffic from Pods in namespace app1 to namespace app1-db:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-app1
  namespace: app1-db
spec:
  podSelector: {}
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: app1
  policyTypes:
  - Ingress
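
Note that the namespaceSelector above matches namespaces by label rather than by name, so the source namespace needs a matching label (a quick sketch):

kubectl label namespace app1 name=app1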

Policy Enforcement

In a multi-tenant environment, policy enforcement plays a key role. Policy enforcement ensures that k8s objects pass the required quality gates set by administrators/teams. Some examples of policy enforcement could be:

  1. All the workloads have proper labels 
  2. Proper network policies are set for DB
  3. Unsafe configurations are not allowed (Example)
  4. Backups are always enabled (Example)

The K8s ecosystem offers a wide range of options to achieve this. Some of them are listed below:

  1. Open Policy Agent (OPA) is a CNCF graduated project which provides a high-level declarative language to author and enforce policies across k8s objects. (Examples from Google and the OPA repo can be helpful; a small Gatekeeper sketch follows this list.)
  2. Mutating Webhooks can be used to modify API calls before they reach the API server. This can be used to set required properties for k8s objects. (Example: a mutating webhook that adds a NetworkPolicy for Pods created in production namespaces)

  3. Validating Webhooks can be used to check whether a k8s API request follows the required policy; any request that does not follow the policy is rejected. (Example: a validating webhook to ensure that 1GB huge pages are not used in the Pod)
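
As a sketch of the first item, assuming OPA Gatekeeper is installed along with the K8sRequiredLabels constraint template from its policy library, a constraint that requires a team label on every Pod could look roughly like this:

apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: pods-must-have-team-label
spec:
  match:
    kinds:
    - apiGroups: [""]
      kinds: ["Pod"]
  parameters:
    labels: ["team"]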

Cluster-Wide

Percona Distribution for MySQL Operator and Percona Distribution for PostgreSQL Operator both support cluster-wide mode, which allows a single Operator to deploy and manage databases across multiple namespaces (support for cluster-wide mode in Percona Operator for MongoDB is on the roadmap). It is also possible to have an Operator per namespace:

Operator per namespace

For example, a single deployment of Percona Distribution for MySQL Operator can watch multiple namespaces in cluster-wide mode. The user can specify them in the WATCH_NAMESPACE environment variable in the cw-bundle.yaml file:

    spec:
      containers:
      - command:
        - percona-xtradb-cluster-operator
        env:
        - name: WATCH_NAMESPACE
          value: "namespace-a, namespace-b"

Which mode to choose in a multi-tenant environment depends on the amount of freedom you want to give the tenants. Usually, when the tenants are highly trusted (for instance, internal teams), it is fine to choose namespace-scoped deployments, where each team can deploy and manage the Operator themselves.

Conclusion

It is important to remember that Kubernetes is not a multi-tenant system out of the box. This blog post described various levels of isolation that help you run your applications and databases securely and ensure operational stability.

We encourage you to try out our Operators:

CONTRIBUTING.md in every repository is there for those of you who want to contribute your ideas, code, and docs.

For general questions please raise the topic in the community forum.

Oct 07, 2021

Getting Started with ProxySQL in Kubernetes

There are plenty of ways to run ProxySQL in Kubernetes (K8S). For example, we can deploy sidecar containers on the application pods, or run a dedicated ProxySQL service with its own pods.

We are going to discuss the latter approach, which is more likely to be used when dealing with a large number of application pods. Remember that each ProxySQL instance runs a number of checks against the database backends, monitoring things like server status and replication lag, so having too many proxies can cause significant overhead.

Creating a Cluster

For the purpose of this example, I am going to deploy a test cluster in GKE. We need to follow these steps:

1. Create a cluster

gcloud container clusters create ivan-cluster --preemptible --project my-project --zone us-central1-c --machine-type n2-standard-4 --num-nodes=3

2. Configure command-line access

gcloud container clusters get-credentials ivan-cluster --zone us-central1-c --project my-project

3. Create a Namespace

kubectl create namespace ivantest-ns

4. Set the context to use our new Namespace

kubectl config set-context $(kubectl config current-context) --namespace=ivantest-ns

Dedicated Service Using a StatefulSet

One way to implement this approach is to have ProxySQL pods use persistent volumes to store the configuration. We can rely on ProxySQL Cluster mode to make sure the configuration is kept in sync.

For simplicity, we are going to use a ConfigMap with the initial config for bootstrapping the ProxySQL service for the first time.

Exposing the passwords in the ConfigMap is far from ideal, and so far the K8S community hasn’t settled on a way to reference Secrets from a ConfigMap.

1. Prepare a file for the ConfigMap

tee proxysql.cnf <<EOF
datadir="/var/lib/proxysql"
 
admin_variables=
{
    admin_credentials="admin:admin;cluster:secret"
    mysql_ifaces="0.0.0.0:6032"
    refresh_interval=2000
    cluster_username="cluster"
    cluster_password="secret"  
}
 
mysql_variables=
{
    threads=4
    max_connections=2048
    default_query_delay=0
    default_query_timeout=36000000
    have_compress=true
    poll_timeout=2000
    interfaces="0.0.0.0:6033;/tmp/proxysql.sock"
    default_schema="information_schema"
    stacksize=1048576
    server_version="8.0.23"
    connect_timeout_server=3000
    monitor_username="monitor"
    monitor_password="monitor"
    monitor_history=600000
    monitor_connect_interval=60000
    monitor_ping_interval=10000
    monitor_read_only_interval=1500
    monitor_read_only_timeout=500
    ping_interval_server_msec=120000
    ping_timeout_server=500
    commands_stats=true
    sessions_sort=true
    connect_retries_on_failure=10
}
 
mysql_servers =
(
    { address="mysql1" , port=3306 , hostgroup=10, max_connections=100 },
    { address="mysql2" , port=3306 , hostgroup=20, max_connections=100 }
)
 
mysql_users =
(
    { username = "myuser", password = "password", default_hostgroup = 10, active = 1 }
)
 
proxysql_servers =
(
    { hostname = "proxysql-0.proxysqlcluster", port = 6032, weight = 1 },
    { hostname = "proxysql-1.proxysqlcluster", port = 6032, weight = 1 },
    { hostname = "proxysql-2.proxysqlcluster", port = 6032, weight = 1 }
)
EOF

2. Create the ConfigMap

kubectl create configmap proxysql-configmap --from-file=proxysql.cnf

3. Prepare a file with the StatefulSet

tee proxysql-ss-svc.yml <<EOF
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: proxysql
  labels:
    app: proxysql
spec:
  replicas: 3
  serviceName: proxysqlcluster
  selector:
    matchLabels:
      app: proxysql
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: proxysql
    spec:
      restartPolicy: Always
      containers:
      - image: proxysql/proxysql:2.3.1
        name: proxysql
        volumeMounts:
        - name: proxysql-config
          mountPath: /etc/proxysql.cnf
          subPath: proxysql.cnf
        - name: proxysql-data
          mountPath: /var/lib/proxysql
          subPath: data
        ports:
        - containerPort: 6033
          name: proxysql-mysql
        - containerPort: 6032
          name: proxysql-admin
      volumes:
      - name: proxysql-config
        configMap:
          name: proxysql-configmap
  volumeClaimTemplates:
  - metadata:
      name: proxysql-data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 2Gi
---
apiVersion: v1
kind: Service
metadata:
  annotations:
  labels:
    app: proxysql
  name: proxysql
spec:
  ports:
  - name: proxysql-mysql
    nodePort: 30033
    port: 6033
    protocol: TCP
    targetPort: 6033
  - name: proxysql-admin
    nodePort: 30032
    port: 6032
    protocol: TCP
    targetPort: 6032
  selector:
    app: proxysql
  type: NodePort
EOF

4. Create the StatefulSet

kubectl create -f proxysql-ss-svc.yml

5. Prepare the definition of the headless Service (more on this later)

tee proxysql-headless-svc.yml <<EOF 
apiVersion: v1
kind: Service
metadata:
  name: proxysqlcluster
  labels:
    app: proxysql
spec:
  clusterIP: None
  ports:
  - port: 6032
    name: proxysql-admin
  selector:
    app: proxysql
EOF

6. Create the headless Service

kubectl create -f proxysql-headless-svc.yml

7. Verify the Services

kubectl get svc

NAME              TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                         AGE
proxysql          NodePort    10.3.249.158   <none>        6033:30033/TCP,6032:30032/TCP   12m
proxysqlcluster   ClusterIP   None           <none>        6032/TCP                        8m53s

Pod Name Resolution

By default, each pod has a DNS name associated in the form pod-ip-address.my-namespace.pod.cluster-domain.example.

The headless Service causes K8S to auto-create a DNS record with each pod’s FQDN as well. The result is we will have the following entries available:

proxysql-0.proxysqlcluster
proxysql-1.proxysqlcluster
proxysql-2.proxysqlcluster

We can then use these to set up the ProxySQL cluster (the proxysql_servers part of the configuration file).
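
Once all three pods are up, we can verify that the instances see each other by connecting to the admin interface with the cluster credentials defined in the ConfigMap (from any pod that has a MySQL client, such as the percona-client pod shown in the next section) and inspecting the cluster checksum table:

mysql -ucluster -psecret -hproxysql-0.proxysqlcluster -P6032 -e "SELECT hostname, port, name, version FROM stats_proxysql_servers_checksums;"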

Connecting to the Service

To test the service, we can run a container that includes a MySQL client and connect its console output to our terminal. For example, use the following command (which also removes the container/pod after we exit the shell):

kubectl run -i --rm --tty percona-client --image=percona/percona-server:latest --restart=Never -- bash -il

The connections from other pods should be sent to the Cluster-IP and port 6033 and will be load balanced. We can also use the DNS name proxysql.ivantest-ns.svc.cluster.local that got auto-created.

mysql -umyuser -ppassword -h10.3.249.158 -P6033
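
Or, equivalently, using the auto-created DNS name instead of the ClusterIP:

mysql -umyuser -ppassword -hproxysql.ivantest-ns.svc.cluster.local -P6033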

If the client is connecting from an external network, connect to a node's external IP on port 30033 instead (the NodePort is exposed on the nodes, not on the ClusterIP):

mysql -umyuser -ppassword -h<node-external-ip> -P30033

Cleanup Steps

In order to remove all the resources we created, run the following steps:

kubectl delete statefulsets proxysql
kubectl delete service proxysql
kubectl delete service proxysqlcluster

Final Words

We have seen one of the possible ways to deploy ProxySQL in Kubernetes. The approach presented here has a few shortcomings but is good enough for illustrative purposes. For a production setup, consider looking at the Percona Kubernetes Operators instead.

