Jan 13, 2021

Running Kubernetes on the Edge

What is Edge?

Edge is a buzzword that, behind the curtain, means moving private or public clouds closer to the end devices. End devices, such as Internet of Things devices (from doorbells to VoIP stations), are becoming more complex and require more computational power. The number of connected devices grows constantly; by the end of 2025 there will be 41.6 billion of them, generating 69.4 zettabytes of data.

Latency, data-processing speed, or security concerns often do not allow computation to happen in the cloud. Instead, businesses rely on edge computing or micro clouds, which run closer to the end devices. All of this constitutes the Edge.

How Kubernetes Helps Here

Containers are portable and are quickly becoming the de facto standard for shipping software. Kubernetes is a container orchestrator with robust built-in scaling capabilities. Together they give businesses the perfect toolset to shape their Edge computing with ease and without changing existing processes.

The cloud-native landscape has various small Kubernetes distributions that were designed and built for the Edge: k3s, microk8s, minikube, k0s, and the newly released EKS Distro. They are lightweight, can be deployed with a few commands, and are fully conformant. Projects like KubeEdge bring even more simplicity and standardization into the Kubernetes ecosystem on the Edge.

Running Kubernetes on the Edge also poses the challenge of managing hundreds or thousands of clusters. Google Anthos, Azure Arc, and VMware Tanzu allow you to run your clusters anywhere and manage them through a single interface with ease.

Topologies

We are going to review various topologies for running Kubernetes on the Edge that bring computation and software closer to the end devices.

The end device is a Kubernetes cluster


Some devices run complex software and require multiple components to operate – web servers, databases, built-in data-processing, etc. Using packages is an option, but compared to containers and automated orchestration, it is slow and sometimes turns the upgrade process into a nightmare. In such cases, it is possible to run a Kubernetes cluster on each end device and manage software and infrastructure components using well-known primitives.
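
For illustration only (a sketch, assuming the device can run one of the lightweight distributions mentioned above), bootstrapping a single-node cluster on a device can be a one-liner with k3s:

# Install k3s as a single-node cluster on the device (per the k3s quick-start)
$ curl -sfL https://get.k3s.io | sh -

# The device now runs its own control plane and worker in a single binary
$ sudo k3s kubectl get nodes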

The drawback of this solution is the overhead of running etcd and the control plane components on every device.

The end device is a node


In this topology, each end device joins a single Kubernetes control plane as a worker node. Deploying software to support and run your phones, printers, or any other devices can then be done through standard Kubernetes primitives.
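
A minimal sketch of this approach (the node name, label, and image below are made up for illustration): label the device nodes and target them with a standard Deployment and nodeSelector.

$ kubectl label node printer-floor-2 device-type=printer

$ cat printer-agent.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: printer-agent
spec:
  replicas: 1
  selector:
    matchLabels:
      app: printer-agent
  template:
    metadata:
      labels:
        app: printer-agent
    spec:
      nodeSelector:
        device-type: printer                   # schedule only onto the printer nodes
      containers:
      - name: agent
        image: example.org/printer-agent:1.0   # placeholder image

$ kubectl apply -f printer-agent.yaml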

Micro-clouds


This topology is all about moving computational power closer to the end devices by creating micro-clouds on the Edge. A micro-cloud is formed by Kubernetes nodes running on a server farm on the customer's premises. Running your AI/ML workloads (like Kubeflow) or any other resource-heavy application in your own micro-cloud is done with Kubernetes and its primitives.

How Percona Addresses Edge Challenges

We at Percona continue to invest in the Kubernetes ecosystem and expand our partnership with the community. Our Kubernetes Operators for Percona XtraDB Cluster and MongoDB are open source and enable anyone to run production-ready MySQL and MongoDB databases on the Edge.

Check out how easy it is to deploy our operators on Minikube or EKS Distro (which is similar to microk8s). We are working on further simplifying Day 2 operations, and in future blog posts you will see how to deploy and manage databases on multiple Kubernetes clusters with KubeApps.

Jan 12, 2021

Slim.ai announces $6.6M seed to build container DevOps platform

We are more than seven years into the notion of modern containerization, and it still requires a complex set of tools and a high level of knowledge on how containers work. The DockerSlim open source project developed several years ago from a desire to remove some of that complexity for developers.

Slim.ai, a new startup that wants to build a commercial product on top of the open source project, announced a $6.6 million seed round today from Boldstart Ventures, Decibel Partners, FXP Ventures and TechAviv Founder Partners.

Company co-founder and CEO John Amaral says he and fellow co-founder and CTO Kyle Quest have worked together for years, but it was Quest who started and nurtured DockerSlim. “We started coming together around a project that Kyle built called DockerSlim. He’s the primary author, inventor and up until we started doing this company, the sole proprietor of that community,” Amaral explained.

At the time Quest built DockerSlim in 2015, he was working with Docker containers and he wanted a way to automate some of the lower level tasks involved in dealing with them. “I wanted to solve my own pain points and problems that I had to deal with, and my team had to deal with dealing with containers. Containers were an exciting new technology, but there was a lot of domain knowledge you needed to build production-grade applications and not everybody had that kind of domain expertise on the team, which is pretty common in almost every team,” he said.

He originally built the tool to optimize container images, but he began looking at other aspects of the DevOps lifecycle including the author, build, deploy and run phases. As he looked at that, he saw the possibility of building a commercial company on top of the open source project.

Quest says that while the open source project is a starting point, he and Amaral see a lot of areas to expand. “You need to integrate it into your developer workflow and then you have different systems you deal with, different container registries, different cloud environments and all of that. […] You need a solution that can address those needs and doing that through an open source tool is challenging, and that’s where there’s a lot of opportunity to provide premium value and have a commercial product offering,” Quest explained.

Ed Sim, founder and general partner at Boldstart Ventures, one of the seed investors, sees a company bringing innovation to an area of technology where it has been lacking, while putting some more control in the hands of developers. “Slim can shift that all left and give developers the power through the Slim tools to answer all those questions, and then, boom, they can develop containers, push them into production and then DevOps can do their thing,” he said.

The company is just 15 people right now, including the founders, but Amaral says building a diverse and inclusive company is important to him, and that’s why one of his early hires was a head of culture. “One of the first two or three people we brought into the company was our head of culture. We actually have that role in our company now, and she is a rock star and a highly competent and focused person on building a great culture. Culture and diversity to me are two sides of the same coin,” he said.

The company is still in the very early stages of developing that product. In the meantime, they continue to nurture the open source project and to build a community around that. They hope to use that as a springboard to build interest in the commercial product, which should be available some time later this year.

Dec 07, 2020

Running MongoDB on Amazon EKS Distro

Last year AWS was about to ban the “multi-cloud” term in its co-branding guidelines for Partners, removed the ban after criticism from the community and partners, and now embraces a multi-cloud strategy.

One of the products that AWS announced during its last re:Invent was Amazon EKS Distro — a Kubernetes distribution based on and used by Amazon Elastic Kubernetes Service. It is interesting because it is the first step towards a new service — EKS Anywhere — which enables AWS customers to run EKS anywhere, even on bare metal or any other cloud, and later migrate seamlessly from on-prem EKS directly to AWS.

In this blog post, we will show how easy it is to spin up Amazon EKS Distro (EKS-D) and set up MongoDB with Percona Kubernetes Operator for Percona Server for MongoDB.

Let the Show Begin

Give Me the Cluster

I just spun up a brand new Ubuntu 20.10 virtual machine. You can spin it up anywhere; I use Multipass — it gives you a command-line interface to launch Linux machines locally in seconds.

Installing EKS-D on Ubuntu is a one-command "effort":

$ sudo snap install eks --classic --edge
Run configure hook of "eks" snap if present                                                                                                                                                                                                                                                     
eks (1.18/edge) v1.18.9 from Canonical✓ installed

EKS on Ubuntu gives the same look and feel as microk8s — it has its own command line (eks) and allows you to add/remove nodes easily if needed. Read more here.

Check if EKS is up and running:

# eks status
eks is running
high-availability: no
  datastore master nodes: 127.0.0.1:19001
  datastore standby nodes: none

eks kubectl gives you direct access to the regular Kubernetes API. Hint: you can get the configuration from eks and put it into the .kube folder to control EKS with kubectl (you may need to install it). I’m lazy and will continue using eks kubectl.

# mkdir ~/.kube/ ; eks config > ~/.kube/config
# eks kubectl get pods -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-node-rrsbd                          1/1     Running   1          15m
calico-kube-controllers-555fc8cc5c-2ll8f   1/1     Running   0          15m
coredns-6788f546c9-x8q7l                   1/1     Running   0          15m
metrics-server-768748c8f4-qpxnp            1/1     Running   0          15m
hostpath-provisioner-66667bf7f-pfg8s       1/1     Running   0          15m

hostpath-provisioner is running, which means the host-path-based storage class needed for the database is already there.
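
If you want to double-check, list the storage classes; the hostpath class should show up as the default (its exact name may differ between EKS-D snap revisions):

# eks kubectl get storageclass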

Give Me the Database

As promised, we will use Percona Kubernetes Operator for Percona Server for MongoDB to spin up the database. The process is the same as the one described in our Minikube installation guide (as long as you run one node only).

Get the code from GitHub:

# git clone -b v1.5.0 https://github.com/percona/percona-server-mongodb-operator
# cd percona-server-mongodb-operator

Deploy the operator:

# eks kubectl apply -f deploy/bundle.yaml
customresourcedefinition.apiextensions.k8s.io/perconaservermongodbs.psmdb.percona.com created
customresourcedefinition.apiextensions.k8s.io/perconaservermongodbbackups.psmdb.percona.com created
customresourcedefinition.apiextensions.k8s.io/perconaservermongodbrestores.psmdb.percona.com created
role.rbac.authorization.k8s.io/percona-server-mongodb-operator created
serviceaccount/percona-server-mongodb-operator created
rolebinding.rbac.authorization.k8s.io/service-account-percona-server-mongodb-operator created
deployment.apps/percona-server-mongodb-operator created

I have one node in my fancy EKS cluster, so I will:

  • Change the number of nodes in the replica set to 1 (size: 1)
  • Remove the antiAffinity configuration
  • Set the allowUnsafeConfigurations flag to true in deploy/cr.yaml. Setting this flag to true allows users to run unsafe configurations (like a one-node MongoDB cluster); this is useful for development or testing purposes, but of course not recommended for production.

spec:
...
  allowUnsafeConfigurations: true
  replsets:
  - name: rs0
    size: 1
#    affinity:
#      antiAffinityTopologyKey: "kubernetes.io/hostname"

Now, give me the database:

# eks kubectl apply -f deploy/cr.yaml
1 minute later…
# eks kubectl get pods
NAME                                               READY   STATUS    RESTARTS   AGE
percona-server-mongodb-operator-6b5dbccbd5-jh9x8   1/1     Running   0          7m35s
my-cluster-name-rs0-0                              2/2     Running   0          77s

Simple as that!
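
To verify that the database really works, you can connect to it from a temporary client pod. The commands below are only a sketch: they assume the default cluster name my-cluster-name, the default namespace, and the default users Secret created by the operator; check the operator documentation for the exact secret keys, user names, and image tag.

# Look up the auto-generated users and passwords (default Secret name assumed)
# eks kubectl get secret my-cluster-name-secrets -o yaml

# Start a throwaway MongoDB client pod in the same cluster (image tag is an example)
# eks kubectl run -i --rm --tty percona-client --image=percona/percona-server-mongodb:4.2 --restart=Never -- bash -il

# From inside the client pod, connect to the replica set service
$ mongo "mongodb://userAdmin:<password from the secret>@my-cluster-name-rs0.default.svc.cluster.local/admin?replicaSet=rs0&ssl=false"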

Conclusion

Setting up Kubernetes locally can be easily done not only with EKS Distro, but with minikube, microk8s, k3s, and other distributions. The true value of EKS-D will be shown once EKS Anywhere goes live in 2021 and unlocks multi-cloud Kubernetes. Percona has always been an open source company: we embrace, value, and heavily invest in multi-cloud ecosystems. Our Kubernetes Operators for Percona XtraDB Cluster and MongoDB enable businesses to run their data on Kubernetes on any public or private cloud without lock-in. We also provide full support for our operators and databases running on your Kubernetes cluster.

Dec 01, 2020

AWS brings ECS, EKS services to the data center, open sources EKS

Today at AWS re:Invent, Andy Jassy talked a lot about how companies are making a big push to the cloud, but today’s container-focused announcements gave a big nod to the data center as the company announced ECS Anywhere and EKS Anywhere, both designed to let you run these services on-premises, as well as in the cloud.

These two services, ECS for generalized container orchestration and EKS for Kubernetes, will let customers use these popular AWS services on premises. Jassy said that some customers still want the same tools they use in the cloud on prem, and this is designed to give it to them.

Speaking of ECS, he said, “I still have a lot of my containers that I need to run on premises as I’m making this transition to the cloud, and [these] people really want it to have the same management and deployment mechanisms that they have in AWS also on premises and customers have asked us to work on this. And so I’m excited to announce two new things to you. The first is the launch, or the announcement of Amazon ECS Anywhere, which lets you run ECS in your own data center,” he told the re:Invent audience.


He said it gives you the same AWS APIs and cluster configuration management pieces. This will work the same for EKS, allowing this single management methodology regardless of where you are using the service.

While it was at it, the company also announced it was open sourcing EKS, its own managed Kubernetes service. The idea behind these moves is to give customers as much flexibility as possible, and recognizing what Microsoft, IBM and Google have been saying, that we live in a multi-cloud and hybrid world and people aren’t moving everything to the cloud right away.

In fact, in his opening Jassy stated that right now in 2020, just 4% of worldwide IT spend is on the cloud. That means there’s money to be made selling services on premises, and that’s what these services will do.

Nov 23, 2020

Recover Percona XtraDB Cluster in Kubernetes From Wrong MySQL Config

Kubernetes operators are meant to simplify the deployment and management of applications. Our Percona Kubernetes Operator for Percona XtraDB Cluster serves this purpose, while also giving users the flexibility to fine-tune their MySQL and proxy services configuration.

The document Changing MySQL Options describes how to provide a custom my.cnf configuration to the operator. But what would happen if you made a mistake and specified the wrong parameter in the configuration?

Apply Configuration

I already deployed my Percona XtraDB Cluster and deliberately submitted the wrong my.cnf configuration in cr.yaml:

spec:
...
  pxc:
    configuration: |
      [mysqld]
      wrong_param=123
…

Apply the configuration:

$ kubectl apply -f deploy/cr.yaml

Once you do this, the Operator will apply a new MySQL configuration to one of the Pods. In a few minutes you will see that the Pod is stuck in CrashLoopBackOff status:

$ kubectl get pods
NAME                                               READY   STATUS             RESTARTS   AGE
percona-xtradb-cluster-operator-79d786dcfb-lzv4b   1/1     Running            0          5h
test-haproxy-0                                     2/2     Running            0          5m27s
test-haproxy-1                                     2/2     Running            0          4m40s
test-haproxy-2                                     2/2     Running            0          4m24s
test-pxc-0                                         1/1     Running            0          5m27s
test-pxc-1                                         1/1     Running            0          4m41s
test-pxc-2                                         0/1     CrashLoopBackOff   1          59s

In the logs it is clearly stated that this parameter is not supported and the mysqld process cannot start:

       2020-11-19T13:30:30.141829Z 0 [ERROR] [MY-000067] [Server] unknown variable 'wrong_param=123'.
        2020-11-19T13:30:30.142355Z 0 [ERROR] [MY-010119] [Server] Aborting
        2020-11-19T13:30:31.835199Z 0 [System] [MY-010910] [Server] /usr/sbin/mysqld: Shutdown complete (mysqld 8.0.20-11.1)  Percona XtraDB Cluster (GPL), Release rel11, Revision 683b26a, WSREP version 26.4.3.

It is worth noting that your Percona XtraDB Cluster is still operational and continues to serve requests.

Recovery

Let’s try to comment out the configuration section and reapply cr.yaml:

spec:
...
  pxc:
#    configuration: |
#      [mysqld]
#      wrong_param=123
…


$ kubectl apply -f deploy/cr.yaml

And it won’t work (in v1.6): the Pod is still in CrashLoopBackOff state, as the operator does not apply any changes while not all Pods are up and running. We do that to ensure data safety.

Fortunately, there is an easy way to recover from such a mistake: you can either delete or modify the corresponding ConfigMap resource in Kubernetes. Usually its name is {your_cluster_name}-pxc:

$ kubectl delete configmap test-pxc

And delete the Pod which is failing:

$ kubectl delete pod test-pxc-2

Kubernetes will restart all Percona XtraDB Cluster pods one by one after some time:

test-pxc-0                                         1/1     Running   0          2m28s
test-pxc-1                                         1/1     Running   0          3m23s
test-pxc-2                                         1/1     Running   0          4m36s

You can now apply the correct MySQL configuration again, either through the ConfigMap or through cr.yaml. We are assessing other recovery options and configuration validation for such cases, so stay tuned for upcoming releases.
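
For example, a corrected configuration section with a valid parameter could look like this (max_connections is just an illustrative option):

spec:
...
  pxc:
    configuration: |
      [mysqld]
      max_connections=250

$ kubectl apply -f deploy/cr.yaml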

Nov 13, 2020

Kubernetes Scaling Capabilities with Percona XtraDB Cluster

Our recent survey showed that many organizations saw unexpected growth around cloud and data. Unexpected bills can become a big problem, especially in such uncertain times. This blog post talks about how Kubernetes scaling capabilities work with Percona Kubernetes Operator for Percona XtraDB Cluster (PXC Operator) and how they can help you control the bill.

Resources

Kubernetes is a container orchestrator and, on top of that, it has great scaling capabilities. Scaling can help you utilize your cluster better and not waste money on excessive capacity. But before scaling, we need to understand what capacity is and how Kubernetes manages CPU and memory resources.

There are two resource concepts that you should be aware of: requests and limits. A request is the amount of CPU or memory that a container is guaranteed to get on the node. Kubernetes uses requests during scheduling decisions and will not schedule a container to a node that does not have enough capacity. A limit is the maximum amount of a resource that a container can get on the node, with no guarantee attached. In the Linux world, limits are just cgroup maximums for processes.

Each node in a cluster has its own capacity. Part of this capacity is reserved for the operating system and the kubelet, and what is left can be utilized by containers (the allocatable capacity).
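
In a Pod specification these concepts look like this (a generic sketch, not specific to the PXC Operator; the values are arbitrary):

containers:
- name: app
  image: nginx
  resources:
    requests:
      cpu: 500m      # guaranteed; used by the scheduler to pick a node
      memory: 1Gi
    limits:
      cpu: "1"       # hard cap, enforced through cgroups
      memory: 2Gi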


Okay, now we know a thing or two about resource allocation in Kubernetes. Let’s dive into the problem space.

Problem #1: Requested Too Much

If you request resources for containers but do not utilize them well enough, you end up wasting resources. This is where Vertical Pod Autoscaler (VPA) comes in handy. It can automatically scale container requests up or down based on historical real usage.


VPA has 3 modes:

  1. Recommender – it only provides recommendations for containers’ requests. We suggest starting with this mode.
  2. Initial – the webhook applies changes to the container during its creation.
  3. Auto/Recreate – the webhook applies changes to the container during its creation and can also dynamically change the requests for the container.

Configure VPA

As a starting point, deploy Percona Kubernetes Operator for Percona XtraDB Cluster and the database by following the guide. VPA is deployed via a single command (see the guide here). VPA requires a metrics-server to get real usage for containers.

We need to create a VPA resource that will monitor our PXC cluster and provide recommendations for tuning requests. For recommender mode, set updateMode to “Off”:

$ cat vpa.yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: pxc-vpa
spec:
  targetRef:
    apiVersion: "apps/v1"
    kind:       StatefulSet
    name:       <name of the STS>
    namespace:  <your namespace>
  updatePolicy:
    updateMode: "Off"

Run the following command to get the name of the StatefulSet:

$ kubectl get sts
NAME           READY   AGE
...
cluster1-pxc   3/3     3h47m

The one with the -pxc suffix is the PXC cluster. Apply the VPA object:

$ kubectl apply -f vpa.yaml

After a few minutes you should be able to fetch recommendations from the VPA object:

$ kubectl get vpa pxc-vpa -o yaml
...
  recommendation:
    containerRecommendations:
    - containerName: pxc
      lowerBound:
        cpu: 25m
        memory: "503457402"
      target:
        cpu: 25m
        memory: "548861636"
      uncappedTarget:
        cpu: 25m
        memory: "548861636"
      upperBound:
        cpu: 212m
        memory: "5063059194"

Resources in the target section are the ones that VPA recommends and applies if Auto or Initial modes are configured. Read more here to understand other recommendation sections.

VPA will apply the recommendations once it is running in Auto mode and will persist the recommended configuration even after the pod is restarted. To enable Auto mode, patch the VPA object:

$ kubectl patch vpa pxc-vpa --type='json' -p '[{"op": "replace", "path": "/spec/updatePolicy/updateMode", "value": "Auto"}]'

After a few minutes, VPA will restart PXC pods and apply recommended requests.

$ kubectl describe pod cluster1-pxc-0
...
Requests:
      cpu: "25m"
      memory: "548861636"

Delete the VPA object to stop autoscaling:

$ kubectl delete vpa pxc-vpa

Please remember a few things about VPA and Auto mode:

  1. It changes container requests, but does not change Deployment or StatefulSet resources.
  2. It is not application aware. For PXC, for example, it does not change innodb_buffer_pool_size, which the operator configures to take 75% of RAM. To change it, set the corresponding requests configuration in cr.yaml and apply it (see the sketch after this list).
  3. It respects podDisruptionBudget to protect your application. In our default cr.yaml the PDB is configured to lose one pod at a time, which means VPA will change requests and restart one pod at a time:

    podDisruptionBudget:
      maxUnavailable: 1
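
As mentioned in the second item above, database-aware tuning still goes through the operator. A sketch of the corresponding requests section in cr.yaml (the values are illustrative):

spec:
  pxc:
    resources:
      requests:
        memory: 2G
        cpu: 600m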

Problem #2: Spiky Usage

The utilization of the application might change over time. It can happen gradually, but what if there are daily usage spikes or completely unpredictable patterns? Constantly running additional containers is an option, but it leads to wasted resources and increased infrastructure costs. This is where Horizontal Pod Autoscaler (HPA) can help. It monitors container resources or even application metrics to automatically increase or decrease the number of containers serving the application.


Looks nice, but unfortunately the current version of the PXC Operator will not work with HPA. HPA tries to scale the StatefulSet, which in our case is strictly controlled by the operator, and the operator will overwrite any scaling attempts from the horizontal scaler. We are researching ways to enable this support in the PXC Operator.

Problem #3: My Cluster is Too Big

You have tuned resource requests and they are close to real usage, but the cloud bill is still not going down. It might be that your Kubernetes cluster is overprovisioned and should be scaled with Cluster Autoscaler (CA). CA adds and removes nodes in your Kubernetes cluster based on request usage. When nodes are removed, pods are rescheduled to other nodes automatically.


Configure CA

On Google Kubernetes Engine, Cluster Autoscaler can be enabled through the gcloud utility. On AWS you need to install the autoscaler manually and add the corresponding autoscaling groups to its configuration.

In general, CA monitors if there are any pods in Pending status (waiting to be scheduled; read more on pod statuses here) and adds more nodes to the cluster to meet the demand. It removes nodes if it sees the possibility of packing pods more densely on other nodes. To add and remove nodes it relies on cloud primitives: node groups in GCP, auto scaling groups in AWS, virtual machine scale sets on Azure, and so on. The installation of CA differs from cloud to cloud, but here are some interesting tricks.

Overprovision the Cluster

If your workloads are scaling up, CA needs to provision new nodes, which can sometimes take a few minutes. If you need to scale faster, it is possible to overprovision the cluster; detailed instructions are here. The idea is to always run pause pods with low priority; real workloads with higher priority push them off the nodes when needed.
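
A minimal sketch of that pattern (names and sizes are placeholders; the linked instructions contain the complete version): a low-priority PriorityClass plus a Deployment of pause pods that reserve headroom and get evicted as soon as real workloads need the space.

apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: overprovisioning
value: -1                      # lower than the default pod priority of 0
globalDefault: false
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: overprovisioning
spec:
  replicas: 2                  # how much headroom to reserve
  selector:
    matchLabels:
      app: overprovisioning
  template:
    metadata:
      labels:
        app: overprovisioning
    spec:
      priorityClassName: overprovisioning
      containers:
      - name: reserve
        image: k8s.gcr.io/pause
        resources:
          requests:
            cpu: "1"
            memory: 1Gi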

Expanders

Expanders control how the cluster is scaled up and which nodes to add. Configure expanders and multiple node groups to fine-tune the scaling. My preference is the priority expander, as it allows us to cherry-pick nodes by customizable priorities; it is especially useful for a rapidly changing spot market.
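
The priority expander is driven by a ConfigMap in the kube-system namespace that maps priorities to node group name patterns, roughly like this (the group name regexes are placeholders):

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-autoscaler-priority-expander
  namespace: kube-system
data:
  priorities: |-
    10:
      - .*on-demand.*
    50:
      - .*spot.*               # prefer spot node groups whenever they can satisfy the demand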

Safety

Pay extremely close attention to scaling down. First of all, you can disable it completely by setting scale-down-enabled to false (not recommended). For clusters with big nodes with lots of pods, be careful with scale-down-utilization-threshold – do not set it to more than 50%, as it might impact other nodes and overutilize them. For clusters with a dynamic workload and lots of nodes, do not set scale-down-delay-after-delete and scale-down-unneeded-time too low; it will lead to non-stop cluster scaling with absolutely no value.
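
All of these are regular cluster-autoscaler command-line flags, set on its deployment roughly like this (the values are examples, not recommendations):

    command:
    - ./cluster-autoscaler
    - --scale-down-enabled=true
    - --scale-down-utilization-threshold=0.5
    - --scale-down-unneeded-time=10m
    - --scale-down-delay-after-delete=10s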

Cluster Autoscaler also respects podDisruptionBudget. When you run it along with the PXC Operator, please make sure PDBs are correctly configured so that the PXC cluster does not crash when Kubernetes scales down.

Conclusion

In cloud environments, day two operations must include cost management. Overprovisioning Kubernetes clusters is a common theme that can quickly become visible in the bills. When running Percona XtraDB Cluster on Kubernetes you can leverage Vertical Pod Autoscaler to tune requests and apply Cluster Autoscaler to reduce the number of instances to minimize your cloud spend. It will be possible to use Horizontal Pod Autoscaler in future releases as well to dynamically adjust your cluster to demand.

Nov 04, 2020

Running Percona Kubernetes Operator for Percona XtraDB Cluster with Kata Containers

Kata Containers are containers that use hardware virtualization technologies for workload isolation with almost no performance penalties. Top use cases are untrusted workloads and tenant isolation (for example, in a shared Kubernetes cluster). This blog post describes how to run Percona Kubernetes Operator for Percona XtraDB Cluster (PXC Operator) using Kata containers.

Prepare Your Kubernetes Cluster

Setting up Kata containers and Kubernetes is well documented in the official GitHub repo (CRI-O, containerd, Kubernetes DaemonSet). We will just cover the most important steps and pitfalls.

Virtualization Support

First of all, remember that Kata containers require hardware virtualization support from the CPU on the nodes. To check if your Linux system supports it, run the following on the node:

$ egrep '(vmx|svm)' /proc/cpuinfo

VMX (Virtual Machine Extensions) and SVM (Secure Virtual Machine) are Intel and AMD features that add various instructions to allow running a guest OS with full privileges while still keeping the host OS protected.

For example, on AWS only i3.metal and r5.metal instances provide VMX capability.

Containerd

Kata containers are OCI (Open Container Initiative) compliant, which means that they work pretty well with CRI (Container Runtime Interface) and hence are well supported by Kubernetes. To use Kata containers, please make sure your Kubernetes nodes run the CRI-O or containerd runtime.

The image below describes pretty well how Kubernetes works with Kata.


Hint: GKE or kops allows you to start your cluster with containerd out of the box and skip manual steps.

Setting Up Nodes

To run Kata containers, Kubernetes nodes need to have kata-runtime installed and the runtime configured properly. The easiest way is to use a DaemonSet, which installs the required packages on every node and reconfigures containerd. As a first step, apply the following YAMLs to create the DaemonSet:

$ kubectl apply -f https://raw.githubusercontent.com/kata-containers/packaging/master/kata-deploy/kata-rbac/base/kata-rbac.yaml
$ kubectl apply -f https://raw.githubusercontent.com/kata-containers/packaging/master/kata-deploy/kata-deploy/base/kata-deploy.yaml

The DaemonSet reconfigures containerd to support multiple runtimes by changing /etc/containerd/config.toml. Please note that some tools (e.g. kops) keep the containerd configuration in a separate file, config-kops.toml. In that case you need to copy the configuration created by the DaemonSet into the corresponding file and restart containerd.

Create RuntimeClasses for Kata. RuntimeClass is a feature that allows you to pick the runtime for a container during its creation. It has been available as Beta since Kubernetes 1.14.
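
The manifest applied below essentially defines a RuntimeClass object similar to this (a sketch; the file in the kata-containers packaging repository is the authoritative version):

apiVersion: node.k8s.io/v1beta1
kind: RuntimeClass
metadata:
  name: kata-qemu
handler: kata-qemu             # must match the runtime name configured in containerd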

$ kubectl apply -f https://raw.githubusercontent.com/kata-containers/packaging/master/kata-deploy/k8s-1.14/kata-qemu-runtimeClass.yaml

Everything is set. Deploy a test nginx pod and set the runtime:

$ cat nginx-kata.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-kata
spec:
  runtimeClassName: kata-qemu
  containers:
    - name: nginx
      image: nginx

$ kubectl apply -f nginx-kata.yaml
$ kubectl describe pod nginx-kata | grep "Container ID"
    Container ID:   containerd://3ba8d62be5ee8cd57a35081359a0c08059cf08d8a53bedef3384d18699d13111

On the node, verify whether Kata is used for this container through the ctr tool:

# ctr --namespace k8s.io containers list | grep 3ba8d62be5ee8cd57a35081359a0c08059cf08d8a53bedef3384d18699d13111
3ba8d62be5ee8cd57a35081359a0c08059cf08d8a53bedef3384d18699d13111    sha256:f35646e83998b844c3f067e5a2cff84cdf0967627031aeda3042d78996b68d35 io.containerd.kata-qemu.v2

The runtime is showing kata-qemu.v2, as requested.

The current latest stable PXC Operator version (1.6) does not support runtimeClassName. It is still possible to run Kata containers by specifying the io.kubernetes.cri.untrusted-workload annotation. To ensure containerd supports this annotation, add the following into the configuration toml file on the node:

# cat <<EOF >> /etc/containerd/config.toml
[plugins.cri.containerd.untrusted_workload_runtime]
  runtime_type = "io.containerd.kata-qemu.v2"
EOF

# systemctl restart containerd

Install the Operator

We will install the operator with regular runtime but will put the PXC cluster into Kata containers.

Create the namespace and switch the context:

$ kubectl create namespace pxc-operator
$ kubectl config set-context $(kubectl config current-context) --namespace=pxc-operator

Get the operator from github:

$ git clone -b v1.6.0 https://github.com/percona/percona-xtradb-cluster-operator

Deploy the operator into your Kubernetes cluster:

$ cd percona-xtradb-cluster-operator
$ kubectl apply -f deploy/bundle.yaml

Now let’s deploy the cluster, but before that we need to explicitly add an annotation to the PXC pods, marking them as untrusted to force Kubernetes to use the Kata containers runtime. Edit deploy/cr.yaml:

pxc:
  size: 3
  image: percona/percona-xtradb-cluster:8.0.20-11.1
  …
  annotations:
    io.kubernetes.cri.untrusted-workload: "true"

Now, let’s deploy the PXC cluster:

$ kubectl apply -f deploy/cr.yaml

The cluster is up and running (using one node for the sake of the experiment):

$ kubectl get pods
NAME                                               READY   STATUS    RESTARTS   AGE
pxc-kata-haproxy-0                                 2/2     Running   0          5m32s
pxc-kata-pxc-0                                     1/1     Running   0          8m16s
percona-xtradb-cluster-operator-749b86b678-zcnsp   1/1     Running   0          44m

In the ctr output you should see the percona-xtradb-cluster container running using the Kata runtime:

# ctr --namespace k8s.io containers list | grep percona-xtradb-cluster | grep kata
448a985c82ae45effd678515f6cf8e11a6dfca159c9abf05a906c7090d297cba    docker.io/percona/percona-xtradb-cluster:8.0.20-11.2 io.containerd.kata-qemu.v2

We are working on adding support for the runtimeClassName option to our operators. This feature will enable users to freely choose any container runtime.

Conclusions

Running databases in containers is an ongoing trend, and keeping data safe is always the top priority for a business. Kata containers provide security isolation through mature and extensively tested QEMU virtualization with little to no changes to the existing environment.

Deploy Percona XtraDB Cluster with ease in your Kubernetes cluster with our Operator and Kata containers for better isolation without performance penalties.

 

Oct 01, 2020

Cisco acquires PortShift to raise its game in DevOps and Kubernetes security

Cisco is making another acquisition to expand its reach in security solutions, this time specifically targeting DevOps and the world of container management. It is acquiring PortShift, an Israeli startup that has built a Kubernetes-native security platform.

Terms of the deal are not being disclosed. PortShift had raised about $5.3 million from Team8, an incubator and backer of security startups in Israel founded by a group of cybersecurity vets. Cisco, along with Microsoft and Walmart, is among the large corporates that back Team8. (Indeed, their participation is in part a way of getting an early look and inside scoop on some of the more cutting-edge technologies being built, and in part a way to help founders understand what corporates’ security needs are these days.)

The deal underscores not just how containerization, and specifically Kubernetes, has taken hold of the enterprise world, but also how those working in this area, and building businesses around containerization and Kubernetes, are paying increasing attention to security around them.

Others are also sharpening their focus on containers and how they are secured. Earlier this year, Venafi acquired Jetstack, which runs a certificate controller for Kubernetes; and last month StackRox raised funding for its own approach to Kubernetes security.

Cisco has been a longtime partner of Google’s around cloud services, and it has made a number of acquisitions in the area of cybersecurity in recent years. They have included Duo for $2.35 billion, OpenDNS for $635 million and, most recently, Babble Labs (which helps reduce background noise in video calls, something that both improves quality but also helps users ensure unwanted or private chatter doesn’t inadvertently get heard by unintended listeners).

But as Liz Centoni, the SVP of the Emerging Technologies and Incubation (ET&I) Group, notes in the blog post, Cisco is now turning its attention also to how it can help customers better secure applications and workloads, alongside the investments that it has made to help secure people.

In the area of containers, security issues can arise around container architecture in a number of ways: misconfiguration; how applications are monitored; how developers use open-source libraries; and how companies implement regulatory compliance. Other security vulnerabilities include the use of insecure container images; problems with how containers interact with each other; the use of containers that have been infected with rogue processes; and containers not being isolated properly from their hosts.

Centoni notes that PortShift interested them because it provides an all-in-one platform covering the many aspects of Kubernetes security:

“Today, the application security space is highly fragmented with many vendors addressing only part of the problem,” she writes. “The Portshift team is building capabilities that span a large portion of the lifecycle of the cloud-native application.”

PortShift provides tools for better container configuration visibility, vulnerability management, configuration management, segmentation, encryption, compliance and automation.

The acquisition is expected to close in the first half of Cisco’s 2021 fiscal year, when the team will join Cisco’s ET&I Group.

Sep 10, 2020

StackRox nabs $26.5M for a platform that secures containers in Kubernetes

Containers have become a ubiquitous cornerstone in how companies manage their data, a trend that has only accelerated in the last eight months with the larger shift to cloud services and more frequent remote working due to the coronavirus pandemic. Alongside that, startups building services to enable containers to be used better are also getting a boost.

StackRox, which develops Kubernetes-native security solutions, says that its business grew by 240% in the first half of this year, and on the back of that, it is announcing today that it has raised $26.5 million to expand its business into international markets and continue investing in its R&D.

The funding, which appears to be a Series C, has an impressive list of backers. It is being led by Menlo Ventures, with Highland Capital Partners, Hewlett-Packard Enterprise, Sequoia Capital and Redpoint Ventures also participating. Sequoia and Redpoint are previous investors, and the company has raised around $60 million to date.

HPE is a strategic backer in this round:

“At HPE, we are working with our customers to help them accelerate their digital transformations,” said Paul Glaser, VP, Hewlett Packard Enterprise, and head of Pathfinder. “Security is a critical priority as they look to modernize their applications with containers. We’re excited to invest in StackRox and see it as a great fit with our new software HPE Ezmeral to help HPE customers secure their Kubernetes environments across their full application life cycle. By directly integrating with Kubernetes, StackRox enables a level of simplicity and unification for DevOps and Security teams to apply the needed controls effectively.”

Kamal Shah, the CEO, said that StackRox is not disclosing its valuation, but he confirmed it has definitely gone up. For some context, according to PitchBook data, the company was valued at $145 million in its last funding round, a Series B in 2018. Its customers today include the likes of Priceline, Brex, Reddit, Zendesk and Splunk, as well as government and other enterprise customers, in a container security market that analysts project will be worth some $2.2 billion by 2024, up from $568 million last year.

StackRox got its start in 2014, when containers were starting to pick up momentum in the market. At the time, its focus was a little more fragmented, not unlike the container market itself — it provided solutions that could be used with Docker containers as well as others. Over time, Shah said that the company chose to hone its focus just on Kubernetes, originally developed by Google and open-sourced, and now essentially the de facto standard in containerisation.

“We made a bet on Kubernetes at a time when there were multiple orchestrators, including Mesosphere, Docker and others,” he said. “Over the last two years Kubernetes has won the war and become the default choice, the Linux of the cloud and the biggest open-source cloud application. We are all Kubernetes all the time because what we see in the market are that a majority of our customers are moving to it. It has over 35,000 contributors to the open-source project alone, it’s not just Red Hat (IBM) and Google.” Research from CNCF estimates that nearly 80% of organizations that it surveyed are running Kubernetes in production.

That is not all good news, however, with the interest underscoring a bigger need for Kubernetes-focused security solutions for enterprises that opt to use it.

Shah says that some of the typical pitfalls in container architecture arise when they are misconfigured, leading to breaches; as well as around how applications are monitored; how developers use open-source libraries; and how companies implement regulatory compliance. Other security vulnerabilities that have been highlighted by others include the use of insecure container images; how containers interact with each other; the use of containers that have been infected with rogue processes; and having containers not isolated properly from their hosts.

But, Shah noted, “Containers in Kubernetes are inherently more secure if you can deploy correctly.” And to that end that is where StackRox’s solutions attempt to help: The company has built a multi-purposes toolkit that provides developers and security engineers with risk visibility, threat detection, compliance tools, segmentation tools and more. “Kubernetes was built for scale and flexibility, but it has lots of controls, so if you misconfigure it, it can lead to breaches. So you need a security solution to make sure you configure it all correctly,” said Shah.

He added that there has been a definite shift over the years from companies considering security solutions as an optional element into one that forms part of the consideration at the very core of the IT budget — another reason why StackRox and competitors like TwistLock (acquired by Palo Alto Networks) and Aqua Security have all seen their businesses really grow.

“We’ve seen the innovation companies are enabling by building applications in containers and Kubernetes. The need to protect those applications, at the scale and pace of DevOps, is crucial to realizing the business benefits of that innovation,” said Venky Ganesan, partner, Menlo Ventures, in a statement. “While lots of companies have focused on securing the container, only StackRox saw the need to focus on Kubernetes as the control plane for security as well as infrastructure. We’re thrilled to help fuel the company’s growth as it dominates this dynamic market.”

“Kubernetes represents one of the most important paradigm shifts in the world of enterprise software in years,” said Corey Mulloy, general partner, Highland Capital Partners, in a statement. “StackRox sits at the forefront of Kubernetes security, and as enterprises continue their shift to the cloud, Kubernetes is the ubiquitous platform that Linux was for the Internet era. In enabling Kubernetes-native security, StackRox has become the security platform of choice for these cloud-native app dev environments.”

Nov 13, 2019

Mirantis acquires Docker Enterprise

Mirantis today announced that it has acquired Docker’s Enterprise business and team. Docker Enterprise was very much the heart of Docker’s product lineup, so this sale leaves Docker as a shell of its former, high-flying unicorn self. Docker itself, which installed a new CEO earlier this year, says it will continue to focus on tools that will advance developers’ workflows. Mirantis will keep the Docker Enterprise brand alive, though, which will surely not create any confusion.

With this deal, Mirantis is acquiring Docker Enterprise Technology Platform and all associated IP: Docker Enterprise Engine, Docker Trusted Registry, Docker Unified Control Plane and Docker CLI. It will also inherit all Docker Enterprise customers and contracts, as well as its strategic technology alliances and partner programs. Docker and Mirantis say they will both continue to work on the Docker platform’s open-source pieces.

The companies did not disclose the price of the acquisition, but it’s surely nowhere near Docker’s valuation during any of its last funding rounds. Indeed, it’s no secret that Docker’s fortunes changed quite a bit over the years, from leading the container revolution to becoming somewhat of an afterthought after Google open-sourced Kubernetes and the rest of the industry coalesced around it. It still had a healthy enterprise business, though, with plenty of customers among large enterprises. The company says about a third of Fortune 100 and a fifth of Global 500 companies use Docker Enterprise, which is a statistic most companies would love to be able to highlight — and which makes this sale a bit puzzling from Docker’s side, unless the company assumed that few of these customers were going to continue to bet on its technology.

Update: for reasons only known to Docker’s communications team, we weren’t told about this beforehand, but the company also today announced that it has raised a $35 million funding round from Benchmark. This doesn’t change the overall gist of the story below, but it does highlight the company’s new direction.

Here is what Docker itself had to say. “Docker is ushering in a new era with a return to our roots by focusing on advancing developers’ workflows when building, sharing and running modern applications. As part of this refocus, Mirantis announced it has acquired the Docker Enterprise platform business,” Docker said in a statement when asked about this change. “Moving forward, we will expand Docker Desktop and Docker Hub’s roles in the developer workflow for modern apps. Specifically, we are investing in expanding our cloud services to enable developers to quickly discover technologies for use when building applications, to easily share these apps with teammates and the community, and to run apps frictionlessly on any Kubernetes endpoint, whether locally or in the cloud.”

Mirantis itself, too, went through its ups and downs. While it started as a well-funded OpenStack distribution, today’s Mirantis focuses on offering a Kubernetes-centric on-premises cloud platform and application delivery. As the company’s CEO Adrian Ionel told me ahead of today’s announcement, today is possibly the most important day for the company.

So what will Mirantis do with Docker Enterprise? “Docker Enterprise is absolutely aligned and an accelerator of the direction that we were already on,” Ionel told me. “We were very much moving towards Kubernetes and containers aimed at multi-cloud and hybrid and edge use cases, with these goals to deliver a consistent experience to developers on any infrastructure anywhere — public clouds, hybrid clouds, multi-cloud and edge use cases — and make it very easy, on-demand, and remove any operational concerns or burdens for developers or infrastructure owners.”

Mirantis previously had about 450 employees. With this acquisition, it gains another 300 former Docker employees that it needs to integrate into its organization. Docker’s field marketing and sales teams will remain separate for some time, though, Ionel said, before they will be integrated. “Our most important goal is to create no disruptions for customers,” he noted. “So we’ll maintain an excellent customer experience, while at the same time bringing the teams together.”

This also means that for current Docker Enterprise customers, nothing will change in the near future. Mirantis says that it will accelerate the development of the product and merge its Kubernetes and lifecycle management technology into it. Over time, it will also offer a managed services solution for Docker Enterprise.

While there is already some overlap between Mirantis’ and Docker Enterprise’s customer base, Mirantis will pick up about 700 new enterprise customers with this acquisition.

With this, Ionel argues, Mirantis is positioned to go up against large players like VMware and IBM/Red Hat. “We are the one real cloud-native player with meaningful scale to provide an alternative to them without lock-in into a legacy or existing technology stack.”

While this is clearly a day the Mirantis team is celebrating, it’s hard not to look at this as the end of an era for Docker, too. The company says it will share more about its future plans today, but didn’t make any spokespeople available ahead of this announcement.
