Jul
09
2020
--

Docker partners with AWS to improve container workflows

Docker and AWS today announced a new collaboration that introduces a deep integration between Docker’s Compose and Desktop developer tools and AWS’s Elastic Container Service (ECS) and ECS on AWS Fargate. Previously, the two companies note, the workflow to take Compose files and run them on ECS was often challenging for developers. Now, the two companies have simplified this process to make switching between running containers locally and on ECS far easier.

“With a large number of containers being built using Docker, we’re very excited to work with Docker to simplify the developer’s experience of building and deploying containerized applications to AWS,” said Deepak Singh, the VP for compute services at AWS. “Now customers can easily deploy their containerized applications from their local Docker environment straight to Amazon ECS. This accelerated path to modern application development and deployment allows customers to focus more effort on the unique value of their applications, and less time on figuring out how to deploy to the cloud.”

In a bit of a surprise move, Docker last year sold off its enterprise business to Mirantis to solely focus on cloud-native developer experiences.

“In November, we separated the enterprise business, which was very much focused on operations, CXOs and a direct sales model, and we sold that business to Mirantis,” Docker CEO Scott Johnston told TechCrunch’s Ron Miller earlier this year. “At that point, we decided to focus the remaining business back on developers, which was really Docker’s purpose back in 2013 and 2014.”

Today’s move is an example of this new focus, given that the workflow issues this partnership addresses had been around for quite a while already.

It’s worth noting that Docker also recently engaged in a strategic partnership with Microsoft to integrate the Docker developer experience with Azure’s Container Instances.

May
28
2020
--

Mirantis releases its first major update to Docker Enterprise

In a surprise move, Mirantis acquired Docker’s Enterprise platform business at the end of last year, and while Docker itself is refocusing on developers, Mirantis kept the Docker Enterprise name and product. Today, Mirantis is rolling out its first major update to Docker Enterprise with the release of version 3.1.

For the most part, these updates are in line with what’s been happening in the container ecosystem in recent months. There’s support for Kubernetes 1.17 and improved support for Kubernetes on Windows (something the Kubernetes community has worked on quite a bit in the last year or so). Also new is Nvidia GPU integration in Docker Enterprise through a pre-installed device plugin, as well as support for Istio Ingress for Kubernetes and a new command-line tool for deploying clusters with the Docker Engine.

In addition to the product updates, Mirantis is also launching three new support options that give customers 24×7 support for all support cases, enhanced SLAs for remote managed operations, designated customer success managers, and proactive monitoring and alerting. With this, Mirantis is clearly building on its experience as a managed service provider.

What’s maybe more interesting, though, is how this acquisition is playing out at Mirantis itself. Mirantis, after all, went through its fair share of ups and downs in recent years, from high-flying OpenStack platform to layoffs and everything in between.

“Why we do this in the first place and why at some point I absolutely felt that I wanted to do this is because I felt that this would be a more compelling and interesting company to build, despite maybe some of the short-term challenges along the way, and that very much turned out to be true. It’s been fantastic,” Mirantis CEO and co-founder Adrian Ionel told me. “What we’ve seen since the acquisition, first of all, is that the customer base has been dramatically more loyal than people had thought, including ourselves.”

Ionel admitted that he thought some users would defect because this is obviously a major change, at least from the customer’s point of view. “Of course we have done everything possible to have something for them that’s really compelling and we put out the new roadmap right away in December after the acquisition — and people bought into it at very large scale,” he said. With that, Mirantis retained more than 90% of the customer base and the vast majority of all of Docker Enterprise’s largest users.

Ionel, who almost seemed a bit surprised by this, noted that this helped the company turn in two “fantastic” quarters and become profitable in the last quarter, despite COVID-19.

“We wanted to go into this acquisition with a sober assessment of risks because we wanted to make it work, we wanted to make it successful because we were well aware that a lot of acquisitions fail,” he explained. “We didn’t want to go into it with a hyper-optimistic approach in any way — and we didn’t — and maybe that’s one of the reasons why we are positively surprised.”

He argues that the reason for the current success is that enterprises are doubling down on their container journeys and that they actually love the Docker Enterprise platform for qualities like infrastructure independence, its developer focus, security features and ease of use. One thing many large customers asked for was better support for multi-cluster management at scale, which today’s update delivers.

“Where we stand today, we have one product development team. We have one product roadmap. We are shipping a very big new release of Docker Enterprise. […] The field has been completely unified and operates as one salesforce, with record results. So things have been extremely busy, but good and exciting.”

May
27
2020
--

Docker expands relationship with Microsoft to ease developer experience across platforms

When Docker sold off its enterprise division to Mirantis last fall, that didn’t mark the end of the company. In fact, Docker still exists and has refocused as a cloud-native developer tools vendor. Today it announced an expanded partnership with Microsoft around simplifying running Docker containers in Azure.

As its new mission suggests, it involves tighter integration between Docker and a couple of Azure developer tools including Visual Studio Code and Azure Container Instances (ACI). According to Docker, it can take developers hours or even days to set up their containerized environment across the two sets of tools.

The idea of the integration is to make it easier, faster and more efficient to include Docker containers when developing applications with the Microsoft tool set. Docker CEO Scott Johnston says it’s a matter of giving developers a better experience.

“Extending our strategic relationship with Microsoft will further reduce the complexity of building, sharing and running cloud-native, microservices-based applications for developers. Docker and VS Code are two of the most beloved developer tools and we are proud to bring them together to deliver a better experience for developers building container-based apps for Azure Container Instances,” Johnston said in a statement.

Among the features they are announcing is the ability to log into Azure directly from the Docker command line interface, a big simplification that reduces going back and forth between the two sets of tools. What’s more, developers can set up a Microsoft ACI environment complete with a set of configuration defaults. Developers will also be able to switch easily between their local desktop instance and the cloud to run applications.

These and other integrations are designed to make it easier for common Azure and Docker users to work in the Microsoft cloud service without having to jump through a lot of extra hoops to do it.

It’s worth noting that these integrations are starting in Beta, but the company promises they should be released some time in the second half of this year.

Dec
04
2019
--

GitGuardian raises $12M to help developers write more secure code and ‘fix’ GitHub leaks

Data breaches that can cause millions of dollars in damages have been the bane of many a company. Preventing them requires a great deal of real-time monitoring, but the problem is that this world has become incredibly complex. A SANS Institute survey found that half of company data breaches were the result of account or credential hacking.

GitGuardian has attempted to address this with a highly developer-centric cybersecurity solution.

It’s now attracted the attention of major investors, to the tune of $12 million in Series A funding, led by Balderton Capital. Scott Chacon, co-founder of GitHub, and Solomon Hykes, founder of Docker, also participated in the round.

The startup plans to use the investment from Balderton Capital to expand its customer base, predominantly in the U.S. Around 75% of its clients are currently based in the U.S., with the remainder being based in Europe, and the funding will continue to drive this expansion.

Built to uncover sensitive company information hiding in online repositories, GitGuardian says its real-time monitoring platform can address the data leaks issues. Modern enterprise software developers have to integrate multiple internal and third-party services. That means they need incredibly sensitive “secrets,” such as login details, API keys and private cryptographic keys used to protect confidential systems and data.

GitGuardian’s systems detect thousands of credential leaks per day. The team originally built its launch platform with public GitHub in mind; however, GitGuardian is built as a private solution to monitor and notify on secrets that are inappropriately disseminated in internal systems as well, such as private code repositories or messaging systems.

Solomon Hykes, founder of Docker and investor at GitGuardian, said: “Securing your systems starts with securing your software development process. GitGuardian understands this, and they have built a pragmatic solution to an acute security problem. Their credentials monitoring system is a must-have for any serious organization.”

Do they have any competitors?

Co-founder Jérémy Thomas told me: “We currently don’t have any direct competitors. This generally means that there’s no market, or the market is too small to be interesting. In our case, our fundraise proves we’ve put our hands on something huge. So the reason we don’t have competitors is because the problem we’re solving is counterintuitive at first sight. Ask any developer, they will say they would never hardcode any secret in public source code. However, humans make mistakes and when that happens, they can be extremely serious: it can take a single leaked credential to jeopardize an entire organization. To conclude, I’d say our real competitors so far are black hat hackers. Black hat activity is real on GitHub. For two years, we’ve been monitoring organized groups of hackers that exchange sensitive information they find on the platform. We are competing with them on speed of detection and scope of vulnerabilities covered.”

Nov
13
2019
--

Mirantis acquires Docker Enterprise

Mirantis today announced that it has acquired Docker’s Enterprise business and team. Docker Enterprise was very much the heart of Docker’s product lineup, so this sale leaves Docker as a shell of its former, high-flying unicorn self. Docker itself, which installed a new CEO earlier this year, says it will continue to focus on tools that will advance developers’ workflows. Mirantis will keep the Docker Enterprise brand alive, though, which will surely not create any confusion.

With this deal, Mirantis is acquiring Docker Enterprise Technology Platform and all associated IP: Docker Enterprise Engine, Docker Trusted Registry, Docker Unified Control Plane and Docker CLI. It will also inherit all Docker Enterprise customers and contracts, as well as its strategic technology alliances and partner programs. Docker and Mirantis say they will both continue to work on the Docker platform’s open-source pieces.

The companies did not disclose the price of the acquisition, but it’s surely nowhere near Docker’s valuation during any of its last funding rounds. Indeed, it’s no secret that Docker’s fortunes changed quite a bit over the years, from leading the container revolution to becoming somewhat of an afterthought after Google open-sourced Kubernetes and the rest of the industry coalesced around it. It still had a healthy enterprise business, though, with plenty of customers among large enterprises. The company says about a third of Fortune 100 and a fifth of Global 500 companies use Docker Enterprise, which is a statistic most companies would love to be able to highlight — and which makes this sale a bit puzzling from Docker’s side, unless the company assumed that few of these customers were going to continue to bet on its technology.

Update: for reasons only known to Docker’s communications team, we weren’t told about this beforehand, but the company also today announced that it has raised a $35 million funding round from Benchmark. This doesn’t change the overall gist of the story below, but it does highlight the company’s new direction.

Here is what Docker itself had to say. “Docker is ushering in a new era with a return to our roots by focusing on advancing developers’ workflows when building, sharing and running modern applications. As part of this refocus, Mirantis announced it has acquired the Docker Enterprise platform business,” Docker said in a statement when asked about this change. “Moving forward, we will expand Docker Desktop and Docker Hub’s roles in the developer workflow for modern apps. Specifically, we are investing in expanding our cloud services to enable developers to quickly discover technologies for use when building applications, to easily share these apps with teammates and the community, and to run apps frictionlessly on any Kubernetes endpoint, whether locally or in the cloud.”

Mirantis itself, too, went through its ups and downs. While it started as a well-funded OpenStack distribution, today’s Mirantis focuses on offering a Kubernetes-centric on-premises cloud platform and application delivery. As the company’s CEO Adrian Ionel told me ahead of today’s announcement, today is possibly the most important day for the company.

So what will Mirantis do with Docker Enterprise? “Docker Enterprise is absolutely aligned and an accelerator of the direction that we were already on,” Ionel told me. “We were very much moving towards Kubernetes and containers aimed at multi-cloud and hybrid and edge use cases, with these goals to deliver a consistent experience to developers on any infrastructure anywhere — public clouds, hybrid clouds, multi-cloud and edge use cases — and make it very easy, on-demand, and remove any operational concerns or burdens for developers or infrastructure owners.”

Mirantis previously had about 450 employees. With this acquisition, it gains another 300 former Docker employees that it needs to integrate into its organization. Docker’s field marketing and sales teams will remain separate for some time, though, Ionel said, before they will be integrated. “Our most important goal is to create no disruptions for customers,” he noted. “So we’ll maintain an excellent customer experience, while at the same time bringing the teams together.”

This also means that for current Docker Enterprise customers, nothing will change in the near future. Mirantis says that it will accelerate the development of the product and merge its Kubernetes and lifecycle management technology into it. Over time, it will also offer a managed services solution for Docker Enterprise.

While there is already some overlap between Mirantis’ and Docker Enterprise’s customer base, Mirantis will pick up about 700 new enterprise customers with this acquisition.

With this, Ionel argues, Mirantis is positioned to go up against large players like VMware and IBM/Red Hat. “We are the one real cloud-native player with meaningful scale to provide an alternative to them without lock-in into a legacy or existing technology stack.”

While this is clearly a day the Mirantis team is celebrating, it’s hard not to look at this as the end of an era for Docker, too. The company says it will share more about its future plans today, but didn’t make any spokespeople available ahead of this announcement.

Aug
27
2019
--

Kubernetes – Introduction to Containers

Here at Percona’s Training and Education department, we are always working to keep our materials up-to-date and relevant to current technologies. In addition, we keep an eye out for which topics are “hot” and generating a lot of buzz. Unless you’ve been living under a rock for the past year, Kubernetes is the current buzzword/technology. This is the first post in this blog series where we will dive into Kubernetes, and explore using it with Percona XtraDB Cluster.

Editor Disclaimer: This post is not intended to convince you to switch to a containerized environment. It simply aims to educate about a growing trend in the industry.

In The Beginning…

Let’s start at the beginning. First, there were hardware-based servers; we still have these today. Despite the prevalence of “the cloud”, many large companies still use private datacenters and run their databases and applications “the traditional way.” Stepping up from hardware, we venture into virtual machine territory. Popular solutions here are VMware (a commercial product), QEMU, KVM, and VirtualBox (the latter three are OSS).

Virtual Machines (VM) abstract away the hardware on which they are running using emulation software; this is generally referred to as the hypervisor. Such a methodology allows for an entirely separate operating system (OS) to run “on top of” the OS running natively against the hardware. For example, you may have CentOS 7 running inside a VM, which in turn, is running on your Mac laptop, or you could have Windows 10 running inside a VM, which is running on your Ubuntu 18 desktop PC.

There are several nice things about using VMs in your infrastructure. One of those is the ability to move a VM from one server to another. Likewise, deploying 10 more VMs just like the first one is usually just a couple of clicks or a few commands. Another is a better utilization of hardware resources. If your servers have 8 CPU cores and 128GB of memory, running one application on that server might not fully utilize all that hardware. Instead, you could run multiple VMs on that one server, thus taking advantage of all the resources. Additionally, using a VM allows you to bundle your application, and all associated libraries, and configs into one “unit” that can be deployed and copied very easily (We’ll come back to this concept in a minute).

One of the major downsides of the VM solution is the fact that you have an entire, separate OS running. It is a huge waste of resources to run a CentOS 7 VM on top of a CentOS 7 hardware server. If you have 5 CentOS 7 VMs running, you have five separate Linux kernels (in addition to the host kernel), five separate copies of /usr, and five separate copies of the same libraries and application code taking up precious memory. Some might say that, in this setup, 90-99% of each VM is duplicating efforts already being made by the base OS. Additionally, each VM is dealing with its own CPU context switching management. Also, what happens when one VM needs a bit more CPU than another? There exists the concept of CPU “stealing”, where one VM can take CPU from another VM. Imagine if that was your critical DB that had CPU stolen!

What if you could eliminate all that duplication but still give your application the ability to be bundled and deployed across tens or hundreds of servers with just a few commands? Enter: containers.

What is a Container?

Simply put, a container is a type of “lightweight” virtual machine without all the overhead of running an independent operating system on top of a hypervisor. A container can be as small as a single line script or as large as <insert super complicated application with lots of libraries, code, images, etc.>.

With containers, you essentially get all of the “pros” surrounding VMs with a lot less “cons,” mainly in the area of performance. A container is not an emulation, like VMs. On Linux, the kernel provides control groups (cgroups) functionality, which can limit and isolate CPU, memory, disk, and network. This action of isolation becomes the container; everything needed is ‘contained’ in one place/group. In more advanced setups, these cgroups can even limit/throttle the number of resources given to each container. With containers, there is no need for an additional OS install; no emulation, no hypervisor. Because of this, you can achieve “near bare-metal performance” while using containers.
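
To make this concrete, here is a minimal sketch of the underlying kernel mechanism, assuming a cgroup v1 (cgroupfs) hierarchy; on cgroup v2 the file names differ (for example, memory.max instead of memory.limit_in_bytes):

# Create a cgroup and cap the memory of everything placed in it at 512MB.
sudo mkdir /sys/fs/cgroup/memory/demo
echo $((512*1024*1024)) | sudo tee /sys/fs/cgroup/memory/demo/memory.limit_in_bytes
# Move the current shell into the group; any process it starts inherits the limit.
echo $$ | sudo tee /sys/fs/cgroup/memory/demo/cgroup.procs

Container runtimes automate exactly this kind of bookkeeping for you.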

Containers, and any applications running inside the containers, run inside their own, isolated, namespace, and thus, a container can only see its own process, disk contents, devices assigned, and granted network access.

Much like VMs, containers can be started, stopped, and copied from machine to machine, making deployment of applications across hundreds of servers quite easy.

One potential downside to containers is that they are inherently stateless. This means that if you have a running container, and make a change to its configuration or update a library, etc., that state is not saved if the container is restarted. The initial state of the launched container will be restored on restart. That said, most container implementations provide a method for supplying external storage which can hold stateful information to survive a restart or redeployment.
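
As an example of such external storage (a sketch using Docker, which we introduce in the next section; the volume name, image tag and password are illustrative), a named volume keeps a database’s data directory alive across container restarts and redeployments:

# Create a named volume and mount it at the database's data directory.
docker volume create mysql-data
docker run -d --name db -v mysql-data:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=secret percona:8.0

# Removing and recreating the container keeps the data, because it lives in the volume.
docker rm -f db
docker run -d --name db -v mysql-data:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=secret percona:8.0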

Dock the Ship, Capt’n

The concept of “containers” is generic. Many different implementations of containers exist, much like different distributions of Linux. The most popular implementation of containers in the Linux space is Docker. Docker is free, open-source software, but can also be sold with a commercial license for more advanced features. Docker is built upon one of the earlier linux container implementations, LXC.

Docker uses the concept of “images” to distribute containers. For example, you can download and install the Percona MySQL 8.0 image to your server, and from that image, create as many separate, independent containers of ‘Percona MySQL 8’ as your hardware can handle. Without getting too deep into the specifics, Docker images are the end result of multiple compressed layers which were used during the building process of this image. Herein lies another pro for Docker. If multiple images share common layers, there only needs to be one copy of that layer on any system, thus the potential for space savings can be huge.

Another nice feature of Docker containers is their built-in versioning capabilities. If you are familiar with the version control software Git, then you should know the concept of “tags”, which is extremely similar to Docker tags. These tags allow you to track and specify Docker images when creating and deploying images. For example, using the same image (percona/percona-server), you can deploy percona/percona-server:5.7 or percona/percona-server:8.0. In this example, “5.7” and “8.0” are the tags used.
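
In practice (a minimal sketch; adjust the image tags, container names and root password to your needs), pulling and running two differently tagged versions of the same image side by side looks like this:

# Pull two versions of the same image; layers they share are stored only once.
docker pull percona/percona-server:5.7
docker pull percona/percona-server:8.0

# Run independent containers from each tag.
docker run -d --name ps57 -e MYSQL_ROOT_PASSWORD=secret percona/percona-server:5.7
docker run -d --name ps80 -e MYSQL_ROOT_PASSWORD=secret percona/percona-server:8.0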

Docker also has a huge community ecosystem for sharing images. Visit https://hub.docker.com/ and search for just about any popular OSS project and you will probably find out that someone has already created an image for you to utilize.

Too Many Ships at Port

Docker containers are extremely simple to maintain, deploy, and use within your infrastructure. But what happens when you have hundreds of servers, and you’re trying to manage thousands of docker containers? Sure, if one server goes down, all you need to do, technically speaking, is re-launch those downed containers on another server. Do you have another server in your sea of servers with enough free capacity to handle this additional load? What if you were using persistent storage and need that volume moved along with the container? Running and managing large installations of containers can quickly become overwhelming.

Now we will discuss the topic of container orchestration.

Kubernetes

Kubernetes, pronounced koo-ber-net-ies and often abbreviated K8S (there are 8 letters between the ‘K’ and the last ‘S’), is the current mainstream project for managing Docker containers at very large scale. Kubernetes is Greek for “governor”, “helmsman”, or “captain”. Rather than repeating it here, you can read more about the history of K8S on your own.

Terminology

K8S has many moving parts to itself, as you might have imagined. Take a look at this image.

[Kubernetes architecture diagram — source: Wikipedia (modified)]

Nodes

At the bottom of the image are two ‘Kubernetes Nodes’. These are usually your physical hardware servers, but could also be VMs at your cloud provider. These nodes will have a minimum OS installed, along with other necessary packages like K8S and Docker.

Kubelet is a management binary used by K8S to watch existing containers, run health checks, and launch new containers.

Kube-proxy is a simple port-mapping proxy service. Let’s say you had a simple webserver running as a container on 172.14.11.109:34223. Your end-users would access an external/public IP over 443 like normal, and kube-proxy would map/route that external access to the correct internal container.

Pods

The next concept to understand is a pod. This is typically the base term used when discussing what is running on a node. A pod can be a single container or can be an entire setup, running multiple containers, with mapped external storage, that should be grouped together as a single unit.
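
As a small, hedged example (assuming kubectl is already configured against a working cluster; the pod name and the nginx image are arbitrary choices), a single-container pod can be declared and created like this:

# Declare a minimal single-container pod and create it.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: web
  labels:
    app: web
spec:
  containers:
  - name: web
    image: nginx
    ports:
    - containerPort: 80
EOF

# Check that the pod was scheduled onto a node, then expose it so kube-proxy
# routes external traffic to it.
kubectl get pods -o wide
kubectl expose pod web --port=80 --type=NodePort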

Networking

Networking within K8S is a complex topic, and a deep discussion is not suitable for this post. At an entry-level, what you need to know is that K8S creates an “overlay” network, allowing a container in a pod on node1 to talk to a container in a pod on node322 which is in another datacenter. Should your company policy require it, you can create multiple overlay networks across your entire K8S and use them as a way to isolate traffic between pods (ie: think old-school VLANs).

Master Node

At the top of the diagram, you can see the developer/operator interacting with the K8S master node. This node runs several processes which are critical to K8S operation:

kube-apiserver: A simple server to handle/route/process API requests from other K8S tools and even your own custom tools

etcd: A highly available key-value store. This is where K8S stores all of its stateful data.

kube-scheduler: Manages inventory and decides where to launch new pods based on available node resources.

You might have noticed that there is only 1 master node in the above diagram. This is the default installation. This is not highly available and should the master node go down, the entire K8S infrastructure will go down with it. In a true production environment, it is highly recommended that you run the master node in a highly-available manner.
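
Once a cluster is up, a couple of kubectl commands (a generic sketch, nothing cluster-specific assumed) confirm that the master components and the worker nodes are reachable:

# Show the API server endpoint and core cluster services.
kubectl cluster-info

# List the nodes registered with the API server, including their roles and status.
kubectl get nodes -o wide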

Wrap It Up

In this post, we discussed how traditional hardware-based setups have been slowly migrating, first towards virtual machine setups and now to containerized deployments. We brought up the issues surrounding managing large container environments and introduced the leading container orchestration platform, Kubernetes. Lastly, we dove into K8S’s various components and architecture.

Once you have your K8S cluster set up with a master node, and many worker nodes, you’ll finally be able to launch some pods and have services accessible. This is what we will do in part 2 of this blog series.

Aug
21
2019
--

Cleaning Docker Disk Space Usage

While testing our PMM2 Beta and other Dockerized applications, you may want to clean up your Docker install and start from scratch. If you’re not very familiar with Docker, that may not be trivial. This post focuses on cleaning up everything in Docker so you can “start from scratch”, which means removing all your containers and their data. Make sure this is what you really want!

First, you have running (and stopped) containers:

root@pmm2-test:~# docker ps -a
CONTAINER ID        IMAGE                               COMMAND                CREATED             STATUS              PORTS                                      NAMES
810fef9a2a05        perconalab/pmm-server:2.0.0-beta5   "/opt/entrypoint.sh"   2 weeks ago         Up 2 weeks          0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp   pmm-server-2.0.0-beta5
7e2595b2b7ad        perconalab/pmm-server:2.0.0-beta5   "/bin/true"            2 weeks ago         Created                                                        pmm-data-2-0-0-beta5

These you can stop and remove:

root@pmm2-test:~# docker stop $(docker ps -a -q) && docker rm $(docker ps -a -q)
810fef9a2a05
7e2595b2b7ad
810fef9a2a05
7e2595b2b7ad

You may feel like you are done at this point, but not really. You may have a lot of disk space held up in other Docker objects:

root@pmm2-test:~# docker system df
TYPE                TOTAL               ACTIVE              SIZE                RECLAIMABLE
Images              1                   0                   1.331GB             1.331GB (100%)
Containers          0                   0                   0B                  0B
Local Volumes       1                   0                   52.61GB             52.61GB (100%)
Build Cache         0                   0                   0B                  0B

Now we can see that we have about 1GB still held up in images and the larger amount of 52GB in local volumes. To clean these up, we want to see their names, which you can do with the same “df” command:

root@pmm2-test:~# docker system df -v
Images space usage:

REPOSITORY              TAG                 IMAGE ID            CREATED             SIZE                SHARED SIZE         UNIQUE SIZE         CONTAINERS
perconalab/pmm-server   2.0.0-beta5         9b3111808907        2 weeks ago         1.331GB             0B                  1.331GB             0

Containers space usage:

CONTAINER ID        IMAGE               COMMAND             LOCAL VOLUMES       SIZE                CREATED             STATUS              NAMES

Local Volumes space usage:

VOLUME NAME                                                        LINKS               SIZE
a3e9e89f3c43bdaa817e014d08e73705562a9affe4dae54817d77a1b8ad0c771   0                   52.61GB

Build cache usage: 0B

CACHE ID            CACHE TYPE          SIZE                CREATED             LAST USED           USAGE               SHARED

From here you can, of course, remove volumes and images one by one, but you may choose to do this instead:

Remove All Docker Images:

root@pmm2-test:~# docker image prune -a -f
Deleted Images:
untagged: perconalab/pmm-server:2.0.0-beta5
untagged: perconalab/pmm-server@sha256:55968571b01e1f2b02cee77ef92afa05f93c1a0b8e506ceebc10d03bff9783be
deleted: sha256:9b3111808907a3cd02d4961b114471c8ec16dc0dc430cfd8c4a4f556804e57ec
deleted: sha256:4cb3b12fc91c29837374de51f6279d911c9e21d73d22a009841ac9a00e6a7e96
deleted: sha256:d69483a6face4499acb974449d1303591fcbb5cdce5420f36f8a6607bda11854

Total reclaimed space: 1.331GB

And Remove all Docker Local Volumes:

root@pmm2-test:~#  docker system prune --volumes -f
Deleted Volumes:
a3e9e89f3c43bdaa817e014d08e73705562a9affe4dae54817d77a1b8ad0c771

Total reclaimed space: 52.61GB
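
If you prefer a single step, the two prunes above can be combined (same effect, so be just as careful before running it):

# Remove all unused images, stopped containers, networks and volumes in one pass.
docker system prune -a --volumes -f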

We can run “docker system df” again to see if we really cleaned up everything or if anything else remains:

root@pmm2-test:~# docker system df
TYPE                TOTAL               ACTIVE              SIZE                RECLAIMABLE
Images              0                   0                   0B                  0B
Containers          0                   0                   0B                  0B
Local Volumes       0                   0                   0B                  0B
Build Cache         0                   0                   0B                  0B

All clear. We can now have a fresh start with docker!

Jul
11
2019
--

Docker Security Considerations Part I

Why Docker Security Matters

Docker has found widespread use over the past few years, mostly because it is very easy to use, and fast and easy to deploy when compared with a full-blown virtual machine. More and more servers are being operated as Docker hosts on which micro-services run in containers. From a security point of view, two aspects arise in the context of this blog post (and within the inherent limitations a blog post has as a means of communication).

First, the answer to the already much-discussed question, “Is it secure?” Second, the less analyzed aspect of incident analysis and the changes Docker introduces with respect to known methods and evidence. In order to answer the first question, some key facts about Docker need to be noted. The second aspect, being less examined in general, will be covered in a future post in this mini-series about Docker security.

The basic technology that enables containers in Linux is kernel namespaces. These are what allow the kernel to offer private resources to a process, while hiding the resources of other processes, all without actually creating full guest OSes or kernels per process. One very important fact is that containers run without cgroup limitations by default, but CPU and memory usage can easily be constrained with the application of a few flags. Docker provides an option called --cgroup-parent for this purpose; you can check the official Docker documentation on cgroup-parent. Under no circumstances should containers (or any process) be allowed to run on a production server without such cgroup limitations. In addition, predefined resource consumption for containers could help limit the cost of cloud infrastructure, as it better defines that consumption, leading to more targeted purchases.
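
A hedged sketch of what such limits look like in practice (the parent cgroup path, the nginx image and the limit values are placeholders; the path form assumes the default cgroupfs driver, while the systemd driver expects a .slice unit name instead):

# Constrain CPU and memory and place the container's cgroup under an explicit parent.
docker run -d --cgroup-parent=/docker-limited --cpus=1 --memory=512m nginx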

Docker uses a default profile to create a system call whitelist. In this profile, “dangerous” syscalls are missing. If you have a solid understanding of exactly the system calls needed by your containerized process, you can customize this profile or write your own to lock out every syscall not needed by your container. Also, note that custom profiles can be applied on a container-by-container basis. In addition, Linux capabilities can be a very good way with regards to considerations on Docker security. Quoting from the man-pages:

For the purpose of performing permission checks, traditional UNIX implementations distinguish two categories of processes: privileged processes (whose effective user ID is 0, referred to as superuser or root), and unprivileged processes (whose effective UID is nonzero). Privileged processes bypass all kernel permission checks, while unprivileged processes are subject to full permission checking based on the process’s credentials (usually: effective UID, effective GID, and supplementary group list).

Starting with kernel 2.2, Linux divides the privileges traditionally associated with superuser into distinct units, known as capabilities, which can be independently enabled and disabled.  Capabilities are a per-thread attribute.

In the example below, we can see the basic syntax for manipulating and auditing capabilities. Perhaps the most secure way to run Docker is using the --user flag (no capabilities), but this is not always possible since there is no way to add capabilities to a nonzero UID. In cases where capabilities are needed, running as root with --cap-drop ALL and --cap-add to add back the minimum set of capabilities is the most secure practical configuration. Note that since a Docker container is fundamentally just a process, it’s simple to enforce these permissions at runtime using the native Linux functionality designed to manage capabilities per process. No ‘under-the-hood’ configuration inside the container is required, whereas it would be needed to manipulate the capabilities of groups of processes running in a VM.

A closer look at these should highlight the need to manage them and will give us a better understanding of what’s going on. The default list of capabilities available to privileged processes in a Docker container (mostly quoting https://www.redhat.com/en/blog/secure-your-containers-one-weird-trick) is presented below. A complete list of capabilities can be found at http://man7.org/linux/man-pages/man7/capabilities.7.html. At this point we should note that in the Compose file format v3 and above, capabilities can be added or dropped for a service; e.g.

cap_drop:
  - ALL
cap_add:
  - chown

The default list of capabilities available in a Docker container:

chown:

This is described as the ability to make arbitrary changes to file UIDs and GIDs. Unless a shell is running inside the container and packages are being installed into it, this should be dropped. If chown is needed, allow it, do the work, and then drop it again.

dac_override:

(Discretionary access control) override.

The man pages state that this capability allows root to bypass file read, write and execute permission checks. It is as odd as it sounds that any app would need this. As noted by Red Hat security experts, “nothing should need this. If your container needs this, it is probably doing something horrible.”

fowner:

fowner transfers the ability to bypass permission checks on operations that normally require the file system UID to match the UID of the file. It is quite similar to dac_override. It could be needed for docker build, but it should be dropped when the container is in production.

fsetid:

Quoting the man page “don’t clear set-user-ID and set-group-ID mode bits when a file is modified; set the set-group-ID bit for a file whose GID does not match the file system or any of the supplementary GIDs of the calling process.”

So, if you are not running an installation, you probably do not need this capability.

kill

This capability basically means that a root-owned process can send kill signals to non-root processes. If your container is running all processes as root or the root processes never kills processes running as non-root, you do not need this capability. If you are running systemd as PID 1 inside of a container, and you want to stop a container running with a different UID, you might need this capability.

setgid

In short, a process with this capability can change its GID to any other GID. Basically allows full group access to all files on the system. If your container processes do not change UIDs/GIDs, they do not need this capability.

setuid

A process with this capability can change its UID to any other UID. Basically, it allows full access to all files on the system. If your container processes do not change UIDs/GIDs always running as the same UID, preferably non-root, they do not need this capability. Applications that need setuid usually start as root in order to bind to ports below 1024 and then change their UIDS and drop capabilities. Apache binding to port 80 requires net_bind_service, usually starting as root. It then needs setuid/setgid to switch to the apache user and drop capabilities. Most containers can safely drop setuid/setgid capability.

setpcap

From the man page description: “Add any capability from the calling thread’s bounding set to its inheritable set; drop capabilities from the bounding set (via prctl(2) PR_CAPBSET_DROP); make changes to the securebits flags.”

A process with this capability can change its current capability set within its bounding set. Meaning a process could drop capabilities or add capabilities if it did not currently have them, but is limited by the bounding set capabilities.

net_bind_service

If you have this capability, you can bind to privileged ports (< 1024).

When binding to a port below 1024 is needed, this capability will do the job. For services that listen on a port above 1024, this capability should be dropped. The risk of this capability is a rogue process impersonating a service like sshd and collecting user passwords. Running a container in a different network namespace reduces the risk of this capability, since it would be difficult for the container process to get to the public network interface.

net_raw

This access allows a process to spy on packets on its network. Most container processes would not need this access so it probably should be dropped. Note this would only affect the containers that share the same network that your container process is running on, usually preventing access to the real network. RAW sockets also give an attacker the ability to inject almost anything onto the network.

sys_chroot

This capability allows the use of chroot(). In other words, it allows your processes to chroot into a different rootfs. chroot is probably not used within your container, so it should be dropped.

mknod

If you have this capability, you can create special files using mknod.

This allows your processes to create device nodes. Containers are usually provided all of the device nodes they need in /dev, so this could and should be dropped by default. Almost no containers ever do this, and even fewer containers should do this.

audit_write

With this capability, you can write a message to the kernel auditing log. Few processes attempt to write to the audit log (login programs, su, sudo) and processes inside of the container are probably not trusted. The audit subsystem is not currently namespace-aware, so this should be dropped by default.

setfcap

Finally, the setfcap capability allows you to set file capabilities on a file system. Might be needed for doing installs during builds, but in production, it should probably be dropped.

The following exercise, the output of which is displayed below, should highlight the way of manipulating capabilities along with some of the points made above.

First, we run a container that changes ownership of the root fs:

Status: Image is up to date for centos:latest
john@seceng:~/Documents/docker-sec$
john@seceng:~/Documents/docker-sec$ docker container run --rm centos:latest chown nobody /
john@seceng:~/Documents/docker-sec$

We can see that the root user is able to execute this without a problem.

Next, we drop all capabilities from the user in the container, and when we try running the above command again, we can see it fails, as the user no longer has the necessary chown capability (or any other root capabilities):

john@seceng:~/Documents/docker-sec$ docker container run --rm --cap-drop all centos:latest chown nobody /
chown: changing ownership of '/': Operation not permitted
john@seceng:~/Documents/docker-sec$

Now we add the necessary capability back in:

john@seceng:~/Documents/docker-sec$ docker container run --rm --cap-drop all --cap-add chown centos:latest chown nobody /
john@seceng:~/Documents/docker-sec$

Dropping all capabilities and adding back only the required ones makes for the most secure containers. An attacker who breaches this container won’t have any root capabilities except chown.

Now, if we try adding chown to a non-root user, the operation fails, since capabilities manipulation won’t grant additional capabilities under any circumstances to a process run by a non-root user, as shown in the output below.

john@seceng:~/Documents/docker-sec$ docker container run --rm --user 1000:1000 --cap-add chown centos:latest chown nobody /
chown: changing ownership of '/': Operation not permitted
john@seceng:~/Documents/docker-sec$

Another Linux feature that can aid in securing our containers is secure computing mode (seccomp). Seccomp is a Linux kernel feature that you can use to restrict the actions available within the container. The seccomp() system call operates on the seccomp state of the calling process, and you can use this feature to restrict your application’s access.

This feature is available only if Docker has been built with seccomp and the kernel is configured with CONFIG_SECCOMP enabled. More info about seccomp profiles for Docker can be found in Docker’s official documentation. To check if your kernel supports seccomp:

john@seceng:~/Documents/docker-sec$ grep CONFIG_SECCOMP= /boot/config-$(uname -r)
CONFIG_SECCOMP=y
john@seceng:~/Documents/docker-sec$
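
As a hedged sketch (the profile path is a placeholder; Docker’s default profile is a sensible starting point for writing a custom one), a container can be started under an explicit seccomp profile, or, for comparison only, with seccomp disabled:

# Run a container under a custom seccomp profile (a JSON whitelist of allowed syscalls).
docker run --rm --security-opt seccomp=/path/to/custom-profile.json centos:latest uname -a

# For debugging only: run without any seccomp filtering (not recommended in production).
docker run --rm --security-opt seccomp=unconfined centos:latest uname -a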

While capabilities and seccomp allow restricting access to root capabilities and system calls, SELinux goes a step further and helps protect the file system of the host from malicious code. Default SELinux labels restrict access to directories and those labels can be changed by the :z and :Z flags when mounting directories to relabel those directories for sharing or private use by containers, respectively.
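
For example (a minimal sketch; the host paths are illustrative and the labels only matter on an SELinux-enabled host), the suffix on a bind mount decides whether the relabeled content is shared between containers or private to one:

# Relabel the host directory so that any container can use it (shared label).
docker run --rm -v /srv/shared-data:/data:z centos:latest ls /data

# Relabel it for exclusive use by this one container (private label).
docker run --rm -v /srv/private-data:/data:Z centos:latest ls /data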

Additionally, AppArmor (Application Armor) is a Linux security module that protects an operating system and its applications from security threats. To use it, a system administrator associates an AppArmor security profile with each program. Docker expects to find an AppArmor policy loaded and enforced. Docker automatically generates and loads a default profile for containers named docker-default. On Docker versions 1.13.0 and later, the Docker binary generates this profile in tmpfs and then loads it into the kernel. On Docker versions earlier than 1.13.0, this profile is generated in /etc/apparmor.d/docker instead. It should be noted, though, that this profile is used on containers, not on the Docker daemon. More on AppArmor and Docker can be found at docker’s official documentation.

Last in this list of facts, but certainly not in general, is the default software-defined network firewalling imposed by Docker, and the iptables rules automatically managed by Docker. The Docker Networking Model was designed around software-defined networks exactly so that Docker could easily manage iptables firewall rules on create and destroy operations of Docker networks and containers. Rather than writing complicated iptables rules explicitly, effective firewalling can be imposed on a containerized data center simply by designing and declaring a sensible software-defined network topology.
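
A brief sketch of that idea (network and container names are arbitrary, and my-app-image is a hypothetical application image): putting only the services that need to talk to each other on a shared user-defined network lets Docker generate the corresponding iptables rules for you, while the database stays unreachable from the host’s public interface.

# Create an isolated user-defined bridge network for backend traffic only.
docker network create backend

# The database is reachable from the app container over the backend network,
# but no port of it is published on the host.
docker run -d --name db --network backend -e MYSQL_ROOT_PASSWORD=secret percona:8.0
docker run -d --name app --network backend -p 8080:8080 my-app-image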

Fundamentally, but certainly not exclusively, Docker security considerations arise because the Docker daemon runs as root and is widely used in production systems. Having said that, and taking into account the previously mentioned facts, one can easily understand that configuration mistakes, bugs, vulnerabilities and other security issues carry high risk and high impact. With this in mind, the following example, which highlights the non-secure default configuration of Docker, is probably the best “teaser” for the next part of this post, in which we will cover some recent vulnerabilities and their mitigation.

john@seceng:~/Documents/docker-sec$ id

uid=1000(john) gid=1000(john) groups=1000(john),24(cdrom),25(floppy),27(sudo),29(audio),30(dip),44(video),46(plugdev),108(netdev),998(docker)
john@seceng:~/Documents/docker-sec$ whoami

john
john@seceng:~/Documents/docker-sec$ docker run -v /home/${USER}:/h_docs ubuntu bash -c "cp /bin/bash /h_docs/rootshell && chmod 4777 /h_docs/rootshell;" && ~/rootshell -p

rootshell-4.4# id
uid=1000(john) gid=1000(john) euid=0(root) groups=1000(john),24(cdrom),25(floppy),27(sudo),29(audio),30(dip),44(video),46(plugdev),108(netdev),998(docker)
rootshell-4.4# whoami

root
rootshell-4.4# uname -a

Linux seceng 4.19.0-5-amd64 #1 SMP Debian 4.19.37-5 (2019-06-19) x86_64 GNU/Linux
rootshell-4.4# exit

exit
john@seceng:~/Documents/docker-sec$

 

From user to root, in one line. It’s there by default.

John Lionis, Security Engineer | Percona LLC

May
21
2019
--

Microsoft makes a push for service mesh interoperability

Service meshes. They are the hot new thing in the cloud native computing world. At KubeCon, the bi-annual festival of all things cloud native, Microsoft today announced that it is teaming up with a number of companies in this space to create a generic service mesh interface. This will make it easier for developers to adopt the concept without locking them into a specific technology.

In a world where the number of network endpoints continues to increase as developers launch new micro-services, containers and other systems at a rapid clip, they are making the network smarter again by handling encryption, traffic management and other functions so that the actual applications don’t have to worry about that. With a number of competing service mesh technologies, though, including the likes of Istio and Linkerd, developers currently have to choose which one of these to support.

“I’m really thrilled to see that we were able to pull together a pretty broad consortium of folks from across the industry to help us drive some interoperability in the service mesh space,” Gabe Monroy, Microsoft’s lead product manager for containers and the former CTO of Deis, told me. “This is obviously hot technology — and for good reasons. The cloud-native ecosystem is driving the need for smarter networks and smarter pipes and service mesh technology provides answers.”

The partners here include Buoyant, HashiCorp, Solo.io, Red Hat, AspenMesh, Weaveworks, Docker, Rancher, Pivotal, Kinvolk and VMware. That’s a pretty broad coalition, though it notably doesn’t include cloud heavyweights like Google, the company behind Istio, and AWS.

“In a rapidly evolving ecosystem, having a set of common standards is critical to preserving the best possible end-user experience,” said Idit Levine, founder and CEO of Solo.io. “This was the vision behind SuperGloo — to create an abstraction layer for consistency across different meshes, which led us to the release of Service Mesh Hub last week. We are excited to see service mesh adoption evolve into an industry-level initiative with the SMI specification.”

For the time being, the interoperability features focus on traffic policy, telemetry and traffic management. Monroy argues that these are the most pressing problems right now. He also stressed that this common interface still allows the different service mesh tools to innovate and that developers can always work directly with their APIs when needed. He also stressed that the Service Mesh Interface (SMI), as this new specification is called, does not provide any of its own implementations of these features. It only defines a common set of APIs.

Currently, the most well-known service mesh is probably Istio, which Google, IBM and Lyft launched about two years ago. SMI may just bring a bit more competition to this market since it will allow developers to bet on the overall idea of a service mesh instead of a specific implementation.

In addition to SMI, Microsoft also today announced a couple of other updates around its cloud-native and Kubernetes services. It announced the first alpha of the Helm 3 package manager, for example, as well as the 1.0 release of its Kubernetes extension for Visual Studio Code and the general availability of its AKS virtual nodes, using the open source Virtual Kubelet project.
