Sep 10, 2019

Snyk grabs $70M more to detect security vulnerabilities in open-source code and containers

A growing number of IT breaches has led to security becoming a critical and central aspect of how computing systems are run and maintained. Today, a startup that focuses on one specific area — developing security tools aimed at developers and the work they do — has closed a major funding round that underscores the growth of that area.

Snyk — a London and Boston-based company that got its start identifying and developing security solutions for developers working on open-source code — is today announcing that it has raised $70 million, funding that it will be using to continue expanding its capabilities and overall business. For example, the company has more recently expanded to building security solutions to help developers identify and fix vulnerabilities around containers, an increasingly standard unit of software used to package up and run code across different computing environments.

Open source — Snyk works as an integration into existing developer workflows, compatible with the likes of GitHub, Bitbucket and GitLab, as well as CI/CD pipelines — was an easy target to hit. It’s used in 95% of all enterprises, with up to 77% of open-source components liable to have vulnerabilities, by Snyk’s estimates. Containers are a different issue.

“The security concerns around containers are almost more about ownership than technology,” Guy Podjarny, the president who co-founded the company with Assaf Hefetz and Danny Grander, explained in an interview. “They are in a twilight zone between infrastructure and code. They look like virtual machines and suffer many of the same concerns, such as being unpatched or having permissions that are too permissive.”

While containers are present in fewer than 30% of computing environments today, their growth is on the rise, according to Gartner, which forecasts that by 2022, more than 75% of global organizations will run containerized applications. Snyk estimates that a full 44% of Docker image scans (Docker being one of the major container vendors) have known vulnerabilities.

This latest round is being led by Accel with participation from existing investors GV and Boldstart Ventures. These three, along with a fourth investor (Heavybit), also put $22 million into the company as recently as September 2018. That round was made at a valuation of $100 million, and from what we understand from a source close to the startup, the valuation is now in the “range” of $500 million.

“Accel has a long history in the security market and we believe Snyk is bringing a truly unique, developer-first approach to security in the enterprise,” said Matt Weigand of Accel in a statement. “The strength of Snyk’s customer base, rapidly growing free user community, leadership team and innovative product development prove the company is ready for this next exciting phase of growth and execution.”

Indeed, the company has hit some big milestones in the last year that could explain that hike. It now has some 300,000 developers using it around the globe, with its customer base growing some 200% this year and including the likes of Google, Microsoft, Salesforce and ASOS (side note: when developers at developer-centric companies working at the vanguard of computing, like Google and Microsoft, are using your product, that is a good sign). Notably, that growth has largely come by word of mouth — inbound interest.

The company in July of this year took on a new CEO, Peter McKay, who replaced Podjarny. McKay was the company’s first investor and has a track record in helping to grow large enterprise security businesses, a sign of the trajectory that Snyk is hoping to follow.

“Today, every business, from manufacturing to retail and finance, is becoming a software business,” said McKay. “There is an immediate and fast growing need for software security solutions that scale at the same pace as software development. This investment helps us continue to bring Snyk’s product-led and developer-focused solutions to more companies across the globe, helping them stay secure as they embrace digital innovation – without slowing down.”

Aug 27, 2019

Kubernetes – Introduction to Containers


Here at Percona’s Training and Education department, we are always working to keep our materials up-to-date and relevant to current technologies. In addition, we keep an eye out for which topics are “hot” and generating a lot of buzz. Unless you’ve been living under a rock for the past year, Kubernetes is the current buzzword/technology. This is the first post in this blog series, where we will dive into Kubernetes and explore using it with Percona XtraDB Cluster.

Editor Disclaimer: This post is not intended to convince you to switch to a containerized environment. It simply aims to educate readers about a growing trend in the industry.

In The Beginning…

Let’s start at the beginning. First, there were hardware-based servers; we still have these today. Despite the prevalence of “the cloud”, many large companies still use private datacenters and run their databases and applications “the traditional way.” Stepping up from hardware, we venture into virtual machine territory. Popular solutions here are VMware (a commercial product), QEMU, KVM, and VirtualBox (the latter three are OSS).

Virtual Machines (VM) abstract away the hardware on which they are running using emulation software; this is generally referred to as the hypervisor. Such a methodology allows for an entirely separate operating system (OS) to run “on top of” the OS running natively against the hardware. For example, you may have CentOS 7 running inside a VM, which in turn, is running on your Mac laptop, or you could have Windows 10 running inside a VM, which is running on your Ubuntu 18 desktop PC.

There are several nice things about using VMs in your infrastructure. One of those is the ability to move a VM from one server to another. Likewise, deploying 10 more VMs just like the first one is usually just a couple of clicks or a few commands. Another is a better utilization of hardware resources. If your servers have 8 CPU cores and 128GB of memory, running one application on that server might not fully utilize all that hardware. Instead, you could run multiple VMs on that one server, thus taking advantage of all the resources. Additionally, using a VM allows you to bundle your application, and all associated libraries, and configs into one “unit” that can be deployed and copied very easily (We’ll come back to this concept in a minute).

One of the major downsides of the VM solution is the fact that you have an entire, separate OS running. It is a huge waste of resources to run a CentOS 7 VM on top of a CentOS 7 hardware server. If you have 5 CentOS 7 VMs running, you have five separate Linux kernels (in addition to the host kernel), five separate copies of /usr, and five separate copies of the same libraries and application code taking up precious memory. Some might say that, in this setup, 90-99% of each VM is duplicating effort already being made by the base OS. Additionally, each VM is dealing with its own CPU context-switching management. Also, what happens when one VM needs a bit more CPU than another? There exists the concept of CPU “stealing”, where one VM can take CPU from another VM. Imagine if that was your critical DB that had CPU stolen!

What if you could eliminate all that duplication but still give your application the ability to be bundled and deployed across tens or hundreds of servers with just a few commands? Enter: containers.

What is a Container?

Simply put, a container is a type of “lightweight” virtual machine without all the overhead of running an independent operating system on top of a hypervisor. A container can be as small as a single line script or as large as <insert super complicated application with lots of libraries, code, images, etc.>.

With containers, you essentially get all of the “pros” of VMs with far fewer “cons,” mainly in the area of performance. Unlike a VM, a container is not an emulation. On Linux, the kernel provides control groups (cgroups) functionality, which can limit and isolate CPU, memory, disk, and network. This act of isolation becomes the container; everything needed is ‘contained’ in one place/group. In more advanced setups, these cgroups can even limit/throttle the amount of resources given to each container. With containers, there is no need for an additional OS install; no emulation, no hypervisor. Because of this, you can achieve “near bare-metal performance” while using containers.
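
To make that concrete, here is a minimal sketch using the Docker CLI (the image and the limit values are illustrative only); the --memory and --cpus flags are enforced through cgroups rather than emulation:

```shell
# Start a container capped at 512 MB of RAM and 1.5 CPU cores.
docker run -d --name limited-app --memory=512m --cpus=1.5 nginx:1.17

# Confirm the limits Docker recorded for the running container.
docker inspect --format '{{.HostConfig.Memory}} {{.HostConfig.NanoCpus}}' limited-app
```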

Containers, and any applications running inside them, run inside their own isolated namespace, and thus a container can only see its own processes, disk contents, assigned devices, and granted network access.

Much like VMs, containers can be started, stopped, and copied from machine to machine, making deployment of applications across hundreds of servers quite easy.

One potential downside to containers is that they are inherently stateless. This means that if you have a running container, and make a change to its configuration or update a library, etc., that state is not saved if the container is restarted. The initial state of the launched container will be restored on restart. That said, most container implementations provide a method for supplying external storage which can hold stateful information to survive a restart or redeployment.
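
As a small illustration of that external-storage idea (the image and paths are placeholders, not a recommendation), a Docker named volume outlives the containers that use it:

```shell
# Create a named volume and write a file into it from one container...
docker volume create app-data
docker run --rm -v app-data:/data busybox:1.31 sh -c 'echo "state to keep" > /data/note.txt'

# ...then read it back from a brand-new container. The volume, not the
# container, is what holds the state.
docker run --rm -v app-data:/data busybox:1.31 cat /data/note.txt
```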

Dock the Ship, Capt’n

The concept of “containers” is generic. Many different implementations of containers exist, much like different distributions of Linux. The most popular implementation of containers in the Linux space is Docker. Docker is free, open-source software, but is also offered under a commercial license with more advanced features. Docker is built upon one of the earlier Linux container implementations, LXC.

Docker uses the concept of “images” to distribute containers. For example, you can download and install the Percona MySQL 8.0 image to your server, and from that image, create as many separate, independent containers of ‘Percona MySQL 8’ as your hardware can handle. Without getting too deep into the specifics, Docker images are the end result of multiple compressed layers which were used during the building process of this image. Herein lies another pro for Docker. If multiple images share common layers, there only needs to be one copy of that layer on any system, thus the potential for space savings can be huge.

Another nice feature of Docker containers is their built-in versioning capabilities. If you are familiar with the version control software Git, then the concept of “tags” should sound familiar; Docker tags work in much the same way. These tags allow you to track and specify Docker images when creating and deploying them. For example, using the same image (percona/percona-server), you can deploy percona/percona-server:5.7 or percona/percona-server:8.0. In this example, “5.7” and “8.0” are the tags used.
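
As a quick sketch of tags in action (the root-password variable assumes the image follows the upstream MySQL image conventions), pulling and running a specific tag looks like this:

```shell
# Pull two versions of the same image, distinguished only by their tags.
docker pull percona/percona-server:5.7
docker pull percona/percona-server:8.0

# Run a container from the 8.0 tag.
docker run -d --name ps80 -e MYSQL_ROOT_PASSWORD=secret percona/percona-server:8.0

# Both tags show up as local images under the same repository name.
docker images percona/percona-server
```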

Docker also has a huge community ecosystem for sharing images. Visit https://hub.docker.com/ and search for just about any popular OSS project and you will probably find out that someone has already created an image for you to utilize.

Too Many Ships at Port

Docker containers are extremely simple to maintain, deploy, and use within your infrastructure. But what happens when you have hundreds of servers, and you’re trying to manage thousands of Docker containers? Sure, if one server goes down, all you need to do, technically speaking, is re-launch those downed containers on another server. But do you have another server in your sea of servers with enough free capacity to handle this additional load? What if you were using persistent storage and need that volume moved along with the container? Running and managing large installations of containers can quickly become overwhelming.

Now we will discuss the topic of container orchestration.

Kubernetes

Kubernetes, pronounced koo-ber-net-eez and often abbreviated K8S (there are eight letters between the ‘K’ and the final ‘S’), is the current mainstream project for managing Docker containers at very large scale. Kubernetes is Greek for “governor”, “helmsman”, or “captain”. Rather than repeating it here, you can read more about the history of K8S on your own.

Terminology

K8S has many moving parts, as you might imagine. Take a look at the image below.

Kubernetes architecture diagram. Source: Wikipedia (modified)

Nodes

At the bottom of the image are two ‘Kubernetes Nodes’. These are usually your physical hardware servers, but could also be VMs at your cloud provider. These nodes will have a minimum OS installed, along with other necessary packages like K8S and Docker.

Kubelet is a management binary used by K8S to watch existing containers, run health checks, and launch new containers.

Kube-proxy is a simple port-mapping proxy service. Let’s say you had a simple webserver running as a container on 172.14.11.109:34223. Your end-users would access an external/public IP over 443 like normal, and kube-proxy would map/route that external access to the correct internal container.
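
As a hedged sketch of how that mapping is usually expressed (the pod name, ports, and LoadBalancer type are illustrative; on bare metal you might use a NodePort instead), you put a Service in front of the workload and let kube-proxy on each node do the routing:

```shell
# Expose an existing pod through a Service; traffic arriving on port 443 of
# the Service is routed by kube-proxy to the pod's port 34223.
kubectl expose pod webserver --port=443 --target-port=34223 --type=LoadBalancer

# Show the external IP and port mapping the Service received.
kubectl get service webserver
```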

Pods

The next concept to understand is a pod. This is typically the base term used when discussing what is running on a node. A pod can be a single container, or it can be an entire setup running multiple containers, with mapped external storage, all grouped together and managed as a single unit.
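
To make that concrete, here is a minimal, hypothetical pod definition with two containers that are scheduled together as one unit, applied from a shell heredoc (the image names are placeholders):

```shell
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: web-pod
spec:
  containers:
  - name: web
    image: nginx:1.17
    ports:
    - containerPort: 80
  - name: log-shipper
    image: busybox:1.31
    command: ["sh", "-c", "tail -f /dev/null"]
EOF
```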

Networking

Networking within K8S is a complex topic, and a deep discussion is not suitable for this post. At an entry level, what you need to know is that K8S creates an “overlay” network, allowing a container in a pod on node1 to talk to a container in a pod on node322, which is in another datacenter. Should your company policy require it, you can create multiple overlay networks across your entire K8S cluster and use them as a way to isolate traffic between pods (i.e., think old-school VLANs).

Master Node

At the top of the diagram, you can see the developer/operator interacting with the K8S master node. This node runs several processes which are critical to K8S operation:

kube-apiserver: A simple server to handle/route/process API requests from other K8S tools and even your own custom tools

etcd: A highly available key-value store. This is where K8S stores all of its stateful data.

kube-scheduler: Manages inventory and decides where to launch new pods based on available node resources.

You might have noticed that there is only one master node in the above diagram. This is the default installation. It is not highly available: should the master node go down, the entire K8S control plane goes down with it. In a true production environment, it is highly recommended that you run the master node in a highly available manner.

Wrap It Up

In this post, we discussed how traditional hardware-based setups have been slowly migrating, first towards virtual machine setups and now to containerized deployments. We brought up the issues surrounding managing large container environments and introduced the leading container orchestration platform, Kubernetes. Lastly, we dove into K8S’s various components and architecture.

Once you have your K8S cluster set up with a master node, and many worker nodes, you’ll finally be able to launch some pods and have services accessible. This is what we will do in part 2 of this blog series.

Aug 26, 2019

VMware is bringing VMs and containers together, taking advantage of Heptio acquisition

At VMworld today in San Francisco, VMware introduced a new set of services for managing virtual machines and containers in a single view. Called Tanzu, the product takes advantage of the knowledge the company gained when it acquired Heptio last year.

As companies face an increasingly fragmented landscape of maintaining traditional virtual machines, alongside a more modern containerized Kubernetes environment, managing the two together has created its own set of management challenges for IT. This is further complicated by trying to manage resources across multiple clouds, as well as the in-house data centers. Finally, companies need to manage legacy applications, while looking to build newer containerized applications.

VMware’s Craig McLuckie and fellow Heptio co-founder Joe Beda were part of the original Kubernetes development team. They came to VMware via last year’s acquisition. McLuckie believes that Tanzu can help with all of this by applying the power of Kubernetes across this complex management landscape.

“The intent is to construct a portfolio that has a set of assets that cover every one of these areas, a robust set of capabilities that bring the Kubernetes substrate everywhere — a control plane that enables organizations to start to think about [and view] these highly fragmented deployments with Kubernetes [as the] common lens, and then the technologies you need to be able to bring existing applications forward and to build new application and to support third-party vendors bringing their applications into [this],” McLuckie explained.

It’s an ambitious vision that involves bringing together not only VMware’s traditional VM management tooling and Kubernetes, but also open-source pieces and other recent acquisitions, including Bitnami and CloudHealth, along with Wavefront, which it acquired in 2017. Although the vision was defined long before the acquisition of Pivotal last week, it will also play a role in this. Originally that was as a partner, but now it will be as part of VMware.

The idea is to eventually cover the entire gamut of building, running and managing applications in the enterprise. Among the key pieces introduced today as technology previews are the Tanzu Mission Control, a tool for managing Kubernetes clusters wherever they live, and Project Pacific, which embeds Kubernetes natively into vSphere, the company’s virtualization platform, bringing together virtual machines and containers.


VMware Tanzu (Slide: VMware)

McLuckie sees bringing virtual machines and Kubernetes together in this fashion as providing a couple of key advantages. “One is being able to bring a robust, modern API-driven way of thinking about accessing resources. And it turns out that there is this really good technology for that. It’s called Kubernetes. So being able to bring a Kubernetes control plane to vSphere is creating a new set of experiences for traditional VMware customers that is moving much closer to a kind of cloud-like agile infrastructure type of experience. At the same time, vSphere is bringing a whole bunch of capabilities to Kubernetes that’s creating more efficient isolation capabilities,” he said.

When you think about the cloud-native vision, it has always been about enabling companies to manage resources wherever they live through a single lens, and this is what this set of capabilities that VMware has brought together under Tanzu is intended to do. “Kubernetes is a way of bringing a control metaphor to modern IT processes. You provide an expression of what you want to have happen, and then Kubernetes takes that and interprets it and drives the world into that desired state,” McLuckie explained.

If VMware can take all of the pieces in the Tanzu vision and make this happen, it will be as powerful as McLuckie believes it to be. It’s certainly an interesting attempt to bring all of a company’s application and infrastructure creation and management under one roof using Kubernetes as the glue — and with Heptio co-founders McLuckie and Beda involved, it certainly has the expertise in place to drive the vision.

May 29, 2019

Announcing Percona Kubernetes Operators


Percona announced the release of Percona Kubernetes Operator for XtraDB Cluster and Percona Kubernetes Operator for Percona Server for MongoDB. Kubernetes delivers a method to orchestrate containers, providing automated deployment, management, and scalability.

In today’s cloud-native world, the ability to easily and efficiently deploy new environments or scale existing environments is key to ongoing growth and development. With Kubernetes Operators, you can launch a new environment with no single point of failure in under 10 minutes. As needs change, Kubernetes Operators can reliably orchestrate scaling the environment to meet current requirements, adding or removing nodes quickly and efficiently. Kubernetes Operators also provide for self-healing of a failed node in a cluster environment.

One of the best features of the Percona Kubernetes Operators is that they provide a deployment configuration that follows Percona best practices. When the Operator is used to create a new XtraDB or Percona Server for MongoDB node, you can rest assured that the new node will use the same configuration as other nodes created with that same Operator. This ensures consistent results and reliability.

The consistency and ease of deployment enable your developers to focus on writing code while your operations team focuses on building pipelines. The Operator takes care of the tedious tasks of deploying and maintaining your databases following Percona’s best practices for performance, reliability, and scalability.

Percona Kubernetes Operator for XtraDB Cluster

The Percona Kubernetes Operator for XtraDB Cluster provides a way to deploy, manage, or scale an XtraDB Cluster environment. Based on our best practices, this Operator delivers a solid and secure cluster environment that can be used for development, testing, or production.
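
As a rough sketch of what a deployment with the Operator can look like (the repository layout and file names below are assumptions based on the project’s GitHub repository; consult the official Percona documentation for the exact steps for your version):

```shell
# Fetch the Operator sources and deploy its CRDs, RBAC rules, and controller.
git clone https://github.com/percona/percona-xtradb-cluster-operator.git
cd percona-xtradb-cluster-operator
kubectl apply -f deploy/bundle.yaml

# Create a Custom Resource describing the desired cluster; the Operator
# reacts by creating the XtraDB Cluster pods and supporting objects.
kubectl apply -f deploy/cr.yaml

# Watch the cluster come up.
kubectl get pods --watch
```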

Percona XtraDB Cluster includes ProxySQL for load balancing and Percona XtraBackup for MySQL to easily back up your database environment. The Operator adds Percona Monitoring and Management to provide you with deep visibility into the performance and usage of your cluster.

Percona Kubernetes Operator for Percona Server for MongoDB

The Percona Kubernetes Operator for Percona Server for MongoDB enables you to deploy, manage, and scale a Percona Server for MongoDB replica set. With the Operator, you are ensured of an environment that does not have a single point of failure and adheres to Percona’s best practices for use of Percona Server for MongoDB. New Percona Server for MongoDB nodes can be added as either data storage nodes or as arbiter nodes.

The Percona Kubernetes Operator for Percona Server for MongoDB also includes Percona Monitoring and Management so that you can easily view and analyze activity on your replica set. It also includes Percona Backup for MongoDB, providing both scheduled and on-demand backups of your replica set.

Percona Kubernetes Operators deliver a proven method to provide a reliable and secure environment for your users, whether they are developers, testers, or end users. With these Operators, you are assured of consistent results and properly managed environments, freeing you up to focus on other tasks.

May 23, 2019

Serverless and containers: Two great technologies that work better together

Cloud native models using containerized software in a continuous delivery approach could benefit from serverless computing, where the cloud vendor provisions exactly the resources required to run a workload on the fly. While the major cloud vendors have recognized this and are already creating products to abstract away the infrastructure, it may not work for every situation in spite of the benefits.

Cloud native, put simply, involves using containerized applications and Kubernetes to deliver software in small packages called microservices. This enables developers to build and deliver software faster and more efficiently in a continuous delivery model. In the cloud native world, you should be able to develop code once and run it anywhere, on prem or any public cloud, or at least that is the ideal.

Serverless is actually a bit of a misnomer. There are servers underlying the model, but instead of dedicated virtual machines, the cloud vendor delivers exactly the right amount of resources to run a particular workload for the right amount of time and no more.

Nothing is perfect

Such an arrangement would seem to be perfectly suited to a continuous delivery model, and while vendors have recognized the beauty of such an approach, as one engineer pointed out, there is never a free lunch in processes that are this complex, and it won’t be a perfect solution for every situation.

Aparna Sinha, director of product management at Google, says the Kubernetes community has really embraced the serverless idea, but she says that it is limited in its current implementation, delivered in the form of functions with products like AWS Lambda, Google Cloud Functions and Azure Functions.

“Actually, I think the functions concept is a limited concept. It is unfortunate that that is the only thing that people associate with serverless,” she said.

She says that Google has tried to be more expansive in its definition. “It’s basically a concept for developers where you are able to seamlessly go from writing code to deployment and the infrastructure takes care of all of the rest, making sure your code is deployed in the appropriate way across the appropriate, most resilient parts of the infrastructure, scaling it as your app needs additional resources, scaling it down as your traffic goes down, and charging you only for what you’re consuming,” she explained.

But Matt Whittington, senior engineer on the Kubernetes team at Atlassian, says that while it sounds good in theory, in practice fully automated infrastructure could be unrealistic in some instances. “Serverless could be promising for certain workloads because it really allows developers to focus on the code, but it’s not a perfect solution. There is still some underlying tuning.”

He says you may not be able to leave it completely up to the vendor unless there is a way to specify the requirements for each container, such as a minimum container load time, a certain container kill time, or delivery to a specific location. He says in reality it won’t be fully automated, at least while developers fiddle with the settings to make sure they are getting the resources they need without over-provisioning and paying for more than they need.

Vendors bringing solutions

The vendors are putting in their two cents trying to create tools that bring this ideal together. For instance, Google announced a service called Google Cloud Run at Google Cloud Next last month. It’s based on the open-source Knative project, and in essence combines the goodness of serverless for developers running containers. Other similar services include AWS Fargate and Azure Container Instances, both of which are attempting to bring together these two technologies in a similar package.

In fact, Gabe Monroy, partner program manager at Microsoft, says Azure Container Instances is designed to solve this problem without being dependent on a functions-driven programming approach. “What Azure Container Instances does is it allows you to run containers directly on the Azure compute fabric, no virtual machines, hypervisor isolated, pay-per-second billing. We call it serverless containers,” he said.

While serverless and containers might seem like a good fit, as Monroy points out, there isn’t a one-size-fits-all approach to cloud-native technologies, whatever the approach may be. Some people will continue to use a function-driven serverless approach like AWS Lambda or Azure Functions and others will shift to containers and look for other ways to bring these technologies together. Whatever happens, as developer needs change, it is clear the open-source community and vendors will respond with tools to help them. Bringing serverless and containers together is just one example of that.

May 8, 2019

Steve Singh stepping down as Docker CEO

TechCrunch has learned that Docker CEO Steve Singh will be stepping down after two years at the helm, and former Hortonworks CEO Rob Bearden will be taking over. An email announcement went out this morning to Docker employees.

People close to the company confirmed that Singh will be leaving the CEO position, staying on the job for several months to help Bearden with the transition. He will then remain with the organization in his role as chairman of the board. They indicated that Bearden has been working closely with Singh over the last several months as a candidate to join the board and as a consultant to the executive team.

Singh clicked with him and viewed him as a possible successor, especially given his background with leadership positions at several open-source companies, including taking Hortonworks public before selling to Cloudera last year. Singh apparently saw someone who could take the company to the next level as he moved on. As one person put it, he was tired of working 75 hours a week, but he wanted to leave the company in the hands of a capable steward.

Last week in an interview at DockerCon, the company’s annual customer conference in San Francisco, Singh appeared tired, but also like a leader who was confident in his position and who saw a bright future for his company. He spoke openly about his leadership philosophy and his efforts to lift the company from the doldrums it was in when he took over two years prior, helping transform it from a mostly free open-source offering into a revenue-generating company with 750 paying enterprise customers.

In fact, he told me that under his leadership the company was on track to become free cash flow positive by the end of this fiscal year, a step he said would mean that Docker would no longer need to seek outside capital. He even talked of the company eventually going public.

Apparently, he felt it was time to pass the torch before the company took those steps, saw a suitable successor in Bearden and offered him the position. While it might have made more sense to announce this at DockerCon with the spotlight focused on the company, it was not a done deal yet by the time the conference was underway in San Francisco, people close to the company explained.

Docker took a $92 million investment last year, which some saw as a sign of continuing struggles for the company, but Singh said he took the money to continue to invest in building revenue-generating enterprise products, some of which were announced at DockerCon last week. He indicated that the company would likely not require any additional investment moving forward.

As for Bearden, he is an experienced executive with a history of successful exits. In addition to his experience at Hortonworks, he was COO at SpringSource, a developer tool suite that was sold to VMware for $420 million in 2009 (and is now part of Pivotal). He was also COO at JBoss, an open-source middleware company acquired by Red Hat in 2006.

Whether he will do the same with Docker remains to be seen, but as the new CEO, it will be up to him to guide the company moving forward to the next steps in its evolution, whether that eventually results in a sale or the IPO that Singh alluded to.


Note: Docker has now confirmed this story in a press release.

May 7, 2019

Red Hat and Microsoft are cozying up some more with Azure Red Hat OpenShift

It won’t be long before Red Hat becomes part of IBM, the result of the $34 billion acquisition last year that is still making its way to completion. For now, Red Hat continues as a stand-alone company, and as if to flex its independence muscles, it announced its second agreement in two days with Microsoft Azure, Redmond’s public cloud infrastructure offering. This one involves running Red Hat OpenShift on Azure.

OpenShift is Red Hat’s Kubernetes offering. The thinking is that you can start with OpenShift in your data center, then as you begin to shift to the cloud, you can move to Azure Red Hat OpenShift — such a catchy name — without any fuss, as you have the same management tools you are used to using.

As Red Hat becomes part of IBM, and as it holds its final customer conference as an independent company, it sees that it’s more important than ever to maintain its sense of autonomy in the eyes of developers and operations customers. Red Hat’s executive vice president and president of products and technologies certainly sees it that way. “I think [the partnership] is a testament to, even with moving to IBM at some point soon, that we are going to be separate and really keep our Switzerland status and give the same experience for developers and operators across anyone’s cloud,” he told TechCrunch.

It’s essential to see this announcement in the context of both IBM’s and Microsoft’s increasing focus on the hybrid cloud, and also in the continuing requirement for cloud companies to find ways to work together, even when it doesn’t always seem to make sense, because as Microsoft CEO Satya Nadella has said, customers will demand it. Red Hat has a big enterprise customer presence and so does Microsoft. If you put them together, it could be the beginning of a beautiful friendship.

Scott Guthrie, executive vice president for the cloud and AI group at Microsoft, understands that. “Microsoft and Red Hat share a common goal of empowering enterprises to create a hybrid cloud environment that meets their current and future business needs. Azure Red Hat OpenShift combines the enterprise leadership of Azure with the power of Red Hat OpenShift to simplify container management on Kubernetes and help customers innovate on their cloud journeys,” he said in a statement.

This news comes on the heels of yesterday’s announcement, also involving Kubernetes. TechCrunch’s own Frederic Lardinois described it this way:

What’s most interesting here, however, is KEDA, a new open-source collaboration between Red Hat and Microsoft that helps developers deploy serverless, event-driven containers. Kubernetes-based event-driven autoscaling, or KEDA, as the tool is called, allows users to build their own event-driven applications on top of Kubernetes. KEDA handles the triggers to respond to events that happen in other services and scales workloads as needed.

Azure Red Hat OpenShift is available now on Azure. The companies are working on some other integrations too including Red Hat Enterprise Linux (RHEL) running on Azure and Red Hat Enterprise Linux 8 support in Microsoft SQL Server 2019.

Apr 30, 2019

Docker looks to partners and packages to ease container implementation

Docker appears to be searching for ways to simplify the core value proposition of the company — creating, deploying and managing containers. While most would agree it has revolutionized software development, like many technology solutions, it takes a certain level of expertise and staffing to pull off. At DockerCon, the company’s customer conference taking place this week in San Francisco, Docker announced several ways it could help customers with the tough parts of implementing a containerized solution.

For starters, the company announced a beta of Docker Enterprise 3.0 this morning. That update is all about making life simpler for developers. As companies move to containerized environments, it’s a challenge for all but the largest organizations like Google, Amazon and Facebook, all of whom have massive resource requirements and correspondingly large engineering teams.

Most companies don’t have that luxury though, and Docker recognizes if it wants to bring containerization to a larger number of customers, it has to create packages and programs that make it easier to implement.

Docker Enterprise 3.0 is a step toward providing a solution that lets developers concentrate on the development aspects, while working with templates and other tools to simplify the deployment and management side of things.

The company sees customers struggling with implementation and how to configure and build a containerized workflow, so it is working with systems integrators to help smooth out the difficult parts. Today, the company announced Docker Enterprise as a Service, with the goal of helping companies through the process of setting up and managing a containerized environment, using the Docker stack and adjacent tooling like Kubernetes.

The service provider will take care of operational details like managing upgrades, rolling out patches, doing backups and undertaking capacity planning — all of those operational tasks that require a high level of knowledge around enterprise container stacks.

Capgemini will be the first go-to-market partner. “Capgemini has a combination of automation, technology tools, as well as services on the back end that can manage the installation, provisioning and management of the enterprise platform itself in cases where customers don’t want to do that, and they want to pay someone to do that for them,” Scott Johnston, chief product officer at Docker, told TechCrunch.

The company has released tools in the past to help customers move legacy applications into containers without a lot of fuss. Today, the company announced a solution bundle called Accelerate Greenfield, a set of tools designed to help customers get up and running as container-first development companies.

“This is for those organizations that may be a little further along. They’ve gone all-in on containers, committing to taking a container-first approach to new application development,” Johnston explained. He says this could be cloud-native microservices or even a LAMP stack application, but the point is that they want to put everything in containers on a container platform.

Accelerate Greenfield is designed to do that. “They get the benefits where they know that from the developer to the production end point, it’s secure. They have a single way to define it all the way through the life cycle. They can make sure that it’s moving quickly, and they have that portability built into the container format, so they can deploy [wherever they wish],” he said.

These programs and products are all about providing a level of hand-holding, either by playing a direct consultative role, working with a systems integrator or providing a set of tools and technologies to walk the customer through the containerization life cycle. Whether they provide a sufficient level of help that customers require is something we will learn over time as these programs mature.

Apr 30, 2019

Docker updates focus on simplifying containerization for developers

Over the last five years, Docker has become synonymous with software containers, but that doesn’t mean every developer understands the technical details of building, managing and deploying them. At DockerCon this week, the company’s customer conference taking place in San Francisco, it announced new tools that have been designed to make it easier for developers, who might not be Docker experts, to work with containers.

As the technology has matured, the company has seen the market broaden, but in order to take advantage of that, it needs to provide a set of tools that make it easier to work with. “We’ve found that customers typically have a small cadre of Docker experts, but there are hundreds, if not thousands, of developers who also want to use Docker. And we reasoned, how can we help them get productive very, very quickly, without them having to become Docker experts,” Scott Johnston, chief product officer at Docker, told TechCrunch.

To that end, it announced a beta of Docker Enterprise 3.0, which includes several key components. For starters, Docker Desktop Enterprise lets IT set up a Docker environment with the kind of security and deployment templates that make sense for each customer. The developers can then pick the templates that make sense for their implementations, while conforming with compliance and governance rules in the company.

“These templates already have IT-approved container images, and have IT-approved configuration settings. And what that means is that IT can provide these templates through these visual tools that allow developers to move fast and choose the ones they want without having to go back for approval,” Johnston explained.

The idea is to let the developers concentrate on building applications, and the templates provide all the Docker tooling pre-built and ready to go, so they don’t have to worry about all of that.

Another piece of this is Docker Applications, which allows developers to build complex containerized applications as a single package and deploy them to any infrastructure they wish — on-prem or in the cloud. Five years ago, when Docker really got started with containers, they were a simpler idea, often involving just a single one, but as developers broke down those larger applications into microservices, it created a new level of difficulty, especially for operations that had to deploy these increasingly large sets of application containers.

“Operations can now programmatically change the parameters for the containers, depending on the environments, without having to go in and change the application. So you can imagine that ability lowers the friction of having to manage all these files in the first place,” he said.

The final piece of that is the orchestration layer, and the popular way to handle that today is with Kubernetes. Docker has created its own flavor of Kubernetes, based on the open-source tool. Johnston says, as with the other two pieces, the goal here is to take a powerful tool like Kubernetes and reduce the overall complexity associated with running it, while making it fully compatible with a Docker environment.

For that, Docker announced Docker Kubernetes Service (DKS), which has been designed with Docker users in mind, including support for Docker Compose, a tool for defining multi-container applications that has long been popular with Docker users. While you are free to use any flavor of Kubernetes you wish, Docker is offering DKS as a Docker-friendly version for developers.
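
For readers who have not used it, a minimal, hypothetical Compose file looks something like this (the service names and images are illustrative only):

```shell
cat > docker-compose.yml <<EOF
version: "3"
services:
  web:
    image: nginx:1.17
    ports:
      - "8080:80"
  db:
    image: percona/percona-server:8.0
    environment:
      MYSQL_ROOT_PASSWORD: example
EOF

# Start both services in the background with a single command.
docker-compose up -d
```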

All of these components have one thing in common besides being part of Docker Enterprise 3.0. They are trying to reduce the complexity associated with deploying and managing containers and to abstract away the most difficult parts, so that developers can concentrate on developing without having to worry about connecting to the technical underpinnings of building and deploying containers. At the same time, Docker is trying to make it easier for the operations team to manage it all. That is the goal, at least. In the end, DevOps teams will be the final judges of how well Docker has done, once these tools become generally available later this year.

The Docker Enterprise 3.0 beta will be available later this quarter.
