May 23, 2019

Serverless and containers: Two great technologies that work better together

Cloud native models, which deliver containerized software in a continuous delivery approach, could benefit from serverless computing, in which the cloud vendor provisions exactly the resources required to run a workload on the fly. The major cloud vendors have recognized this and are already creating products to abstract away the infrastructure, but in spite of the benefits, the combination may not work for every situation.

Cloud native, put simply, involves using containerized applications and Kubernetes to deliver software in small packages called microservices. This enables developers to build and deliver software faster and more efficiently in a continuous delivery model. In the cloud native world, you should be able to develop code once and run it anywhere, on prem or any public cloud, or at least that is the ideal.

Serverless is actually a bit of a misnomer. There are servers underlying the model, but instead of dedicated virtual machines, the cloud vendor delivers exactly the amount of resources needed to run a particular workload, for exactly as long as it is needed, and no more.

Nothing is perfect

Such an arrangement would seem perfectly suited to a continuous delivery model, and vendors have recognized the beauty of the approach. But as one engineer pointed out, there is no free lunch in processes this complex, and serverless won’t be a perfect solution for every situation.

Aparna Sinha, director of product management at Google, says the Kubernetes community has really embraced the serverless idea, but argues that its current implementation is limited, delivered in the form of functions with products like AWS Lambda, Google Cloud Functions and Azure Functions.

“Actually, I think the functions concept is a limited concept. It is unfortunate that that is the only thing that people associate with serverless,” she said.

She says that Google has tried to be more expansive in its definition. “It’s basically a concept for developers where you are able to seamlessly go from writing code to deployment and the infrastructure takes care of all of the rest, making sure your code is deployed in the appropriate way across the appropriate, most resilient parts of the infrastructure, scaling it as your app needs additional resources, scaling it down as your traffic goes down, and charging you only for what you’re consuming,” she explained.

But Matt Whittington, senior engineer on the Kubernetes team at Atlassian, says that while it sounds good in theory, in practice fully automated infrastructure could be unrealistic in some instances. “Serverless could be promising for certain workloads because it really allows developers to focus on the code, but it’s not a perfect solution. There is still some underlying tuning.”

He says you may not be able to leave it completely up to the vendor unless there is a way to specify the requirements for each container, such as instructing the vendor that you need a minimum container load time, a certain container kill time, or delivery to a specific location. He says in reality it won’t be fully automated, at least not while developers fiddle with the settings to make sure they are getting the resources they need without over-provisioning and paying for more than they use.
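
These settings have ready analogues in a plain Kubernetes pod specification, which is one reason the tuning never disappears entirely. As a rough, vendor-neutral sketch (the image name, region label value and resource numbers are invented for illustration), a developer can pin down resource requests and limits, a termination grace period (the "kill time") and a location constraint using the standard Kubernetes Go API types:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	grace := int64(30) // "kill time": seconds to wait before force-terminating the container

	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "tuned-workload"},
		Spec: corev1.PodSpec{
			TerminationGracePeriodSeconds: &grace,
			// "Specific location": only schedule onto nodes in a chosen region.
			NodeSelector: map[string]string{"topology.kubernetes.io/region": "us-west1"},
			Containers: []corev1.Container{{
				Name:  "app",
				Image: "example.com/app:1.0", // placeholder image
				Resources: corev1.ResourceRequirements{
					// Requests are what the container needs; limits are the cap
					// that keeps it from over-consuming (and overpaying).
					Requests: corev1.ResourceList{
						corev1.ResourceCPU:    resource.MustParse("250m"),
						corev1.ResourceMemory: resource.MustParse("128Mi"),
					},
					Limits: corev1.ResourceList{
						corev1.ResourceCPU:    resource.MustParse("500m"),
						corev1.ResourceMemory: resource.MustParse("256Mi"),
					},
				},
			}},
		},
	}
	fmt.Printf("%+v\n", pod.Spec.Containers[0].Resources)
}
```

Whether a serverless platform exposes these knobs directly or tries to infer them is exactly the tuning question Whittington is raising.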

Vendors bringing solutions

The vendors are doing their part, trying to create tools that bring this ideal together. For instance, Google announced a service called Google Cloud Run at Google Cloud Next last month. It’s based on the open-source Knative project and, in essence, brings the benefits of serverless to developers running containers. Other similar services include AWS Fargate and Azure Container Instances, both of which attempt to bring these two technologies together in a similar package.

In fact, Gabe Monroy, partner program manager at Microsoft, says Azure Container Instances is designed to solve this problem without being dependent on a functions-driven programming approach. “What Azure Container Instances does is it allows you to run containers directly on the Azure compute fabric, no virtual machines, hypervisor isolated, pay-per-second billing. We call it serverless containers,” he said.

While serverless and containers might seem like a good fit, as Monroy points out, there isn’t a one-size-fits-all approach to cloud-native technology. Some people will continue to use a function-driven serverless approach like AWS Lambda or Azure Functions, while others will shift to containers and look for other ways to bring these technologies together. Whatever happens, as developer needs change, it is clear the open-source community and vendors will respond with tools to help them. Bringing serverless and containers together is just one example of that.

Mar 4, 2019

Scytale grabs $5M Series A for application-to-application identity management

Scytale, a startup that wants to bring identity and access management to application-to-application activities, announced a $5 million Series A round today.

The round was led by Bessemer Venture Partners, a return investor that led the company’s previous $3 million round in 2018. Bain Capital Ventures, TechOperators and Work-Bench are also participating in this round.

The company wants to give applications and services in a cloud-native environment the same kind of authentication that individuals are used to getting from a tool like Okta. “What we’re focusing on is trying to bring to market a capability for large enterprises going through this transition to cloud native computing to evolve the existing methods of application to application authentication, so that it’s much more flexible and scalable,” company CEO Sunil James told TechCrunch.

To help with this, the company has developed SPIFFE, an open-source, cloud-native project managed by the Cloud Native Computing Foundation (CNCF). The project is designed to provide identity and access management for application-to-application communication in an open-source framework.

The idea is that as companies transition to a containerized, cloud-native approach to application delivery, there needs to be a smooth, automated way for applications and services to very quickly prove they are legitimate, in much the same way individuals provide a username and password to access a website. This could come into play, for example, as applications pass through API gateways, or as automation drives the use of multiple applications in a workflow.
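
In SPIFFE’s model, each workload obtains a short-lived, verifiable identity document (an SVID) from a local Workload API instead of presenting a shared secret. A minimal sketch of what that looks like from the application side, using the SPIFFE project’s go-spiffe v2 library (the socket path is just an example and depends on how the local agent is deployed):

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/spiffe/go-spiffe/v2/workloadapi"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Ask the local SPIFFE Workload API for this workload's X.509 SVID.
	svid, err := workloadapi.FetchX509SVID(ctx,
		workloadapi.WithAddr("unix:///tmp/spire-agent/public/api.sock"))
	if err != nil {
		log.Fatalf("unable to fetch SVID: %v", err)
	}

	// The SPIFFE ID (e.g. spiffe://example.org/payments) is what a peer service
	// verifies instead of a username and password.
	fmt.Println("workload identity:", svid.ID)
}
```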

Webscale companies like Google and Netflix have developed mechanisms to make this work in-house, but it has been out of reach for most large enterprises. Scytale wants to bring that ability to authenticate services and applications to any company.

In addition to the funding announcement, the company announced Scytale Enterprise, a tool that provides a commercial layer on top of the open-source tools the company has developed. The enterprise version helps companies that might not have the personnel to deal with the open-source version on their own by providing training, consulting and support services.

Bain Capital Ventures’ Enrique Salem sees a startup solving a big problem for companies that are moving to cloud-native environments and need this kind of authentication. “In an increasingly complex and fragmented enterprise IT environment, Scytale has not only built Spiffe’s amazing open-source community but has also delivered a commercial offering to address hybrid cloud authentication challenges faced by Fortune 500 identity and access management engineering teams,” Salem said in a statement.

Based in the Bay Area, Scytale launched in 2017 and currently has 24 employees.

Feb 27, 2019

Open-source communities fight over telco market

When you think of MWC Barcelona, chances are you’re thinking about the newest smartphones and other mobile gadgets, but that’s only half the story. Actually, it’s probably far less than half the story because the majority of the business that’s done at MWC is enterprise telco business. Not too long ago, that business was all about selling expensive proprietary hardware. Today, it’s about moving all of that into software — and a lot of that software is open source.

It’s maybe no surprise, then, that this year the Linux Foundation (LF) has its own booth at MWC. It’s not massive, but it’s big enough to have its own meeting space. The booth is shared by three LF projects: the Cloud Native Computing Foundation (CNCF), Hyperledger and Linux Foundation Networking, the home of many of the foundational projects, like ONAP and the Open Platform for NFV (OPNFV), that power many a modern network. And with the advent of 5G, there’s a lot of new market share to grab here.

To discuss the CNCF’s role at the event, I sat down with Dan Kohn, the executive director of the CNCF.

At MWC, the CNCF launched its testbed for comparing the performance of virtual network functions running on OpenStack with what the CNCF calls cloud-native network functions running on Kubernetes (with the help of bare-metal host Packet). The project’s results, at least so far, show that the cloud-native, container-based stack can handle far more network functions per second than the competing OpenStack code.

“The message that we are sending is that Kubernetes as a universal platform that runs on top of bare metal or any cloud, most of your virtual network functions can be ported over to cloud-native network functions,” Kohn said. “All of your operating support system, all of your business support system software can also run on Kubernetes on the same cluster.”

OpenStack, in case you are not familiar with it, is another massive open-source project that helps enterprises manage their own data center software infrastructure. One of OpenStack’s biggest markets has long been the telco industry. There has always been a bit of friction between the two foundations, especially now that the OpenStack Foundation has opened up its organization to projects that aren’t directly related to the core OpenStack projects.

I asked Kohn if he is explicitly positioning the CNCF/Kubernetes stack as an OpenStack competitor. “Yes, our view is that people should be running Kubernetes on bare metal and that there’s no need for a middle layer,” he said — and that’s something the CNCF has never stated quite as explicitly before but that was always playing in the background. He also acknowledged that some of this friction stems from the fact that the CNCF and the OpenStack foundation now compete for projects.

OpenStack Foundation, unsurprisingly, doesn’t agree. “Pitting Kubernetes against OpenStack is extremely counterproductive and ignores the fact that OpenStack is already powering 5G networks, in many cases in combination with Kubernetes,” OpenStack COO Mark Collier told me. “It also reflects a lack of understanding about what OpenStack actually does, by suggesting that it’s simply a virtual machine orchestrator. That description is several years out of date. Moving away from VMs, which makes sense for many workloads, does not mean moving away from OpenStack, which manages bare metal, networking and authentication in these environments through the Ironic, Neutron and Keystone services.”

Similarly, ex-OpenStack Foundation board member (and Mirantis co-founder) Boris Renski told me that “just because containers can replace VMs, this doesn’t mean that Kubernetes replaces OpenStack. Kubernetes’ fundamental design assumes that something else is there that abstracts away low-level infrastructure, and is meant to be an application-aware container scheduler. OpenStack, on the other hand, is specifically designed to abstract away low-level infrastructure constructs like bare metal, storage, etc.”

This overall theme continued with Kohn and the CNCF taking a swipe at Kata Containers, the first project the OpenStack Foundation took on after it opened itself up to other projects. Kata Containers promises to offer a combination of the flexibility of containers with the additional security of traditional virtual machines.

“We’ve got this FUD out there around Kata and saying: telcos will need to use Kata, a) because of the noisy neighbor problem and b) because of the security,” said Kohn. “First of all, that’s FUD and second, micro-VMs are a really interesting space.”

He believes it’s an interesting space for situations where you are running third-party code (think AWS Lambda running Firecracker) — but telcos don’t typically run that kind of code. He also argues that Kubernetes handles noisy neighbors just fine because you can constrain how many resources each container gets.

It seems both organizations have a fair argument here. On the one hand, Kubernetes may be able to handle some use cases better and provide higher throughput than OpenStack. On the other hand, OpenStack handles plenty of other use cases, too, and this is a very specific use case. What’s clear, though, is that there’s quite a bit of friction here, which is a shame.

Dec 11, 2018

The Cloud Native Computing Foundation adds etcd to its open-source stable

The Cloud Native Computing Foundation (CNCF), the open-source home of projects like Kubernetes and Vitess, today announced that its technical committee has voted to bring a new project on board. That project is etcd, the distributed key-value store that was first developed by CoreOS (now owned by Red Hat, which in turn will soon be owned by IBM). Red Hat has now contributed this project to the CNCF.

Etcd, which is written in Go, is already a major component of many Kubernetes deployments, where it functions as a source of truth for coordinating clusters and managing the state of the system. Other open-source projects that use etcd include Cloud Foundry, and companies that use it in production include Alibaba, ING, Pinterest, Uber, The New York Times and Nordstrom.
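
At its core, etcd exposes a simple, strongly consistent key-value API, and that primitive is what Kubernetes builds on to record cluster state. A minimal sketch using etcd’s official Go client (the endpoint and key are placeholders):

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	// Connect to a local etcd endpoint (placeholder address).
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"localhost:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Write a key, then read it back; etcd replicates the value consistently
	// across the cluster, which is why it can serve as a source of truth.
	if _, err := cli.Put(ctx, "/demo/config/replicas", "3"); err != nil {
		log.Fatalf("put: %v", err)
	}
	resp, err := cli.Get(ctx, "/demo/config/replicas")
	if err != nil {
		log.Fatalf("get: %v", err)
	}
	for _, kv := range resp.Kvs {
		fmt.Printf("%s = %s\n", kv.Key, kv.Value)
	}
}
```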

“Kubernetes and many other projects like Cloud Foundry depend on etcd for reliable data storage. We’re excited to have etcd join CNCF as an incubation project and look forward to cultivating its community by improving its technical documentation, governance and more,” said Chris Aniszczyk, COO of CNCF, in today’s announcement. “Etcd is a fantastic addition to our community of projects.”

Today, etcd has well over 450 contributors and nine maintainers from eight different companies. The fact that it ended up at the CNCF is only logical, given that the foundation is also the host of Kubernetes. With this, the CNCF now plays host to 17 projects that fall under its “incubated technologies” umbrella. In addition to etcd, these include OpenTracing, Fluentd, Linkerd, gRPC, CoreDNS, containerd, rkt, CNI, Jaeger, Notary, TUF, Vitess, NATS, Helm, Rook and Harbor. Kubernetes, Prometheus and Envoy have already graduated from this incubation stage.

That’s a lot of projects for one foundation to manage, but the CNCF community is also extraordinarily large. This week alone about 8,000 developers are converging on Seattle for KubeCon/CloudNativeCon, the organization’s biggest event yet, to talk all things containers. It surely helps that the CNCF has managed to bring competitors like AWS, Microsoft, Google, IBM and Oracle under a single roof to collaboratively work on building these new technologies. There is a risk of losing focus here, though, something that happened to the OpenStack project when it went through a similar growth and hype phase. It’ll be interesting to see how the CNCF will manage this as it brings on more projects (with Istio, the increasingly popular service mesh, being a likely candidate for coming over to the CNCF as well).

Aug 29, 2018

Google takes a step back from running the Kubernetes development infrastructure

Google today announced that it is providing the Cloud Native Computing Foundation (CNCF) with $9 million in Google Cloud credits to help further its work on the Kubernetes container orchestrator and that it is handing over operational control of the project to the community. These credits will be split over three years and are meant to cover the infrastructure costs of building, testing and distributing the Kubernetes software.

Why does this matter? Until now, Google hosted virtually all the cloud resources that supported the project, like its CI/CD testing infrastructure, container downloads and DNS services, on its own cloud. But Google is now taking a step back. With the Kubernetes community reaching a state of maturity, Google is transferring all of this to the community.

Between the testing infrastructure and hosting container downloads, the Kubernetes project regularly runs more than 150,000 containers on 5,000 virtual machines, so the cost of running these systems quickly adds up. The Kubernetes container registry has served almost 130 million downloads since the launch of the project.

It’s also worth noting that the CNCF now includes a wide range of members that typically compete with each other. We’re talking Alibaba Cloud, AWS, Microsoft Azure, Google Cloud, IBM Cloud, Oracle, SAP and VMware, for example. All of these profit from the work of the CNCF and the Kubernetes community. Google doesn’t say so outright, but it’s fair to assume that it wanted others to shoulder some of the burdens of running the Kubernetes infrastructure, too. Similarly, some of the members of the community surely didn’t want to be so closely tied to Google’s infrastructure, either.

“By sharing the operational responsibilities for Kubernetes with contributors to the project, we look forward to seeing the new ideas and efficiencies that all Kubernetes contributors bring to the project operations,” Google Kubernetes Engine product manager William Deniss writes in today’s announcement. He also notes that a number of Google employees will still be involved in running the Kubernetes infrastructure.

“Google’s significant financial donation to the Kubernetes community will help ensure that the project’s constant pace of innovation and broad adoption continue unabated,” said Dan Kohn, the executive director of the CNCF. “We’re thrilled to see Google Cloud transfer management of the Kubernetes testing and infrastructure projects into contributors’ hands — making the project not just open source, but openly managed, by an open community.”

It’s unclear whether the project plans to take some of the Google-hosted infrastructure and move it to another cloud, but it could definitely do so — and other cloud providers could step up and offer similar credits, too.

May 2, 2018

Upbound grabs $9M Series A to automate multi-cloud management

Kubernetes, the open source container orchestration tool, does a great job of managing a single cluster, but Upbound, a new Seattle-based startup, wants to extend that ability to managing multiple Kubernetes clusters across multi-cloud environments. It’s a growing requirement as companies deploy ever-larger numbers of clusters and choose a multi-vendor approach to cloud infrastructure services.

Today, the company announced a $9 million Series A investment led by GV (formerly Google Ventures) along with numerous unnamed angel investors from the cloud-native community. As part of the deal, GV’s Dave Munichiello will be joining the company’s board of directors.

It’s important to note that the company is currently working on the product and could be a year away from a release, but the vision is certainly compelling. As Upbound CEO and founder Bassam Tabbara says, his company’s solution could allow customers to run, scale and optimize their workloads across clusters, regions and clouds as a single entity.

That level of control could enable customers to set rules and policies across those clusters and clouds. For example, a customer might control costs by creating a rule to find the cloud with the lowest cost for processing a given job, or provide failover control across regions and clouds, all automatically. It would provide a degree of granular control across multiple environments that isn’t really possible now, Tabbara explained.
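
Upbound has not shipped a product or published an API yet, so any code here is speculative, but a toy sketch may help make the lowest-cost rule concrete. Every type, field and number below is hypothetical, invented purely for illustration:

```go
package main

import "fmt"

// CloudOption describes one place a workload could run. These types are
// hypothetical; Upbound has not published an API.
type CloudOption struct {
	Provider    string
	Region      string
	CostPerHour float64 // assumed cost model, in dollars per hour
	Healthy     bool
}

// cheapestHealthy implements the kind of rule described in the article:
// pick the cheapest option that is currently healthy, which also gives
// automatic failover across regions and clouds.
func cheapestHealthy(options []CloudOption) (CloudOption, bool) {
	var best CloudOption
	found := false
	for _, o := range options {
		if !o.Healthy {
			continue
		}
		if !found || o.CostPerHour < best.CostPerHour {
			best, found = o, true
		}
	}
	return best, found
}

func main() {
	options := []CloudOption{
		{Provider: "aws", Region: "us-east-1", CostPerHour: 0.12, Healthy: true},
		{Provider: "azure", Region: "eastus", CostPerHour: 0.09, Healthy: false}, // cheapest, but unavailable
		{Provider: "gcp", Region: "us-central1", CostPerHour: 0.10, Healthy: true},
	}
	if target, ok := cheapestHealthy(options); ok {
		fmt.Printf("schedule job on %s/%s at $%.2f/hr\n", target.Provider, target.Region, target.CostPerHour)
	}
}
```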

That vision of enterprise portability is certainly something that caught the eye of GV’s Munichiello. “Upbound presents a credible approach to multi-cloud computing built on the success of Kubernetes, and as a response to the growing enterprise demand for hybrid and multi-cloud environments,” he said in a statement.

Companies are already working with multiple Kubernetes clusters today. CERN, the European physics organization, is running 210 clusters. JD.com, the Chinese shopping site, has over 20,000 servers running Kubernetes, with its largest cluster made up of 5,000 servers. As these projects scale, they require a tool to help manage workloads across these larger environments.

The company’s founder isn’t new to cloud-native computing or open source. Tabbara was part of the team responsible for producing the open source project Rook, a Cloud Native Computing Foundation Sandbox project built on Kubernetes. Rook helps orchestrate distributed storage systems running in cloud-native environments in much the same way that Kubernetes does for containerized applications. That project provided some of the groundwork for what Upbound is trying to do on a broader scale beyond pure storage.

The computing world is increasingly about abstraction. We started with virtual machines, which allowed you to take an individual server and turn it into multiple virtual machines. That led to containers, which could take the same machine and let you launch hundreds of containers. Kubernetes is an open source container orchestration tool that has rapidly gained acceptance by allowing operations teams to treat a cluster of Kubernetes nodes as a single entity, making it much easier to launch and manage containers.

Upbound launched last fall and currently has eight employees, but Tabbara says the company is actively seeking new engineers. The nature of its business is distributed workloads, and he says the workforce will be similarly distributed; employees won’t have to work in Seattle. He says the plan is to use and contribute to open source whenever possible, and to open-source parts of the product when it’s available.

Nov 13, 2017

36 companies agree to a Kubernetes certification standard

The Cloud Native Computing Foundation (CNCF) announced today that 36 members have agreed to a set of certification standards for Kubernetes, the immensely popular open source container orchestration tool. This should make it easy for users to move from one version to another without worry, while ensuring that containers under Kubernetes management will behave in a predictable way. The group of…

Aug 9, 2017

AWS just proved why standards drive technology platforms

When AWS today became a full-fledged member of the container standards body, the Cloud Native Computing Foundation, it represented a significant milestone. By joining Google, IBM, Microsoft, Red Hat and just about every company that matters in the space, AWS has acknowledged that when it comes to container management, standards matter.

Jul 26, 2017

Microsoft’s new Azure Container Instances make using containers fast and easy

Barely a day passes without some news about containers and that speaks to how quickly this technology is being adopted by developers and the platforms and startups that serve them. Today it’s Microsoft’s turn to launch a new container service for its Azure cloud computing platform: Azure Container Instances (ACI). The company also today announced that it is joining the Cloud…

Jan 23, 2017

Cloud Native Computing Foundation adds Linkerd as its fifth hosted project

The Cloud Native Computing Foundation (CNCF), the open source group that’s also the home of the popular Kubernetes container management system, today announced that it has added Linkerd as its fifth hosted project. Linkerd (pronounced linker-DEE), which was incubated at Buoyant, follows in the footsteps of other CNCF projects like Kubernetes, Prometheus, OpenTracing and Fluentd. The…
