May
21
2019
--

Microsoft makes a push for service mesh interoperability

Service meshes. They are the hot new thing in the cloud native computing world. At KubeCon, the bi-annual festival of all things cloud native, Microsoft today announced that it is teaming up with a number of companies in this space to create a generic service mesh interface. This will make it easier for developers to adopt the concept without being locked into a specific technology.

In a world where the number of network endpoints continues to increase as developers launch new microservices, containers and other systems at a rapid clip, service meshes make the network smarter again by handling encryption, traffic management and other functions so that the actual applications don’t have to worry about them. With a number of competing service mesh technologies, though, including the likes of Istio and Linkerd, developers currently have to choose which one of these to support.

“I’m really thrilled to see that we were able to pull together a pretty broad consortium of folks from across the industry to help us drive some interoperability in the service mesh space,” Gabe Monroy, Microsoft’s lead product manager for containers and the former CTO of Deis, told me. “This is obviously hot technology — and for good reasons. The cloud-native ecosystem is driving the need for smarter networks and smarter pipes and service mesh technology provides answers.”

The partners here include Buoyant, HashiCorp, Solo.io, Red Hat, AspenMesh, Weaveworks, Docker, Rancher, Pivotal, Kinvolk and VMware. That’s a pretty broad coalition, though it notably doesn’t include cloud heavyweights like Google, the company behind Istio, and AWS.

“In a rapidly evolving ecosystem, having a set of common standards is critical to preserving the best possible end-user experience,” said Idit Levine, founder and CEO of Solo.io. “This was the vision behind SuperGloo — to create an abstraction layer for consistency across different meshes, which led us to the release of Service Mesh Hub last week. We are excited to see service mesh adoption evolve into an industry-level initiative with the SMI specification.”

For the time being, the interoperability features focus on traffic policy, telemetry and traffic management. Monroy argues that these are the most pressing problems right now. He stressed that this common interface still allows the different service mesh tools to innovate, and that developers can always work directly with their APIs when needed. He also noted that the Service Mesh Interface (SMI), as this new specification is called, does not provide any of its own implementations of these features. It only defines a common set of APIs.
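
For a sense of what these common APIs look like, here is a minimal sketch of a traffic-splitting resource in the style of the early SMI spec. The service names and weights are illustrative, and the exact API version and fields depend on the spec revision in use:

```yaml
# Illustrative SMI TrafficSplit: send 10 percent of the traffic addressed
# to the "checkout" service to a canary version, regardless of which mesh
# (Istio, Linkerd, Consul, ...) implements the resource underneath.
apiVersion: split.smi-spec.io/v1alpha1
kind: TrafficSplit
metadata:
  name: checkout-canary
spec:
  service: checkout        # the root service that clients address
  backends:
  - service: checkout-v1   # current version keeps 90 percent of requests
    weight: 90
  - service: checkout-v2   # the canary receives the remaining 10 percent
    weight: 10
```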

Currently, the most well-known service mesh is probably Istio, which Google, IBM and Lyft launched about two years ago. SMI may just bring a bit more competition to this market since it will allow developers to bet on the overall idea of a service mesh instead of a specific implementation.

In addition to SMI, Microsoft also today announced a couple of other updates around its cloud-native and Kubernetes services. It announced the first alpha of the Helm 3 package manager, for example, as well as the 1.0 release of its Kubernetes extension for Visual Studio Code and the general availability of its AKS virtual nodes, using the open source Virtual Kubelet project.

Apr
29
2019
--

Mirantis makes configuring on-premises clouds easier

Mirantis, the company you may still remember as one of the biggest players in the early days of OpenStack, launched an interesting new SaaS service today that makes it easier for enterprises to build and deploy their on-premises clouds. The new Mirantis Model Designer, which is available for free, lets operators easily customize their clouds — starting with OpenStack clouds next month and Kubernetes clusters in the coming months — and generate the configurations to deploy them.

Typically, doing so involves writing lots of YAML files by hand, something that’s error-prone and that few developers love. Yet that’s exactly what’s at the core of the infrastructure-as-code model. Model Designer, on the other hand, builds on what Mirantis learned from its highly popular Fuel installer for OpenStack and takes it a step further. The tool, which Mirantis co-founder and CMO Boris Renski demoed for me ahead of today’s announcement, presents users with a graphical interface that walks them through the configuration steps. What’s smart here is that every step has a difficulty level (modeled after Doom’s difficulty settings, ranging from “I’m too young to die” to “ultra-violence” — though it’s missing Doom’s “nightmare” setting), which you can choose based on how much you want to customize the setup.

Model Designer is an opinionated tool, but it does give users quite a bit of freedom, too. Once the configuration step is done, Mirantis actually takes the settings and runs them through its Jenkins automation server to validate the configuration. As Renski pointed out, that step can’t take into account all of the idiosyncrasies of every platform, but it can ensure that the files are correct. After this, the tool provides the user with the configuration files, and actually deploying the OpenStack cloud is then simply a matter of taking the files, together with the core binaries that Mirantis makes available for download, to the on-premises cloud and executing a command-line script. Ideally, that’s all there is to the process. At this point, Mirantis’ DriveTrain tools take over and provision the cloud. For upgrades, users simply have to repeat the process.

Mirantis’ monetization strategy is to offer support, which ranges from basic support to fully managing a customer’s cloud. Model Designer is yet another way for the company to make more users aware of itself and then offer them support as they start using more of the company’s tools.

Apr
29
2019
--

With Kata Containers and Zuul, OpenStack graduates its first infrastructure projects

Over the course of the last year and a half, the OpenStack Foundation made the switch from purely focusing on the core OpenStack project to opening itself up to other infrastructure-related projects as well. The first two of these projects, Kata Containers and the Zuul project gating system, have now exited their pilot phase and have become the first top-level Open Infrastructure Projects at the OpenStack Foundation.

The Foundation made the announcement at its Open Infrastructure Summit (previously known as the OpenStack Summit) in Denver today after the organization’s board voted to graduate them ahead of this week’s conference. “It’s an awesome milestone for the projects themselves,” OpenStack Foundation executive director Jonathan Bryce told me. “It’s a validation of the fact that in the last 18 months, they have created sustainable and productive communities.”

It’s also a milestone for the OpenStack Foundation itself, though, which is still in the process of reinventing itself in many ways. It can now point at two successful projects that are under its stewardship, which will surely help it as it goes out and tries to attract others who are looking to bring their open-source projects under the aegis of a foundation.

In addition to graduating these first two projects, Airship — a collection of open-source tools for provisioning private clouds that is currently a pilot project — hit version 1.0 today. “Airship originated within AT&T,” Bryce said. “They built it from their need to bring a bunch of open-source tools together to deliver on their use case. And that’s why, from the beginning, it’s been really well-aligned with what we would love to see more of in the open-source world and why we’ve been super excited to be able to support their efforts there.”

With Airship, developers use YAML documents to describe what the final environment should look like; the result is a production-ready Kubernetes cluster deployed with the OpenStack-Helm tool — though without any other dependencies on OpenStack.
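
Airship’s documents follow a declarative layout of schema, metadata and data sections. Here is a heavily simplified sketch, loosely modeled on that format; the schema names and values are illustrative rather than copied from a real site definition:

```yaml
# Illustrative Airship-style document: declare a chart for the Kubernetes
# API server; Airship reconciles the declared state into a running cluster.
schema: armada/Chart/v1
metadata:
  schema: metadata/Document/v1
  name: kubernetes-apiserver
data:
  chart_name: apiserver
  release: kubernetes-apiserver
  namespace: kube-system
  values:
    replicas: 3            # desired number of control-plane replicas
```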

Ryan van Wyk, AT&T’s assistant vice president for Network Cloud Software Engineering, told me that a lot of enterprises want to use certain open-source components, but that the interplay between them is often difficult, and that while it’s relatively easy to manage the life cycle of a single tool, it’s hard to do so when you bring in multiple open-source tools, all with their own life cycles. “What we found over the last five years working in this space is that you can go and get all the different open-source solutions that you need,” he said. “But then the operator has to invest a lot of engineering time and build extensions and wrappers and perhaps some orchestration to manage the life cycle of the various pieces of software required to deliver the infrastructure.”

It’s worth noting that nothing about Airship is specific to the telco world, though it’s no secret that OpenStack is quite popular with telcos. Unsurprisingly, the Foundation is using this week’s event to highlight the OpenStack project’s role in the upcoming 5G rollouts of various carriers.

In addition, the event will showcase OpenStack’s bare-metal capabilities, an area the project has also focused on in recent releases. Indeed, the Foundation today announced that its bare-metal tools now manage more than a million cores of compute. To codify these efforts, the Foundation also today launched the OpenStack Ironic Bare Metal program, which brings together some of the project’s biggest users, like Verizon Media (home of TechCrunch, though we don’t run on the Verizon cloud), 99Cloud, China Mobile, China Telecom, China Unicom, Mirantis, OVH, Red Hat, SUSE, Vexxhost and ZTE.

Apr
24
2019
--

Docker developers can now build Arm containers on their desktops

Docker and Arm today announced a major new partnership that will see the two companies collaborate in bringing improved support for the Arm platform to Docker’s tools.

The main idea here is to make it easy for Docker developers to build their applications for the Arm platform right from their x86 desktops and then deploy them to the cloud (including the Arm-based AWS EC2 A1 instances), edge and IoT devices. Developers will be able to build their containers for Arm just like they do today, without the need for any cross-compilation.

This new capability, which will work for applications written in JavaScript/Node.js, Python, Java, C++, Ruby, .NET core, Go, Rust and PHP, will become available as a tech preview next week, when Docker hosts its annual North American developer conference in San Francisco.

Typically, developers would have to build the containers they want to run on the Arm platform on an Arm-based server. With this system, which is the first result of this new partnership, Docker essentially emulates an Arm chip on the PC for building these images.
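
Assuming the buildx workflow that Docker has been developing, a multi-architecture build from an x86 laptop looks something like the sketch below; the image name and registry are placeholders:

```sh
# Create and select a builder that supports multi-platform builds
docker buildx create --use

# Build the same Dockerfile for x86 and Arm in a single invocation;
# the Arm build steps run under QEMU emulation on the local machine
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -t registry.example.com/myapp:latest \
  --push .
```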

“Overnight, the 2 million Docker developers that are out there can use the Docker commands they already know and become Arm developers,” Docker EVP of Strategic Alliances David Messina told me. “Docker, just like we’ve done many times over, has simplified and streamlined processes and made them simpler and accessible to developers. And in this case, we’re making x86 developers on their laptops Arm developers overnight.”

Given that cloud-based Arm servers like Amazon’s A1 instances are often significantly cheaper than x86 machines, users can achieve some immediate cost benefits by using this new system and running their containers on Arm.

For Docker, this partnership opens up new opportunities, especially in areas where Arm chips are already strong, including edge and IoT scenarios. Arm, similarly, is interested in strengthening its developer ecosystem by making it easier to develop for its platform. The easier it is to build apps for the platform, the more likely developers are to then run them on servers that feature chips from Arm’s partners.

“Arm’s perspective on the infrastructure really spans all the way from the endpoint, all the way through the edge to the cloud data center, because we are one of the few companies that have a presence all the way through that entire path,” Mohamed Awad, Arm’s VP of Marketing, Infrastructure Line of Business, said. “It’s that perspective that drove us to make sure that we engage Docker in a meaningful way and have a meaningful relationship with them. We are seeing compute and the infrastructure sort of transforming itself right now from the old model of centralized compute, general purpose architecture, to a more distributed and more heterogeneous compute system.”

Developers, however, Awad rightly noted, don’t want to have to deal with this complexity, yet they also increasingly need to ensure that their applications run on a wide variety of platforms and that they can move them around as needed. “For us, this is about enabling developers and freeing them from lock-in on any particular area and allowing them to choose the right compute for the right job that is the most efficient for them,” Awad said.

Messina noted that the promise of Docker has long been to remove the dependence of applications from the infrastructure on which they run. Adding Arm support simply extends this promise to an additional platform. He also stressed that the work on this was driven by the company’s enterprise customers. These are the users who have already set up their systems for cloud-native development with Docker’s tools — at least for their x86 development. Those customers are now looking at developing for their edge devices, too, and that often means developing for Arm-based devices.

Awad and Messina both stressed that developers really don’t have to learn anything new to make this work. All of the usual Docker commands will just work.

Apr
18
2019
--

CloudBees acquires Electric Cloud to build out its software delivery management platform

CloudBees, the enterprise continuous integration and delivery service (and the biggest contributor to the Jenkins open-source automation server), today announced that it has acquired Electric Cloud, a continuous delivery and automation platform that first launched all the way back in 2002.

The two companies did not disclose the price of the acquisition, but CloudBees has raised a total of $113.2 million while Electric Cloud raised $64.6 million from the likes of Rembrandt Venture Partners, U.S. Venture Partners, RRE Ventures and Next47.

CloudBees plans to integrate Electric Cloud’s application release automation platform into its offerings. Electric Cloud’s 110 employees will join CloudBees.

“As of today, we provide customers with best-of-breed CI/CD software from a single vendor, establishing CloudBees as a continuous delivery powerhouse,” said Sacha Labourey, the CEO and co-founder of CloudBees, in today’s announcement. “By combining the strength of CloudBees, Electric Cloud, Jenkins and Jenkins X, CloudBees offers the best CI/CD solution for any application, from classic to Kubernetes, on-premise to cloud, self-managed to self-service.”

Electric Cloud offers its users a number of tools for automating their release pipelines and managing the application life cycle afterward.

“We are looking forward to joining CloudBees and executing on our shared goal of helping customers build software that matters,” said Carmine Napolitano, CEO, Electric Cloud. “The combination of CloudBees’ industry-leading continuous integration and continuous delivery platform, along with Electric Cloud’s industry-leading application release orchestration solution, gives our customers the best foundation for releasing apps at any speed the business demands.”

As CloudBees CPO Christina Noren noted during her keynote at CloudBees’ developer conference today, the company’s customers are getting more sophisticated in their DevOps platforms, but they are starting to run into new problems now that they’ve reached this point.

“What we’re seeing is that these customers have disconnected and fragmented islands of information,” she said. “There’s the view that each development team has […] and there’s not a common language, there’s not a common data model, and there’s not an end-to-end process that unites from left to right, top to bottom.” This kind of integrated system is what CloudBees is building toward (and that competitors like GitLab would argue they already offer). Today’s announcement marks a first step in this direction toward building a full software delivery management platform; more steps are likely to follow.

During his company’s developer conference, Labourey also today noted that CloudBees will profit from Electric Cloud’s longstanding expertise in continuous delivery and that the acquisition will turn CloudBees into a “DevOps powerhouse.”

Today’s announcement follows CloudBees’ acquisition of CI/CD tool CodeShip last year. As of now, CodeShip remains a standalone product in the company’s lineup. It’ll be interesting to see how CloudBees will integrate Electric Cloud’s products to build a more integrated system.

Apr
12
2019
--

OpenStack Stein launches with improved Kubernetes support

The OpenStack project, which powers more than 75 public clouds and thousands of private ones, launched the 19th version of its software this week. You’d think that after 19 updates to the open-source infrastructure platform, there really isn’t all that much new the various project teams could add, given that we’re talking about a rather stable code base here. There are actually a few new features in this release, though, as well as all the usual tweaks and feature improvements you’d expect.

While the hype around OpenStack has died down, we’re still talking about a very active open-source project. On average, there were 155 commits per day during the Stein development cycle. As far as development activity goes, that keeps OpenStack on the same level as the Linux kernel and Chromium.

Unsurprisingly, a lot of that development activity focused on Kubernetes and the tools to manage these container clusters. With this release, the team behind the OpenStack Kubernetes installer brought the launch time for a cluster down from about 10 minutes to five, regardless of the number of nodes. To further enhance Kubernetes support, OpenStack Stein also includes updates to Neutron, the project’s networking service, which now makes it easier to create virtual networking ports in bulk as containers are spun up, and Ironic, the bare-metal provisioning service.
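
On the Neutron side, bulk creation means a single API call for many ports rather than one call per container. A rough sketch against Neutron’s REST API, with the endpoint, token and network ID as placeholders:

```sh
# Create two ports in one request via Neutron's bulk-create API
curl -s -X POST "$OS_NETWORK_URL/v2.0/ports" \
  -H "X-Auth-Token: $OS_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"ports": [
        {"network_id": "<net-uuid>", "name": "pod-port-1"},
        {"network_id": "<net-uuid>", "name": "pod-port-2"}
      ]}'
```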

All of that is no surprise, given that, according to the project’s latest user survey, 61 percent of OpenStack deployments now run Kubernetes and OpenStack in tandem.

The update also includes a number of new networking features that are mostly targeted at the many telecom users. Indeed, over the course of the last few years, telcos have emerged as some of the most active OpenStack users as these companies are looking to modernize their infrastructure as part of their 5G rollouts.

Besides the expected updates, though, there are also a few new and improved projects here that are worth noting.

“The trend from the last couple of releases has been on scale and stability, which is really focused on operations,” OpenStack Foundation executive director Jonathan Bryce told me. “The new projects — and really most of the new projects from the last year — have all been pretty oriented around real-world use cases.”

The first of these is Placement. “As people build a cloud and start to grow it and it becomes more broadly adopted within the organization, a lot of times, there are other requirements that come into play,” Bryce explained. “One of these things that was pretty simplistic at the beginning was how a request for a resource was actually placed on the underlying infrastructure in the data center.” But as users get more sophisticated, they often want to run specific workloads on machines with certain hardware requirements. These days, that’s often a specific GPU for a machine learning workload, for example. With Placement, that’s a bit easier now.

It’s worth noting that OpenStack had some of this functionality before. The team, however, decided to uncouple it from the existing compute service and turn it into a more generic service that could then also be used more easily beyond the compute stack, turning it more into a kind of resource inventory and tracking tool.
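
From an operator’s perspective, the standalone service can now be queried directly. A sketch using the osc-placement CLI plugin; the resource classes and amounts are examples, and the flag syntax can differ between releases:

```sh
# List the resource providers (compute nodes, etc.) Placement tracks
openstack resource provider list

# Ask which providers could satisfy a workload that needs CPUs, RAM and a GPU
openstack allocation candidate list \
  --resource VCPU=4 \
  --resource MEMORY_MB=8192 \
  --resource VGPU=1
```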

Then, there is also Blazar, a reservation service that offers OpenStack users something akin to AWS Reserved Instances. In a private cloud, the use case for a feature like this is a bit different, though. As some private clouds got bigger, their users found that they needed to be able to guarantee resources to run some of their regular overnight batch jobs or data analytics workloads, for example.
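
In practice, that means creating a lease for a set of hosts over a time window. A sketch using the Blazar client, with the host counts, properties and dates purely illustrative (exact flags vary by release):

```sh
# Reserve two physical hosts with at least 32 GB of RAM for an overnight window
blazar lease-create \
  --physical-reservation min=2,max=2,hypervisor_properties='[">=", "$memory_mb", 32768]' \
  --start-date "2019-04-15 20:00" \
  --end-date "2019-04-16 06:00" \
  nightly-analytics
```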

As far as resource management goes, it’s also worth highlighting Sahara, which now makes it easier to provision Hadoop clusters on OpenStack.

In previous releases, one of the focus areas for the project was to improve the update experience. OpenStack is obviously a very complex system, so bringing it up to the latest version is also a bit of a complex undertaking. These improvements are now paying off. “Nobody even knows we are running Stein right now,” Vexxhost CEO Mohammed Naser, who made an early bet on OpenStack for his service, told me. “And I think that’s a good thing. You want to be least impactful, especially when you’re in such a core infrastructure level. […] That’s something the projects are starting to become more and more aware of but it’s also part of the OpenStack software in general becoming much more stable.”

As usual, this release launched only a few weeks before the OpenStack Foundation hosts its bi-annual Summit in Denver. Since the OpenStack Foundation has expanded its scope beyond the OpenStack project, though, this event also focuses on a broader range of topics around open-source infrastructure. It’ll be interesting to see how this will change the dynamics at the event.

Apr
09
2019
--

Talk key takeaways from Google Cloud Next with TechCrunch writers

Google’s Cloud Next conference is taking over the Moscone Center in San Francisco this week and TechCrunch is on the scene covering all the latest announcements.

Google Cloud already powers some of the world’s premier companies and startups, and now it’s poised to put even more pressure on cloud competitors like AWS with its newly released products and services. TechCrunch’s Frederic Lardinois will be on the ground at the event, and Ron Miller will be covering from afar. On Thursday at 10:00 am PT, Frederic and Ron will share what they saw and what it all means with Extra Crunch members on a conference call.

Tune in to dig into what happened onstage and off and ask Frederic and Ron any and all things cloud or enterprise.

To listen to this and all future conference calls, become a member of Extra Crunch. Learn more and try it for free.

Apr
05
2019
--

On balance, the cloud has been a huge boon to startups

Today’s startups have a distinct advantage when it comes to launching a company because of the public cloud. You don’t have to build infrastructure or worry about what happens when you scale too quickly. The cloud vendors take care of all that for you.

But last month when Pinterest announced its IPO, the company’s cloud spend raised eyebrows. You see, the company is spending $750 million a year on cloud services, specifically with AWS. When your business is primarily focused on photos and video, and needs to scale on a regular basis, that bill is going to be high.

That price tag prompted Erica Joy, a Microsoft engineer, to publish this tweet and start a little internal debate here at TechCrunch. Startups, after all, have a dog in this fight, and it’s worth exploring whether the cloud is helping feed the startup ecosystem or sending bills soaring, as it has with Pinterest.

For starters, it’s worth pointing out that Ms. Joy works for Microsoft, which just happens to be a primary competitor of Amazon’s in the cloud business. Regardless of her personal feelings on the matter, I’m sure Microsoft would be more than happy to take over that $750 million bill from Amazon. It’s a nice chunk of business, but all that aside, do startups benefit from having access to cloud vendors?

Apr
02
2019
--

Pixeom raises $15M for its software-defined edge computing platform

Pixeom, a startup that offers a software-defined edge computing platform to enterprises, today announced that it has raised a $15 million funding round from Intel Capital, National Grid Partners and previous investor Samsung Catalyst Fund. The company plans to use the new funding to expand its go-to-market capacity and invest in product development.

If the Pixeom name sounds familiar, that may be because you remember it as a Raspberry Pi-based personal cloud platform. Indeed, that’s the service the company first launched back in 2014. It quickly pivoted to an enterprise model, though. As Pixeom CEO Sam Nagar told me, that pivot came about after a conversation the company had with Samsung about adopting its product for that company’s needs. Venture funding was also hard to find. The original Pixeom device allowed users to set up their own personal cloud storage and other applications at home. While there is surely a market for these devices, especially among privacy-conscious tech enthusiasts, it’s not a massive one, especially as users became more comfortable with storing their data in the cloud. “One of the major drivers [for the pivot] was that it was actually very difficult to get VC funding in an industry where the market trends were all skewing towards the cloud,” Nagar told me.

At the time of its launch, Pixeom also based its technology on OpenStack, the massive open-source project that helps enterprises manage their own data centers, which isn’t exactly known as a service that can easily be run on a single machine, let alone a low-powered one. Today, Pixeom uses containers to ship and manage its software on the edge.

What sets Pixeom apart from other edge computing platforms is that it can run on commodity hardware. There’s no need to buy a specific hardware configuration to run the software, unlike Microsoft’s Azure Stack or similar services. That makes it significantly more affordable to get started and allows potential customers to reuse some of their existing hardware investments.

Pixeom brands this capability as “software-defined edge computing” and there is clearly a market for this kind of service. While the company hasn’t made a lot of waves in the press, more than a dozen Fortune 500 companies now use its services. With that, the company now has revenues in the double-digit millions and its software manages more than a million devices worldwide.

As is so often the case in the enterprise software world, these clients don’t want to be named, but Nagar tells me they include one of the world’s largest fast food chains, for example, which uses the Pixeom platform in its stores.

On the software side, Pixeom is relatively cloud agnostic. One nifty feature of the platform is that it is API-compatible with Google Cloud Platform, AWS and Azure and offers an extensive subset of those platforms’ core storage and compute services, including a set of machine learning tools. Pixeom’s implementation may be different, but for an app, the edge endpoint on a Pixeom machine reacts the same way as its equivalent endpoint on AWS, for example.
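
That design lends itself to a simple illustration: in principle, existing cloud tooling can target an edge node just by overriding the endpoint. A hypothetical sketch with the AWS CLI, where the edge URL and bucket name are made up; the point is that the call itself does not change:

```sh
# The same S3-style listing, issued against AWS and against a local edge node
aws s3 ls s3://store-media/
aws --endpoint-url https://pixeom-edge.local:9000 s3 ls s3://store-media/
```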

Until now, Pixeom mostly financed its expansion — and the salaries of its more than 90 employees — from its revenue. It only took a small funding round when it first launched the original device (together with a Kickstarter campaign). Technically, this new funding round is only the company’s second, so depending on how you want to look at it, we’re either talking about a very large seed round or a Series A round.

Apr
02
2019
--

Cloud Foundry ❤ Kubernetes

Cloud Foundry, the open-source platform-as-a-service project that more than half of the Fortune 500 companies use to help them build, test and deploy their applications, launched well before Kubernetes existed. Because of this, the team ended up building Diego, its own container management service. Unsurprisingly, given the popularity of Kubernetes, which has become something of a de facto standard for container orchestration, a number of companies in the Cloud Foundry ecosystem started looking into how they could use Kubernetes to replace Diego.

The result of this is Project Eirini, which was first proposed by IBM. As the Cloud Foundry Foundation announced today, Project Eirini now passes the core functional tests the team runs to validate the software releases of its application runtime, the core Cloud Foundry service that deploys and manages applications (if that’s a bit confusing, don’t even think about the fact that there’s also a Cloud Foundry Container Runtime, which already uses Kubernetes, but which is mostly meant to give enterprises a single platform for running their own applications and pre-built containers from third-party vendors).
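
The promise for developers is that nothing changes on the surface: the same cf push workflow, with Kubernetes doing the scheduling underneath. A minimal sketch, assuming a deployment where Eirini schedules apps into an “eirini” namespace (the namespace is configurable):

```sh
# Deploy an app exactly as before; Eirini translates the push into
# Kubernetes workloads instead of Diego containers
cf push my-app

# On the Kubernetes side, the app then shows up as ordinary pods
kubectl get pods -n eirini
```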

“That’s a pretty big milestone,” Cloud Foundry Foundation CTO Chip Childers told me. “The project team now gets to shift to a mode where they’re focused on hardening the solution and making it a bit more production-ready. But at this point, early adopters are also starting to deploy that [new] architecture.”

Childers stressed that while the project was incubated by IBM, which has been a long-time backer of the overall Cloud Foundry project, Google, Pivotal and others are now also contributing and have dedicated full-time engineers working on the project. SUSE and SAP are active in developing Eirini as well.

Eirini started as an incubation project, and while few doubted that this would be a successful project, there was a bit of confusion around how Cloud Foundry would move forward now that it essentially had two container engines for running its core service. At the time, there was even some concern that the project could fork. “I pushed back at the time and said: no, this is the natural exploration process that open-source communities need to go through,” Childers said. “What we’re seeing now is that with Pivotal and Google stepping in, that’s a very clear sign that this is going to be the go-forward architecture for the future of the Cloud Foundry Application Runtime.”

A few months ago, by the way, Kubernetes was still missing a few crucial pieces the Cloud Foundry ecosystem needed to make this move. Childers specifically noted that Windows support — something the project’s enterprise users really need — was still problematic and lacked some important features. In recent releases, though, the Kubernetes team fixed most of these issues and improved its Windows support, rendering those issues moot.

What does all of this mean for Diego? Childers noted that the community isn’t at a point where it will halt development of that tool. At some point, though, it seems likely that the community will decide that it’s time to start the transition period and make the move to Kubernetes official.

It’s worth noting that IBM today announced its own preview of Eirini in its Cloud Foundry Enterprise Environment and that the latest version of SUSE’s Cloud Foundry-based Application Platform includes a similar preview as well.

In addition, the Cloud Foundry Foundation, which is hosting its semi-annual developer conference in Philadelphia this week, also announced that it has certified its first two systems integrators, Accenture and HCL, as part of its recently launched certification program for companies that work in the Cloud Foundry ecosystem and have at least 10 certified developers on their teams.
