Mar 31, 2020

DataStax launches Kubernetes operator for open source Cassandra database

Today, DataStax, the commercial company behind the open source Apache Cassandra project, announced an open source Kubernetes operator developed by the company to run a cloud native version of the database.

When Sam Ramji, chief strategy officer at DataStax, came over from Google last year, the first thing he did was take the pulse of customers, partners and community members around Kubernetes and Cassandra, and he found there was surprisingly limited support.

While some companies had built Kubernetes support themselves, DataStax lacked an operator to call its own. Given that Kubernetes was born inside Google, which has widely embraced the notion of containerization in general, Ramji wanted an operator specifically designed by DataStax to give customers a general starting point with Kubernetes.

“What’s special about the Kube operator that we’re offering to the community as an option — one of many — is that we have done the work to generalize the operator to Cassandra wherever it might be implemented,” Ramji told TechCrunch.

Ramji says that most companies that have created their own Kubernetes operators tend to specialize them for their own particular requirements, which is fine, but as the company building on top of Cassandra, DataStax wanted to come up with a general version that could appeal to a broader range of use cases.

In Kubernetes, the operator is how the DevOps team packages, manages and deploys an application, giving it the instructions it needs to run correctly. DataStax has created this operator specifically to run Cassandra with a broad set of assumptions.
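
For readers who want to see what that looks like in practice, here is a minimal sketch of handing a Cassandra cluster definition to such an operator, written with the official Kubernetes Python client. The resource group, version and field names are assumptions modeled on early releases of DataStax's cass-operator, so treat this as illustrative rather than as the operator's exact schema.

    from kubernetes import client, config

    # Load credentials from the local kubeconfig (assumes kubectl access to a
    # cluster where the operator is already installed).
    config.load_kube_config()

    # A hypothetical CassandraDatacenter resource. Group, version and field names
    # are illustrative assumptions, not a definitive schema; consult the
    # operator's documentation for the real one.
    cassandra_dc = {
        "apiVersion": "cassandra.datastax.com/v1beta1",
        "kind": "CassandraDatacenter",
        "metadata": {"name": "dc1"},
        "spec": {
            "clusterName": "demo-cluster",
            "serverType": "cassandra",
            "serverVersion": "3.11.6",
            "size": 3,  # number of Cassandra nodes the operator should run
        },
    }

    # Submit the desired state; the operator reconciles the actual cluster
    # (pods, services, configuration) to match it.
    client.CustomObjectsApi().create_namespaced_custom_object(
        group="cassandra.datastax.com",
        version="v1beta1",
        namespace="cass-operator",
        plural="cassandradatacenters",
        body=cassandra_dc,
    )

Scaling the cluster up or down then becomes a matter of patching the size field and letting the operator do the node-level work.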

Cassandra is a powerful database because it stays running when many others fall down. As such it is used by companies as varied as Apple, eBay and Netflix to run their key services. This new Kubernetes implementation will enable anyone who wishes to run Cassandra as a containerized application, helping push it into a modern development realm.

The company also announced a free help service for engineers trying to cope with increased usage on their databases due to COVID-19. They are calling the program, “Keep calm and Cassandra on.” The engineers charged with keeping systems like Cassandra running are called Site Reliability Engineers or SREs.

“The new service is completely free SRE-to-SRE support calls. So our SREs are taking calls from Apache Cassandra users anywhere in the world, no matter what version they’re using if they’re trying to figure out how to keep it up to stand up to the increased demand,” Ramji explained.

DataStax was founded in 2010 and has raised over $190 million, according to PitchBook data.

Mar 17, 2020

Spectro Cloud launches with $7.5M investment to help developers build Kubernetes clusters their way

By now we know that Kubernetes is a wildly popular container management platform, but if you want to use it, you pretty much have to choose between having someone manage it for you or building it yourself. Spectro Cloud emerged from stealth today with a $7.5 million investment to give you a third choice that falls somewhere in the middle.

The funding was led by Sierra Ventures with participation from Boldstart Ventures.

Ed Sim, founder at Boldstart, says he liked the team and the tech. “Spectro Cloud is solving a massive pain that every large enterprise is struggling with: how to roll your own Kubernetes service on a managed platform without being beholden to any large vendor,” Sim told TechCrunch.

Spectro co-founder and CEO Tenry Fu says an enterprise should not have to compromise between control and ease of use. “We want to be the first company that brings an easy-to-use managed Kubernetes experience to the enterprise, but also gives them the flexibility to define their own Kubernetes infrastructure stacks at scale,” Fu explained.

Fu says that the stack, in this instance, spans everything from the base operating system to the Kubernetes version to the storage, networking and other layers like security, logging, monitoring, load balancing, or anything else that’s infrastructure-related around Kubernetes.

“Within an organization in the enterprise you can serve the needs of your various groups, down to pretty granular level with respect to what’s in your infrastructure stack, and then you don’t have to worry about lifecycle management,” he explained. That’s because Spectro Cloud handles that for you, while still giving you that control.

That gives enterprise developers greater deployment flexibility and the ability to move between cloud infrastructure providers more easily, something that is top of mind today as companies don’t want to be locked into a single vendor.

“There’s an infrastructure control continuum that forces enterprises into trade-offs against these needs. At one extreme, the managed offerings offer a kind of nirvana around ease of use, but it’s at the expense of control over things like the cloud that you’re on or when you adopt new ecosystem options like updated versions of Kubernetes.”

Fu and his co-founders have a deep background in this, having previously been part of CliQr, a company that helped customers manage applications across hybrid cloud environments. They sold that company to Cisco in 2016 and began developing Spectro Cloud last spring.

It’s early days, but the company has been working with 16 beta customers.

Mar 10, 2020

Hitachi Vantara acquires what’s left of Containership

Hitachi Vantara, the wholly owned subsidiary of Hitachi that focuses on building hardware and software to help companies manage their data, today announced that it has acquired the assets of Containership, one of the earlier players in the container ecosystem, which shut down its operations last October.

Containership, which launched as part of our 2015 Disrupt New York Startup Battlefield, started as a service that helped businesses move their containerized workloads between clouds, but like so many similar startups, it then moved on to focus solely on Kubernetes and helping enterprises manage their Kubernetes infrastructure. Before it called it quits, the company’s specialty was managing multi-cloud Kubernetes deployments. The company wasn’t able to monetize its Kubernetes efforts quickly enough, though, as it said at the time in a blog post that it has since removed from its website.

“Containership enables customers to easily deploy and manage Kubernetes clusters and containerized applications in public cloud, private cloud, and on-premise environments,” writes Bobby Soni, the COO for digital infrastructure at Hitachi Vantara. “The software addresses critical cloud native application issues facing customers working with Kubernetes such as persistent storage support, centralized authentication, access control, audit logging, continuous deployment, workload portability, cost analysis, autoscaling, upgrades, and more.”

Hitachi Vantara tells me that it is not acquiring any of Containership’s customer contracts or employees and has no plans to keep the Containership brand. “Our primary focus is to develop new offerings based on the Containership IP. We do hope to engage with prior customers once our new offerings become commercially available,” a company spokesperson said.

The companies did not disclose the price of the acquisition. Pittsburgh-based Containership only raised about $2.6 million since it was founded in 2014, though, and things had become pretty quiet around the company in the last year or two before its early demise. Chances are then that the price wasn’t all that high. Investors include Birchmere Ventures, Draper Triangle and Innovation Works.

Hitachi Vantara says it will continue to work with the Kubernetes community. Containership was a member of the Cloud Native Computing Foundation. Hitachi never was, but after this acquisition, that may change.

Mar 5, 2020

Google Cloud goes after the telco business with Anthos for Telecom and its Global Mobile Edge Cloud

Google Cloud today announced a new solution for its telecom customers: Anthos for Telecom. You can think of this as a specialized edition of Google’s container-based Anthos multi-cloud platform for both modernizing existing applications and building new ones on top of Kubernetes. The announcement, which was originally slated for MWC, doesn’t come as a major surprise, given Google Cloud’s focus on offering very targeted services to its enterprise customers in a number of different verticals.

Given the rise of edge computing and, in the telco business, 5G, Anthos for Telecom makes for an interesting play in what could potentially be a very lucrative market for Google. This is also the market where the open-source OpenStack project has remained the strongest.

What’s maybe even more important here is that Google is also launching a new service called the Global Mobile Edge Cloud (GMEC). With this, telco companies will be able to run their applications not just in Google’s 20+ data center regions, but also in Google’s more than 130 edge locations around the world.

“We’re basically giving you compute power on our edge, where previously it was only for Google use, through the Anthos platform,” explained Eyal Manor, the VP of Engineering for Anthos. “The edge is very powerful and I think we will now see substantially more innovation happening for applications that are latency-sensitive. We’ve been investing in edge compute and edge networking for a long time in Google over the years for the internal services. And we think it’s a fairly unique capability now to open it up for third-party customers.”

For now, Google is only making this available to its telecom partners, with AT&T being the launch customer, but over time, Manor said, it’ll likely open its edge cloud to other verticals, as well. Google also expects to be able to announce other partners in the near future.

As for Anthos for Telecom, Manor notes that this is very much what its customers are asking for, especially now that so many of their new applications are containerized.

“[Anthos] brings the best of cloud-as-a-service to our customers, wherever they are, in multiple environments and provide the lock-in free environment with the latest cloud tools,” explained Manor. “The goal is really to empower developers and operators to move faster in a consistent way, so regardless of where you are, you don’t have to train your technical staff. It works on-premise, it works on GCP and on other clouds. And that’s what we hear from customers — customers really like choice.”

In the telecom industry, those customers also want to get higher up the stack and get consistency between their data centers and the edge — and all of that, of course, is meant to bring down the cost of running these networks and services.

“We don’t want to manage the [technology] we previously invested in for many years because the upgrades were terribly expensive and slow for that. I hear that consistently. And please Google, make this seem like a service in the cloud for us,” Manor said.

For developers, Anthos also promises to provide the same development experience, no matter where the application is deployed — and Google now has an established network of partners that provides solutions to both developers and operators around Anthos. To that end, Google is also launching new partnerships with the Amdocs customer experience platform and Netcracker today.

“We’re excited to unveil a new strategy today to help telecommunications companies innovate and accelerate their digital transformation through Google Cloud,” said Thomas Kurian, CEO of Google Cloud, in today’s announcement. “By collaborating closely with leading telecoms, partners and customers, we can transform the industry together and create better overall experiences for our users globally.”

Mar 3, 2020

Datastax acquires The Last Pickle

Data management company Datastax, one of the largest contributors to the Apache Cassandra project, today announced that it has acquired The Last Pickle (and no, I don’t know what’s up with that name either), a New Zealand-based Cassandra consulting and services firm that’s behind a number of popular open-source tools for the distributed NoSQL database.

As Datastax Chief Strategy Officer Sam Ramji, who you may remember from his recent tenure at Apigee, the Cloud Foundry Foundation, Google and Autodesk, told me, The Last Pickle is one of the premier Apache Cassandra consulting and services companies. The team there has been building Cassandra-based open source solutions for the likes of Spotify, T-Mobile and AT&T since it was founded back in 2012. And while The Last Pickle is based in New Zealand, the company has engineers all over the world who do the heavy lifting and help these companies successfully implement the Cassandra database technology.

It’s worth mentioning that Last Pickle CEO Aaron Morton first discovered Cassandra when he worked for WETA Digital on the special effects for Avatar, where the team used Cassandra to allow the VFX artists to store their data.

“There’s two parts to what they do,” Ramji explained. “One is the very visible consulting, which has led them to become world experts in the operation of Cassandra. So as we automate Cassandra and as we improve the operability of the project with enterprises, their embodied wisdom about how to operate and scale Apache Cassandra is as good as it gets — the best in the world.” And The Last Pickle’s experience in building systems with tens of thousands of nodes — and the challenges that its customers face — is something Datastax can then offer to its customers as well.

And Datastax, of course, also plans to productize The Last Pickle’s open-source tools like the automated repair tool Reaper and the Medusa backup and restore system.

As both Ramji and Datastax VP of Engineering Josh McKenzie stressed, Cassandra has seen a lot of commercial development in recent years, with the likes of AWS now offering a managed Cassandra service, for example, but there wasn’t all that much hype around the project anymore. They argue that’s a good thing, though. Now that it is over ten years old, Cassandra has been battle-hardened. For the last ten years, Ramji argues, the industry tried to figure out what the de facto standard for scale-out computing should be. By 2019, it became clear that Kubernetes was the answer to that.

“This next decade is about what is the de facto standard for scale-out data? We think that’s got certain affordances, certain structural needs and we think that the decades that Cassandra has spent getting hardened puts it in a position to be data for that wave.”

McKenzie also noted that Cassandra’s built-in features, like support for multiple data centers and geo-replication, rolling updates and live scaling, as well as wide support across programming languages, give it a number of advantages over competing databases.

“It’s easy to forget how much Cassandra gives you for free just based on its architecture,” he said. “Losing the power in an entire datacenter, upgrading the version of the database, hardware failing every day? No problem. The cluster is 100 percent always still up and available. The tooling and expertise of The Last Pickle really help bring all this distributed and resilient power into the hands of the masses.”

The two companies did not disclose the price of the acquisition.

Jan 16, 2020

Epsagon scores $16M Series A to monitor modern development environments

Epsagon, an Israeli startup that wants to help monitor modern development environments like serverless and containers, announced a $16 million Series A today.

U.S. Venture Partners (USVP), a new investor, led the round. Previous investors Lightspeed Venture Partners and StageOne Ventures also participated. Today’s investment brings the total raised to $20 million, according to the company.

CEO and co-founder Nitzan Shapira says that the company has been expanding its product offerings in the last year to cover not just its serverless roots, but also provide deeper insights into a number of forms of modern development.

“So we spoke around May when we launched our platform for microservices in the cloud products, and that includes containers, serverless and really any kind of workload to build microservices apps. Since then we have had a few significant announcements,” Shapira told TechCrunch.

For starters, the company announced support for tracing and metrics for Kubernetes workloads, including native Kubernetes, along with managed Kubernetes services like AWS EKS and Google GKE. “A few months ago, we announced our Kubernetes integration. So, if you’re running any Kubernetes workload, you can integrate with Epsagon in one click, and from there you get all the metrics out of the box, then you can set up a tracing in a matter of minutes. So that opens up a very big number of use cases for us,” he said.

The company also announced support for AWS AppSync, a no-code programming tool on the Amazon cloud platform. “We are the only provider today to introduce tracing for AppSync and that’s [an area] where people really struggle with the monitoring and troubleshooting of it,” he said.

The company hopes to use the money from today’s investment to expand the product offering further with support for Microsoft Azure and Google Cloud Platform in the coming year. He also wants to expand the automation of some tasks that have to be manually configured today.

“Our intention is to make the product as automated as possible, so the user will get an amazing experience in a matter of minutes, including advanced monitoring, identifying different problems and troubleshooting,” he said.

Shapira says the company has around 25 employees today, and plans to double headcount in the next year.

Dec 30, 2019

VMware completes $2.7 billion Pivotal acquisition

VMware is closing the year with a significant new component in its arsenal. Today it announced it has closed the $2.7 billion Pivotal acquisition it originally announced in August.

The acquisition gives VMware another component in its march to transform from a pure virtual machine company into a cloud native vendor that can manage infrastructure wherever it lives. It fits alongside other recent acquisitions like Heptio and Bitnami, two deals that also closed this year.

VMware hopes this all fits neatly into VMware Tanzu, which is designed to bring Kubernetes containers and VMware virtual machines together in a single management platform.

“VMware Tanzu is built upon our recognized infrastructure products and further expanded with the technologies that Pivotal, Heptio, Bitnami and many other VMware teams bring to this new portfolio of products and services,” Ray O’Farrell, executive vice president and general manager of the Modern Application Platforms Business Unit at VMware, wrote in a blog post announcing the deal had closed.

Craig McLuckie, who came over in the Heptio deal and is now VP of R&D at VMware, told TechCrunch in November at KubeCon that while the deal hadn’t closed at that point, he saw a future where Pivotal could help at a professional services level, as well.

“In the future when Pivotal is a part of this story, they won’t be just delivering technology, but also deep expertise to support application transformation initiatives,” he said.

Up until the closing, the company had been publicly traded on the New York Stock Exchange, but as of today, Pivotal becomes a wholly owned subsidiary of VMware. It’s important to note that this transaction didn’t happen in a vacuum, where two random companies came together.

In fact, VMware and Pivotal were part of the consortium of companies that Dell purchased when it acquired EMC in 2015 for $67 billion. While both were part of EMC and then Dell, each one operated separately and independently. At the time of the sale to Dell, Pivotal was considered a key piece, one that could stand strongly on its own.

Pivotal and VMware had another strong connection. Pivotal was originally created by a combination of EMC, VMware and GE (which owned a 10% stake for a time) to give these large organizations a separate company to undertake transformation initiatives.

It raised a hefty $1.7 billion before going public in 2018. A big chunk of that came in one heady day in 2016 when it announced $650 million in funding led by Ford’s $180 million investment.

The future looked bright at that point, but life as a public company was rough, and after a catastrophic June earnings report, things began to fall apart. The stock dropped 42% in one day. As I wrote in an analysis of the deal:

The stock price plunged from a high of $21.44 on May 30th to a low of $8.30 on August 14th. The company’s market cap plunged in that same time period falling from $5.828 billion on May 30th to $2.257 billion on August 14th. That’s when VMware admitted it was thinking about buying the struggling company.

VMware came to the rescue and offered $15.00 a share, a substantial premium above that August low point. As of today, it’s part of VMware.

Dec 3, 2019

AWS launches discounted spot capacity for its Fargate container platform

AWS today quietly brought spot capacity to Fargate, its serverless compute engine for containers that supports both the company’s Elastic Container Service and, now, its Elastic Kubernetes service.

As with spot instances on the EC2 compute platform, Fargate Spot pricing is significantly cheaper, both for storage and compute, than regular Fargate pricing. In return, though, you have to be able to accept the fact that your instance may get terminated when AWS needs additional capacity. While that means Fargate Spot may not be perfect for every workload, there are plenty of applications that can easily handle an interruption.

“Fargate now has on-demand, savings plan, spot,” AWS VP of Compute Services Deepak Singh told me. “If you think about Fargate as a compute layer for, as we call it, serverless compute for containers, you now have the pricing worked out and you now have both orchestrators on top of it.”

He also noted that containers already drive a significant percentage of spot usage on AWS in general, so adding this functionality to Fargate makes a lot of sense (and may save users a few dollars here and there). Pricing, of course, is the major draw here, and an hour of CPU time on Fargate Spot will only cost $0.01245364 (yes, AWS is pretty precise there) compared to $0.04048 for the on-demand price, a discount of roughly 69 percent.

With this, AWS is also launching another important new feature: capacity providers. The idea here is to automate capacity provisioning for Fargate and EC2, both of which now offer on-demand and spot instances, after all. You simply write a config file that, for example, says you want to run 70% of your capacity on EC2 and the rest on spot instances. The scheduler will then keep that capacity on spot as instances come and go, and if there are no spot instances available, it will move it to on-demand instances and back to spot once instances are available again.
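
As a rough sketch of what that strategy can look like in practice, here is a hedged example using boto3 with the Fargate-managed capacity providers (FARGATE and FARGATE_SPOT); the cluster, service, task definition and subnet names are placeholders, and the cluster is assumed to already have both capacity providers enabled.

    import boto3

    ecs = boto3.client("ecs")

    # Hypothetical service; the names, task definition and subnet are placeholders.
    # The weights express a split along the lines of the example above: roughly
    # 70% of tasks on regular Fargate and the rest on Fargate Spot.
    ecs.create_service(
        cluster="demo-cluster",
        serviceName="web",
        taskDefinition="web-task:1",
        desiredCount=10,
        capacityProviderStrategy=[
            {"capacityProvider": "FARGATE", "weight": 7, "base": 0},
            {"capacityProvider": "FARGATE_SPOT", "weight": 3},
        ],
        networkConfiguration={
            "awsvpcConfiguration": {
                "subnets": ["subnet-0123456789abcdef0"],  # placeholder subnet ID
                "assignPublicIp": "ENABLED",
            }
        },
    )

The same strategy list can also reference named EC2 capacity providers once they are associated with the cluster.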

In the future, you will also be able to mix and match EC2 and Fargate. “You can say, I want some of my services running on EC2 on demand, some running on Fargate on demand, and the rest running on Fargate Spot,” Singh explained. “And the scheduler manages it for you. You squint hard, capacity is capacity. We can attach other capacity providers.” Outposts, AWS’ fully managed service for running AWS services in your data center, could be a capacity provider, for example.

These new features and prices will be officially announced in Thursday’s re:Invent keynote, but the documentation and pricing are already live today.

Nov 22, 2019

Making sense of a multi-cloud, hybrid world at KubeCon

More than 12,000 attendees gathered this week in San Diego to discuss all things containers, Kubernetes and cloud-native at KubeCon.

Kubernetes, the container orchestration tool, turned five this year, and the technology appears to be reaching a maturity phase where it accelerates beyond early adopters to reach a more mainstream group of larger business users.

That’s not to say that there isn’t plenty of work to be done, or that most enterprise companies have completely bought in, but it’s clearly reached a point where containerization is on the table. If you think about it, the whole cloud-native ethos makes sense for the current state of computing and how large companies tend to operate.

If this week’s conference showed us anything, it’s an acknowledgment that it’s a multi-cloud, hybrid world. That means most companies are working with multiple public cloud vendors, while managing a hybrid environment that includes those vendors — as well as existing legacy tools that are probably still on-premises — and they want a single way to manage all of this.

The promise of Kubernetes and cloud-native technologies, in general, is that it gives these companies a way to thread this particular needle, or at least that’s the theory.

Kubernetes to the rescue

If you were to look at the Kubernetes hype cycle, we are probably right about at the peak where many think Kubernetes can solve every computing problem they might have. That’s probably asking too much, but cloud-native approaches have a lot of promise.

Craig McLuckie, VP of R&D for cloud-native apps at VMware, was one of the original developers of Kubernetes at Google in 2014. VMware thought enough of the importance of cloud-native technologies that it bought his former company, Heptio, for $550 million last year.

As we head into this phase of pushing Kubernetes and related tech into larger companies, McLuckie acknowledges it creates a set of new challenges. “We are at this crossing the chasm moment where you look at the way the world is — and you look at the opportunity of what the world might become — and a big part of what motivated me to join VMware is that it’s successfully proven its ability to help enterprise organizations navigate their way through these disruptive changes,” McLuckie told TechCrunch.

He says that Kubernetes does actually solve this fundamental management problem companies face in this multi-cloud, hybrid world. “At the end of the day, Kubernetes is an abstraction. It’s just a way of organizing your infrastructure and making it accessible to the people that need to consume it.

“And I think it’s a fundamentally better abstraction than we have access to today. It has some very nice properties. It is pretty consistent in every environment that you might want to operate, so it really makes your on-prem software feel like it’s operating in the public cloud,” he explained.

Simplifying a complex world

One of the reasons Kubernetes and cloud-native technologies are gaining in popularity is because the technology allows companies to think about hardware differently. There is a big difference between virtual machines and containers, says Joe Fernandes, VP of product for Red Hat cloud platform.

“Sometimes people conflate containers as another form of virtualization, but with virtualization, you’re virtualizing hardware, and the virtual machines that you’re creating are like an actual machine with its own operating system. With containers, you’re virtualizing the process,” he said.

He said this means the container isn’t coupled to the hardware. The only thing it needs to worry about is being able to run Linux, and Linux runs everywhere, which explains why containers make workloads easier to manage across different types of infrastructure. “It’s more efficient, more affordable, and ultimately, cloud-native allows folks to drive more automation,” he said.

Bringing it into the enterprise

It’s one thing to convince early adopters to change the way they work, but it’s another as this technology enters the mainstream. Gabe Monroy, partner program manager at Microsoft, says that to carry this technology to the next level, we have to change the way we talk about it.

Nov 22, 2019

Cloud Foundry’s Kubernetes bet with Project Eirini hits 1.0

Cloud Foundry, the open-source platform-as-a-service that, with the help of lots of commercial backers, is currently in use by the majority of Fortune 500 companies, launched well before containers, and especially the Kubernetes orchestrator, were a thing. Instead, the project built its own container service, but the rise of Kubernetes obviously created a lot of interest in using it for managing Cloud Foundry’s container implementation. To do so, the organization launched Project Eirini last year; today, it’s officially launching version 1.0, which means it’s ready for production usage.

Eirini/Kubernetes doesn’t replace the old architecture. Instead, for the foreseeable future, the two will operate side by side, with operators deciding which one to use.

The team working on this project shipped a first technical preview earlier this year, and a number of commercial vendors have since started to build their own commercial products around it, shipping it as a beta product.

“It’s one of the things where I think Cloud Foundry sometimes comes at things from a different angle,” IBM’s Julz Friedman told me. “Because it’s not about having a piece of technology that other people can build on in order to build a platform. We’re shipping the end thing that people use. So 1.0 for us — we have to have a thing that ticks all those boxes.”

He also noted that Diego, Cloud Foundry’s existing container management system, had been battle-tested over the years and had always been designed to be scalable to run massive multi-tenant clusters.

“If you look at people doing similar things with Kubernetes at the moment,” said Friedman, “they tend to run lots of Kubernetes clusters to scale to that kind of level. And Kubernetes, although it’s going to get there, right now, there are challenges around multi-tenancy, and super big multi-tenant scale.”

But even without being able to get to this massive scale, Friedman argues that you can already get a lot of value even out of a small Kubernetes cluster. Most companies don’t need to run enormous clusters, after all, and they still get the value of Cloud Foundry with the power of Kubernetes underneath it (all without having to write YAML files for their applications).

As Cloud Foundry CTO Chip Childers also noted, once the transition to Eirini gets to the point where the Cloud Foundry community can start applying less effort to its old container engine, those resources can go back to fulfilling the project’s overall mission, which is about providing the best possible developer experience for enterprise developers.

“We’re in this phase in the industry where Kubernetes is the new infrastructure and [Cloud Foundry] has a very battle-tested developer experience around it,” said Childers. “But there’s also really interesting ideas that are out there that are coming from our community, so one of the things that I’ve suggested to the community writ large is, let’s use this time as an opportunity to not just evolve what we have, but also make sure that we’re paying attention to new workflows, new models, and figure out what’s going to provide benefit to that enterprise developer that we’re so focused on — and bring those types of capabilities in.”

Those new capabilities may be around technologies like functions and serverless, for example, though Friedman at least is more focused on Eirini 1.1 for the time being, which will include closing the gaps with what’s currently available in Cloud Foundry’s old scheduler, like Docker image support and support for the Cloud Foundry v3 API.
