Pure Storage acquires data service platform Portworx for $370M

Pure Storage, the public enterprise data storage company, today announced that it has acquired Portworx, a well-funded startup that provides a cloud-native storage and data-management platform based on Kubernetes, for $370 million in cash. This marks Pure Storage’s largest acquisition to date and shows how important this market for multicloud data services has become.

Current Portworx enterprise customers include the likes of Carrefour, Comcast, GE Digital, Kroger, Lufthansa, and T-Mobile. At the core of the service is its ability to help users migrate their data and create backups. It creates a storage layer that allows developers to then access that data, no matter where it resides.

Pure Storage will use Portworx’s technology to expand its hybrid and multicloud services and provide Kubernetes-based data services across clouds.

Image Credits: Portworx

“I’m tremendously proud of what we’ve built at Portworx: An unparalleled data services platform for customers running mission-critical applications in hybrid and multicloud environments,” said Portworx CEO Murli Thirumale. “The traction and growth we see in our business daily shows that containers and Kubernetes are fundamental to the next-generation application architecture and thus competitiveness. We are excited for the accelerated growth and customer impact we will be able to achieve as a part of Pure.”

When the company raised its Series C round last year, Thirumale told me that Portworx had expanded its customer base by over 100% and increased its bookings by 376% from 2018 to 2019.

“As forward-thinking enterprises adopt cloud-native strategies to advance their business, we are thrilled to have the Portworx team and their groundbreaking technology joining us at Pure to expand our success in delivering multicloud data services for Kubernetes,” said Charles Giancarlo, chairman and CEO of Pure Storage. “This acquisition marks a significant milestone in expanding our Modern Data Experience to cover traditional and cloud native applications alike.”


Webinar September 29: Learn About Percona Kubernetes Operator for Percona Server for MongoDB

In this webinar, we will explore the Percona Kubernetes Operator for Percona Server for MongoDB (PSMDB).

Kubernetes is a widely deployed orchestration platform for container management. More and more companies are moving toward containerized platforms to increase availability, lower operational costs, and utilize existing infrastructure. Percona has recognized this, and many companies have expressed their desire to run Percona software on the Kubernetes platform.

In this webinar, we will cover: how to deploy a highly available Percona Server for MongoDB replica set in a Kubernetes cluster; how to modify the replica set configuration; how to scale the database environment; how to take backups; how self-healing works; and how to monitor the database environment.
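As a preview of the deployment step, a minimal custom resource for the operator might look like the sketch below. This is an illustrative assumption, not the exact manifest from the webinar: the cluster name and sizes are placeholders, and the cr.yaml bundled with the operator is the authoritative example.

```yaml
# Illustrative sketch only: a minimal PerconaServerMongoDB custom resource
# describing a three-member replica set. Names and values are placeholders.
apiVersion: psmdb.percona.com/v1
kind: PerconaServerMongoDB
metadata:
  name: my-cluster
spec:
  replsets:
    - name: rs0
      size: 3   # number of replica set members; the operator reconciles to this count
```

Applying a file like this with kubectl (after deploying the operator itself) is the kind of workflow the demo portion of the webinar walks through.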

Please join Stephen Thorne and Michal Nosek on Tuesday, September 29, 2020, at 11:00 am EDT for their webinar “Learn About Percona Kubernetes Operator for Percona Server for MongoDB”.

Register for Webinar

If you can’t attend, sign up anyway and we’ll send you the slides and recording afterward.


StackRox nabs $26.5M for a platform that secures containers in Kubernetes

Containers have become a ubiquitous cornerstone in how companies manage their data, a trend that has only accelerated in the last eight months with the larger shift to cloud services and more frequent remote working due to the coronavirus pandemic. Alongside that, startups building services to enable containers to be used better are also getting a boost.

StackRox, which develops Kubernetes-native security solutions, says that its business grew by 240% in the first half of this year, and on the back of that, it is announcing today that it has raised $26.5 million to expand its business into international markets and continue investing in its R&D.

The funding, which appears to be a Series C, has an impressive list of backers. It is being led by Menlo Ventures, with Highland Capital Partners, Hewlett-Packard Enterprise, Sequoia Capital and Redpoint Ventures also participating. Sequoia and Redpoint are previous investors, and the company has raised around $60 million to date.

HPE is a strategic backer in this round:

“At HPE, we are working with our customers to help them accelerate their digital transformations,” said Paul Glaser, VP, Hewlett Packard Enterprise, and head of Pathfinder. “Security is a critical priority as they look to modernize their applications with containers. We’re excited to invest in StackRox and see it as a great fit with our new software HPE Ezmeral to help HPE customers secure their Kubernetes environments across their full application life cycle. By directly integrating with Kubernetes, StackRox enables a level of simplicity and unification for DevOps and Security teams to apply the needed controls effectively.”

Kamal Shah, the CEO, said that StackRox is not disclosing its valuation, but he confirmed it has definitely gone up. For some context, according to PitchBook data, the company was valued at $145 million in its last funding round, a Series B in 2018. Its customers today include the likes of Priceline, Brex, Reddit, Zendesk and Splunk, as well as government and other enterprise customers, in a container security market that analysts project will be worth some $2.2 billion by 2024, up from $568 million last year.

StackRox got its start in 2014, when containers were starting to pick up momentum in the market. At the time, its focus was a little more fragmented, not unlike the container market itself — it provided solutions that could be used with Docker containers as well as others. Over time, Shah said, the company chose to hone its focus on Kubernetes, which was originally developed by Google and open-sourced, and is now essentially the de facto standard in containerization.

“We made a bet on Kubernetes at a time when there were multiple orchestrators, including Mesosphere, Docker and others,” he said. “Over the last two years Kubernetes has won the war and become the default choice, the Linux of the cloud and the biggest open-source cloud application. We are all Kubernetes all the time because what we see in the market is that a majority of our customers are moving to it. It has over 35,000 contributors to the open-source project alone, it’s not just Red Hat (IBM) and Google.” Research from CNCF estimates that nearly 80% of organizations that it surveyed are running Kubernetes in production.

That is not all good news, however, with the interest underscoring a bigger need for Kubernetes-focused security solutions for enterprises that opt to use it.

Shah says that some of the typical pitfalls in container architecture arise when they are misconfigured, leading to breaches; as well as around how applications are monitored; how developers use open-source libraries; and how companies implement regulatory compliance. Other security vulnerabilities that have been highlighted by others include the use of insecure container images; how containers interact with each other; the use of containers that have been infected with rogue processes; and having containers not isolated properly from their hosts.

But, Shah noted, “Containers in Kubernetes are inherently more secure if you can deploy correctly.” To that end, StackRox’s solutions attempt to help: the company has built a multi-purpose toolkit that provides developers and security engineers with risk visibility, threat detection, compliance tools, segmentation tools and more. “Kubernetes was built for scale and flexibility, but it has lots of controls, so if you misconfigure it, it can lead to breaches. So you need a security solution to make sure you configure it all correctly,” said Shah.

He added that there has been a definite shift over the years from companies considering security solutions as an optional element into one that forms part of the consideration at the very core of the IT budget — another reason why StackRox and competitors like TwistLock (acquired by Palo Alto Networks) and Aqua Security have all seen their businesses really grow.

“We’ve seen the innovation companies are enabling by building applications in containers and Kubernetes. The need to protect those applications, at the scale and pace of DevOps, is crucial to realizing the business benefits of that innovation,” said Venky Ganesan, partner, Menlo Ventures, in a statement. “While lots of companies have focused on securing the container, only StackRox saw the need to focus on Kubernetes as the control plane for security as well as infrastructure. We’re thrilled to help fuel the company’s growth as it dominates this dynamic market.”

“Kubernetes represents one of the most important paradigm shifts in the world of enterprise software in years,” said Corey Mulloy, general partner, Highland Capital Partners, in a statement. “StackRox sits at the forefront of Kubernetes security, and as enterprises continue their shift to the cloud, Kubernetes is the ubiquitous platform that Linux was for the Internet era. In enabling Kubernetes-native security, StackRox has become the security platform of choice for these cloud-native app dev environments.”


Webinar September 22: The Path to Open Source DBaaS with Kubernetes

Join Peter Zaitsev, Percona CEO, as he discusses DBaaS and Kubernetes.

DBaaS is the fastest-growing way to deploy databases. It is fast and convenient and greatly reduces toil, yet it is typically built on proprietary software and tightly coupled to a cloud vendor. We believe Kubernetes finally allows us to build a fully open source DBaaS solution that can be deployed anywhere Kubernetes runs – in the public cloud or in your private data center.

In this presentation, we will describe the most important user requirements and typical problems you would encounter when building a DBaaS solution, and explain how you can solve them using the Kubernetes Operator framework.

Please join Peter Zaitsev on Tuesday, September 22, 2020, at 11:30 am EDT for his webinar “The Path to Open Source DBaaS with Kubernetes”.

Register for Webinar

If you can’t attend, sign up anyway and we’ll send you the slides and recording afterward.


Percona Is Now ‘Partner Ready’ on the VMware Tanzu Platform

Following our announcement of Percona joining the VMware Technology Alliance Partner Program, we are pleased to announce the addition of Percona Kubernetes Operator for Percona Server for MongoDB to the VMware Tanzu Platform after attaining Partner Ready validation.

The VMware Partner Ready program allows VMware partners to test and validate their software solutions to ensure they interoperate with specific VMware platforms. VMware Tanzu is a portfolio of products that enable enterprises to modernize their applications and infrastructure to continuously deliver better software to production.

VMware Tanzu Kubernetes Grid (TKG) provides a consistent, upstream-compatible implementation of Kubernetes, tested, signed, and supported by VMware. This enables customers to access enterprise-ready Kubernetes solutions to simplify their operations across multi-cloud infrastructure.

By completing the Partner Ready process and achieving Partner Ready designation, Percona has confirmed Percona Kubernetes Operator for Percona Server for MongoDB interoperability with VMware technologies.

Percona is responsible for managing any customer support requests from this combined solution, so users can be assured that they are being supported by the leading open source software experts.

Using Percona Kubernetes Operator for Percona Server for MongoDB with TKG

Organizations using Percona Kubernetes Operator for Percona Server for MongoDB with TKG can create Percona Server for MongoDB environments that are highly-available, self-healing, and autonomously deployed according to Percona Best Practices for MongoDB and Kubernetes.

Businesses can change the size of their replica set by altering the size key in the Custom Resource options configuration, and Percona Monitoring and Management can be easily deployed to monitor the replica set.

Users can also automate backups, perform an on-demand backup at any time, and run simple restores. They can also set a node as an arbiter, which participates in elections for a new primary node but does not store any data. Automated node recovery uses the operator’s self-healing capability to recover automatically from the failure of a single Percona Server for MongoDB node.
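For illustration, the scaling and arbiter options described above come down to a few keys in the custom resource. The values below are hypothetical, while the field names follow the operator’s CR schema:

```yaml
# Sketch: resize the replica set and enable an arbiter via the custom resource.
# Values are illustrative, not a tested production configuration.
spec:
  replsets:
    - name: rs0
      size: 5            # alter the size key to scale the replica set
      arbiter:
        enabled: true    # arbiter votes in primary elections but stores no data
        size: 1
```

Percona Monitoring and Management can then be pointed at the resulting replica set as described above.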

We are delighted to be working with VMware, to bring our leading open source software to a wider audience. We look forward to adding additional Percona software solutions to VMware Tanzu in the near future.


Suse contributes EiriniX to the Cloud Foundry Foundation

Suse today announced that it has contributed EiriniX, a framework for building extensions for Eirini, a technology that brings support for Kubernetes-based container orchestration to the Cloud Foundry platform-as-a-service project.

About a year ago, Suse also contributed the KubeCF project to the foundation, which itself allows the Cloud Foundry Application Runtime — the core of Cloud Foundry — to run on top of Kubernetes.

Image Credits: Suse

“At Suse we are developing upstream first as much as possible,” said Thomas Di Giacomo, president of Engineering and Innovation at Suse. “So, after experiencing the value of contributing KubeCF to the Foundation earlier this year, we decided it would be beneficial to both the Cloud Foundry community and the EiriniX team to do it again. We have seen an uptick in contributions to and usage of KubeCF since it became a Foundation project, indicating that more organizations are investing developer time into the upstream. Contributing EiriniX to the Foundation is a surefire way to get the broader community involved.”

Suse first demonstrated EiriniX a year ago. The tool implements features like the ability to SSH into a container to debug it, or to use alternative logging solutions for KubeCF.

“There is significant value in contributing this project to the Foundation, as it ensures that other project teams looking for a similar solution to creating Extensions around Eirini will not reinvent the wheel,” said Chip Childers, executive director, Cloud Foundry Foundation. “Now that EiriniX exists within the Foundation, developers can take full advantage of its library of add-ons to Eirini and modify core features of Cloud Foundry. I’m excited to see all of the use cases for this project that have not yet been invented.” 


Mirantis acquires Lens, an IDE for Kubernetes

Mirantis, the company that recently bought Docker’s enterprise business, today announced that it has acquired Lens, a desktop application that the team describes as a Kubernetes-integrated development environment. Mirantis previously acquired the team behind the Finnish startup Kontena, the company that originally developed Lens.

Lens itself was most recently owned by Lakend Labs, though, which describes itself as “a collective of cloud native compute geeks and technologists” that is “committed to preserving and making available the open-source software and products of Kontena.” Lakend open-sourced Lens a few months ago.

Image Credits: Mirantis

“The mission of Mirantis is very simple: We want to be — for the enterprise — the fastest way to [build] modern apps at scale,” Mirantis CEO Adrian Ionel told me. “We believe that enterprises are constantly undergoing this cycle of modernizing the way they build applications from one wave to the next — and we want to provide products to the enterprise that help them make that happen.”

Right now, that means a focus on helping enterprises build cloud-native applications at scale and, almost by default, that means providing these companies with all kinds of container infrastructure services.

“But there is another piece of the story that’s always been going through our minds, which is, how do we become more developer-centric and developer-focused, because, as we’ve all seen in the past 10 years, developers have become more and more in charge of what services and infrastructure they’re actually using,” Ionel explained. And that’s where the Kontena and Lens acquisitions fit in. Managing Kubernetes clusters, after all, isn’t trivial — yet now developers are often tasked with managing and monitoring how their applications interact with their company’s infrastructure.

“Lens makes it dramatically easier for developers to work with Kubernetes, to build and deploy their applications on Kubernetes, and it’s just a huge obstacle-remover for people who are turned off by the complexity of Kubernetes to get more value,” he added.

“I’m very excited to see that we found a common vision with Adrian for how to incorporate Lens and how to make life for developers more enjoyable in this cloud-native technology landscape,” Miska Kaipiainen, the former CEO of Kontena and now Mirantis’ director of Engineering, told me.

He describes Lens as an IDE for Kubernetes. While you could obviously replicate Lens’ functionality with existing tools, Kaipiainen argues that it would take 20 different tools to do this. “One of them could be for monitoring, another could be for logs. A third one is for command-line configuration, and so forth and so forth,” he said. “What we have been trying to do with Lens is that we are bringing all these technologies [together] and provide one single, unified, easy to use interface for developers, so they can keep working on their workloads and on their clusters, without ever losing focus and the context of what they are working on.”

Among other things, Lens includes a context-aware terminal, multi-cluster management capabilities that work across clouds and support for the open-source Prometheus monitoring service.

For Mirantis, Lens is a very strategic investment and the company will continue to develop the service. Indeed, Ionel said the Lens team now basically has unlimited resources.

Looking ahead, Kaipiainen said the team is looking at adding extensions to Lens through an API within the next couple of months. “Through this extension API, we are actually able to collaborate and work more closely with other technology vendors within the cloud technology landscape so they can start plugging directly into the Lens UI and visualize the data coming from their components, so that will make it very powerful.”

Ionel also added that the company is working on adding more features for larger software teams to Lens, which is currently a single-user product. A lot of users are already using Lens in the context of very large development teams, after all.

While the core Lens tools will remain free and open source, Mirantis will likely charge for some new features that require a centralized service for managing them. What exactly that will look like remains to be seen, though.

If you want to give Lens a try, you can download the Windows, macOS and Linux binaries here.


Smart Update Strategy in Percona Kubernetes Operator for Percona XtraDB Cluster

In Percona Kubernetes Operator for Percona XtraDB Cluster (PXC) versions prior to 1.5.0, there were two methods for upgrading PXC clusters, both of which use built-in StatefulSet update strategies: manual (the OnDelete update strategy) and semi-automatic (the RollingUpdate strategy). Since the Kubernetes operator is about automating database management, and there are use cases for always keeping the database up to date, a new smart update strategy was implemented.

Smart Update Strategy

The smart update strategy can be used to enable automatic context-aware upgrades of PXC clusters between minor versions. One of the use cases for automatic upgrades is if you want to get security updates as soon as they get released.

This strategy upgrades the reader PXC Pods first and the writer Pod last, and it waits for each upgraded Pod to show up as online in ProxySQL before the next Pod is upgraded. This minimizes the number of failovers during the upgrade and makes the upgrade as smooth as possible.

To make this work, we implemented a version-unaware entrypoint and a Version Service that can be queried for up-to-date version information.

The version-unaware entrypoint script is included in the operator docker image and is used to start different PXC versions. This means the operator is no longer tightly coupled to a specific PXC docker image, as it was before, so one version of the operator can run multiple versions of the PXC cluster.

Version Service, which by default runs at a Percona-hosted endpoint, provides database version and alert information for various open source products. Version Service is open source and can be run inside your own infrastructure; that will be covered in a future blog post.

How Does it Work?

When smart update is enabled and a new cluster has started, the values for docker images in the cr.yaml file will be ignored since the intention is to get them from the Version Service.

If smart update is enabled for an existing cluster, then at the scheduled time the version of the currently running PXC cluster, the Kubernetes operator version, and the desired upgrade path are provided to the Version Service. The Version Service returns a JSON object with the set of docker images that should be used in the current environment. After that, the operator updates the CR with the new image paths and continues by deleting and redeploying the Pods in an optimal order to minimize failovers.

The upgrade will not run if a backup operation is in progress during the check for updates, since the backup has a higher priority; instead, it will be attempted the next time the Version Service is checked.

With smart update functionality, you can also lock your database to a specific version, effectively disabling automatic upgrades, and still use smart update when needed to trigger context-aware upgrades or simple resource changes.

This is how the upgrade might look in the operator logs (some parts stripped for brevity):

{"level":"info","ts":..,"logger":"..","msg":"update PXC version to 5.7.29-32-57 (fetched from db)"}
{"level":"info","ts":..,"logger":"..","msg":"add new job: * * * * *"}
{"level":"info","ts":..,"logger":"..","msg":"update PXC version from 5.7.29-32-57 to 5.7.30-31.43"}
{"level":"info","ts":..,"logger":"..","msg":"statefullSet was changed, run smart update"}
{"level":"info","ts":..,"logger":"..","msg":"primary pod is cluster1-pxc-0.cluster1-pxc.pxc-test.svc.cluster.local"}
{"level":"info","ts":..,"logger":"..","msg":"apply changes to secondary pod cluster1-pxc-2"}
{"level":"info","ts":..,"logger":"..","msg":"pod cluster1-pxc-2 is running"}
{"level":"info","ts":..,"logger":"..","msg":"pod cluster1-pxc-2 is online"}
{"level":"info","ts":..,"logger":"..","msg":"apply changes to secondary pod cluster1-pxc-1"}
{"level":"info","ts":..,"logger":"..","msg":"pod cluster1-pxc-1 is running"}
{"level":"info","ts":..,"logger":"..","msg":"pod cluster1-pxc-1 is online"}
{"level":"info","ts":..,"logger":"..","msg":"apply changes to primary pod cluster1-pxc-0"}
{"level":"info","ts":..,"logger":"..","msg":"pod cluster1-pxc-0 is running"}
{"level":"info","ts":..,"logger":"..","msg":"pod cluster1-pxc-0 is online"}
{"level":"info","ts":..,"logger":"..","msg":"smart update finished"}

As you can see, the initial PXC version deployed was 5.7.29, after which smart update was enabled with a schedule to check for updates every minute (done here for testing only). Smart update then contacted the Version Service and started the upgrade to version 5.7.30. The primary identified was PXC Pod 0, so Pods 2 and 1 (the readers) were upgraded first, then Pod 0 (the writer), and finally a message was logged that the upgrade process had finished.

Configuration Options Inside cr.yaml File

  updateStrategy: SmartUpdate
  upgradeOptions:
    apply: recommended
    schedule: "0 4 * * *"

As already mentioned, updateStrategy can be OnDelete or RollingUpdate in previous versions, but to use automatic upgrades it should be set to SmartUpdate.

Value of the upgradeOptions.versionServiceEndpoint option can be changed from the default if you have your own Version Service running (e.g. if your cluster doesn’t have a connection to the Internet and you have your own custom docker image repositories).
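As an illustration, pointing the operator at your own Version Service instance could look like the fragment below; the endpoint URL is a placeholder for wherever you host it:

```yaml
# Sketch: overriding the Version Service endpoint in cr.yaml.
# The URL below is a placeholder for your own deployment.
updateStrategy: SmartUpdate
upgradeOptions:
  versionServiceEndpoint: http://version-service.mycompany.local:11000
  apply: recommended
  schedule: "0 4 * * *"
```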

The most important setting is the upgradeOptions.apply option, which can have several values:

  • Never or Disabled – automatic upgrades are disabled and smart update is only utilized for other types of full cluster changes such as resource alterations or ConfigMap updates.
  • Recommended – automatic upgrades will choose the most recent version of software flagged as Recommended.
  • Latest – automatic upgrades will choose the most recent version of the software available.
    If you are starting a cluster from scratch and have selected Recommended or Latest as the desired version, the current 8.0 major version will be selected. If you are already running a cluster, the Version Service will always return an upgrade path within your current major version. Basically, if you want to start with the 5.7 major version from scratch, you should explicitly specify some 5.7 version (see below). Then, after the cluster is deployed, if you wish to enable automatic upgrades, change the value of upgradeOptions.apply to “Recommended” or “Latest”.
  • Version Number – when a version number is supplied, the operator will start an upgrade if the running version doesn’t match the explicit version; after that, all future upgrades are no-ops. This essentially locks your database cluster to a specific database version. Example values are “5.7.30-31.43” or “8.0.19-10.1”.

upgradeOptions.schedule is a classic cron schedule option and by default, it is set to check for new versions every day at 4 AM.
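Putting these options together, locking the cluster to one of the example version strings mentioned above would look like this fragment (shown with the default daily schedule):

```yaml
# Sketch: an explicit version in apply pins the cluster; once the running
# version matches it, further automatic upgrades are no-ops.
updateStrategy: SmartUpdate
upgradeOptions:
  apply: "8.0.19-10.1"
  schedule: "0 4 * * *"   # default: check for new versions every day at 4 AM
```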


The smart update strategy can only be used to upgrade PXC clusters, not the operator itself. It only performs minor version upgrades automatically, and it cannot be used for downgrades (since, as you may know, downgrades in MySQL from version 8 can be problematic even between minor versions).


If you need to always keep the PXC cluster upgraded to the latest or recommended version, or if you just want to take advantage of the new smart update strategy even without the automatic upgrade functionality, you can use version 1.5.0 of the Percona Kubernetes Operator for Percona XtraDB Cluster.


Microsoft launches Open Service Mesh

Microsoft today announced the launch of a new open-source service mesh based on the Envoy proxy. The Open Service Mesh is meant to be a reference implementation of the Service Mesh Interface (SMI) spec, a standard interface for service meshes on Kubernetes that has the backing of most of the players in this ecosystem.

The company plans to donate Open Service Mesh to the Cloud Native Computing Foundation (CNCF) to ensure that it is community-led and has open governance.

“SMI is really resonating with folks and so we really thought that there was room in the ecosystem for a reference implementation of SMI where the mesh technology was first and foremost implementing those SMI APIs and making it the best possible SMI experience for customers,” Microsoft director of partner management for Azure Compute (and CNCF board member) Gabe Monroy told me.

Image Credits: Microsoft

He also added that, because SMI provides the lowest common denominator API design, Open Service Mesh gives users the ability to “bail out” to raw Envoy if they need some more advanced features. This “no cliffs” design, Monroy noted, is core to the philosophy behind Open Service Mesh.

As for its feature set, SMI handles all of the standard service mesh features you’d expect, including securing communications between services using mTLS, managing access control policies, service monitoring and more.

Image Credits: Microsoft

There are plenty of other service mesh technologies in the market today, though. So why would Microsoft launch this?

“What our customers have been telling us is that solutions that are out there today, Istio being a good example, are extremely complex,” he said. “It’s not just me saying this. We see the data in the AKS support queue of customers who are trying to use this stuff — and they’re struggling right here. This is just hard technology to use, hard technology to build at scale. And so the solutions that were out there all had something that wasn’t quite right and we really felt like something lighter weight and something with more of an SMI focus was what was going to hit the sweet spot for the customers that are dabbling in this technology today.”

Monroy also noted that Open Service Mesh can sit alongside other solutions like Linkerd, for example.

A lot of pundits expected Google to also donate its Istio service mesh to the CNCF. That move didn’t materialize. “It’s funny. A lot of people are very focused on the governance aspect of this,” he said. “I think when people over-focus on that, you lose sight of how are customers doing with this technology. And the truth is that customers are not having a great time with Istio in the wild today. I think even folks who are deep in that community will acknowledge that and that’s really the reason why we’re not interested in contributing to that ecosystem at the moment.”


Kubermatic launches open-source service hub to enable complex service management

As Kubernetes and cloud-native technologies proliferate, developers and IT have found a growing set of technical challenges they need to address, and new concepts and projects have popped up to deal with them. For instance, operators provide a way to package, deploy and manage your cloud-native application in an automated way. Kubermatic wants to take that concept a step further, and today the German startup announced KubeCarrier, a new open-source, cloud-native service management hub.

Kubermatic co-founder Sebastian Scheele says three or four years ago, the cloud-native community needed to solve a bunch of technical problems around deploying Kubernetes clusters, such as overlay networking, service meshes and authentication. He sees a similar set of problems arising today where developers need more tools to manage the growing complexity of running Kubernetes clusters at scale.

Kubermatic has developed KubeCarrier to help solve one aspect of this. “What we’re currently focusing on is how to provision and manage workloads across multiple clusters, and how IT organizations can have a service hub where they can provide those services to their organizations in a centralized way,” Scheele explained.

Scheele says that KubeCarrier provides a way to manage and implement all of this, giving organizations much greater flexibility beyond purely managing Kubernetes. While he sees organizations with lots of Kubernetes operators, he says that as he sees it, it doesn’t stop there. “We have lots of Kubernetes operators now, but how do we manage them, especially when there are multiple operators, [along with] the services they are provisioning,” he asked.

This could involve provisioning something like Database as a Service inside the organization or for external customers, while combining or provisioning multiple services that work on multiple levels and need a way to communicate with each other.

“That is where KubeCarrier comes in. Now, we can help our customers to build this kind of automation around provisioning, and service capability so that different teams can provide different services inside the organization or to external customers,” he said.

As the company explains it, “KubeCarrier addresses these complexities by harnessing the Kubernetes API and Operators into a central framework allowing enterprises and service providers to deliver cloud native service management from one multi-cloud, multi-cluster hub.”

KubeCarrier is available on GitHub, and Scheele says the company is hoping to get feedback from the community about how to improve it. In parallel, the company is looking for ways to incorporate this technology into its commercial offerings, and that should be available in the next 3-6 months, he said.
