Aug
09
2018
--

Prometheus monitoring tool joins Kubernetes as CNCF’s latest ‘graduated’ project

The Cloud Native Computing Foundation (CNCF) may not be a household name, but it houses some important open source projects including Kubernetes, the fast-growing container orchestration tool. Today, CNCF announced that the Prometheus monitoring and alerting tool had joined Kubernetes as the second “graduated” project in the organization’s history.

The announcement was made at PromCon, the project’s dedicated conference being held in Munich this week. According to Chris Aniszczyk, CTO and COO at CNCF, graduated status reflects the overall maturity of a project that has reached a tipping point in terms of diversity of contribution, community and adoption.

For Prometheus that means 20 active maintainers, more than 1,000 contributors and more than 13,000 commits. Its contributors include the likes of DigitalOcean, Weaveworks, ShowMax and Uber.

CNCF projects start in the sandbox, move on to incubation and finally to graduation. To graduate, a project needs to adopt the CNCF Code of Conduct, pass an independent security audit and define a community governance structure. Finally, it needs to show an “ongoing commitment to code quality and security best practices,” according to the organization.

Aniszczyk says the tool consists of a time series database combined with a query language that lets developers search for issues or anomalies in their system and get analytics back based on their queries. Not surprisingly, it is especially well suited to containers.
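
To make this concrete, here is a minimal sketch of what instrumenting an application for Prometheus can look like, using the official Python client library; the metric names and endpoint are illustrative, not from any particular production setup:

    # A minimal sketch using the prometheus_client library (pip install prometheus_client).
    # Prometheus scrapes the HTTP endpoint this process exposes and stores the samples
    # as time series, which can then be searched and aggregated with the PromQL query language.
    import random
    import time

    from prometheus_client import Counter, Histogram, start_http_server

    # Hypothetical metrics for a web service.
    REQUESTS = Counter("http_requests_total", "Total HTTP requests", ["path"])
    LATENCY = Histogram("http_request_duration_seconds", "Request latency in seconds")

    def handle_request(path):
        REQUESTS.labels(path=path).inc()            # count every request
        with LATENCY.time():                        # record how long the work took
            time.sleep(random.uniform(0.01, 0.1))  # stand-in for real work

    if __name__ == "__main__":
        start_http_server(8000)  # metrics exposed at http://localhost:8000/metrics
        while True:
            handle_request("/api")

A PromQL query such as rate(http_requests_total[5m]) would then return the per-second request rate over the last five minutes, which is exactly the kind of question the query language is designed to answer.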

Like Kubernetes, the project that became Prometheus has its roots inside Google. Google was one of the first companies to work with containers and developed Borg (the Kubernetes predecessor) and Borgmon (the Prometheus predecessor). While Borg’s job was to manage container orchestration, Borgmon’s job was to monitor the process and give engineers feedback and insight into what was happening to the containers as they moved through their lifecycle.

While its roots go back to Borgmon, Prometheus as we know it today was developed by a couple of former Google engineers at SoundCloud in 2012. It joined Kubernetes as the second CNCF project in May 2016, and is, fittingly, the second to graduate.

The Cloud Native Computing Foundation’s role in all of this is to help promote cloud native computing, the notion that you can manage your infrastructure wherever it lives in a common way, greatly reducing the complexity of managing on-prem and cloud resources. It is part of the Linux Foundation and boasts some of the biggest names in tech as members.

Jul
31
2018
--

The Istio service mesh hits version 1.0

Istio, the service mesh for microservices from Google, IBM, Lyft, Red Hat and many other players in the open-source community, launched version 1.0 of its tools today.

If you’re not into service meshes, that’s understandable. Few people are. But Istio is probably one of the most important new open-source projects out there right now. It sits at the intersection of a number of industry trends, like containers, microservices and serverless computing, and makes it easier for enterprises to embrace them. Istio now has more than 200 contributors and the code has seen more than 4,000 check-ins since the launch of version 0.1.

Istio, at its core, handles the routing, load balancing, flow control and security needs of microservices. It sits on top of existing distributed applications and basically helps them talk to each other securely, while also providing logging, telemetry and the necessary policies that keep things under control (and secure). It also features support for canary releases, which allow developers to test updates with a few users before launching them to a wider audience, something that Google and other webscale companies have long done internally.
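
As a rough illustration of how such a canary release is expressed, here is a hedged sketch that creates an Istio traffic split through the Kubernetes API using the official Python client; the service name “reviews” and the v1/v2 subsets are hypothetical stand-ins:

    # A sketch, assuming an Istio-enabled cluster and the official Python
    # Kubernetes client (pip install kubernetes). Istio's VirtualService is a
    # custom resource, so it is created through the CustomObjectsApi.
    from kubernetes import client, config

    config.load_kube_config()

    # Send 90% of traffic to the stable subset and 10% to the canary.
    virtual_service = {
        "apiVersion": "networking.istio.io/v1alpha3",
        "kind": "VirtualService",
        "metadata": {"name": "reviews-canary"},
        "spec": {
            "hosts": ["reviews"],
            "http": [{
                "route": [
                    {"destination": {"host": "reviews", "subset": "v1"}, "weight": 90},
                    {"destination": {"host": "reviews", "subset": "v2"}, "weight": 10},
                ],
            }],
        },
    }

    client.CustomObjectsApi().create_namespaced_custom_object(
        group="networking.istio.io",
        version="v1alpha3",
        namespace="default",
        plural="virtualservices",
        body=virtual_service,
    )

Shifting more traffic to the canary is then just a matter of updating the weights, with no change to the application itself.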

“In the area of microservices, things are moving so quickly,” Google product manager Jennifer Lin told me. “And with the success of Kubernetes and the abstraction around container orchestration, Istio was formed as an open-source project to really take the next step in terms of a substrate for microservice development as well as a path for VM-based workloads to move into more of a service management layer. So it’s really focused around the right level of abstractions for services and creating a consistent environment for managing that.”

Even before the 1.0 release, a number of companies had already adopted Istio in production, including the likes of eBay and Auto Trader UK. Lin argues that this is a sign that Istio solves a problem that a lot of businesses are facing today as they adopt microservices. “A number of more sophisticated customers tried to build their own service management layer and while we hadn’t yet declared 1.0, we had a number of customers — including a surprising number of large enterprise customers — say, ‘you know, even though you’re not 1.0, I’m very comfortable putting this in production because what I’m comparing it to is much more raw.’”

IBM Fellow and VP of Cloud Jason McGee agrees with this and notes that “our mission since Istio’s launch has been to enable everyone to succeed with microservices, especially in the enterprise. This is why we’ve focused the community around improving security and scale, and heavily leaned our contributions on what we’ve learned from building agile cloud architectures for companies of all sizes.”

A lot of the large cloud players now support Istio directly, too. IBM supports it on top of its Kubernetes Service, for example, and Google even announced a managed Istio service for its Google Cloud users, as well as some additional open-source tooling for serverless applications built on top of Kubernetes and Istio.

Two names missing from today’s party are Microsoft and Amazon. I think that’ll change over time, though, assuming the project keeps its momentum.

Istio also isn’t part of any major open-source foundation yet. The Cloud Native Computing Foundation (CNCF), the home of Kubernetes, is backing linkerd, a project that isn’t all that dissimilar from Istio. Once a 1.0 release of these kinds of projects rolls around, the maintainers often start looking for a foundation that can shepherd the development of the project over time. I’m guessing it’s only a matter of time before we hear more about where Istio will land.

Jul
24
2018
--

Google Cloud goes all-in on hybrid with its new Cloud Services Platform

The cloud isn’t right for every business, whether because of latency constraints at the edge, regulatory requirements or because it’s simply cheaper to own and operate a data center for specific workloads. Given this, it’s maybe no surprise that the vast majority of enterprises today use both public and private clouds in parallel. That’s something Microsoft has long been betting on as part of its strategy for its Azure cloud, and Google, too, is now taking a number of steps in this direction.

With the open-source Kubernetes project, Google launched one of the fundamental building blocks that make running and managing applications in hybrid environments easier for large enterprises. What Google hadn’t done until today, though, is launch a comprehensive solution that includes all of the necessary parts for this kind of deployment. With its new Cloud Services Platform, though, the company is now offering businesses an integrated set of cloud services that can be deployed on both the Google Cloud Platform and in on-premise environments.

As Google Cloud engineering director Chen Goldberg noted in a press briefing ahead of today’s announcement, many businesses also simply want to be able to manage their own workloads on-premise but still be able to access new machine learning tools in the cloud, for example. “Today, to achieve this, use cases involve a compromise between cost, consistency, control and flexibility,” she said. “And this all negatively impacts the desired result.”

Goldberg stressed that the idea behind the Cloud Services Platform is to meet businesses where they are and then allow them to modernize their stack at their own pace. But she also noted that businesses want more than just the ability to move workloads between environments. “Portability isn’t enough,” she said. “Users want consistent experiences so that they can train their team once and run anywhere — and have a single playbook for all environments.”

The two services at the core of this new offering are the Kubernetes container orchestration tool and Istio, a relatively new but quickly growing tool for connecting, managing and securing microservices. Istio is about to hit its 1.0 release.

We’re not simply talking about a collection of open-source tools here. The core of the Cloud Services Platform, Goldberg noted, is “custom configured and battle-tested for enterprises by Google.” In addition, it is deeply integrated with other services in the Google Cloud, including the company’s machine learning tools.

GKE On-Prem

Among these custom-configured tools are a number of new offerings, all part of the larger platform. Maybe the most interesting of these is GKE On-Prem. GKE, the Google Kubernetes Engine, is the core Google Cloud service for managing containers in the cloud. And now Google is essentially bringing this service to the enterprise data center, too.

The service includes access to all of the usual features of GKE in the cloud, including the ability to register and manage clusters and monitor them with Stackdriver, as well as identity and access management. It also includes a direct line to the GCP Marketplace, which recently launched support for Kubernetes-based applications.

Using the GCP Console, enterprises can manage both their on-premise and GKE clusters without having to switch between different environments. GKE On-Prem connects seamlessly to a Google Cloud Platform environment and looks and behaves exactly like the cloud version.

Enterprise users also can get access to professional services and enterprise-grade support for help with managing the service.

“Google Cloud is the first and only major cloud vendor to deliver managed Kubernetes on-prem,” Goldberg argued.

GKE Policy Management

Related to this, Google also today announced GKE Policy Management, which is meant to provide Kubernetes administrators with a single tool for managing all of their security policies across clusters. It’s agnostic as to where the Kubernetes cluster is running, but you can use it to port your existing Google Cloud identity-based policies to these clusters. This new feature will soon launch in alpha.

Managed Istio

The other major new service Google is launching is Managed Istio (together with Apigee API Management for Istio) to help businesses manage and secure their microservices. The open source Istio service mesh gives admins and operators the tools to manage these services and, with this new managed offering, Google is taking the core of Istio and making it available as a managed service for GKE users.

With this, users get access to Istio’s service discovery mechanisms and its traffic management tools for load balancing and routing traffic to containers and VMs, as well as its tools for getting telemetry back from the workloads that run on these clusters.

In addition to these three main new services, Google is also launching a couple of auxiliary tools around GKE and the serverless computing paradigm today. The first of these is the GKE serverless add-on, which makes it easy to run serverless workloads on GKE with a single-step deploy process. This, Google says, will allow developers to go from source code to container “instantaneously.” This tool is currently available as a preview and Google is making parts of this technology available under the umbrella of its new Knative open source components. These are the same components that make the serverless add-on possible.

And to wrap it all up, Google also today mentioned a new fully managed continuous integration and delivery service, Google Cloud Build, though the details around this service remain under wraps.

So there you have it. By themselves, all of those announcements may seem a bit esoteric. As a whole, though, they show how Google’s bet on Kubernetes is starting to pay off. As businesses opt for containers to deploy and run their new workloads (and maybe even bring older applications into the cloud), GKE has put Google Cloud on the map to run them in a hosted environment. Now, it makes sense for Google to extend this to its users’ data centers, too. With managed Kubernetes offerings from companies large and small, like SUSE, Platform9 and Containership, this is starting to become a big business. It’s no surprise the company that started it all wants to get a piece of this pie, too.

Jun
25
2018
--

Running Percona XtraDB Cluster in Kubernetes/OpenShift

Diagram of Percona XtraDB Cluster / MySQL running in Kubernetes/OpenShift

Kubernetes, and its most popular distribution OpenShift, receives a lot of interest as a container orchestration platform. However, databases remain a foreign entity, primarily because of their stateful nature: container orchestration systems prefer stateless applications. That said, there has been good progress in support for StatefulSet applications and persistent storage, to the extent that it might already be practical to run a production database instance in Kubernetes. With this in mind, we’ve been looking at running Percona XtraDB Cluster in Kubernetes/OpenShift.

While there are already many examples on the Internet of how to start a single MySQL instance in Kubernetes, for serious usage we need to provide the following (a minimal sketch follows the list):

  • High Availability: how can we guarantee availability when an instance (or Pod in Kubernetes terminology) crashes or becomes unresponsive?
  • Persistent storage: we do not want to lose our data in case of instance failure
  • Backup and recovery
  • Traffic routing: in the case of multiple instances, how do we direct an application to the correct one?
  • Monitoring
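
To give a flavor of how the first two requirements map onto Kubernetes primitives, here is a minimal sketch (not the actual manifests from our proof of concept, which is linked below) that creates a three-node StatefulSet with a persistent volume per Pod via the official Python Kubernetes client; the image tag, names and sizes are illustrative:

    # A minimal sketch, assuming the official client: pip install kubernetes.
    # A StatefulSet gives each Pod a stable identity (pxc-0, pxc-1, pxc-2) and
    # volumeClaimTemplates give each Pod its own persistent volume, so data
    # survives a Pod restart.
    from kubernetes import client, config

    config.load_kube_config()

    statefulset = {
        "apiVersion": "apps/v1",
        "kind": "StatefulSet",
        "metadata": {"name": "pxc"},
        "spec": {
            "serviceName": "pxc",   # headless Service used for DNS discovery
            "replicas": 3,          # three PXC nodes for high availability
            "selector": {"matchLabels": {"app": "pxc"}},
            "template": {
                "metadata": {"labels": {"app": "pxc"}},
                "spec": {
                    "containers": [{
                        "name": "pxc",
                        "image": "percona/percona-xtradb-cluster:5.7",  # illustrative tag
                        "volumeMounts": [{"name": "data", "mountPath": "/var/lib/mysql"}],
                    }],
                },
            },
            # One PersistentVolumeClaim per Pod: the data outlives the Pod.
            "volumeClaimTemplates": [{
                "metadata": {"name": "data"},
                "spec": {
                    "accessModes": ["ReadWriteOnce"],
                    "resources": {"requests": {"storage": "100Gi"}},
                },
            }],
        },
    }

    client.AppsV1Api().create_namespaced_stateful_set(namespace="default", body=statefulset)

A StatefulSet restarts a crashed Pod under the same identity, and the volumeClaimTemplates section ensures each Pod’s data survives that restart.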

Percona XtraDB Cluster in Kubernetes/OpenShift

Schematically it looks like this:


Diagram: Percona XtraDB Cluster in Kubernetes/OpenShift, a possible configuration for a resilient solution.

The diagram highlights the components we are going to use.

Running this in Kubernetes assumes a high degree of automation and minimal manual intervention.

We provide our proof of concept in this project: https://github.com/Percona-Lab/percona-openshift. Please treat it as a source of ideas and as an alpha-quality project; it is in no way production ready.

Details

In our implementation we rely on Helm, the package manager for Kubernetes. Unfortunately, OpenShift does not officially support Helm out of the box, but there is a guide from Red Hat on how to make it work.

In clustering setups, it is quite typical to use service discovery software like Zookeeper, etcd or Consul. Such a tool may eventually become necessary for our Percona XtraDB Cluster deployment, but for now, to simplify things, we are going to use the DNS-based service discovery mechanism provided by Kubernetes. It should be enough for our needs.
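
For reference, that mechanism rests on a headless Service; here is a hedged sketch of one, reusing the hypothetical names from the StatefulSet sketch above:

    # A headless Service (clusterIP: None) makes DNS return the Pod IPs
    # directly instead of a load-balanced virtual IP.
    from kubernetes import client, config

    config.load_kube_config()

    headless_service = {
        "apiVersion": "v1",
        "kind": "Service",
        "metadata": {"name": "pxc"},
        "spec": {
            "clusterIP": "None",
            "ports": [{"name": "mysql", "port": 3306}],
            "selector": {"app": "pxc"},
        },
    }

    client.CoreV1Api().create_namespaced_service(namespace="default", body=headless_service)

Each Pod then gets a stable DNS name such as pxc-0.pxc.default.svc.cluster.local, which the cluster nodes can use to find each other without any external service discovery component.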

We also expect the Kubernetes deployment to provide Dynamic Storage Provisioning. The major cloud providers (such as Google Cloud, Microsoft Azure or Amazon Web Services) offer it, but it might not be easy to have Dynamic Storage Provisioning for on-premise deployments, where you may need to set up GlusterFS or Ceph to provide it.
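
As an illustration, dynamic provisioning is driven by a StorageClass; the sketch below defines one for Google Cloud’s SSD persistent disks (the class name is made up, and an on-premise cluster would substitute a GlusterFS or Ceph provisioner):

    # A sketch of a StorageClass backing dynamic provisioning. On GKE the
    # gce-pd provisioner is built in; on-premise you would point this at
    # something like kubernetes.io/glusterfs or Ceph RBD instead.
    from kubernetes import client, config

    config.load_kube_config()

    storage_class = {
        "apiVersion": "storage.k8s.io/v1",
        "kind": "StorageClass",
        "metadata": {"name": "ssd"},
        "provisioner": "kubernetes.io/gce-pd",   # cloud-specific provisioner
        "parameters": {"type": "pd-ssd"},        # ask for SSD-backed disks
    }

    client.StorageV1Api().create_storage_class(body=storage_class)

A PersistentVolumeClaim that names this class in its storageClassName field, such as the volumeClaimTemplates entry in the earlier sketch, then gets its disk created on demand.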

The challenge with a distributed file system is how many copies of the data you end up with. Percona XtraDB Cluster by itself keeps three copies (one per node), and GlusterFS will also require at least two copies of the data, so in the end we would have six copies. This is bad for write-intensive applications, and it is also wasteful from a capacity standpoint.

One possible approach is to use local data copies for Percona XtraDB Cluster deployments. This provides better performance and less impact on the network, but in the case of a big dataset (100GB+) a node failure will require a full State Snapshot Transfer (SST), with a big impact on the cluster and network. So the solution should be tailored to your workload and your requirements.

Now that we have a basic setup working, it would be good to understand the performance impact of running Percona XtraDB Cluster in Kubernetes. Is the network and storage overhead acceptable, or is it too big? We plan to look into this in the future.

Once again, our project is located at https://github.com/Percona-Lab/percona-openshift; we are looking for your feedback and for your experience of running databases in Kubernetes/OpenShift.

Before you leave …

If this article has interested you and you would like to know more about Percona XtraDB Cluster, you might enjoy our recent series of webinar tutorials that introduce this software and how to use it.


Jun
13
2018
--

Docker aims to federate container management across clouds

When Docker burst on the scene in 2013, it brought the idea of containers to a broad audience. Since then, Kubernetes has emerged as a way to orchestrate the delivery of those containerized apps, but Docker saw a gap beyond pure container deployment that wasn’t being addressed, one it is trying to fill with the next release of Docker Enterprise Edition. Docker made the announcement today at DockerCon in San Francisco.

Scott Johnston, chief product officer at Docker, says that Docker Enterprise Edition’s new federated application management feature helps operations teams manage multiple clusters, whether those clusters are on premise, in the cloud or spread across different public cloud providers. This allows federated management of applications wherever they live, and it supports the managed Kubernetes tools from the big three public cloud providers: Azure AKS, AWS EKS and Google GKE.

Johnston says that deploying the containers is just the first part of the problem. There is a whole set of issues to deal with outside of Kubernetes (and other orchestration tools) once your application begins being deployed. “So, you know, you get portability of containers with the Docker format and the Kubernetes or Compose description files, but once you land on an environment, that environment has deployment scripts, security models, user management and [so forth]. So while the app is portable, the management of these applications is not,” he explained.

He says that can lead to a set of separate deployment tools, creating a new level of complexity that using containers was supposed to eliminate. This is especially true when deploying across multiple clouds (and sometimes on prem, too). If you need load balancing, security, testing and so forth — the kinds of tasks the operations team has to undertake — and you want to apply these in a consistent way regardless of the environment, Johnston says that Docker EE should help by creating a single place to manage across environments and achieve that cloud native goal of managing all your applications, data and infrastructure in a unified way.

In addition to the federated management component, Docker also announced support for Windows Server containers on Kubernetes in Docker Enterprise Edition. It had previously announced support for Linux containers on Kubernetes last year.

Finally, the company is introducing a template-based approach to Docker deployment to enable people in the organization with a bit less technical sophistication to deploy from a guided graphical process instead of a command line interface.

The federated application management feature will be available in beta starting in the second half of this year, support for Windows Server containers will be included in the next release of Docker Enterprise Edition later this year, and the templates will be available in beta in Docker Desktop later this year.

Jun
12
2018
--

Sumo Logic brings data analysis to containers

Sumo Logic has long held the goal of helping customers understand their data wherever it lives. As we move into the era of containers, that goal becomes more challenging because containers are by their nature ephemeral. The company announced a product enhancement today designed to instrument containerized applications in spite of that.

They are debuting these new features at DockerCon, Docker’s customer conference taking place this week in San Francisco.

Sumo’s CEO Ramin Sayer says containers have begun to take hold over the last 12-18 months with Docker and Kubernetes emerging as tools of choice. Given their popularity, Sumo wants to be able to work with them. “[Docker and Kubernetes] are by far the most standard things that have developed in any new shop, or any existing shop that wants to build a brand new modern app or wants to lift and shift an app from on prem [to the cloud], or have the ability to migrate workloads from Vendor A platform to Vendor B,” he said.

He’s not wrong of course. Containers and Kubernetes have been taking off in a big way over the last 18 months and developers and operations alike have struggled to instrument these apps to understand how they behave.

“But as that standardization of adoption of that technology has come about, it makes it easier for us to understand how to instrument, collect, analyze, and more importantly, start to provide industry benchmarks,” Sayer explained.

They do this by avoiding the use of agents. Regardless of how you run your application, whether in a VM or a container, Sumo is able to capture the data and give you feedback you might otherwise have trouble retrieving.

Screen shot: Sumo Logic (cropped)

The company has built in native support for Kubernetes and Amazon Elastic Container Service for Kubernetes (Amazon EKS). It also supports the open source tool Prometheus favored by Kubernetes users to extract metrics and metadata. The goal of the Sumo tool is to help customers fix issues faster and reduce downtime.

As they work with this technology, they can begin to understand norms and pass that information onto customers. “We can guide them and give them best practices and tips, not just on what they’ve done, but how they compare to other users on Sumo,” he said.

Sumo Logic was founded in 2010 and has raised $230 million, according to data on Crunchbase. Its most recent round was a $70 million Series F led by Sapphire Ventures last June.

Jun
06
2018
--

Four years after its release, Kubernetes has come a long way

On June 6, 2014, Kubernetes was released for the first time. At the time, nobody could have predicted that four years later the project would become the de facto standard for container orchestration, or that the biggest tech companies in the world would be backing it. That would come later.

If you think back to June 2014, containerization was just beginning to take off thanks to Docker, which was popularizing the concept with developers, but it was early enough that there was no standard way to manage those containers.

Google had been using containers as a way to deliver applications for years and ran a tool called Borg to handle orchestration. It’s called an orchestrator because much like a conductor of an orchestra, it decides when a container is launched and when it shuts down once it’s completed its job.

At the time, two Google engineers, Craig McLuckie and Joe Beda, who would later go on to start Heptio, were looking at developing an orchestration tool like Borg for companies that might not have the depth of engineering talent of Google to make it work. They wanted to spread this idea of how they develop distributed applications to other developers.

Hello world

Before that first version hit the streets, what would become Kubernetes developed out of a need for an orchestration layer that Beda and McLuckie had been considering for a long time. They were both involved in bringing Google Compute Engine, Google’s Infrastructure as a Service offering, to market, but they felt like there was something missing in the tooling that would fill in the gaps between infrastructure and platform service offerings.

“We had long thought about trying to find a way to bring a sort of a more progressive orchestrated way of running applications in production. Just based on our own experiences with Google Compute Engine, we got to see firsthand some of the challenges that the enterprise faced in moving workloads to the cloud,” McLuckie explained.

He said that they also understood some of the limitations associated with virtual machine-based workloads and they were thinking about tooling to help with all of that. “And so we came up with the idea to start a new project, which ultimately became Kubernetes.”

Let’s open source it

When Google began developing Kubernetes in March 2014, it wanted nothing less than to bring container orchestration to the masses. It was a big goal and McLuckie, Beda and teammate Brendan Burns believed the only way to get there was to open source the technology and build a community around it. As it turns out, they were spot on with that assessment, but couldn’t have been 100 percent certain at the time. Nobody could have.

Photo: Cloud Native Computing Foundation

“If you look at the history, we made the decision to open source Kubernetes and make it a community-oriented project much sooner than conventional wisdom would dictate and focus on really building a community in an open and engaged fashion. And that really paid dividends as Kubernetes has accelerated and effectively become the standard for container orchestration,” McLuckie said.

The next thing they did was to create the Cloud Native Computing Foundation (CNCF) as an umbrella organization for the project. If you think about it, this project could have gone in several directions, as current CNCF director Dan Kohn described in a recent interview.

Going cloud native

Kohn said Kubernetes was unique in a couple of ways. First of all, it was based on existing technology developed over many years at Google. “Even though Kubernetes code was new, the concepts and engineering and know-how behind it was based on 15 years at Google building Borg (and a Borg replacement called Omega that failed),” Kohn said. The other thing was that Kubernetes was designed from the beginning to be open sourced.

Photo: Swapnil Bhartiya on Flickr. Used under CC BY-SA 2.0 license.

He pointed out that Google could have gone in a few directions with Kubernetes. It could have created a commercial product and sold it through Google Cloud. It could have open sourced it but kept a strong central lead, as it did with Go. It could have gone to the Linux Foundation and said it wanted to create a stand-alone Kubernetes Foundation. But it didn’t do any of these things.

McLuckie says they decided to do something entirely different and place it under the auspices of the Linux Foundation, but not as a Kubernetes project. Instead, they wanted to create a new framework for cloud native computing itself, and the CNCF was born. “The CNCF is a really important staging ground, not just for Kubernetes, but for the technologies that needed to come together to really complete the narrative, to make Kubernetes a much more comprehensive framework,” McLuckie explained.

Getting everyone going in the same direction

Over the last few years, we have watched as Kubernetes has grown into a container orchestration standard. Last summer, in quick succession, a slew of major enterprise players joined the CNCF, as AWS, Oracle, Microsoft, VMware and Pivotal all signed on. They came together with Red Hat, Intel, IBM, Cisco and others who were already members.

Cloud Native Computing Foundation Platinum members

Each of these players no doubt wanted to control the orchestration layer, but they saw Kubernetes gaining momentum so rapidly that they had little choice but to go along. Kohn jokes that having all these big-name players on board is like herding cats, but bringing them in has been the goal all along. He said it just happened much faster than he thought it would.

David Aronchick, who now runs the open source Kubeflow Kubernetes machine learning project at Google, was working on Kubernetes in the early days and is shocked by how quickly it has grown. “I couldn’t have predicted it would be like this. I joined in January 2015 and took on project management for Google Kubernetes. I was stunned at the pent-up demand for this kind of thing,” he told TechCrunch in a recent interview.

As it has grown, it has become readily apparent that McLuckie was right about building that cloud native framework instead of a stand-alone Kubernetes foundation. Today there are dozens of adjacent projects and the organization is thriving.

Nobody is more blown away by this than McLuckie himself, who says seeing Kubernetes hit these various milestones since its initial release has been amazing for him and his team to watch. “It’s just been a series of these wonderful kind of moments as Kubernetes has gained a head of steam, and it’s been so much fun to see the community really rally around it.”

Jun
01
2018
--

Helm moves out of Kubernetes’ shadow to become stand-alone project

Helm is an open source project that enables developers to create packages of containerized apps to make installation much simpler. Up until now, it was a sub-project of Kubernetes, the popular container orchestration tool, but as of today it is a stand-alone project.

Both Kubernetes and Helm are projects managed by the Cloud Native Computing Foundation (CNCF). The CNCF’s Technical Oversight Committee approved the move earlier this week. Dan Kohn, executive director at the CNCF, says the two projects are closely aligned, so it made sense for Helm to be a sub-project up until now.

“What’s nice about Helm is that it’s just an application on top of Kubernetes. Kubernetes is an API and Helm accesses that API. If you want to install this [package], you access the Kubernetes API, and it pulls this many containers and pods and [it handles] all of the steps involved to do that,” Kohn explained.

This ability to package up a set of requirements allows you to repeat the installation process in a consistent way. “Helm addresses a common user need of deploying applications to Kubernetes by making their configurations reusable,” Brian Grant, principal engineer at Google and Kubernetes (and a member of the TOC) explained in a statement.

Packages are known as “charts,” and each consists of one or more containers. Kohn says, for example, you might want to deploy a chart that includes WordPress and MariaDB together. A chart defines the installation process and which pieces need to go in which order to install correctly across a cluster.
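
For a sense of what that looks like in practice, here is a minimal sketch of driving Helm from a script; it assumes Helm 2 (the current release line) with the then-standard stable chart repository, and the release name is made up:

    # A sketch of installing and upgrading a chart with the Helm 2 CLI,
    # driven from Python for illustration (plain shell works just as well).
    import subprocess

    # Install the chart under a release name. Helm renders the chart's templates
    # and asks the Kubernetes API to create the resulting resources in order.
    subprocess.run(["helm", "install", "stable/wordpress", "--name", "my-blog"], check=True)

    # Later, upgrades (and rollbacks) address the same release name.
    subprocess.run(["helm", "upgrade", "my-blog", "stable/wordpress"], check=True)

The stable/wordpress chart is a good example of the idea, since it pulls in a database dependency alongside WordPress itself.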

Kohn said they decided to pull it out as a separate project because it doesn’t always follow the Kubernetes release schedule; making it stand-alone means it doesn’t have to be linked to every Kubernetes release.

It also allows developers to benefit from the community, which can build charts for common installation scenarios. “By joining CNCF, we’ll benefit from the input and participation of the community, and conversely Kubernetes will benefit when a community of developers provides a vast repository of ready-made charts for running workloads on Kubernetes,” Matt Butcher, co-creator of Helm and principal engineer at Microsoft, said in a statement.

Besides Microsoft and Google, other project sponsors include Codefresh, Bitnami, Ticketmaster and Codecentric. The project website states that around 250 developers currently contribute to the project, a number that will very likely increase now that it is a stand-alone CNCF project.

May
25
2018
--

Webinar Tues, 5/29: MySQL, Percona XtraDB Cluster, ProxySQL, Kubernetes: How they work together

Please join Percona’s Principal Architect Alex Rubin as he presents MySQL, Percona XtraDB Cluster, ProxySQL, Kubernetes: How they work together to give you a highly available cluster database environment on Tuesday, May 29th at 7:00 AM PDT (UTC-7) / 10:00 AM EDT (UTC-4).

In this webinar, Alex will discuss how to deploy a highly available MySQL database environment on Kubernetes/OpenShift using Percona XtraDB Cluster (PXC) together with ProxySQL to implement read/write splitting.

If you have never used Kubernetes and OpenShift, or never used PXC or ProxySQL, Alex will give a quick introduction to these technologies. There will also be a demo where Alex sets up a PXC cluster with ProxySQL in OpenShift Origin and tries to break it.
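
To preview the read/write splitting idea, here is a hedged sketch of what it looks like from the application side; the host, credentials and table are hypothetical, and the PyMySQL driver stands in for whatever client you use:

    # A sketch using PyMySQL (pip install pymysql). The application connects
    # only to ProxySQL (default client port 6033); ProxySQL's query rules route
    # writes to the PXC writer hostgroup and SELECTs to the readers, so the
    # code itself needs no routing logic.
    import pymysql

    conn = pymysql.connect(host="proxysql.example.com", port=6033,
                           user="app", password="secret", database="test")

    with conn.cursor() as cur:
        # Routed to the writer hostgroup by ProxySQL's rules.
        cur.execute("INSERT INTO greetings (msg) VALUES (%s)", ("hello",))
        # Matched by a rule such as ^SELECT and routed to a reader.
        cur.execute("SELECT msg FROM greetings")
        print(cur.fetchall())

    conn.commit()
    conn.close()

The point is that the application connects to a single endpoint and stays routing-agnostic; ProxySQL’s query rules decide which PXC node actually serves each statement.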

By the end of this webinar you will have a better understanding of:

  • How to deploy Percona XtraDB Cluster with ProxySQL for HA solutions
  • How to leverage Kubernetes/Openshift in your environments
  • How to troubleshoot performance issues

Register for the webinar

Alexander Rubin, Principal Consultant

Alexander joined Percona in 2013. He has worked with MySQL since 2000 as a DBA and application developer. Before joining Percona, he did MySQL consulting as a principal consultant for over seven years (starting with MySQL AB in 2006, then Sun Microsystems and then Oracle). He has helped many customers design large, scalable and highly available MySQL systems and optimize MySQL performance, and has also helped customers design Big Data stores with Apache Hadoop and related technologies.


May
07
2018
--

As Kubernetes grows, a startup ecosystem develops in its wake

Kubernetes, the open source container orchestration tool, came out of Google several years ago and has gained traction amazingly fast. With each step in its growth, it has created opportunities for companies to develop businesses on top of the open source project.

The beauty of open source is that when it works, you build a base platform and an economic ecosystem follows in its wake. That’s because a project like Kubernetes (or any successful open source offering) generates new requirements as a natural extension of its growth and development.

Those requirements represent opportunities for new projects, of course, but also for startups looking at building companies adjacent to that open source community. Before that can happen, however, a couple of key pieces have to fall into place.

Ingredients for success

For starters, you need the big corporates to get behind it. In the case of Kubernetes, in a six-week period between July and the beginning of September last year, we saw some of the best-known enterprise technology companies, including AWS, Oracle, Microsoft, VMware and Pivotal, all join the Cloud Native Computing Foundation (CNCF), the professional organization behind the open source project. This was a signal that Kubernetes was becoming a standard of sorts for container orchestration.

Surely these big companies would have preferred (and tried) to control the orchestration layer themselves, but they soon found that their customers preferred to use Kubernetes, and they had little choice but to follow the clear trend that was developing around the project.

Photo: Georgijevic on Getty Images

The second piece that has to come together for an open source community to flourish is that a significant group of developers has to accept it and start building stuff on top of the platform — and Kubernetes got that too. Consider that, according to the CNCF, a total of 400 projects have been developed on the platform by 771 developers contributing over 19,000 commits since the launch of Kubernetes 1.0 in 2015. Since last August, the most recent date for which the CNCF has numbers, developer contributions have increased by 385 percent. That’s a ton of momentum.

Cue the investors

When you have those two ingredients in place — developers and large vendors — you can begin to gain velocity. As more companies and more developers come, the community continues to grow, and that’s what we’ve been seeing with Kubernetes.

As that happens, it typically doesn’t take long for investors to take notice, and according to CNCF, there has been over $4 billion in investments so far in cloud native companies — this from a project that didn’t even exist that long ago.

Photo: Fitria Ramli / EyeEm on Getty Images.

That investment has taken the form of venture capital funding startups trying to build something on top of Kubernetes, and we’ve seen some big raises. Earlier this month, Hasura raised a $1.6M seed round for a packaged version of Kubernetes designed specifically to meet the needs of developers. Just last week, Upbound, a new startup from Seattle, got $9 million in its Series A round to help manage multi-cluster and multi-cloud environments in a standard (cloud-native) way. A little further up the maturity curve, Heptio has raised over $33 million, with its most recent round being a $25 million Series B last September. Finally, there is CoreOS, which raised almost $50 million before being sold to Red Hat for $250 million in January.

CoreOS wasn’t alone by any means, as we’ve seen other exits over the last year or two, with organizations scooping up cloud native startups. In particular, when the largest organizations like Microsoft, Oracle and Red Hat buy relatively young startups, they are often looking for talent, customers and products to get up to speed more quickly in a growing technology area like Kubernetes.

Growing an economic ecosystem

Kubernetes has grown and developed into an economic powerhouse in a short period of time, as dozens of side projects have developed around it, creating even more opportunity for companies of all sizes to build products and services to meet an ever-growing set of needs in a virtuous cycle of investment, innovation and economic activity.

Cloud Native Computing Foundation projects. Photo: Cloud Native Computing Foundation

If this project continues to grow, chances are it will gain even more investment as companies continue to flow toward containers and Kubernetes, and even more startups develop to help create products to meet new needs as a result.
