Jun 04, 2019

How Kubernetes came to rule the world

Open source has become the de facto standard for building the software that underpins the complex infrastructure running everything from your favorite mobile apps to your company’s barely usable expense tool. Over the course of the last few years, a lot of new software has been deployed on top of Kubernetes, the tool for managing large server clusters running containers that Google open-sourced five years ago.

Today, Kubernetes is one of the fastest-growing open-source projects, and earlier this month the bi-annual KubeCon+CloudNativeCon conference attracted almost 8,000 developers to sunny Barcelona, Spain, making the event the largest open-source conference in Europe yet.

To talk about how Kubernetes came to be, I sat down with Craig McLuckie, one of the co-founders of Kubernetes at Google (who then went on to found his own startup, Heptio, which he sold to VMware); Tim Hockin, another Googler who was an early member of the project and also worked on Google’s Borg team; and Gabe Monroy, who co-founded Deis, one of the first successful Kubernetes startups, and then sold it to Microsoft, where he is now the lead PM for Azure Container Compute (and often the public face of Microsoft’s efforts in this area).

Google’s cloud and the rise of containers

To set the stage a bit, it’s worth remembering where Google Cloud and container management were five years ago.

May 29, 2019

Announcing Percona Kubernetes Operators

Percona announced the release of Percona Kubernetes Operator for XtraDB Cluster and Percona Kubernetes Operator for Percona Server for MongoDB. Kubernetes delivers a method to orchestrate containers, providing automated deployment, management, and scalability.

In today’s cloud-native world, the ability to easily and efficiently deploy new environments or scale existing environments is key to ongoing growth and development. With Kubernetes Operators, you can launch a new environment with no single point of failure in under 10 minutes. As needs change, Kubernetes Operators can reliably orchestrate scaling the environment to meet current requirements, adding or removing nodes quickly and efficiently. Kubernetes Operators also provide for self-healing of a failed node in a cluster environment.

One of the best features of the Percona Kubernetes Operators is that they provide a deployment configuration while meeting Percona best practices. When the Operator is used to create a new XtraDB or Percona Server for MongoDB node, you can rest assured that the new node will use the same configuration as other nodes created with that same Operator. This ensures consistent results and reliability.

The consistency and ease of deployment enable your developers to focus on writing code while your operations team focuses on building pipelines. The Operator takes care of the tedious tasks of deploying and maintaining your databases following Percona’s best practices for performance, reliability, and scalability.

Percona Kubernetes Operator for XtraDB Cluster

The Percona Kubernetes Operator for XtraDB Cluster provides a way to deploy, manage, or scale an XtraDB Cluster environment. Based on our best practices, this Operator delivers a solid and secure cluster environment that can be used for development, testing, or production.

Percona XtraDB Cluster includes ProxySQL for load balancing and Percona XtraBackup for MySQL to easily backup your database environment. The Operator adds Percona Monitoring and Management to provide you with deep visibility into the performance and usage of your cluster.
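To make this concrete, here is an abridged sketch of the kind of custom resource the XtraDB Cluster Operator consumes. The apiVersion and field names follow the Operator’s published example configuration as I recall it, and a real cr.yaml includes more required fields (secrets, storage specs, and so on), so treat this as illustrative rather than definitive.

apiVersion: pxc.percona.com/v1
kind: PerconaXtraDBCluster
metadata:
  name: cluster1
spec:
  pxc:
    size: 3                # three XtraDB Cluster nodes, so no single point of failure
  proxysql:
    enabled: true
    size: 3                # ProxySQL instances for load balancing
  pmm:
    enabled: true          # ship metrics to Percona Monitoring and Management

Once the Operator itself is installed, applying a manifest like this (for example with kubectl apply -f cr.yaml) asks it to create and then continuously reconcile a cluster configured according to Percona’s best practices.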

Percona Kubernetes Operator for Percona Server for MongoDB

The Percona Kubernetes Operator for Percona Server for MongoDB enables you to deploy, manage, and scale a Percona Server for MongoDB replica set. With the Operator, you are assured of an environment that has no single point of failure and adheres to Percona’s best practices for Percona Server for MongoDB. New Percona Server for MongoDB nodes can be added as either data storage nodes or as arbiter nodes.

The Percona Kubernetes Operator for Percona Server for MongoDB also includes Percona Monitoring and Management so that you can easily view and analyze activity on your replica set. It also includes Percona Backup for MongoDB, providing both scheduled and on-demand backups of your replica set.
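As with the XtraDB Cluster Operator, the replica set is described declaratively in a custom resource. The sketch below is abridged and illustrative only; the field names follow the Operator’s example configuration and may differ between releases, so consult the current documentation before relying on it.

apiVersion: psmdb.percona.com/v1
kind: PerconaServerMongoDB
metadata:
  name: my-cluster-name
spec:
  replsets:
    - name: rs0
      size: 3              # three data-bearing replica set members
      arbiter:
        enabled: false     # set to true to add an arbiter node to the replica set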

Percona Kubernetes Operators deliver a proven method to provide a reliable and secure environment for your users, whether they are developers, testers, or end users. With these Operators, you are assured of consistent results and properly managed environments, freeing you up to focus on other tasks.

May 23, 2019

Takeaways from KubeCon; the latest on Kubernetes and cloud native development

Extra Crunch offers members the opportunity to tune into conference calls led and moderated by the TechCrunch writers you read every day. This week, TechCrunch’s Frederic Lardinois and Ron Miller discuss the major announcements that came out of the Linux Foundation’s European KubeCon/CloudNativeCon conference and the future of Kubernetes and cloud-native technologies.

Nearly doubling in size year-over-year, this year’s KubeCon conference brought big news and big players, with major announcements coming from some of the world’s largest software vendors including Google, AWS, Microsoft, Red Hat, and more. Frederic and Ron discuss how the Kubernetes project grew to such significant scale and which new initiatives in cloud-native development show the most promise from both a developer and enterprise perspective.

“This ecosystem starts sprawling, and we’ve got everything from security companies to service mesh companies to storage companies. Everybody is here. The whole hall is full of them. Sometimes it’s hard to distinguish between them because there are so many competing start-ups at this point.

I’m pretty sure we’re going to see a consolidation in the next six months or so where some of the bigger players, maybe Oracle, maybe VMware, will start buying some of these smaller companies. And I’m sure the show floor will look quite different about a year from now. All the big guys are here because they’re all trying to figure out what’s next.”

Frederic and Ron also dive deeper into the startup ecosystem rapidly developing around Kubernetes and other cloud-native technologies and offer their take on what areas of opportunity may prove to be most promising for new startups and founders down the road.

For access to the full transcription and the call audio, and for the opportunity to participate in future conference calls, become a member of Extra Crunch. Learn more and try it for free. 

May 23, 2019

Serverless and containers: Two great technologies that work better together

Cloud-native models using containerized software in a continuous delivery approach could benefit from serverless computing, where the cloud vendor provisions the exact amount of resources required to run a workload on the fly. While the major cloud vendors have recognized this and are already creating products to abstract away the infrastructure, the approach may not work for every situation in spite of the benefits.

Cloud native, put simply, involves using containerized applications and Kubernetes to deliver software in small packages called microservices. This enables developers to build and deliver software faster and more efficiently in a continuous delivery model. In the cloud-native world, you should be able to develop code once and run it anywhere, on premises or in any public cloud, or at least that is the ideal.

Serverless is actually a bit of a misnomer. There are servers underlying the model, but instead of dedicated virtual machines, the cloud vendor delivers exactly the right amount of resources to run a particular workload for the right amount of time and no more.

Nothing is perfect

Such an arrangement would seem perfectly suited to a continuous delivery model, and vendors have recognized the beauty of such an approach. But as one engineer pointed out, there is never a free lunch in processes this complex, and it won’t be a perfect solution for every situation.

Aparna Sinha, director of product management at Google, says the Kubernetes community has really embraced the serverless idea, but she says that it is limited in its current implementation, delivered in the form of functions with products like AWS Lambda, Google Cloud Functions and Azure Functions.

“Actually, I think the functions concept is a limited concept. It is unfortunate that that is the only thing that people associate with serverless,” she said.

She says that Google has tried to be more expansive in its definition. “It’s basically a concept for developers where you are able to seamlessly go from writing code to deployment and the infrastructure takes care of all of the rest, making sure your code is deployed in the appropriate way across the appropriate, most resilient parts of the infrastructure, scaling it as your app needs additional resources, scaling it down as your traffic goes down, and charging you only for what you’re consuming,” she explained.

But Matt Whittington, senior engineer on the Kubernetes team at Atlassian, says that while it sounds good in theory, in practice fully automated infrastructure could be unrealistic in some instances. “Serverless could be promising for certain workloads because it really allows developers to focus on the code, but it’s not a perfect solution. There is still some underlying tuning.”

He says you may not be able to leave it completely up to the vendor unless there is a way to specify the requirements for each container, such as a minimum container load time, a certain container kill time, or delivery to a specific location. He says that in reality it won’t be fully automated, at least not while developers still have to fiddle with the settings to make sure they are getting the resources they need without over-provisioning and paying for more than they need.

Vendors bringing solutions

The vendors are putting in their two cents, trying to create tools that bring this ideal together. For instance, Google announced a service called Google Cloud Run at Google Cloud Next last month. It’s based on the open-source Knative project and, in essence, brings the benefits of serverless to developers running containers. Other similar services include AWS Fargate and Azure Container Instances, both of which attempt to bring these two technologies together in a similar package.
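To give a feel for what this looks like in practice, here is a minimal sketch of a Knative Service, the kind of resource Cloud Run’s developer experience is modeled on. The container image name is hypothetical, and the apiVersion shown is from a later Knative release than the one current at the time of this announcement, so treat it as illustrative.

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    spec:
      containers:
        - image: gcr.io/my-project/hello   # hypothetical container image
          env:
            - name: TARGET
              value: "world"

The platform handles routing and scales the container up as requests arrive and back down to zero when traffic stops, which is the developer experience Sinha describes above.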

In fact, Gabe Monroy, partner program manager at Microsoft, says Azure Container Instances is designed to solve this problem without being dependent on a functions-driven programming approach. “What Azure Container Instances does is it allows you to run containers directly on the Azure compute fabric, no virtual machines, hypervisor isolated, pay-per-second billing. We call it serverless containers,” he said.

While serverless and containers might seem like a good fit, as Monroy points out, there isn’t a one-size-fits-all approach to cloud-native technologies, whatever the approach may be. Some people will continue to use a function-driven serverless approach like AWS Lambda or Azure Functions and others will shift to containers and look for other ways to bring these technologies together. Whatever happens, as developer needs change, it is clear the open-source community and vendors will respond with tools to help them. Bringing serverless and containers together is just one example of that.

May 21, 2019

Microsoft makes a push for service mesh interoperability

Service meshes. They are the hot new thing in the cloud-native computing world. At KubeCon, the bi-annual festival of all things cloud native, Microsoft today announced that it is teaming up with a number of companies in this space to create a generic service mesh interface. This will make it easier for developers to adopt the concept without locking them into a specific technology.

In a world where the number of network endpoints continues to increase as developers launch new microservices, containers and other systems at a rapid clip, service meshes make the network smarter again by handling encryption, traffic management and other functions so that the actual applications don’t have to worry about them. With a number of competing service mesh technologies, though, including the likes of Istio and Linkerd, developers currently have to choose which one of these to support.

“I’m really thrilled to see that we were able to pull together a pretty broad consortium of folks from across the industry to help us drive some interoperability in the service mesh space,” Gabe Monroy, Microsoft’s lead product manager for containers and the former CTO of Deis, told me. “This is obviously hot technology — and for good reasons. The cloud-native ecosystem is driving the need for smarter networks and smarter pipes and service mesh technology provides answers.”

The partners here include Buoyant, HashiCorp, Solo.io, Red Hat, AspenMesh, Weaveworks, Docker, Rancher, Pivotal, Kinvolk and VMware. That’s a pretty broad coalition, though it notably doesn’t include cloud heavyweights like Google, the company behind Istio, and AWS.

“In a rapidly evolving ecosystem, having a set of common standards is critical to preserving the best possible end-user experience,” said Idit Levine, founder and CEO of Solo.io. “This was the vision behind SuperGloo — to create an abstraction layer for consistency across different meshes, which led us to the release of Service Mesh Hub last week. We are excited to see service mesh adoption evolve into an industry-level initiative with the SMI specification.”

For the time being, the interoperability features focus on traffic policy, telemetry and traffic management. Monroy argues that these are the most pressing problems right now. He also stressed that this common interface still allows the different service mesh tools to innovate and that developers can always work directly with their APIs when needed. He also stressed that the Service Mesh Interface (SMI), as this new specification is called, does not provide any of its own implementations of these features. It only defines a common set of APIs.
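As a rough illustration of the kind of API SMI defines, here is a sketch of a TrafficSplit resource, the part of the specification that covers traffic management. The resource and service names are hypothetical, and the exact API group, version and weight format have varied across revisions of the spec, so consult the specification itself for the authoritative schema.

apiVersion: split.smi-spec.io/v1alpha1
kind: TrafficSplit
metadata:
  name: web-rollout
spec:
  service: web             # the root service that clients address
  backends:
    - service: web-v1      # current version keeps most of the traffic
      weight: 90
    - service: web-v2      # new version receives a small share
      weight: 10

Any SMI-compatible mesh can read this resource and shift traffic accordingly, which is what lets developers bet on the interface rather than on a specific implementation.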

Currently, the most well-known service mesh is probably Istio, which Google, IBM and Lyft launched about two years ago. SMI may just bring a bit more competition to this market since it will allow developers to bet on the overall idea of a service mesh instead of a specific implementation.

In addition to SMI, Microsoft also today announced a couple of other updates around its cloud-native and Kubernetes services. It announced the first alpha of the Helm 3 package manager, for example, as well as the 1.0 release of its Kubernetes extension for Visual Studio Code and the general availability of its AKS virtual nodes, using the open source Virtual Kubelet project.

May 21, 2019

Praqma puts Atlassian’s Data Center products into containers

It’s KubeCon + CloudNativeCon this week and, in the slew of announcements, one name stood out: Atlassian. The company is best known as the maker of tools that allow developers to work more efficiently, not as a cloud infrastructure provider. In this age of containerization, though, even Atlassian can bask in the glory that is Kubernetes: the company today announced that its channel partner Praqma is launching Atlassian Software in Kubernetes (ASK), a new solution that allows enterprises to run and manage their on-premises applications, like Jira Data Center, as containers with the help of Kubernetes.

Praqma is now making ASK available as open source.

As the company notes in today’s announcement, running a Data Center application and ensuring high availability can be a lot of work using today’s methods. With ASK and by containerizing the applications, scaling and management should become easier, and downtime more avoidable.

“Availability is key with ASK. Automation keeps mission-critical applications running whatever happens,” Praqma’s team explains. “If a Jira server fails, Data Center will automatically redirect traffic to healthy servers. If an application or server crashes, Kubernetes automatically reconciles by bringing up a new application. There are also zero-downtime upgrades for Jira.”

ASK handles the scaling and most admin tasks, in addition to offering a monitoring solution based on the open-source Grafana and Prometheus projects.

Containers are slowly becoming the distribution medium of choice for a number of vendors. As enterprises move their existing applications to containers, it makes sense for them to also expect to manage their existing on-premises applications from third-party vendors in the same systems. For some vendors, that may mean a shift away from per-server licensing to per-seat licensing, so there are business implications, but in general it’s a logical move for most.

May 09, 2019

Measuring MySQL Performance in Kubernetes

In my previous post, Running MySQL/Percona Server in Kubernetes with a Custom Config, I looked at how to set up MySQL in Kubernetes to utilize system resources fully. Today I want to measure whether there is any performance overhead of running MySQL in Kubernetes, and show what challenges I faced trying to measure it.

I will use a very simple CPU-bound benchmark to measure MySQL performance in an OLTP read-only workload:

sysbench oltp_read_only --report-interval=1 --time=1800 --threads=56 --tables=10 --table-size=10000000 --mysql-user=sbtest --mysql-password=sbtest --mysql-socket=/var/lib/mysql/mysql.sock run

The hardware is as follows:

Supermicro server

  • Intel(R) Xeon(R) CPU E5-2683 v3 @ 2.00GHz
  • 2 sockets / 28 cores / 56 threads
  • Memory: 256GB of RAM

The most interesting number there is 28 cores / 56 threads. Please keep this in mind; we will need it later.

So let’s see the MySQL performance in the bare metal setup:

[ 607s ] thds: 56 tps: 22154.20 qps: 354451.12 (r/w/o: 310143.73/0.00/44307.39) lat (ms,95%): 2.61 err/s: 0.00 reconn/s: 0.00
[ 608s ] thds: 56 tps: 22247.80 qps: 355955.88 (r/w/o: 311461.27/0.00/44494.61) lat (ms,95%): 2.61 err/s: 0.00 reconn/s: 0.00
[ 609s ] thds: 56 tps: 21984.01 qps: 351641.13 (r/w/o: 307672.12/0.00/43969.02) lat (ms,95%): 2.66 err/s: 0.00 reconn/s: 0.00

So we can get about 22,000 tps on this server.

Now, let’s see what we can get if the same server runs a Kubernetes node and we deploy the Percona Server image on this node. I will use a modified image of Percona Server 8 that already includes sysbench.

You can find my image here: https://hub.docker.com/r/vadimtk/ps-8-vadim

And I use the following deployment YAML:

apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  selector:
    app: mysql
  ports:
    - name: mysql
      port: 3306
      protocol: TCP
      targetPort: 3306
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      nodeSelector:
        kubernetes.io/hostname: smblade01
      volumes:
        - name: mysql-persistent-storage
          hostPath:
            path: /mnt/data/mysql
            type: Directory
        - name: config-volume
          configMap:
            name: mysql-config
            optional: true
      containers:
      - image: vadimtk/ps-8-vadim
        imagePullPolicy: Always
        name: mysql
        env:
          # Use secret in real usage
        - name: MYSQL_ROOT_PASSWORD
          value: password
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
        - name: config-volume
          mountPath: /etc/my.cnf.d

The most important part here is that we deploy our image on the smblade01 node (the same one on which I ran the bare-metal benchmark).

Let’s see what kind of performance we get using this setup. The numbers I got:

[ 605s ] thds: 56 tps: 10561.88 qps: 169045.04 (r/w/o: 147921.29/0.00/21123.76) lat (ms,95%): 12.98 err/s: 0.00 reconn/s: 0.00
[ 606s ] thds: 56 tps: 10552.00 qps: 168790.98 (r/w/o: 147685.98/0.00/21105.00) lat (ms,95%): 15.83 err/s: 0.00 reconn/s: 0.00
[ 607s ] thds: 56 tps: 10566.00 qps: 169073.97 (r/w/o: 147942.97/0.00/21131.00) lat (ms,95%): 5.77 err/s: 0.00 reconn/s: 0.00
[ 608s ] thds: 56 tps: 10581.08 qps: 169359.21 (r/w/o: 148195.06/0.00/21164.15) lat (ms,95%): 5.47 err/s: 0.00 reconn/s: 0.00
[ 609s ] thds: 56 tps: 12873.80 qps: 205861.77 (r/w/o: 180116.17/0.00/25745.60) lat (ms,95%): 5.37 err/s: 0.00 reconn/s: 0.00
[ 610s ] thds: 56 tps: 20196.89 qps: 323184.24 (r/w/o: 282789.46/0.00/40394.78) lat (ms,95%): 3.02 err/s: 0.00 reconn/s: 0.00
[ 611s ] thds: 56 tps: 18033.21 qps: 288487.30 (r/w/o: 252421.88/0.00/36065.41) lat (ms,95%): 5.28 err/s: 0.00 reconn/s: 0.00
[ 612s ] thds: 56 tps: 11444.08 qps: 183129.22 (r/w/o: 160241.06/0.00/22888.15) lat (ms,95%): 5.37 err/s: 0.00 reconn/s: 0.00
[ 613s ] thds: 56 tps: 10597.96 qps: 169511.35 (r/w/o: 148316.43/0.00/21194.92) lat (ms,95%): 5.57 err/s: 0.00 reconn/s: 0.00
[ 614s ] thds: 56 tps: 10566.00 qps: 169103.93 (r/w/o: 147969.94/0.00/21133.99) lat (ms,95%): 5.67 err/s: 0.00 reconn/s: 0.00
[ 615s ] thds: 56 tps: 10640.07 qps: 170227.13 (r/w/o: 148948.99/0.00/21278.14) lat (ms,95%): 5.47 err/s: 0.00 reconn/s: 0.00
[ 616s ] thds: 56 tps: 10579.04 qps: 169264.66 (r/w/o: 148106.58/0.00/21158.08) lat (ms,95%): 5.47 err/s: 0.00 reconn/s: 0.00

You can see the numbers vary a lot, from about 10,550 tps to 20,196 tps, with most of the time spent in the 10,000 tps range.
That’s quite disappointing. Basically, we lost half of the throughput by moving to the Kubernetes node.

But don’t panic; we can improve this. First, though, we need to understand why this happens.

The answer lies in how Kubernetes applies Quality of Service (QoS) to pods. By default (if CPU or memory limits are not defined), the QoS class is BestEffort, which leads to the results we see above. To allocate all CPU resources, we need to make sure the QoS class is Guaranteed. For this, we add the following to the container definition:

resources:
  requests:
    cpu: "55500m"
    memory: "150Gi"
  limits:
    cpu: "55500m"
    memory: "150Gi"

These are somewhat funny-looking lines for defining CPU limits. As you remember, we have 56 threads, so initially I tried to set the limit to

cpu: "56"

but it did not work: Kubernetes was not able to start the pod, failing with the error Insufficient CPU. I guess Kubernetes reserves a small share of CPU for its internal needs.

So the line

cpu: "55500m"

works, which means we allocate 55.5 CPUs for Percona Server.

Let’s see what results we can have with Guaranteed QoS:

[ 883s ] thds: 56 tps: 20320.06 qps: 325145.96 (r/w/o: 284504.84/0.00/40641.12) lat (ms,95%): 2.81 err/s: 0.00 reconn/s: 0.00
[ 884s ] thds: 56 tps: 20908.89 qps: 334587.21 (r/w/o: 292769.43/0.00/41817.78) lat (ms,95%): 2.81 err/s: 0.00 reconn/s: 0.00
[ 885s ] thds: 56 tps: 20529.03 qps: 328459.46 (r/w/o: 287402.40/0.00/41057.06) lat (ms,95%): 2.81 err/s: 0.00 reconn/s: 0.00
[ 886s ] thds: 56 tps: 17567.75 qps: 281051.03 (r/w/o: 245914.53/0.00/35136.50) lat (ms,95%): 5.47 err/s: 0.00 reconn/s: 0.00
[ 887s ] thds: 56 tps: 18036.82 qps: 288509.07 (r/w/o: 252437.44/0.00/36071.63) lat (ms,95%): 5.47 err/s: 0.00 reconn/s: 0.00
[ 888s ] thds: 56 tps: 18398.23 qps: 294399.67 (r/w/o: 257603.21/0.00/36796.46) lat (ms,95%): 5.47 err/s: 0.00 reconn/s: 0.00
[ 889s ] thds: 56 tps: 18402.90 qps: 294484.45 (r/w/o: 257677.65/0.00/36806.81) lat (ms,95%): 5.47 err/s: 0.00 reconn/s: 0.00
[ 890s ] thds: 56 tps: 19428.12 qps: 310787.86 (r/w/o: 271934.63/0.00/38853.23) lat (ms,95%): 5.37 err/s: 0.00 reconn/s: 0.00
[ 891s ] thds: 56 tps: 19848.69 qps: 317646.11 (r/w/o: 277947.73/0.00/39698.39) lat (ms,95%): 5.28 err/s: 0.00 reconn/s: 0.00
[ 892s ] thds: 56 tps: 20457.28 qps: 327333.49 (r/w/o: 286417.93/0.00/40915.56) lat (ms,95%): 2.86 err/s: 0.00 reconn/s: 0.00

This is much better (mostly ranging around 20,000 tps), but we still do not get to 22,000 tps.

I do not have a full explanation for why there is still a 10% performance loss, but it might be related to this issue. I see there is work in progress to improve Guaranteed QoS performance, but it has not been merged into mainline releases yet. Hopefully, it will land in one of the next releases.

Conclusions:

  • Out of the box, you may see quite bad performance when deploying in a Kubernetes pod.
  • To improve your experience, you need to make sure you use Guaranteed QoS. Unfortunately, Kubernetes does not make this easy: you need to manually set the number of CPU threads, which is not always obvious if you use dynamic cloud instances.
  • With Guaranteed QoS there is still a performance overhead of about 10%, but I guess this is the cost we have to accept at the moment.

May 07, 2019

Red Hat and Microsoft are cozying up some more with Azure Red Hat OpenShift

It won’t be long before Red Hat becomes part of IBM, the result of the $34 billion acquisition last year that is still making its way to completion. For now, Red Hat continues as a stand-alone company, and as if to flex its independence muscles, it announced its second agreement in two days with Microsoft Azure, Redmond’s public cloud infrastructure offering. This one involves running Red Hat OpenShift on Azure.

OpenShift is Red Hat’s Kubernetes offering. The thinking is that you can start with OpenShift in your data center, then as you begin to shift to the cloud, you can move to Azure Red Hat OpenShift — such a catchy name — without any fuss, as you have the same management tools you have been used to using.

As Red Hat becomes part of IBM, it sees that it’s more important than ever to maintain its sense of autonomy in the eyes of developers and operations customers, as it holds its final customer conference as an independent company. Red Hat’s executive vice president and president of products and technologies certainly sees it that way. “I think [the partnership] is a testament to, even with moving to IBM at some point soon, that we are going to be separate and really keep our Switzerland status and give the same experience for developers and operators across anyone’s cloud,” he told TechCrunch.

It’s essential to see this announcement in the context of both IBM’s and Microsoft’s increasing focus on the hybrid cloud, and also in the continuing requirement for cloud companies to find ways to work together, even when it doesn’t always seem to make sense, because as Microsoft CEO Satya Nadella has said, customers will demand it. Red Hat has a big enterprise customer presence and so does Microsoft. If you put them together, it could be the beginning of a beautiful friendship.

Scott Guthrie, executive vice president for the cloud and AI group at Microsoft understands that. “Microsoft and Red Hat share a common goal of empowering enterprises to create a hybrid cloud environment that meets their current and future business needs. Azure Red Hat OpenShift combines the enterprise leadership of Azure with the power of Red Hat OpenShift to simplify container management on Kubernetes and help customers innovate on their cloud journeys,” he said in a statement.

This news comes on the heels of yesterday’s announcement, also involving Kubernetes. TechCrunch’s own Frederic Lardinois described it this way:

What’s most interesting here, however, is KEDA, a new open-source collaboration between Red Hat and Microsoft that helps developers deploy serverless, event-driven containers. Kubernetes-based event-driven autoscaling, or KEDA, as the tool is called, allows users to build their own event-driven applications on top of Kubernetes. KEDA handles the triggers to respond to events that happen in other services and scales workloads as needed.

Azure Red Hat OpenShift is available now on Azure. The companies are working on some other integrations, too, including Red Hat Enterprise Linux (RHEL) running on Azure and Red Hat Enterprise Linux 8 support in Microsoft SQL Server 2019.

Apr 16, 2019

Google expands its container service with GKE Advanced

With its Kubernetes Engine (GKE), Google Cloud has long offered a managed service for running containers on its platform. Kubernetes users tend to have a variety of needs, but so far Google only offered a single tier of GKE that wasn’t necessarily geared toward the high-end enterprise users the company is trying to woo. Today, however, the company announced a new advanced edition of GKE that introduces a number of new features, including an enhanced, financially backed SLA, additional security tools and new automation features. You can think of GKE Advanced as the enterprise version of GKE.

The new service will launch in the second quarter of the year; Google hasn’t yet announced pricing. The regular version of GKE is now called GKE Standard.

Google says the service builds upon the company’s own learnings from running a complex container infrastructure internally for years.

For enterprise customers, the financially backed SLA is surely a nice bonus. The promise here is 99.95 percent guaranteed availability for regional clusters.

Most users who opt for a managed Kubernetes environment do so because they don’t want to deal with the hassle of managing these clusters themselves. With GKE Standard, there’s still some work to be done with regard to scaling the clusters. Because of this, GKE Advanced includes a Vertical Pod Autoscaler that keeps an eye on resource utilization and adjusts it as necessary, as well as Node Auto Provisioning, an enhanced version of cluster autoscaling in GKE Standard.
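For context, the Vertical Pod Autoscaler described here builds on the open-source VerticalPodAutoscaler resource; the sketch below is a generic example of that resource rather than anything GKE Advanced-specific. The deployment name is hypothetical and the apiVersion depends on the autoscaler release in use, so treat it as illustrative.

apiVersion: autoscaling.k8s.io/v1beta2
kind: VerticalPodAutoscaler
metadata:
  name: my-app-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app           # hypothetical workload to watch
  updatePolicy:
    updateMode: "Auto"     # let the autoscaler apply its recommendations

With updateMode set to Auto, the autoscaler observes actual CPU and memory usage and adjusts the pods’ resource requests over time, which is exactly the kind of hands-off tuning GKE Advanced is promising.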

In addition to these new GKE Advanced features, Google is adding GKE security features like the GKE Sandbox, which is currently in beta and will come exclusively to GKE Advanced once it’s launched, and the ability to enforce that only signed and verified images are used in the container environment.

The Sandbox uses Google’s gVisor container sandbox runtime. With this, every sandbox gets its own user-space kernel, adding an additional layer of security. With Binary Authorization, GKE Advanced users also can ensure that all container images are signed by a trusted authority before they are put into production. Somebody could theoretically still smuggle malicious code into the containers, but this process, which enforces standard container release practices, for example, should ensure that only authorized containers can run in the environment.

GKE Advanced also includes support for GKE usage metering, which allows companies to keep tabs on who is using a GKE cluster and charge them accordingly. This feature, too, will be exclusive to GKE Advanced.

Apr 12, 2019

OpenStack Stein launches with improved Kubernetes support

The OpenStack project, which powers more than 75 public and thousands of private clouds, launched the 19th version of its software this week. You’d think that after 19 updates to the open-source infrastructure platform, there really isn’t all that much new the various project teams could add, given that we’re talking about a rather stable code base here. There are actually a few new features in this release, though, as well as all the usual tweaks and feature improvements you’d expect.

While the hype around OpenStack has died down, we’re still talking about a very active open-source project. On average, there were 155 commits per day during the Stein development cycle. As far as development activity goes, that keeps OpenStack on the same level as the Linux kernel and Chromium.

Unsurprisingly, a lot of that development activity focused on Kubernetes and the tools to manage these container clusters. With this release, the team behind the OpenStack Kubernetes installer brought the launch time for a cluster down from about 10 minutes to five, regardless of the number of nodes. To further enhance Kubernetes support, OpenStack Stein also includes updates to Neutron, the project’s networking service, which now makes it easier to create virtual networking ports in bulk as containers are spun up, and Ironic, the bare-metal provisioning service.

All of that is no surprise, given that according to the project’s latest survey, 61 percent of OpenStack deployments now use both Kubernetes and OpenStack in tandem.

The update also includes a number of new networking features that are mostly targeted at the many telecom users. Indeed, over the course of the last few years, telcos have emerged as some of the most active OpenStack users as these companies are looking to modernize their infrastructure as part of their 5G rollouts.

Besides the expected updates, though, there are also a few new and improved projects here that are worth noting.

“The trend from the last couple of releases has been on scale and stability, which is really focused on operations,” OpenStack Foundation executive director Jonathan Bryce told me. “The new projects — and really most of the new projects from the last year — have all been pretty oriented around real-world use cases.”

The first of these is Placement. “As people build a cloud and start to grow it and it becomes more broadly adopted within the organization, a lot of times, there are other requirements that come into play,” Bryce explained. “One of these things that was pretty simplistic at the beginning was how a request for a resource was actually placed on the underlying infrastructure in the data center.” But as users get more sophisticated, they often want to run specific workloads on machines with certain hardware requirements. These days, that’s often a specific GPU for a machine learning workload, for example. With Placement, that’s a bit easier now.

It’s worth noting that OpenStack had some of this functionality before. The team, however, decided to uncouple it from the existing compute service and turn it into a more generic service that could then also be used more easily beyond the compute stack, turning it more into a kind of resource inventory and tracking tool.

Then there is also Blazar, a reservation service that offers OpenStack users something akin to AWS Reserved Instances. In a private cloud, the use case for such a feature is a bit different, though. But as some private clouds got bigger, some users found that they needed to be able to guarantee resources to run some of their regular overnight batch jobs or data analytics workloads, for example.

As far as resource management goes, it’s also worth highlighting Sahara, which now makes it easier to provision Hadoop clusters on OpenStack.

In previous releases, one of the focus areas for the project was to improve the update experience. OpenStack is obviously a very complex system, so bringing it up to the latest version is also a bit of a complex undertaking. These improvements are now paying off. “Nobody even knows we are running Stein right now,” Vexxhost CEO Mohammed Naser, who made an early bet on OpenStack for his service, told me. “And I think that’s a good thing. You want to be least impactful, especially when you’re in such a core infrastructure level. […] That’s something the projects are starting to become more and more aware of but it’s also part of the OpenStack software in general becoming much more stable.”

As usual, this release launched only a few weeks before the OpenStack Foundation hosts its bi-annual Summit in Denver. Since the OpenStack Foundation has expanded its scope beyond the OpenStack project, though, this event also focuses on a broader range of topics around open-source infrastructure. It’ll be interesting to see how this will change the dynamics at the event.
