Jan
22
2021
--

Getting Started with MongoDB in OpenShift Using the Percona Server for MongoDB Kubernetes Operator

MongoDB in OpenShift

Kubernetes has become an integral part of many architectures and continues to rise in popularity. Kubernetes is an open source platform for managing containerized workloads and services. One approach to running Kubernetes is Red Hat's OpenShift. OpenShift is Red Hat's name for its family of containerization products. The main part of OpenShift, and the product we'll be writing about today, is the OpenShift Container Platform: an on-premises platform as a service built around Docker containers, orchestrated and managed by Kubernetes on a foundation of Red Hat Enterprise Linux.

The other products provide this platform through different environments: OKD serves as the community-driven upstream (akin to the way that Fedora is upstream of Red Hat Enterprise Linux), OpenShift Online is the platform offered as software as a service, and OpenShift Dedicated is the platform offered as a managed service.

Operators are extensions to Kubernetes that allow for the management of a particular piece of software in a Kubernetes cluster.   In this blog post, we’ll help you get started with the Percona Kubernetes Operator for Percona Server for MongoDB.

Getting Started with OpenShift

The OpenShift Container Platform can be installed in many different ways and on many different platforms, whether on-premises or in a public cloud.  I’ve found the following guides helpful for installing on AWS, Azure, GCP, and Bare Metal.

Advantages of a Kubernetes Operator

You may be aware that you can easily spin up a containerized MongoDB instance on your container platform of choice. So what advantages does a Kubernetes Operator provide? The Operator can handle many operational tasks that aren't always straightforward to do in a Kubernetes cluster context. This includes, but is not limited to, creating the users needed by the Operator, performing upgrades, initializing and configuring replica sets, scaling replica sets in and out, performing backups and restores, and, as of version 1.6.0 of the Percona Kubernetes Operator for Percona Server for MongoDB, setting up a one-shard sharded cluster with all the networking needed.
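For example, scaling a replica set in or out is just a matter of editing the cluster Custom Resource and re-applying it. The fragment below follows the field layout of the 1.6.0 deploy/cr.yaml; treat it as illustrative rather than a complete manifest:

```yaml
# Fragment of deploy/cr.yaml (illustrative, not a complete manifest).
# Changing `size` and re-applying scales the replica set in or out.
apiVersion: psmdb.percona.com/v1
kind: PerconaServerMongoDB
metadata:
  name: my-cluster-name
spec:
  replsets:
  - name: rs0
    size: 5   # scaled out from the default of 3
```

Re-applying the file with oc apply lets the Operator add the new members and reconfigure the replica set for you.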

Installing the Operator

First, we need to clone the Operator repository from GitHub and change into the directory it was downloaded to. Note that you can change the branch that you pull down by changing the value after the -b flag if you need a different version of the Operator. At the time of writing, version 1.6.0 is the latest version of the Operator.

git clone -b v1.6.0 https://github.com/percona/percona-server-mongodb-operator

cd percona-server-mongodb-operator

After we've downloaded the Operator, we need to set up the Custom Resource Definitions (CRDs) that the Operator uses. This CRD file tells Kubernetes about the resources the Operator will be using. To install them, run the following against your OpenShift cluster as a cluster admin user:

oc apply -f deploy/crd.yaml

There are times when you will not want your database cluster to be managed by a cluster admin user.   To have your Percona Server for MongoDB cluster Custom Resource (CR) managed with a non-privileged Kubernetes user, you would need to run the following:

oc create clusterrole psmdb-admin --verb="*" --resource=perconaservermongodbs.psmdb.percona.com,perconaservermongodbs.psmdb.percona.com/status,perconaservermongodbbackups.psmdb.percona.com,perconaservermongodbbackups.psmdb.percona.com/status,perconaservermongodbrestores.psmdb.percona.com,perconaservermongodbrestores.psmdb.percona.com/status

oc adm policy add-cluster-role-to-user psmdb-admin <some-user>

Additionally, if you have cert-manager installed, two more commands are needed so that a non-privileged user can manage certificates.

oc create clusterrole cert-admin --verb="*" --resource=issuers.certmanager.k8s.io,certificates.certmanager.k8s.io

oc adm policy add-cluster-role-to-user cert-admin <some-user>

Creating a Project for your MongoDB Database

Next, we’ll need to create an OpenShift Project for your MongoDB Deployment.  In OpenShift, projects/namespaces are used to allow a community of users to organize and manage their content in isolation from other communities.   These projects may map to individual applications, pieces of software, or whole application stacks.

oc new-project psmdb

Configuring the Operator

The first part of configuring the Operator involves setting up Role-Based Access Control (RBAC) to define what actions can be performed by the Operator. For example, a Role used by cert-manager should only have permissions on resources that are related to certificates and shouldn't be granted access to, say, stop and start a MongoDB deployment.

oc apply -f deploy/rbac.yaml

Next, we’ll start the Operator container.  This YAML file contains the definitions for the Operator itself.

oc apply -f deploy/operator.yaml

Defining Users and Secrets

Now we need to define the users and secrets that will be created in our MongoDB deployment. These are administrative users that the Operator uses for different tasks: a userAdmin for creating additional users, a clusterAdmin for adding or removing replica set members, a clusterMonitor for monitoring the replica set/sharded cluster with Percona Monitoring and Management (PMM), and a backup user for performing backups and restores. Make sure you update these to match the required security policies for your organization, and don't forget to rotate passwords according to those policies during the life of your cluster.
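For reference, deploy/secrets.yaml is a standard Kubernetes Secret. The sketch below shows its general shape; the key names follow the 1.6.0 release of the Operator, and every value here is a placeholder you must replace:

```yaml
# Illustrative shape of deploy/secrets.yaml -- replace every password
# before applying. Key names follow the 1.6.0 release of the Operator.
apiVersion: v1
kind: Secret
metadata:
  name: my-cluster-name-secrets
type: Opaque
stringData:
  MONGODB_BACKUP_USER: backup
  MONGODB_BACKUP_PASSWORD: change-me
  MONGODB_CLUSTER_ADMIN_USER: clusterAdmin
  MONGODB_CLUSTER_ADMIN_PASSWORD: change-me
  MONGODB_CLUSTER_MONITOR_USER: clusterMonitor
  MONGODB_CLUSTER_MONITOR_PASSWORD: change-me
  MONGODB_USER_ADMIN_USER: userAdmin
  MONGODB_USER_ADMIN_PASSWORD: change-me
```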

oc create -f deploy/secrets.yaml

Configuring Cluster Custom Resource for OpenShift and Deploying

First, we need to update deploy/cr.yaml to set our platform to openshift. This is needed because OpenShift (compared to vanilla Kubernetes) has important security settings that are applied by default:

spec:
  # platform: openshift

To:

spec:
  platform: openshift

Optionally, if you're using Minishift (which runs an older version of OpenShift and Kubernetes, and is not suitable for production use), you need to change all instances of the following in your deploy/cr.yaml:

  affinity:
    antiAffinityTopologyKey: "kubernetes.io/hostname"

To:

  affinity:
    antiAffinityTopologyKey: "none"

This CR file also defines how much CPU and memory we're granting to each container, lets us define how many replica set members, config server replica set members, and mongos instances we have, and lets us add configuration options to our MongoDB-related processes. Finally, we can apply the CR file using the following command:
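As an illustration of those resource settings, the fragment below follows the layout of the 1.6.0 deploy/cr.yaml (the values are examples, not recommendations):

```yaml
# Fragment of deploy/cr.yaml (illustrative values, not recommendations).
spec:
  replsets:
  - name: rs0
    size: 3
    resources:
      requests:
        cpu: "300m"
        memory: "0.5G"
      limits:
        cpu: "600m"
        memory: "1G"
```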

oc apply -f deploy/cr.yaml

As of version 1.6 of the Operator, if you stick with all the defaults and your OpenShift cluster has sufficient memory and CPU, this will deploy a one-shard sharded MongoDB cluster to your OpenShift cluster.

To verify your cluster is up and running, run the following command:

oc get pods

Depending on the resources for your OpenShift cluster, after some time, your results should look similar to this:

mgrayson@Michaels-MBP percona-server-mongodb-operator % oc get pods
NAME                                              READY     STATUS    RESTARTS   AGE
my-cluster-name-cfg-0                             2/2       Running   0          51s
my-cluster-name-cfg-1                             2/2       Running   0          30s
my-cluster-name-cfg-2                             2/2       Running   0          15s
my-cluster-name-mongos-6d84cb48c9-c6hxv           1/1       Running   0          48s
my-cluster-name-mongos-6d84cb48c9-hhrx7           1/1       Running   0          48s
my-cluster-name-mongos-6d84cb48c9-z6j5z           1/1       Running   0          48s
my-cluster-name-rs0-0                             2/2       Running   0          53s
my-cluster-name-rs0-1                             2/2       Running   0          36s
my-cluster-name-rs0-2                             2/2       Running   0          21s
percona-server-mongodb-operator-885dcbd4f-6nc7c   1/1       Running   0          1m

Connecting to Your Cluster

So, now that we have a working cluster, how do we connect to it?   The cluster is not available outside of the Kubernetes cluster by default, so we need to spin up a MongoDB client container to be able to connect to the cluster.  You can use the following commands to create a container with a MongoDB client:

oc run -i --rm --tty percona-client --image=percona/percona-server-mongodb:4.2.11-12 --restart=Never -- bash -il

Once you are in the container you can run the following command to connect to your cluster, substituting the username and password as appropriate:

mongo "mongodb://userAdmin:userAdmin123456@my-cluster-name-mongos.psmdb.svc.cluster.local/admin?ssl=false"
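Once connected as the userAdmin, you can create application users as usual. The session below is only a sketch: appUser and myApp are hypothetical names, so adjust the user, password, and roles to your needs:

```
// Run inside the mongo shell; appUser and myApp are hypothetical examples.
use admin
db.createUser({
  user: "appUser",
  pwd: "change-me",
  roles: [ { role: "readWrite", db: "myApp" } ]
})
```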

Summary

We hope this has been helpful in getting you started using our Percona Kubernetes Operator for Percona Server for MongoDB on Red Hat's OpenShift Container Platform. Thanks for reading!

Oct
07
2020
--

Kong launches Kong Konnect, its cloud-native connectivity platform

At its (virtual) Kong Summit 2020, API platform Kong today announced the launch of Kong Konnect, its managed end-to-end cloud-native connectivity platform. The idea here is to give businesses a single service that allows them to manage the connectivity between their APIs and microservices and help developers and operators manage their workflows across Kong’s API Gateway, Kubernetes Ingress and Kong Service Mesh runtimes.

“It’s a universal control plane delivery cloud that’s consumption-based, where you can manage and orchestrate API gateway runtime, service mesh runtime, and Kubernetes Ingress controller runtime — and even Insomnia for design — all from one platform,” Kong CEO and co-founder Augusto “Aghi” Marietti told me.

The new service is now in private beta and will become generally available in early 2021.

Image Credits: Kong

At the core of the platform is Kong's new ServiceHub, which provides a single pane of glass for managing a company's services across the organization (and makes them accessible across teams, too).

As Marietti noted, organizations can choose which runtime they want to use and purchase only those capabilities of the service that they currently need. The platform also includes built-in monitoring tools and supports any cloud, Kubernetes provider or on-premises environment, as long as they are Kubernetes-based.

The idea here, too, is to make all these tools accessible to developers and not just architects and operators. “I think that’s a key advantage, too,” Marietti said. “We are lowering the barrier by making a connectivity technology easier to be used by the 50 million developers — not just by the architects that were doing big grand plans at a large company.”

To do this, Konnect will be available as a self-service platform, reducing the friction of adopting the service.

Image Credits: Kong

This is also part of the company’s grander plan to go beyond its core API management services. Those services aren’t going away, but they are now part of the larger Kong platform. With its open-source Kong API Gateway, the company built the pathway to get to this point, but that’s a stable product now and it’s now clearly expanding beyond that with this cloud connectivity play that takes the company’s existing runtimes and combines them to provide a more comprehensive service.

“We have upgraded the vision of really becoming an end-to-end cloud connectivity company,” Marietti said. “Whether that’s API management or Kubernetes Ingress, […] or Kuma Service Mesh. It’s about connectivity problems. And so the company uplifted that solution to the enterprise.”

 

Sep
11
2019
--

IBM brings Cloud Foundry and Red Hat OpenShift together

At the Cloud Foundry Summit in The Hague, IBM today showcased its Cloud Foundry Enterprise Environment on Red Hat’s OpenShift container platform.

For the longest time, the open-source Cloud Foundry Platform-as-a-Service ecosystem and Red Hat’s Kubernetes-centric OpenShift were mostly seen as competitors, with both tools vying for enterprise customers who want to modernize their application development and delivery platforms. But a lot of things have changed in recent times. On the technical side, Cloud Foundry started adopting Kubernetes as an option for application deployments and as a way of containerizing and running Cloud Foundry itself.

On the business side, IBM’s acquisition of Red Hat has brought along some change, too. IBM long backed Cloud Foundry as a top-level foundation member, while Red Hat bet on its own platform instead. Now that the acquisition has closed, it’s maybe no surprise that IBM is working on bringing Cloud Foundry to Red Hat’s platform.

For now, this work is still officially a technology experiment, but our understanding is that IBM plans to turn this into a fully supported project that will give Cloud Foundry users the option to deploy their applications right to OpenShift, while OpenShift customers will be able to offer their developers the Cloud Foundry experience.

“It’s another proof point that these things really work well together,” Cloud Foundry Foundation CTO Chip Childers told me ahead of today’s announcement. “That’s the developer experience that the CF community brings and in the case of IBM, that’s a great commercialization story for them.”

While Cloud Foundry isn't seeing the same hype as in some of its earlier years, it remains one of the most widely used development platforms in large enterprises. According to the Cloud Foundry Foundation's latest user survey, the companies that are already using it continue to move more of their development work onto the platform, and according to the code analysis from source{d}, the project continues to see over 50,000 commits per month.

"As businesses navigate digital transformation and developers drive innovation across cloud native environments, one thing is very clear: they are turning to Cloud Foundry as a proven, agile, and flexible platform — not to mention fast — for building into the future," said Abby Kearns, executive director at the Cloud Foundry Foundation. "The survey also underscores the anchor Cloud Foundry provides across the enterprise, enabling developers to build, support, and maximize emerging technologies."

Also at this week’s Summit, Pivotal (which is in the process of being acquired by VMware) is launching the alpha version of the Pivotal Application Service (PAS) on Kubernetes, while Swisscom, an early Cloud Foundry backer, is launching a major update to its Cloud Foundry-based Application Cloud.

Aug
01
2019
--

With the acquisition closed, IBM goes all in on Red Hat

IBM’s massive $34 billion acquisition of Red Hat closed a few weeks ago and today, the two companies are now announcing the first fruits of this process. For the most part, today’s announcement furthers IBM’s ambitions to bring its products to any public and private cloud. That was very much the reason why IBM acquired Red Hat in the first place, of course, so this doesn’t come as a major surprise, though most industry watchers probably didn’t expect this to happen this fast.

Specifically, IBM is announcing that it is bringing its software portfolio to Red Hat OpenShift, Red Hat’s Kubernetes-based container platform that is essentially available on any cloud that allows its customers to run Red Hat Enterprise Linux.

In total, IBM has already optimized more than 100 products for OpenShift and bundled them into what it calls “Cloud Paks.” There are currently five of these Paks: Cloud Pak for Data, Application, Integration, Automation and Multicloud Management. These technologies, which IBM’s customers can now run on AWS, Azure, Google Cloud Platform or IBM’s own cloud, among others, include DB2, WebSphere, API Connect, Watson Studio and Cognos Analytics.

“Red Hat is unlocking innovation with Linux-based technologies, including containers and Kubernetes, which have become the fundamental building blocks of hybrid cloud environments,” said Jim Whitehurst, president and CEO of Red Hat, in today’s announcement. “This open hybrid cloud foundation is what enables the vision of any app, anywhere, anytime. Combined with IBM’s strong industry expertise and supported by a vast ecosystem of passionate developers and partners, customers can create modern apps with the technologies of their choice and the flexibility to deploy in the best environment for the app – whether that is on-premises or across multiple public clouds.”

IBM argues that a lot of the early innovation on the cloud was about bringing modern, customer-facing applications to market, with a focus on basic cloud infrastructure. Now, however, enterprises are looking at how they can take their mission-critical applications to the cloud, too. For that, they want access to an open stack that works across clouds.

In addition, IBM also today announced the launch of a fully managed Red Hat OpenShift service on its own public cloud, as well as OpenShift on IBM Systems, including the IBM Z and LinuxONE mainframes, as well as the launch of its new Red Hat consulting and technology services.

Jun
04
2019
--

How Kubernetes came to rule the world

Open source has become the de facto standard for building the software that underpins the complex infrastructure that runs everything from your favorite mobile apps to your company's barely usable expense tool. Over the course of the last few years, a lot of new software has been deployed on top of Kubernetes, the tool for managing large server clusters running containers that Google open sourced five years ago.

Today, Kubernetes is the fastest growing open-source project and earlier this month, the bi-annual KubeCon+CloudNativeCon conference attracted almost 8,000 developers to sunny Barcelona, Spain, making the event the largest open-source conference in Europe yet.

To talk about how Kubernetes came to be, I sat down with Craig McLuckie, one of the co-founders of Kubernetes at Google (who then went on to his own startup, Heptio, which he sold to VMware); Tim Hockin, another Googler who was an early member on the project and was also on Google’s Borg team; and Gabe Monroy, who co-founded Deis, one of the first successful Kubernetes startups, and then sold it to Microsoft, where he is now the lead PM for Azure Container Compute (and often the public face of Microsoft’s efforts in this area).

Google’s cloud and the rise of containers

To set the stage a bit, it’s worth remembering where Google Cloud and container management were five years ago.

May
07
2019
--

Red Hat and Microsoft are cozying up some more with Azure Red Hat OpenShift

It won't be long before Red Hat becomes part of IBM, the result of the $34 billion acquisition last year that is still making its way to completion. For now, Red Hat continues as a stand-alone company, and as if to flex its independence muscles, it announced its second agreement in two days with Microsoft Azure, Redmond's public cloud infrastructure offering. This one involves running Red Hat OpenShift on Azure.

OpenShift is Red Hat's Kubernetes offering. The thinking is that you can start with OpenShift in your data center, then as you begin to shift to the cloud, you can move to Azure Red Hat OpenShift — such a catchy name — without any fuss, as you have the same management tools you have been used to using.

As Red Hat becomes part of IBM, it sees that it's more important than ever to maintain its sense of autonomy in the eyes of developers and operations customers, as it holds its final customer conference as an independent company. Red Hat's executive vice president and president of products and technologies certainly sees it that way. "I think [the partnership] is a testament to, even with moving to IBM at some point soon, that we are going to be separate and really keep our Switzerland status and give the same experience for developers and operators across anyone's cloud," he told TechCrunch.

It’s essential to see this announcement in the context of both IBM’s and Microsoft’s increasing focus on the hybrid cloud, and also in the continuing requirement for cloud companies to find ways to work together, even when it doesn’t always seem to make sense, because as Microsoft CEO Satya Nadella has said, customers will demand it. Red Hat has a big enterprise customer presence and so does Microsoft. If you put them together, it could be the beginning of a beautiful friendship.

Scott Guthrie, executive vice president for the cloud and AI group at Microsoft understands that. “Microsoft and Red Hat share a common goal of empowering enterprises to create a hybrid cloud environment that meets their current and future business needs. Azure Red Hat OpenShift combines the enterprise leadership of Azure with the power of Red Hat OpenShift to simplify container management on Kubernetes and help customers innovate on their cloud journeys,” he said in a statement.

This news comes on the heels of yesterday’s announcement, also involving Kubernetes. TechCrunch’s own Frederic Lardinois described it this way:

What’s most interesting here, however, is KEDA, a new open-source collaboration between Red Hat and Microsoft that helps developers deploy serverless, event-driven containers. Kubernetes-based event-driven autoscaling, or KEDA, as the tool is called, allows users to build their own event-driven applications on top of Kubernetes. KEDA handles the triggers to respond to events that happen in other services and scales workloads as needed.

Azure Red Hat OpenShift is available now on Azure. The companies are working on some other integrations too including Red Hat Enterprise Linux (RHEL) running on Azure and Red Hat Enterprise Linux 8 support in Microsoft SQL Server 2019.

Nov
27
2018
--

Red Hat acquires hybrid cloud data management service NooBaa

Red Hat is in the process of being acquired by IBM for a massive $34 billion, but that deal hasn’t closed yet and, in the meantime, Red Hat is still running independently and making its own acquisitions, too. As the company today announced, it has acquired Tel Aviv-based NooBaa, an early-stage startup that helps enterprises manage their data more easily and access their various data providers through a single API.

NooBaa’s technology makes it a good fit for Red Hat, which has recently emphasized its ability to help enterprise more effectively manage their hybrid and multicloud deployments. At its core, NooBaa is all about bringing together various data silos, which should make it a good fit in Red Hat’s portfolio. With OpenShift and the OpenShift Container Platform, as well as its Ceph Storage service, Red Hat already offers a range of hybrid cloud tools, after all.

“NooBaa’s technologies will augment our portfolio and strengthen our ability to meet the needs of developers in today’s hybrid and multicloud world,” writes Ranga Rangachari, the VP and general manager for storage and hyperconverged infrastructure at Red Hat, in today’s announcement. “We are thrilled to welcome a technical team of nine to the Red Hat family as we work together to further solidify Red Hat as a leading provider of open hybrid cloud technologies.”

While virtually all of Red Hat’s technology is open source, NooBaa’s code is not. The company says that it plans to open source NooBaa’s technology in due time, though the exact timeline has yet to be determined.

NooBaa was founded in 2013. The company has raised some venture funding from the likes of Jerusalem Venture Partners and OurCrowd, with a strategic investment from Akamai Capital thrown in for good measure. The company never disclosed the size of that round, though, and neither Red Hat nor NooBaa are disclosing the financial terms of the acquisition.

Jun
25
2018
--

Running Percona XtraDB Cluster in Kubernetes/OpenShift

Diagram of Percona XtraDB Cluster / MySQL running in Kubernetes/OpenShift

Kubernetes, and its most popular distribution OpenShift, receives a lot of interest as a container orchestration platform. However, databases remain a foreign entity, primarily because of their stateful nature, since container orchestration systems prefer stateless applications. That said, there has been good progress in support for StatefulSet applications and persistent storage, to the extent that it might already be comfortable to have a production database instance running in Kubernetes. With this in mind, we've been looking at running Percona XtraDB Cluster in Kubernetes/OpenShift.

While there are already many examples on the Internet of how to start a single MySQL instance in Kubernetes, for serious usage we need to provide:

  • High Availability: how can we guarantee availability when an instance (or Pod in Kubernetes terminology) crashes or becomes unresponsive?
  • Persistent storage: we do not want to lose our data in case of instance failure
  • Backup and recovery
  • Traffic routing: in the case of multiple instances, how do we direct an application to the correct one?
  • Monitoring

Percona XtraDB Cluster in Kubernetes/OpenShift

Schematically it looks like this:


Percona XtraDB Cluster in Kubernetes/OpenShift: a possible configuration for a resilient solution

The picture highlights the components we are going to use.

Running this in Kubernetes assumes a high degree of automation and minimal manual intervention.

We provide our proof of concept in this project: https://github.com/Percona-Lab/percona-openshift. Please treat it as a source of ideas and as an alpha-quality project; it is in no way production-ready.

Details

In our implementation we rely on Helm, the package manager for Kubernetes. Unfortunately, OpenShift does not officially support Helm out of the box, but there is a guide from Red Hat on how to make it work.

In a clustering setup, it is quite typical to use service discovery software like ZooKeeper, etcd, or Consul. That may become necessary for our Percona XtraDB Cluster deployment, but for now, to simplify deployment, we are going to use the DNS-based service discovery mechanism provided by Kubernetes. It should be enough for our needs.
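DNS-based discovery in Kubernetes is typically done with a headless Service. The sketch below is a minimal illustration only; the pxc name and label are hypothetical, not taken from our project:

```yaml
# Minimal headless Service sketch; the name and label are hypothetical.
apiVersion: v1
kind: Service
metadata:
  name: pxc
spec:
  clusterIP: None        # headless: DNS returns the Pod IPs directly
  selector:
    app: pxc
  ports:
  - name: mysql
    port: 3306
```

Each Pod behind such a Service becomes resolvable as <pod-name>.pxc.<namespace>.svc.cluster.local, which is how cluster members can find each other.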

We also expect the Kubernetes deployment to provide Dynamic Storage Provisioning. The major cloud providers (like Google Cloud, Microsoft Azure, or Amazon Web Services) should have it. However, it might not be easy to have Dynamic Storage Provisioning for on-premises deployments; you may need to set up GlusterFS or Ceph to provide it.
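As an illustration, dynamic provisioning is driven by a StorageClass. The sketch below uses the Google Cloud persistent-disk provisioner; the class name "fast" is hypothetical:

```yaml
# Illustrative StorageClass for dynamic provisioning on Google Cloud;
# the name "fast" is hypothetical. PersistentVolumeClaims referencing
# this class get an SSD persistent disk created on demand.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
```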

The challenge with a distributed file system is how many copies of the data you end up having. Percona XtraDB Cluster by itself keeps three copies, and GlusterFS will also require at least two copies of the data, so in the end we will have six copies. This isn't good for write-intensive applications, and it's also not good from a capacity standpoint.

One possible approach is to have local data copies for Percona XtraDB Cluster deployments. This will provide better performance and less impact on the network, but in the case of a big dataset (100GB+) a node failure will require a full State Snapshot Transfer (SST), with a big impact on the cluster and network. So the solution should be tailored to your workload and your requirements.

Now that we have a basic setup working, it would be good to understand the performance impact of running Percona XtraDB Cluster in Kubernetes. Is the network and storage overhead acceptable, or is it too big? We plan to look into this in the future.

Once again, our project is located at https://github.com/Percona-Lab/percona-openshift; we are looking for your feedback and your experience of running databases in Kubernetes/OpenShift.

Before you leave …

Percona XtraDB Cluster

If this article has interested you and you would like to know more about Percona XtraDB Cluster, you might enjoy our recent series of webinar tutorials that introduce this software and how to use it.

The post Running Percona XtraDB Cluster in Kubernetes/OpenShift appeared first on Percona Database Performance Blog.

May
08
2018
--

Microsoft and Red Hat now offer a jointly managed OpenShift service on Azure

Microsoft and Red Hat are deepening their existing alliance around cloud computing. The two companies will now offer a managed version of OpenShift, Red Hat’s container application platform, on Microsoft Azure. This service will be jointly developed and managed by Microsoft and Red Hat and will be integrated into the overall Azure experience.

Red Hat OpenShift on Azure is meant to make it easier for enterprises to create hybrid container solutions that can span their on-premise networks and the cloud. That’ll give these companies the flexibility to move workloads around as needed and will give those companies that have bet on OpenShift the option to move their workloads close to the rest of Azure’s managed services like Cosmos DB or Microsoft’s suite of machine learning tools.

Microsoft’s Brendan Burns, one of the co-creators of Kubernetes, told me that the companies decided that this shouldn’t just be a service that runs on top of Azure and consumes the Azure APIs. Instead, the companies made the decision to build a native integration of OpenShift into Azure — and specifically the Azure Portal. “This is a first in class fully enterprise-supported application platform for containers,” he said. “This is going to be an experience where enterprises can have all the experience and support they expect.”

Red Hat VP for business development and architecture Mike Ferris echoed this and added that his company is seeing a lot of demand for managed services around containers.
