Apr
19
2017
--

Mirantis launches its new OpenStack and Kubernetes cloud platform

 Mirantis, one of the earliest players in the OpenStack ecosystem, today announced that it will end-of-life Mirantis OpenStack support in September 2019. The Mirantis Cloud Platform, which combines OpenStack with the Kubernetes container platform (or which could even be used to run Kubernetes separately), is going to take its place. While Mirantis is obviously not getting out of the OpenStack… Read More

Dec
21
2016
--

Installing Percona Monitoring and Management on Google Container Engine (Kubernetes)

I am working with a client that is on Google Cloud Platform (GCP) and wants to use Percona Monitoring and Management (PMM). They liked the idea of using Google Container Engine (GKE) to manage the Docker container that pmm-server uses.

The regular install instructions are here: https://www.percona.com/doc/percona-monitoring-and-management/install.html

Since Google Container Engine runs on Kubernetes, we had to make some interesting changes to the server install instructions.

First, you will want to open Google Cloud Shell. This is done by clicking the Cloud Shell button at the top right of the screen when you are logged into your GCP project.

Installing Percona Monitoring and Management

Once you are in the shell, you just need to run some commands to get up and running.

Let’s set our default compute zone:

manjot_singh@googleproject:~$ gcloud config set compute/zone asia-east1-c
Updated property [compute/zone].

Then let’s set up our auth:

manjot_singh@googleproject:~$ gcloud auth application-default login
...
These credentials will be used by any library that requests
Application Default Credentials.

Now we are ready to go.

Normally, we create a persistent container called pmm-data to hold the data the server collects, so that it survives container deletions and upgrades. On GCP, we will instead create persistent disks, using the minimum size Google recommends for each.

manjot_singh@googleproject:~$ gcloud compute disks create --size=200GB --zone=asia-east1-c pmm-prom-data-pv
Created [https://www.googleapis.com/compute/v1/projects/googleproject/zones/asia-east1-c/disks/pmm-prom-data-pv].
NAME              ZONE          SIZE_GB  TYPE         STATUS
pmm-prom-data-pv  asia-east1-c  200      pd-standard  READY
manjot_singh@googleproject:~$ gcloud compute disks create --size=200GB --zone=asia-east1-c pmm-consul-data-pv
Created [https://www.googleapis.com/compute/v1/projects/googleproject/zones/asia-east1-c/disks/pmm-consul-data-pv].
NAME                ZONE          SIZE_GB  TYPE         STATUS
pmm-consul-data-pv  asia-east1-c  200      pd-standard  READY
manjot_singh@googleproject:~$ gcloud compute disks create --size=200GB --zone=asia-east1-c pmm-mysql-data-pv
Created [https://www.googleapis.com/compute/v1/projects/googleproject/zones/asia-east1-c/disks/pmm-mysql-data-pv].
NAME               ZONE          SIZE_GB  TYPE         STATUS
pmm-mysql-data-pv  asia-east1-c  200      pd-standard  READY
manjot_singh@googleproject:~$ gcloud compute disks create --size=200GB --zone=asia-east1-c pmm-grafana-data-pv
Created [https://www.googleapis.com/compute/v1/projects/googleproject/zones/asia-east1-c/disks/pmm-grafana-data-pv].
NAME                 ZONE          SIZE_GB  TYPE         STATUS
pmm-grafana-data-pv  asia-east1-c  200      pd-standard  READY

You can ignore the messages about disk formatting; we are now ready to create our Kubernetes cluster:

manjot_singh@googleproject:~$ gcloud container clusters create pmm-server --num-nodes 1 --machine-type n1-standard-2
Creating cluster pmm-server...done.
Created [https://container.googleapis.com/v1/projects/googleproject/zones/asia-east1-c/clusters/pmm-server].
kubeconfig entry generated for pmm-server.
NAME        ZONE          MASTER_VERSION  MASTER_IP       MACHINE_TYPE   NODE_VERSION  NUM_NODES  STATUS
pmm-server  asia-east1-c  1.4.6           999.911.999.91  n1-standard-2  1.4.6         1          RUNNING

You should now see something like:

manjot_singh@googleproject:~$ gcloud compute instances list
NAME                                       ZONE          MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP      STATUS
gke-pmm-server-default-pool-73b3f656-20t0  asia-east1-c  n1-standard-2               10.14.10.14  911.119.999.11  RUNNING

Now that our container manager is up, we need to create two configs for the “pod” that will run our container. The first is used only to initialize the server and move the container’s data directories onto the persistent disks; the second defines the actual running server.

manjot_singh@googleproject:~$ vi pmm-server-init.json
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
      "name": "pmm-server",
      "labels": {
          "name": "pmm-server"
      }
  },
  "spec": {
    "containers": [{
        "name": "pmm-server",
        "image": "percona/pmm-server:1.0.6",
        "env": [{
                "name":"SERVER_USER",
                "value":"http_user"
            },{
                "name":"SERVER_PASSWORD",
                "value":"http_password"
            },{
                "name":"ORCHESTRATOR_USER",
                "value":"orchestrator"
            },{
                "name":"ORCHESTRATOR_PASSWORD",
                "value":"orch_pass"
            }
        ],
        "ports": [{
            "containerPort": 80
            }
        ],
        "volumeMounts": [{
          "mountPath": "/opt/prometheus/d",
          "name": "pmm-prom-data"
        },{
          "mountPath": "/opt/c",
          "name": "pmm-consul-data"
        },{
          "mountPath": "/var/lib/m",
          "name": "pmm-mysql-data"
        },{
          "mountPath": "/var/lib/g",
          "name": "pmm-grafana-data"
        }]
      }
    ],
    "restartPolicy": "Always",
    "volumes": [{
      "name":"pmm-prom-data",
      "gcePersistentDisk": {
          "pdName": "pmm-prom-data-pv",
          "fsType": "ext4"
      }
    },{
      "name":"pmm-consul-data",
      "gcePersistentDisk": {
          "pdName": "pmm-consul-data-pv",
          "fsType": "ext4"
      }
    },{
      "name":"pmm-mysql-data",
      "gcePersistentDisk": {
          "pdName": "pmm-mysql-data-pv",
          "fsType": "ext4"
      }
    },{
      "name":"pmm-grafana-data",
      "gcePersistentDisk": {
          "pdName": "pmm-grafana-data-pv",
          "fsType": "ext4"
      }
    }]
  }
}

manjot_singh@googleproject:~$ vi pmm-server.json
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
      "name": "pmm-server",
      "labels": {
          "name": "pmm-server"
      }
  },
  "spec": {
    "containers": [{
        "name": "pmm-server",
        "image": "percona/pmm-server:1.0.6",
        "env": [{
                "name":"SERVER_USER",
                "value":"http_user"
            },{
                "name":"SERVER_PASSWORD",
                "value":"http_password"
            },{
                "name":"ORCHESTRATOR_USER",
                "value":"orchestrator"
            },{
                "name":"ORCHESTRATOR_PASSWORD",
                "value":"orch_pass"
            }
        ],
        "ports": [{
            "containerPort": 80
            }
        ],
        "volumeMounts": [{
          "mountPath": "/opt/prometheus/data",
          "name": "pmm-prom-data"
        },{
          "mountPath": "/opt/consul-data",
          "name": "pmm-consul-data"
        },{
          "mountPath": "/var/lib/mysql",
          "name": "pmm-mysql-data"
        },{
          "mountPath": "/var/lib/grafana",
          "name": "pmm-grafana-data"
        }]
      }
    ],
    "restartPolicy": "Always",
    "volumes": [{
      "name":"pmm-prom-data",
      "gcePersistentDisk": {
          "pdName": "pmm-prom-data-pv",
          "fsType": "ext4"
      }
    },{
      "name":"pmm-consul-data",
      "gcePersistentDisk": {
          "pdName": "pmm-consul-data-pv",
          "fsType": "ext4"
      }
    },{
      "name":"pmm-mysql-data",
      "gcePersistentDisk": {
          "pdName": "pmm-mysql-data-pv",
          "fsType": "ext4"
      }
    },{
      "name":"pmm-grafana-data",
      "gcePersistentDisk": {
          "pdName": "pmm-grafana-data-pv",
          "fsType": "ext4"
      }
    }]
  }
}
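
The two pod definitions are intentionally almost identical; only the mountPath values differ (the init pod mounts the persistent disks at temporary paths such as /opt/prometheus/d, while the final pod mounts them at the real data directories). A quick way to confirm that before creating anything (this check is not part of the original walkthrough):

diff pmm-server-init.json pmm-server.json
# the only differences should be the four "mountPath" lines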

Then create the init pod:

manjot_singh@googleproject:~$ kubectl create -f pmm-server-init.json
pod "pmm-server" created

Now we need to move data to persistent disks:

manjot_singh@googleproject:~$ kubectl exec -it pmm-server bash
root@pmm-server:/opt# supervisorctl stop grafana
grafana: stopped
root@pmm-server:/opt# supervisorctl stop prometheus
prometheus: stopped
root@pmm-server:/opt# supervisorctl stop consul
consul: stopped
root@pmm-server:/opt# supervisorctl stop mysql
mysql: stopped
root@pmm-server:/opt# mv consul-data/* c/
root@pmm-server:/opt# chown pmm.pmm c
root@pmm-server:/opt# cd prometheus/
root@pmm-server:/opt/prometheus# mv data/* d/
root@pmm-server:/opt/prometheus# chown pmm.pmm d
root@pmm-server:/opt/prometheus# cd /var/lib
root@pmm-server:/var/lib# mv mysql/* m/
root@pmm-server:/var/lib# chown mysql.mysql m
root@pmm-server:/var/lib# mv grafana/* g/
root@pmm-server:/var/lib# chown grafana.grafana g
root@pmm-server:/var/lib# exit
manjot_singh@googleproject:~$ kubectl delete pods pmm-server
pod "pmm-server" deleted

Now recreate the pmm-server container with the actual configuration:

manjot_singh@googleproject:~$ kubectl create -f pmm-server.json
pod "pmm-server" created

It’s up!
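
To confirm that the new pod came up cleanly (a quick check that was not captured in the transcript above):

kubectl get pods pmm-server
# STATUS should show Running; if not, inspect the container log:
kubectl logs pmm-server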

Now let’s get access to it by exposing it to the internet:

manjot_singh@googleproject:~$ kubectl expose pod pmm-server --type=LoadBalancer --port=80
service "pmm-server" exposed

You can get more information on this by running:

manjot_singh@googleproject:~$ kubectl describe services pmm-server
Name:                   pmm-server
Namespace:              default
Labels:                 run=pmm-server
Selector:               run=pmm-server
Type:                   LoadBalancer
IP:                     10.3.10.3
Port:                   <unset> 80/TCP
NodePort:               <unset> 31757/TCP
Endpoints:              10.0.0.8:80
Session Affinity:       None
Events:
  FirstSeen     LastSeen        Count   From                    SubobjectPath   Type            Reason                  Message
  ---------     --------        -----   ----                    -------------   --------        ------                  -------
  22s           22s             1       {service-controller }                   Normal          CreatingLoadBalancer    Creating load balancer

To find the public IP of your PMM server, look under “EXTERNAL-IP”:

manjot_singh@googleproject:~$ kubectl get services
NAME         CLUSTER-IP    EXTERNAL-IP      PORT(S)   AGE
kubernetes   10.3.10.3     <none>           443/TCP   7m
pmm-server   10.3.10.99    999.911.991.91   80/TCP    1m

That’s it! Just visit the external IP in your browser and you should see the PMM landing page.
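
If you prefer the command line, here is a quick sanity check (a sketch only; it assumes the SERVER_USER/SERVER_PASSWORD values from the pod definition above, and you substitute your own external IP):

curl -I -u http_user:http_password http://<EXTERNAL-IP>/
# a 200 (or a redirect to the PMM landing page) means the server is answering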

One of the things we did not resolve was accessing the pmm-server container from within the VPC. The client had to go out over the open internet and hit PMM via the public IP. I hope to work on this some more and resolve it in the future.
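
One possible approach for next time (a sketch only, not something we verified on this project): expose the pod internally as a NodePort service and reach it through the node's internal IP from inside the VPC, subject to firewall rules. The service name below is hypothetical:

kubectl expose pod pmm-server --name=pmm-server-internal --type=NodePort --port=80
kubectl get services pmm-server-internal   # note the allocated NodePort
# then browse to http://<node INTERNAL_IP>:<NodePort> from within the VPC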

I have also talked to our team about simplifying the persistent disk mounts, so that we need fewer of them and the configuration and setup become easier.

Nov
17
2016
--

Kubernetes founders launch Heptio with $8.5M in funding to help bring containers to the enterprise

For years, the public face of Kubernetes was one of the project’s founders: Google group product manager Craig McLuckie. He started the open-source container-management project together with Joe Beda, Brendan Burns and a few other engineers inside of Google, which has since brought it under the guidance of the newly formed Cloud Native Computing Foundation.
Beda became an… Read More

Nov
03
2016
--

CoreOS introduces Operators to streamline Kubernetes container management

CoreOS introduced a new container management concept today known as Operators, which they say will advance the Kubernetes project by offering more automated container management. What’s more, they are open sourcing the technology. “We are introducing the concept of an ‘Operator.’ It’s a concept for taking a lot of the knowledge an engineer [or developer] has inside… Read More

Jul
06
2016
--

Google launches a more scalable and robust Kubernetes

Google today announced the next version of Kubernetes, its open source orchestration service for deploying, scaling and managing software containers. The focus of version 1.3 is on providing Kubernetes users with a more scalable and robust system for managing their containers in production. In addition, Kubernetes now also supports more emerging standards including CoreOS’s rkt,… Read More

Jun
16
2016
--

Scaling Percona XtraDB Cluster with ProxySQL in Kubernetes

How do you scale Percona XtraDB Cluster with ProxySQL in Kubernetes?

In my previous post, I looked at how to run Percona XtraDB Cluster in a Docker Swarm orchestration system; today I want to review how we can do it in the more advanced Kubernetes environment.

There are already some existing posts from Patrick Galbraith (https://github.com/kubernetes/kubernetes/tree/release-1.2/examples/mysql-galera) and Raghavendra Prabhu (https://github.com/ronin13/pxc-kubernetes) on this topic. For this post, I will show how to run as many nodes as I want, see what happens if we add or remove nodes dynamically, and handle incoming traffic with ProxySQL (which routes queries to one of the working nodes). I also want to see if we can reuse the ReplicationController infrastructure from Kubernetes to scale the nodes to a given number.

These goals should be easy to accomplish using our existing Docker images for Percona XtraDB Cluster (https://hub.docker.com/r/percona/percona-xtradb-cluster/), and I will again rely on a running service discovery backend (right now the images only work with etcd).

The process of setting up Kubernetes can be pretty involved (but it can be done; check out the Kubernetes documentation to see how: http://kubernetes.io/docs/getting-started-guides/ubuntu/). It is much more convenient to use a cloud that supports it already (Google Cloud, for example). I will use Microsoft Azure and follow this guide: http://kubernetes.io/docs/getting-started-guides/coreos/azure/. Unfortunately, the scripts from the guide install a previous version of Kubernetes (1.1.2), which does not let me use a ConfigMap. To compensate, I will duplicate the environment variable definitions for the Percona XtraDB Cluster and ProxySQL pods. This can be done more cleanly in recent versions of Kubernetes.
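
On a newer Kubernetes, one way to avoid that duplication (a sketch only, not tested against the setup in this post; the ConfigMap name is hypothetical) is to put the shared values into a ConfigMap and reference it from both pod templates:

kubectl create configmap pxc-config \
  --from-literal=DISCOVERY_SERVICE=172.18.0.4:4001 \
  --from-literal=CLUSTER_NAME=k8scluster2
# the containers could then read these values via valueFrom/configMapKeyRef
# in their env sections instead of repeating the literals in pxc.yaml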

After getting Kubernetes running, starting Percona XtraDB Cluster with ProxySQL is easy using the following pxc.yaml file (which you can also find with our Docker sources: https://github.com/percona/percona-docker/tree/master/pxc-57/kubernetes):

apiVersion: v1
kind: ReplicationController
metadata:
  name: pxc-rc
  app: pxc-app
spec:
  replicas: 3 # tells deployment to run N pods matching the template
  selector:
    app: pxc-app
  template: # create pods using pod definition in this template
    metadata:
      name: pxc
      labels:
        app: pxc-app
    spec:
      containers:
      - name: percona-xtradb-cluster
        image: perconalab/percona-xtradb-cluster:5.6test
        ports:
        - containerPort: 3306
        - containerPort: 4567
        - containerPort: 4568
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "Theistareyk"
        - name: DISCOVERY_SERVICE
          value: "172.18.0.4:4001"
        - name: CLUSTER_NAME
          value: "k8scluster2"
        - name: XTRABACKUP_PASSWORD
          value: "Theistare"
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
        imagePullPolicy: Always
      volumes:
      - name: mysql-persistent-storage
        emptyDir: {}
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: proxysql-rc
  app: proxysql-app
spec:
  replicas: 1 # tells deployment to run N pods matching the template
  selector:
    front: proxysql
  template: # create pods using pod definition in this template
    metadata:
      name: proxysql
      labels:
        app: pxc-app
        front: proxysql
    spec:
      containers:
      - name: proxysql
        image: perconalab/proxysql
        ports:
        - containerPort: 3306
        - containerPort: 6032
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "Theistareyk"
        - name: DISCOVERY_SERVICE
          value: "172.18.0.4:4001"
        - name: CLUSTER_NAME
          value: "k8scluster2"
        - name: MYSQL_PROXY_USER
          value: "proxyuser"
        - name: MYSQL_PROXY_PASSWORD
          value: "s3cret"
---
apiVersion: v1
kind: Service
metadata:
  name: pxc-service
  labels:
    app: pxc-app
spec:
  ports:
  # the port that this service should serve on
  - port: 3306
    targetPort: 3306
    name: "mysql"
  - port: 6032
    targetPort: 6032
    name: "proxyadm"
  # label keys and values that must match in order to receive traffic for this service
  selector:
    front: proxysql

Here is the command to start the cluster:

kubectl create -f pxc.yaml

The command will start three pods with Percona XtraDB Cluster and one pod with ProxySQL.

Percona XtraDB Cluster nodes will register themselves in the discovery service, and we will need to add them to ProxySQL (this could be automated with scripting, but for now it is a manual task):

kubectl exec -it proxysql-rc-4e936 add_cluster_nodes.sh

Increasing the cluster size can be done with the scale command:

kubectl scale --replicas=6 -f pxc.yaml
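
You can verify that the new nodes were scheduled (a quick check, using the names and labels from pxc.yaml above):

kubectl get rc pxc-rc            # should now report 6 replicas
kubectl get pods -l app=pxc-app  # the ProxySQL pod carries the same label, so it appears here too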

You can connect to the cluster through a single connection point provided by ProxySQL. You can find the service address this way:

kubectl describe -f pxc.yaml
Name: pxc-service
Namespace: default
Labels: app=pxc-app
Selector: front=proxysql
Type: ClusterIP
IP: 10.23.123.236
Port: mysql 3306/TCP
Endpoints: <none>
Port: proxyadm 6032/TCP
Endpoints: <none>
Session Affinity: None

It exposes the endpoint IP address 10.23.123.236 and two ports: 3306 for the MySQL connection and 6032 for the ProxySQL admin connection.
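
As a usage sketch (assuming a MySQL client that can reach the ClusterIP, for example from another pod in the cluster, and the proxy credentials defined in pxc.yaml):

mysql -h 10.23.123.236 -P 3306 -u proxyuser -ps3cret
# port 6032 on the same address is ProxySQL's admin interface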

So you can see that scaling Percona XtraDB Cluster with ProxySQL in Kubernetes is pretty easy. In the next post, I want to run benchmarks in different Docker network environments.

May
09
2016
--

Rancher Labs raises $20M Series B round for its container management platform

Rancher Labs, a container management platform that supports both Kubernetes and Docker Swarm, today announced that it has raised a $20 million Series B round. This new funding round was led by new investor GRC SinoGreen, a Chinese private equity and venture capital fund, with participation from existing investors Mayfield and Nexus Venture Partners. This round brings the company’s… Read More

May
09
2016
--

CoreOS raises $28M Series B round led by GV

CoreOS, the company behind the container-centric CoreOS Linux distribution and Tectonic container management service, today announced that it has raised a $28 million Series B round led by GV, the fund formerly known as Google Ventures. Other investors include Accel, Fuel Capital, Kleiner Perkins Caufield & Byers (KPCB) and the Y Combinator Continuity Fund. In total, the company has… Read More

Apr
26
2016
--

CoreOS’s Stackanetes lets you use Kubernetes to run OpenStack in containers

CoreOS today announced Stackanetes at the OpenStack Summit in Austin. Stackanetes (and yes, that name probably isn’t ideal) brings together OpenStack, the open source cloud computing platform that allows enterprises to run their own AWS-style cloud computing services in their private and public clouds, and Kubernetes, Google’s open source container management service. The… Read More

Apr
06
2015
--

CoreOS Raises $12M Funding Round Led By Google Ventures To Bring Kubernetes To The Enterprise

CoreOS, a Docker-centric Linux distribution for large-scale server deployments, today announced that it has raised a $12 million funding round led by Google Ventures with participation by Kleiner Perkins Caufield & Byers, Fuel Capital and Accel Partners. This new round brings the company’s total funding to $20 million.
In addition, CoreOS is also launching Tectonic today. This new… Read More
