Sep 20, 2017

Kubernetes gains momentum as big-name vendors flock to Cloud Native Computing Foundation

Like a train gaining speed as it leaves the station, the Cloud Native Computing Foundation is quickly gathering momentum, attracting some of the biggest names in tech. In the last month and a half alone, AWS, Oracle, Microsoft, VMware and Pivotal have all joined. It’s not every day you see this group of companies agree on anything, but as Kubernetes has developed into an essential…

Sep 13, 2017

Heptio raises $25M Series B to help bring cloud-native computing to the enterprise

Heptio, the startup founded by Kubernetes co-founders Craig McLuckie and Joe Beda, today announced that it has raised a $25 million Series B funding round led by Madrona Venture Group. Lightspeed Venture Partners and Accel Partners also joined the round, which comes less than a year after the company’s $8.5 million Series A.

Sep 06, 2017

Mesosphere adds Kubernetes support to its data center operating system

Mesosphere, one of the early companies to adopt containers, focuses on allowing businesses to run their big data and analytics workloads in the cloud. Today it announced that it now also supports Kubernetes on its DC/OS platform for running big data applications in the cloud. This announcement will come as quite a surprise to many.

Aug 29, 2017

Pivotal-VMware-Google forge container partnership

Pivotal, VMware and Google have teamed up on a containerization project that the companies say should simplify creating, deploying and managing container projects at scale. The companies are taking a set of open-source products and providing a commercial underpinning, with the various parties in the partnership bringing the product to market.

Aug 09, 2017

Red Hat updates OpenShift container platform with new service catalog

Red Hat issued its quarterly update to the OpenShift platform today, adding, among other things, a Service Catalog that enables IT or third-party vendors to create connections to internal or external services. OpenShift is Red Hat’s Platform as a Service, based on Kubernetes, the open-source container management platform originally developed by Google. It also supports Docker…

Aug 03, 2017

Heptio launches two new open source projects that make using Kubernetes easier

Heptio, the Seattle-based company recently launched by Kubernetes co-founders Craig McLuckie and Joe Beda, wants to make it easier for businesses to use Kubernetes in production. Since its launch in late 2016, the well-funded company has remained pretty quiet about its products, but today the team released two open source projects into the wild: Ark and Sonobuoy.
While Kubernetes’…

May 31, 2017

CoreOS updates its Tectonic container platform with the latest Kubernetes release, etcd as a service

CoreOS is hosting its user conference in San Francisco today. Unsurprisingly, the company had a few announcements to make at the event. Most of these centered around its Tectonic platform for managing Kubernetes-based container infrastructure. For the most part, these updates are pretty straightforward. Tectonic now uses the latest version of Kubernetes (1.6.4), for example, which the…

Apr 19, 2017

Mirantis launches its new OpenStack and Kubernetes cloud platform

Mirantis, one of the earliest players in the OpenStack ecosystem, today announced that it will end-of-life Mirantis OpenStack support in September 2019. The Mirantis Cloud Platform, which combines OpenStack with the Kubernetes container platform (or which could even be used to run Kubernetes separately), is going to take its place. While Mirantis is obviously not getting out of the OpenStack…

Dec 21, 2016

Installing Percona Monitoring and Management on Google Container Engine (Kubernetes)


I am working with a client that is on Google Cloud Platform (GCP) and wants to use Percona Monitoring and Management (PMM). They liked the idea of using Google Container Engine (GKE) to manage the Docker container that pmm-server uses.

The regular install instructions are here: https://www.percona.com/doc/percona-monitoring-and-management/install.html

Since Google Container Engine runs on Kubernetes, we had to make some interesting changes to the server install instructions.

First, you will want to open Cloud Shell. This is done by clicking the Cloud Shell button at the top right of your screen when logged into your GCP project.

Installing Percona Monitoring and Management

Once you are in the shell, you just need to run some commands to get up and running.

Let’s set our compute zone:

manjot_singh@googleproject:~$ gcloud config set compute/zone asia-east1-c
Updated property [compute/zone].

Then let’s set up our auth:

manjot_singh@googleproject:~$ gcloud auth application-default login
...
These credentials will be used by any library that requests
Application Default Credentials.

Now we are ready to go.

Normally, we create a persistent container called pmm-data to hold the data the server collects and to survive container deletions and upgrades. On GCP, we will instead create persistent disks, using the minimum (Google) recommended size for each.

manjot_singh@googleproject:~$ gcloud compute disks create --size=200GB --zone=asia-east1-c pmm-prom-data-pv
Created [https://www.googleapis.com/compute/v1/projects/googleproject/zones/asia-east1-c/disks/pmm-prom-data-pv].
NAME              ZONE          SIZE_GB  TYPE         STATUS
pmm-prom-data-pv  asia-east1-c  200      pd-standard  READY
manjot_singh@googleproject:~$ gcloud compute disks create --size=200GB --zone=asia-east1-c pmm-consul-data-pv
Created [https://www.googleapis.com/compute/v1/projects/googleproject/zones/asia-east1-c/disks/pmm-consul-data-pv].
NAME                ZONE          SIZE_GB  TYPE         STATUS
pmm-consul-data-pv  asia-east1-c  200      pd-standard  READY
manjot_singh@googleproject:~$ gcloud compute disks create --size=200GB --zone=asia-east1-c pmm-mysql-data-pv
Created [https://www.googleapis.com/compute/v1/projects/googleproject/zones/asia-east1-c/disks/pmm-mysql-data-pv].
NAME               ZONE          SIZE_GB  TYPE         STATUS
pmm-mysql-data-pv  asia-east1-c  200      pd-standard  READY
manjot_singh@googleproject:~$ gcloud compute disks create --size=200GB --zone=asia-east1-c pmm-grafana-data-pv
Created [https://www.googleapis.com/compute/v1/projects/googleproject/zones/asia-east1-c/disks/pmm-grafana-data-pv].
NAME                 ZONE          SIZE_GB  TYPE         STATUS
pmm-grafana-data-pv  asia-east1-c  200      pd-standard  READY
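The four `gcloud compute disks create` invocations above are identical except for the disk name. A short sketch can generate them all from one list, so adding or renaming a disk only touches one place (the names, zone and size below are taken from the transcript above):

```python
# Sketch: generate the four identical "gcloud compute disks create"
# commands, varying only the disk name.
DISKS = [
    "pmm-prom-data-pv",
    "pmm-consul-data-pv",
    "pmm-mysql-data-pv",
    "pmm-grafana-data-pv",
]

def disk_create_command(name, zone="asia-east1-c", size="200GB"):
    """Build the gcloud command line for one persistent disk."""
    return f"gcloud compute disks create --size={size} --zone={zone} {name}"

for name in DISKS:
    print(disk_create_command(name))
```

Piping the printed lines to a shell (or wrapping them in `subprocess.run`) would run them, but printing first lets you review before creating billable resources.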

Ignoring the messages about disk formatting, we are now ready to create our Kubernetes cluster:

manjot_singh@googleproject:~$ gcloud container clusters create pmm-server --num-nodes 1 --machine-type n1-standard-2
Creating cluster pmm-server...done.
Created [https://container.googleapis.com/v1/projects/googleproject/zones/asia-east1-c/clusters/pmm-server].
kubeconfig entry generated for pmm-server.
NAME        ZONE          MASTER_VERSION  MASTER_IP       MACHINE_TYPE   NODE_VERSION  NUM_NODES  STATUS
pmm-server  asia-east1-c  1.4.6           999.911.999.91  n1-standard-2  1.4.6         1          RUNNING

You should now see something like:

manjot_singh@googleproject:~$ gcloud compute instances list
NAME                                       ZONE          MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP      STATUS
gke-pmm-server-default-pool-73b3f656-20t0  asia-east1-c  n1-standard-2               10.14.10.14  911.119.999.11  RUNNING

Now that our container manager is up, we need to create two configs for the “pod” that will run our container. The first is used only to initialize the server and move the container’s data directories onto the persistent disks; the second is the actual running server.

manjot_singh@googleproject:~$ vi pmm-server-init.json
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
      "name": "pmm-server",
      "labels": {
          "name": "pmm-server"
      }
  },
  "spec": {
    "containers": [{
        "name": "pmm-server",
        "image": "percona/pmm-server:1.0.6",
        "env": [{
                "name":"SERVER_USER",
                "value":"http_user"
            },{
                "name":"SERVER_PASSWORD",
                "value":"http_password"
            },{
                "name":"ORCHESTRATOR_USER",
                "value":"orchestrator"
            },{
                "name":"ORCHESTRATOR_PASSWORD",
                "value":"orch_pass"
            }
        ],
        "ports": [{
            "containerPort": 80
            }
        ],
        "volumeMounts": [{
          "mountPath": "/opt/prometheus/d",
          "name": "pmm-prom-data"
        },{
          "mountPath": "/opt/c",
          "name": "pmm-consul-data"
        },{
          "mountPath": "/var/lib/m",
          "name": "pmm-mysql-data"
        },{
          "mountPath": "/var/lib/g",
          "name": "pmm-grafana-data"
        }]
      }
    ],
    "restartPolicy": "Always",
    "volumes": [{
      "name":"pmm-prom-data",
      "gcePersistentDisk": {
          "pdName": "pmm-prom-data-pv",
          "fsType": "ext4"
      }
    },{
      "name":"pmm-consul-data",
      "gcePersistentDisk": {
          "pdName": "pmm-consul-data-pv",
          "fsType": "ext4"
      }
    },{
      "name":"pmm-mysql-data",
      "gcePersistentDisk": {
          "pdName": "pmm-mysql-data-pv",
          "fsType": "ext4"
      }
    },{
      "name":"pmm-grafana-data",
      "gcePersistentDisk": {
          "pdName": "pmm-grafana-data-pv",
          "fsType": "ext4"
      }
    }]
  }
}

manjot_singh@googleproject:~$ vi pmm-server.json
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
      "name": "pmm-server",
      "labels": {
          "name": "pmm-server"
      }
  },
  "spec": {
    "containers": [{
        "name": "pmm-server",
        "image": "percona/pmm-server:1.0.6",
        "env": [{
                "name":"SERVER_USER",
                "value":"http_user"
            },{
                "name":"SERVER_PASSWORD",
                "value":"http_password"
            },{
                "name":"ORCHESTRATOR_USER",
                "value":"orchestrator"
            },{
                "name":"ORCHESTRATOR_PASSWORD",
                "value":"orch_pass"
            }
        ],
        "ports": [{
            "containerPort": 80
            }
        ],
        "volumeMounts": [{
          "mountPath": "/opt/prometheus/data",
          "name": "pmm-prom-data"
        },{
          "mountPath": "/opt/consul-data",
          "name": "pmm-consul-data"
        },{
          "mountPath": "/var/lib/mysql",
          "name": "pmm-mysql-data"
        },{
          "mountPath": "/var/lib/grafana",
          "name": "pmm-grafana-data"
        }]
      }
    ],
    "restartPolicy": "Always",
    "volumes": [{
      "name":"pmm-prom-data",
      "gcePersistentDisk": {
          "pdName": "pmm-prom-data-pv",
          "fsType": "ext4"
      }
    },{
      "name":"pmm-consul-data",
      "gcePersistentDisk": {
          "pdName": "pmm-consul-data-pv",
          "fsType": "ext4"
      }
    },{
      "name":"pmm-mysql-data",
      "gcePersistentDisk": {
          "pdName": "pmm-mysql-data-pv",
          "fsType": "ext4"
      }
    },{
      "name":"pmm-grafana-data",
      "gcePersistentDisk": {
          "pdName": "pmm-grafana-data-pv",
          "fsType": "ext4"
      }
    }]
  }
}
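The two manifests above are identical except for the `mountPath` values: the init pod mounts the disks at temporary paths (`/opt/c`, `/opt/prometheus/d`, and so on) so the original data directories stay visible for copying. As a sketch (assuming you keep the file names used above), the init manifest can be derived from the final one so the two files never drift apart:

```python
import copy

# Mapping of real data directories to the temporary init-time mount
# points, taken from the two manifests above.
INIT_PATHS = {
    "/opt/prometheus/data": "/opt/prometheus/d",
    "/opt/consul-data": "/opt/c",
    "/var/lib/mysql": "/var/lib/m",
    "/var/lib/grafana": "/var/lib/g",
}

def make_init_manifest(final_manifest):
    """Return a copy of the final pod manifest with init-time mount paths."""
    init = copy.deepcopy(final_manifest)
    for container in init["spec"]["containers"]:
        for mount in container.get("volumeMounts", []):
            mount["mountPath"] = INIT_PATHS.get(mount["mountPath"],
                                                mount["mountPath"])
    return init

# Minimal demonstration on one volume mount:
example = {"spec": {"containers": [{"volumeMounts": [
    {"mountPath": "/opt/consul-data", "name": "pmm-consul-data"}]}]}}
print(make_init_manifest(example)["spec"]["containers"][0]
      ["volumeMounts"][0]["mountPath"])  # prints "/opt/c"
```

To produce the actual file, `json.load` pmm-server.json, pass it through `make_init_manifest`, and `json.dump` the result as pmm-server-init.json.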

Then create it:

manjot_singh@googleproject:~$ kubectl create -f pmm-server-init.json
pod "pmm-server" created

Now we need to move data to persistent disks:

manjot_singh@googleproject:~$ kubectl exec -it pmm-server bash
root@pmm-server:/opt# supervisorctl stop grafana
grafana: stopped
root@pmm-server:/opt# supervisorctl stop prometheus
prometheus: stopped
root@pmm-server:/opt# supervisorctl stop consul
consul: stopped
root@pmm-server:/opt# supervisorctl stop mysql
mysql: stopped
root@pmm-server:/opt# mv consul-data/* c/
root@pmm-server:/opt# chown pmm.pmm c
root@pmm-server:/opt# cd prometheus/
root@pmm-server:/opt/prometheus# mv data/* d/
root@pmm-server:/opt/prometheus# chown pmm.pmm d
root@pmm-server:/opt/prometheus# cd /var/lib
root@pmm-server:/var/lib# mv mysql/* m/
root@pmm-server:/var/lib# chown mysql.mysql m
root@pmm-server:/var/lib# mv grafana/* g/
root@pmm-server:/var/lib# chown grafana.grafana g
root@pmm-server:/var/lib# exit
manjot_singh@googleproject:~$ kubectl delete pods pmm-server
pod "pmm-server" deleted
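
The migration above repeats the same three steps for each service: stop it, move its data into the temporary mount, and fix ownership. As a sketch, the mapping can be tabulated and the equivalent shell commands generated (service names, directories and owners are taken from the transcript; run the output inside the container):

```python
# (supervisor program, data directory, temporary mount, owner)
SERVICES = [
    ("grafana",    "/var/lib/grafana",     "/var/lib/g",        "grafana"),
    ("prometheus", "/opt/prometheus/data", "/opt/prometheus/d", "pmm"),
    ("consul",     "/opt/consul-data",     "/opt/c",            "pmm"),
    ("mysql",      "/var/lib/mysql",       "/var/lib/m",        "mysql"),
]

def migration_commands():
    """Yield the shell commands that move each data dir onto its disk."""
    for name, data_dir, mount, owner in SERVICES:
        yield f"supervisorctl stop {name}"
        yield f"mv {data_dir}/* {mount}/"
        yield f"chown {owner}.{owner} {mount}"

for cmd in migration_commands():
    print(cmd)
```

Keeping the mapping in one table makes it obvious which owner goes with which directory, which is easy to get wrong when typing the `chown` lines by hand.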

Now recreate the pmm-server container with the actual configuration:

manjot_singh@googleproject:~$ kubectl create -f pmm-server.json
pod "pmm-server" created

It’s up!

Now let’s get access to it by exposing it to the internet:

manjot_singh@googleproject:~$ kubectl expose deployment pmm-server --type=LoadBalancer
service "pmm-server" exposed

You can get more information on this by running:

manjot_singh@googleproject:~$ kubectl describe services pmm-server
Name:                   pmm-server
Namespace:              default
Labels:                 run=pmm-server
Selector:               run=pmm-server
Type:                   LoadBalancer
IP:                     10.3.10.3
Port:                   <unset> 80/TCP
NodePort:               <unset> 31757/TCP
Endpoints:              10.0.0.8:80
Session Affinity:       None
Events:
  FirstSeen     LastSeen        Count   From                    SubobjectPath   Type            Reason                  Message
  ---------     --------        -----   ----                    -------------   --------        ------                  -------
  22s           22s             1       {service-controller }                   Normal          CreatingLoadBalancer    Creating load balancer

To find the public IP of your PMM server, look under “EXTERNAL-IP”:

manjot_singh@googleproject:~$ kubectl get services
NAME         CLUSTER-IP    EXTERNAL-IP      PORT(S)   AGE
kubernetes   10.3.10.3     <none>           443/TCP   7m
pmm-server   10.3.10.99    999.911.991.91   80/TCP    1m

That’s it, just visit the external IP in your browser and you should see the PMM landing page!

One of the things we didn’t resolve was being able to access the pmm-server container from within the VPC. The client had to go over the open internet and hit PMM via the public IP. I hope to work on this some more and resolve it in the future.

I have also talked to our team about making persistent disk mounts easier, so that we can use fewer mounts and simplify the configuration and setup.

Nov 17, 2016

Kubernetes founders launch Heptio with $8.5M in funding to help bring containers to the enterprise

For years, the public face of Kubernetes was one of the project’s founders: Google group product manager Craig McLuckie. He started the open-source container-management project together with Joe Beda, Brendan Burns and a few other engineers inside of Google, which has since brought it under the guidance of the newly formed Cloud Native Computing Foundation.
Beda became an…
