May 24, 2018

Setting up PMM on Google Compute Engine in 15 minutes or less

Percona Monitoring and Management on Google Compute Engine

In this blog post, I will show you how easy it is to set up a Percona Monitoring and Management server on Google Compute Engine from the command line.

First off, you will need a Google account and the Cloud SDK tool installed. You will also need to create a GCP (Google Cloud Platform) project and enable billing to proceed. This blog assumes you are able to authenticate and SSH into instances from the command line.
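If you are starting completely from scratch, a minimal bootstrap might look like the sketch below; gcloud init walks you through authentication and default project selection, and INSTANCE_NAME is a placeholder for any existing instance you want to test SSH against:

gcloud init
gcloud compute ssh INSTANCE_NAME --zone us-west1-b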

Here are the steps to install PMM server in Google Cloud Platform.

1) Create the Compute Engine instance with the following command. The example creates an Ubuntu Xenial 16.04 LTS instance in the us-west1-b zone with a 100GB persistent disk. For production systems it would be best to use a 500GB disk instead (size=500GB). This should be enough for the default data retention settings, although your needs may vary.

jerichorivera@percona-support:~/GCE$ gcloud compute instances create pmm-server --tags pmmserver --image-family ubuntu-1604-lts --image-project ubuntu-os-cloud --machine-type n1-standard-4 --zone us-west1-b --create-disk=size=100GB,type=pd-ssd,device-name=sdb --description "PMM Server on GCP" --metadata-from-file startup-script=deploy-pmm-xenial64.sh
Created [https://www.googleapis.com/compute/v1/projects/thematic-acumen-204008/zones/us-west1-b/instances/pmm-server].
NAME        ZONE        MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP   STATUS
pmm-server  us-west1-b  n1-standard-4               10.138.0.2   35.233.216.225  RUNNING

Notice that we’ve used:

--metadata-from-file startup-script=deploy-pmm-xenial64.sh

The file has the following contents:

jerichorivera@percona-support:~$ cat GCE/deploy-pmm-xenial64.sh
#!/bin/bash
set -v
sudo apt-get update
sudo apt-get upgrade -y
sudo apt-get install -y apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
# Format the persistent disk, mount it then add to /etc/fstab
sudo mkfs.ext4 -m 0 -F -E lazy_itable_init=0,lazy_journal_init=0,discard /dev/sdb
sudo mkdir -p /mnt/disks/pdssd
sudo mount -o discard,defaults /dev/sdb /mnt/disks/pdssd/
sudo chmod a+w /mnt/disks/pdssd/
sudo cp /etc/fstab /etc/fstab.backup
echo UUID=`sudo blkid -s UUID -o value /dev/sdb` /mnt/disks/pdssd ext4 discard,defaults,nofail 0 2 | sudo tee -a /etc/fstab
# Change docker’s root directory before installing Docker
sudo mkdir /etc/systemd/system/docker.service.d/
cat << EOF > /etc/systemd/system/docker.service.d/docker.root.conf
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// -g /mnt/disks/pdssd/docker/
EOF
sudo apt-get install -y docker-ce
# Creates the deploy.sh script
cat << EOF > /tmp/deploy.sh
#!/bin/bash
set -v
docker pull percona/pmm-server:latest
docker create -v /opt/prometheus/data -v /opt/consul-data -v /var/lib/mysql -v /var/lib/grafana --name pmm-data percona/pmm-server:latest /bin/true
docker run -d -p 80:80 --volumes-from pmm-data --name pmm-server --restart always percona/pmm-server:latest
EOF

This startup script will be executed right after the compute instance is created. The script will format the persistent disk and mount the file system; create a custom Docker unit file to change Docker’s root directory from /var/lib/docker to /mnt/disks/pdssd/docker; install the Docker package; and create the deploy.sh script.
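Because the startup script runs unattended at boot, it is worth confirming it completed before moving on. A minimal check, assuming the stock Ubuntu guest environment (which logs startup-script output to syslog):

sudo grep startup-script /var/log/syslog | tail -n 20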

2) Once the Compute Engine instance is created, SSH into the instance and check that Docker is running and that its root directory points to the desired folder.

jerichorivera@pmm-server:~$ sudo systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/docker.service.d
           └─docker.root.conf
   Active: active (running) since Wed 2018-05-16 12:53:30 UTC; 45s ago
     Docs: https://docs.docker.com
 Main PID: 4744 (dockerd)
   CGroup: /system.slice/docker.service
           ├─4744 /usr/bin/dockerd -H fd:// -g /mnt/disks/pdssd/docker/
           └─4764 docker-containerd --config /var/run/docker/containerd/containerd.toml
May 16 12:53:30 pmm-server dockerd[4744]: time="2018-05-16T12:53:30.391566708Z" level=warning msg="Your kernel does not support swap memory limit"
May 16 12:53:30 pmm-server dockerd[4744]: time="2018-05-16T12:53:30.391638253Z" level=warning msg="Your kernel does not support cgroup rt period"
May 16 12:53:30 pmm-server dockerd[4744]: time="2018-05-16T12:53:30.391680203Z" level=warning msg="Your kernel does not support cgroup rt runtime"
May 16 12:53:30 pmm-server dockerd[4744]: time="2018-05-16T12:53:30.392913043Z" level=info msg="Loading containers: start."
May 16 12:53:30 pmm-server dockerd[4744]: time="2018-05-16T12:53:30.767048674Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
May 16 12:53:30 pmm-server dockerd[4744]: time="2018-05-16T12:53:30.847907241Z" level=info msg="Loading containers: done."
May 16 12:53:30 pmm-server dockerd[4744]: time="2018-05-16T12:53:30.875129963Z" level=info msg="Docker daemon" commit=9ee9f40 graphdriver(s)=overlay2 version=18.03.1-ce
May 16 12:53:30 pmm-server dockerd[4744]: time="2018-05-16T12:53:30.875285809Z" level=info msg="Daemon has completed initialization"
May 16 12:53:30 pmm-server dockerd[4744]: time="2018-05-16T12:53:30.884566419Z" level=info msg="API listen on /var/run/docker.sock"
May 16 12:53:30 pmm-server systemd[1]: Started Docker Application Container Engine.
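As an additional check, you can ask Docker directly for its root directory; note the sudo, since at this point your user is not yet in the docker group. Assuming the drop-in unit above was applied, the output should point at the persistent disk:

sudo docker info | grep 'Docker Root Dir'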

3) Add your user to the docker group as shown below and make the deploy.sh script executable.

jerichorivera@pmm-server:~$ sudo usermod -aG docker $USER
jerichorivera@pmm-server:~$ sudo chmod +x /tmp/deploy.sh
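As an alternative to logging off and back in (step 4 below) to pick up the new docker group membership, you can start a subshell with the group already applied; this is an optional convenience, not something the original session used:

newgrp docker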

4) Log off from the instance, then log back in so the new group membership takes effect, and execute the deploy.sh script.

jerichorivera@pmm-server:~$ cd /tmp/
jerichorivera@pmm-server:/tmp$ ./deploy.sh
docker pull percona/pmm-server:latest
latest: Pulling from percona/pmm-server
697841bfe295: Pull complete
fa45d21b9629: Pull complete
Digest: sha256:98d2717b4f0ae83fbca63330c39590d69a7fca7ae6788f52906253ac75db6838
Status: Downloaded newer image for percona/pmm-server:latest
docker create -v /opt/prometheus/data -v /opt/consul-data -v /var/lib/mysql -v /var/lib/grafana --name pmm-data percona/pmm-server:latest /bin/true
8977102d419cf8955fd8bbd0ed2c663c75a39f9fbc635238d56b480ecca8e749
docker run -d -p 80:80 --volumes-from pmm-data --name pmm-server --restart always percona/pmm-server:latest
83c2e6db2efc752a6beeff0559b472f012062d3f163c042e5e0d41cda6481d33

5) Finally, create a firewall rule to allow HTTP (port 80) access to the PMM Server. For security reasons, we recommend that you secure your PMM server by adding a password, or limit access to it with a stricter firewall rule that specifies which IP addresses can access port 80.

jerichorivera@percona-support:~$ gcloud compute firewall-rules create allow-http-pmm-server --allow tcp:80 --target-tags pmmserver --description "Allow HTTP traffic to PMM Server"
Creating firewall...-Created [https://www.googleapis.com/compute/v1/projects/thematic-acumen-204008/global/firewalls/allow-http-pmm-server].
Creating firewall...done.
NAME                   NETWORK  DIRECTION  PRIORITY  ALLOW   DENY
allow-http-pmm-server  default  INGRESS    1000      tcp:80
jerichorivera@percona-support:~/GCE$ gcloud compute firewall-rules list
NAME                    NETWORK  DIRECTION  PRIORITY  ALLOW                         DENY
allow-http-pmm-server   default  INGRESS    1000      tcp:80
default-allow-icmp      default  INGRESS    65534     icmp
default-allow-internal  default  INGRESS    65534     tcp:0-65535,udp:0-65535,icmp
default-allow-rdp       default  INGRESS    65534     tcp:3389
default-allow-ssh       default  INGRESS    65534     tcp:22
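If you would rather not leave port 80 open to the whole internet, the rule can be narrowed after the fact with --source-ranges; the CIDR below is a placeholder for your own office or VPN range:

gcloud compute firewall-rules update allow-http-pmm-server --source-ranges 203.0.113.0/24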

At this point you should have a PMM Server in GCP running on a Compute Engine instance.

The next step is to install pmm-client on the database hosts and add services for monitoring.

Here I’ve launched a single standalone Percona Server 5.6 on another Compute Engine instance in the same project (thematic-acumen-204008).

jerichorivera@percona-support:~/GCE$ gcloud compute instances create mysql1 --tags mysql1 --image-family centos-7 --image-project centos-cloud --machine-type n1-standard-2 --zone us-west1-b --create-disk=size=50GB,type=pd-standard,device-name=sdb --description "MySQL1 on GCP" --metadata-from-file startup-script=compute-instance-deploy.sh
Created [https://www.googleapis.com/compute/v1/projects/thematic-acumen-204008/zones/us-west1-b/instances/mysql1].
NAME    ZONE        MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP     STATUS
mysql1  us-west1-b  n1-standard-2               10.138.0.3   35.233.187.253  RUNNING

I installed Percona Server 5.6 and pmm-client, and then added services. Take note that since the PMM Server and the MySQL server are in the same project and the same VPC network, we can connect directly through the INTERNAL_IP 10.138.0.2; otherwise, use the EXTERNAL_IP 35.233.216.225.
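For reference, here is a hedged sketch of how pmm-client can be installed on a CentOS 7 instance through Percona's yum repository; check Percona's documentation for the repository package that is current when you install:

sudo yum install -y https://repo.percona.com/yum/percona-release-latest.noarch.rpm
sudo yum install -y pmm-client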

[root@mysql1 jerichorivera]# pmm-admin config --server 10.138.0.2
OK, PMM server is alive.
PMM Server      | 10.138.0.2
Client Name     | mysql1
Client Address  | 10.138.0.3
[root@mysql1 jerichorivera]#
[root@mysql1 jerichorivera]# pmm-admin check-network
PMM Network Status
Server Address | 10.138.0.2
Client Address | 10.138.0.3
* System Time
NTP Server (0.pool.ntp.org)         | 2018-05-22 06:45:47 +0000 UTC
PMM Server                          | 2018-05-22 06:45:47 +0000 GMT
PMM Client                          | 2018-05-22 06:45:47 +0000 UTC
PMM Server Time Drift               | OK
PMM Client Time Drift               | OK
PMM Client to PMM Server Time Drift | OK
* Connection: Client --> Server
-------------------- -------
SERVER SERVICE       STATUS
-------------------- -------
Consul API           OK
Prometheus API       OK
Query Analytics API  OK
Connection duration | 408.185µs
Request duration    | 6.810709ms
Full round trip     | 7.218894ms
No monitoring registered for this node identified as 'mysql1'.
[root@mysql1 jerichorivera]# pmm-admin add mysql --create-user
[linux:metrics] OK, now monitoring this system.
[mysql:metrics] OK, now monitoring MySQL metrics using DSN pmm:***@unix(/mnt/disks/disk1/data/mysql.sock)
[mysql:queries] OK, now monitoring MySQL queries from slowlog using DSN pmm:***@unix(/mnt/disks/disk1/data/mysql.sock)
[root@mysql1 jerichorivera]# pmm-admin list
pmm-admin 1.10.0
PMM Server      | 10.138.0.2
Client Name     | mysql1
Client Address  | 10.138.0.3
Service Manager | linux-systemd
-------------- ------- ----------- -------- ----------------------------------------------- ------------------------------------------
SERVICE TYPE   NAME    LOCAL PORT  RUNNING  DATA SOURCE                                     OPTIONS
-------------- ------- ----------- -------- ----------------------------------------------- ------------------------------------------
mysql:queries  mysql1  -           YES      pmm:***@unix(/mnt/disks/disk1/data/mysql.sock)  query_source=slowlog, query_examples=true
linux:metrics  mysql1  42000       YES      -
mysql:metrics  mysql1  42002       YES      pmm:***@unix(/mnt/disks/disk1/data/mysql.sock)

Lastly, in case you need to delete the PMM Server instance, just execute the delete command below to completely remove the instance and the attached disk. Be aware that you can remove the boot disk but retain the attached persistent disk if you prefer.

jerichorivera@percona-support:~/GCE$ gcloud compute instances delete pmm-server
The following instances will be deleted. Any attached disks configured
to be auto-deleted will be deleted unless they are attached to any
other instances or the `--keep-disks` flag is given and specifies them
for keeping. Deleting a disk is irreversible and any data on the disk
will be lost.
 - [pmm-server] in [us-west1-b]
Do you want to continue (Y/n)?  y
Deleted [https://www.googleapis.com/compute/v1/projects/thematic-acumen-204008/zones/us-west1-b/instances/pmm-server].
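If you want to delete the instance but keep the attached data disk, gcloud supports this directly through the --keep-disks flag mentioned in the confirmation prompt above; a sketch:

gcloud compute instances delete pmm-server --keep-disks=data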

The other option is to install PMM on Google Container Engine, which Manjot Singh explained in his blog post.


Dec 21, 2016

Installing Percona Monitoring and Management on Google Container Engine (Kubernetes)


I am working with a client that is on Google Cloud Services (GCS) and wants to use Percona Monitoring and Management (PMM). They liked the idea of using Google Container Engine (GKE) to manage the Docker container that pmm-server uses.

The regular install instructions are here: https://www.percona.com/doc/percona-monitoring-and-management/install.html

Since Google Container Engine runs on Kubernetes, we had to make some interesting changes to the server install instructions.

First, you will want to get the gcloud shell. This is done by clicking the gcloud shell button at the top right of your screen when logged into your GCS project.


Once you are in the shell, you just need to run some commands to get up and running.

Let’s set our compute zone (the region is implied by the zone):

manjot_singh@googleproject:~$ gcloud config set compute/zone asia-east1-c
Updated property [compute/zone].

Then let’s set up our auth:

manjot_singh@googleproject:~$ gcloud auth application-default login
...
These credentials will be used by any library that requests
Application Default Credentials.

Now we are ready to go.

Normally, we create a persistent container called pmm-data to hold the data the server collects, so that it survives container deletions and upgrades. For GCS, we will instead create persistent disks, and use the minimum (Google) recommended size for each.

manjot_singh@googleproject:~$ gcloud compute disks create --size=200GB --zone=asia-east1-c pmm-prom-data-pv
Created [https://www.googleapis.com/compute/v1/projects/googleproject/zones/asia-east1-c/disks/pmm-prom-data-pv].
NAME              ZONE          SIZE_GB  TYPE         STATUS
pmm-prom-data-pv  asia-east1-c  200      pd-standard  READY
manjot_singh@googleproject:~$ gcloud compute disks create --size=200GB --zone=asia-east1-c pmm-consul-data-pv
Created [https://www.googleapis.com/compute/v1/projects/googleproject/zones/asia-east1-c/disks/pmm-consul-data-pv].
NAME                ZONE          SIZE_GB  TYPE         STATUS
pmm-consul-data-pv  asia-east1-c  200      pd-standard  READY
manjot_singh@googleproject:~$ gcloud compute disks create --size=200GB --zone=asia-east1-c pmm-mysql-data-pv
Created [https://www.googleapis.com/compute/v1/projects/googleproject/zones/asia-east1-c/disks/pmm-mysql-data-pv].
NAME               ZONE          SIZE_GB  TYPE         STATUS
pmm-mysql-data-pv  asia-east1-c  200      pd-standard  READY
manjot_singh@googleproject:~$ gcloud compute disks create --size=200GB --zone=asia-east1-c pmm-grafana-data-pv
Created [https://www.googleapis.com/compute/v1/projects/googleproject/zones/asia-east1-c/disks/pmm-grafana-data-pv].
NAME                 ZONE          SIZE_GB  TYPE         STATUS
pmm-grafana-data-pv  asia-east1-c  200      pd-standard  READY

Ignoring messages about disk formatting, we are ready to create our Kubernetes cluster:

manjot_singh@googleproject:~$ gcloud container clusters create pmm-server --num-nodes 1 --machine-type n1-standard-2
Creating cluster pmm-server...done.
Created [https://container.googleapis.com/v1/projects/googleproject/zones/asia-east1-c/clusters/pmm-server].
kubeconfig entry generated for pmm-server.
NAME        ZONE          MASTER_VERSION  MASTER_IP       MACHINE_TYPE   NODE_VERSION  NUM_NODES  STATUS
pmm-server  asia-east1-c  1.4.6           999.911.999.91  n1-standard-2  1.4.6         1          RUNNING

You should now see something like:

manjot_singh@googleproject:~$ gcloud compute instances list
NAME                                       ZONE          MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP      STATUS
gke-pmm-server-default-pool-73b3f656-20t0  asia-east1-c  n1-standard-2               10.14.10.14  911.119.999.11  RUNNING

Now that our container manager is up, we need to create two configs for the “pod” we are creating to run our container. The first will be used only to initialize the server and move the container’s data directories to the persistent disks; the second will be the actual running server.

manjot_singh@googleproject:~$ vi pmm-server-init.json
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
      "name": "pmm-server",
      "labels": {
          "name": "pmm-server"
      }
  },
  "spec": {
    "containers": [{
        "name": "pmm-server",
        "image": "percona/pmm-server:1.0.6",
        "env": [{
                "name":"SERVER_USER",
                "value":"http_user"
            },{
                "name":"SERVER_PASSWORD",
                "value":"http_password"
            },{
                "name":"ORCHESTRATOR_USER",
                "value":"orchestrator"
            },{
                "name":"ORCHESTRATOR_PASSWORD",
                "value":"orch_pass"
            }
        ],
        "ports": [{
            "containerPort": 80
            }
        ],
        "volumeMounts": [{
          "mountPath": "/opt/prometheus/d",
          "name": "pmm-prom-data"
        },{
          "mountPath": "/opt/c",
          "name": "pmm-consul-data"
        },{
          "mountPath": "/var/lib/m",
          "name": "pmm-mysql-data"
        },{
          "mountPath": "/var/lib/g",
          "name": "pmm-grafana-data"
        }]
      }
    ],
    "restartPolicy": "Always",
    "volumes": [{
      "name":"pmm-prom-data",
      "gcePersistentDisk": {
          "pdName": "pmm-prom-data-pv",
          "fsType": "ext4"
      }
    },{
      "name":"pmm-consul-data",
      "gcePersistentDisk": {
          "pdName": "pmm-consul-data-pv",
          "fsType": "ext4"
      }
    },{
      "name":"pmm-mysql-data",
      "gcePersistentDisk": {
          "pdName": "pmm-mysql-data-pv",
          "fsType": "ext4"
      }
    },{
      "name":"pmm-grafana-data",
      "gcePersistentDisk": {
          "pdName": "pmm-grafana-data-pv",
          "fsType": "ext4"
      }
    }]
  }
}

manjot_singh@googleproject:~$ vi pmm-server.json
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
      "name": "pmm-server",
      "labels": {
          "name": "pmm-server"
      }
  },
  "spec": {
    "containers": [{
        "name": "pmm-server",
        "image": "percona/pmm-server:1.0.6",
        "env": [{
                "name":"SERVER_USER",
                "value":"http_user"
            },{
                "name":"SERVER_PASSWORD",
                "value":"http_password"
            },{
                "name":"ORCHESTRATOR_USER",
                "value":"orchestrator"
            },{
                "name":"ORCHESTRATOR_PASSWORD",
                "value":"orch_pass"
            }
        ],
        "ports": [{
            "containerPort": 80
            }
        ],
        "volumeMounts": [{
          "mountPath": "/opt/prometheus/data",
          "name": "pmm-prom-data"
        },{
          "mountPath": "/opt/consul-data",
          "name": "pmm-consul-data"
        },{
          "mountPath": "/var/lib/mysql",
          "name": "pmm-mysql-data"
        },{
          "mountPath": "/var/lib/grafana",
          "name": "pmm-grafana-data"
        }]
      }
    ],
    "restartPolicy": "Always",
    "volumes": [{
      "name":"pmm-prom-data",
      "gcePersistentDisk": {
          "pdName": "pmm-prom-data-pv",
          "fsType": "ext4"
      }
    },{
      "name":"pmm-consul-data",
      "gcePersistentDisk": {
          "pdName": "pmm-consul-data-pv",
          "fsType": "ext4"
      }
    },{
      "name":"pmm-mysql-data",
      "gcePersistentDisk": {
          "pdName": "pmm-mysql-data-pv",
          "fsType": "ext4"
      }
    },{
      "name":"pmm-grafana-data",
      "gcePersistentDisk": {
          "pdName": "pmm-grafana-data-pv",
          "fsType": "ext4"
      }
    }]
  }
}
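The two files are intentionally almost identical: only the four mountPath values differ, because the init file mounts the persistent disks at temporary paths (such as /opt/prometheus/d) so that the container's original data directories remain visible for copying. You can confirm this with a quick diff:

diff pmm-server-init.json pmm-server.json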

Then create it:

manjot_singh@googleproject:~$ kubectl create -f pmm-server-init.json
pod "pmm-server" created

Now we need to move data to persistent disks:

manjot_singh@googleproject:~$ kubectl exec -it pmm-server bash
root@pmm-server:/opt# supervisorctl stop grafana
grafana: stopped
root@pmm-server:/opt# supervisorctl stop prometheus
prometheus: stopped
root@pmm-server:/opt# supervisorctl stop consul
consul: stopped
root@pmm-server:/opt# supervisorctl stop mysql
mysql: stopped
root@pmm-server:/opt# mv consul-data/* c/
root@pmm-server:/opt# chown pmm.pmm c
root@pmm-server:/opt# cd prometheus/
root@pmm-server:/opt/prometheus# mv data/* d/
root@pmm-server:/opt/prometheus# chown pmm.pmm d
root@pmm-server:/opt/prometheus# cd /var/lib
root@pmm-server:/var/lib# mv mysql/* m/
root@pmm-server:/var/lib# chown mysql.mysql m
root@pmm-server:/var/lib# mv grafana/* g/
root@pmm-server:/var/lib# chown grafana.grafana g
root@pmm-server:/var/lib# exit
manjot_singh@googleproject:~$ kubectl delete pods pmm-server
pod "pmm-server" deleted

Now recreate the pmm-server container with the actual configuration:

manjot_singh@googleproject:~$ kubectl create -f pmm-server.json
pod "pmm-server" created

It’s up!

Now let’s get access to it by exposing it to the internet:

manjot_singh@googleproject:~$ kubectl expose deployment pmm-server --type=LoadBalancer
service "pmm-server" exposed
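Note that the JSON above created pmm-server as a bare pod rather than a deployment, so if the expose command complains about a missing deployment on your setup, exposing the pod directly is one possible variant (a hedged sketch, not what this session ran):

kubectl expose pod pmm-server --type=LoadBalancer --port=80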

You can get more information on this by running:

manjot_singh@googleproject:~$ kubectl describe services pmm-server
Name:                   pmm-server
Namespace:              default
Labels:                 run=pmm-server
Selector:               run=pmm-server
Type:                   LoadBalancer
IP:                     10.3.10.3
Port:                   <unset> 80/TCP
NodePort:               <unset> 31757/TCP
Endpoints:              10.0.0.8:80
Session Affinity:       None
Events:
  FirstSeen     LastSeen        Count   From                    SubobjectPath   Type            Reason                  Message
  ---------     --------        -----   ----                    -------------   --------        ------                  -------
  22s           22s             1       {service-controller }                   Normal          CreatingLoadBalancer    Creating load balancer

To find the public IP of your PMM server, look under “EXTERNAL-IP”:

manjot_singh@googleproject:~$ kubectl get services
NAME         CLUSTER-IP    EXTERNAL-IP      PORT(S)   AGE
kubernetes   10.3.10.3     <none>           443/TCP   7m
pmm-server   10.3.10.99    999.911.991.91   80/TCP    1m

That’s it, just visit the external IP in your browser and you should see the PMM landing page!
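If you prefer to verify from the shell before opening a browser, a quick probe works too; replace the placeholder with your EXTERNAL-IP:

curl -sI http://<EXTERNAL-IP>/ | head -n 1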

One of the things we didn’t resolve was being able to access the pmm-server container from within the VPC. The client had to go through the open internet and hit PMM via the public IP. I hope to work on this some more and resolve it in the future.

I have also talked to our team about making mounts for persistent disks easier so that we can use less mounts and make the configuration and setup easier.


Nov 04, 2014

Google Launches Managed Service For Running Docker-Based Applications On Its Cloud Platform

Google today announced the alpha launch of Google Container Engine, a new managed service for building and running Docker container-based applications on its cloud platform. Docker is probably the hottest technology in developer circles these days — it’s almost impossible to have a discussion with a developer without it coming up — and Google’s Cloud Platform team…
