Nov 27, 2019

Running PMM1 and PMM2 Clients on the Same Host

Want to try out Percona Monitoring and Management 2 (PMM 2) but you’re not ready to turn off your PMM 1 environment?  This blog is for you! Keep in mind that the methods described are not intended to be a long-term migration strategy, but rather a way to deploy a few clients in order to sample PMM 2 before you commit to the upgrade.

Here are step-by-step instructions for deploying PMM 1 and PMM 2 client functionality (i.e. pmm-client and pmm2-client) on the same host.

  1. Deploy PMM 1 on Server1 (you’ve probably already done this)
  2. Install and setup pmm-client for connectivity to Server1
  3. Deploy PMM 2 on Server2
  4. Install and setup pmm2-client for connectivity to Server2
  5. Remove pmm-client and switch completely to pmm2-client

The first few steps are already described in our PMM1 documentation so we are simply providing links to those documents.  Here we’ll focus on steps 4 and 5.

Install and Setup pmm2-client Connectivity to Server2

It’s not possible to install both clients from a repository at the same time. So you’ll need to download a tarball of pmm2-client. Here’s a link to the latest version directly from our site.

Download pmm2-client Tarball

* Note that depending on when you’re reading this, the commands below may not reference the latest version, so update them as needed for the version you downloaded.

$ wget https://www.percona.com/downloads/pmm2/2.1.0/binary/tarball/pmm2-client-2.1.0.tar.gz

Extract Files From pmm2-client Tarball

$ tar -zxvf pmm2-client-2.1.0.tar.gz 
$ cd pmm2-client-2.1.0

Register and Generate Configuration File

Now it’s time to set up a PMM 2 client. In our example, the PMM2 server IP is 172.17.0.2 and the monitored host IP is 172.17.0.1.

$ ./bin/pmm-agent setup --config-file=config/pmm-agent.yaml \
--paths-node_exporter="$PWD/bin/node_exporter" \
--paths-mysqld_exporter="$PWD/bin/mysqld_exporter" \
--paths-mongodb_exporter="$PWD/bin/mongodb_exporter" \
--paths-postgres_exporter="$PWD/bin/postgres_exporter" \
--paths-proxysql_exporter="$PWD/bin/proxysql_exporter" \
--server-insecure-tls --server-address=172.17.0.2:443 \
--server-username=admin  --server-password="admin" 172.17.0.1 generic node8.ca

Start pmm-agent

Let’s run pmm-agent inside a screen session.  There’s no service manager integration when deploying the tarball alongside pmm-client, so if your server restarts, pmm-agent won’t automatically resume.

# screen -S pmm-agent

$ ./bin/pmm-agent --config-file="$PWD/config/pmm-agent.yaml"

Check the Current State of the Agent

$ ./bin/pmm-admin list
Service type  Service name         Address and port  Service ID

Agent type                  Status     Agent ID                                        Service ID
pmm-agent                   connected  /agent_id/805db700-3607-40a9-a1fa-be61c76fe755  
node_exporter               running    /agent_id/805eb8f6-3514-4c9b-a05e-c5705755a4be

Add MySQL Service

Detach the screen session (Ctrl-A, then D), then add the MySQL service:

$ ./bin/pmm-admin add mysql --use-perfschema --username=root mysqltest
MySQL Service added.
Service ID  : /service_id/28c4a4cd-7f4a-4abd-a999-86528e38992b
Service name: mysqltest

Here is the state of pmm-agent:

$ ./bin/pmm-admin list
Service type  Service name         Address and port  Service ID
MySQL         mysqltest            127.0.0.1:3306    /service_id/28c4a4cd-7f4a-4abd-a999-86528e38992b

Agent type                  Status     Agent ID                                        Service ID
pmm-agent                   connected  /agent_id/805db700-3607-40a9-a1fa-be61c76fe755   
node_exporter               running    /agent_id/805eb8f6-3514-4c9b-a05e-c5705755a4be   
mysqld_exporter             running    /agent_id/efb01d86-58a3-401e-ae65-fa8417f9feb2  /service_id/28c4a4cd-7f4a-4abd-a999-86528e38992b
qan-mysql-perfschema-agent  running    /agent_id/26836ca9-0fc7-4991-af23-730e6d282d8d  /service_id/28c4a4cd-7f4a-4abd-a999-86528e38992b

Confirm you can see activity in each of the two PMM Servers:

PMM 1 PMM 2

Remove pmm-client and Switch Completely to pmm2-client

Once you’ve decided to move over completely to PMM 2, it’s better to switch from the tarball version to an installation from the repository. This makes client updates much easier and registers pmm-agent as a service that starts automatically with the server. We will also show you how to make the switch without re-adding monitored instances.

Configure Percona Repositories

$ sudo yum install https://repo.percona.com/yum/percona-release-latest.noarch.rpm 
$ sudo percona-release disable all 
$ sudo percona-release enable original release 
$ yum list | grep pmm 
pmm-client.x86_64                    1.17.2-1.el6                  percona-release-x86_64
pmm2-client.x86_64                   2.1.0-1.el6                   percona-release-x86_64

Here is a link to the apt variant.
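For Debian and Ubuntu hosts, the equivalent repository setup closely mirrors the yum steps above (a sketch based on the same percona-release tool; adjust for your distribution and verify package availability):

$ wget https://repo.percona.com/apt/percona-release_latest.generic_all.deb
$ sudo dpkg -i percona-release_latest.generic_all.deb
$ sudo percona-release disable all
$ sudo percona-release enable original release
$ sudo apt-get update
$ apt-cache search pmm2-client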

Remove pmm-client

$ yum remove pmm-client

Install pmm2-client

$ yum install pmm2-client
Loaded plugins: priorities, update-motd, upgrade-helper
4 packages excluded due to repository priority protections
Resolving Dependencies
--> Running transaction check
---> Package pmm2-client.x86_64 0:2.1.0-5.el6 will be installed
...
Installed:
  pmm2-client.x86_64 0:2.1.0-5.el6                                                                                                                                                           

Complete!

Configure pmm2-client

Let’s copy the pmm2-client configuration file currently in use so that we can avoid re-adding monitored instances.

$ cp pmm2-client-2.1.0/config/pmm-agent.yaml /tmp

We need to update the exporter paths in this file to their new location (/usr/local/percona/pmm2/exporters/).

$ sed -i 's|node_exporter:.*|node_exporter: /usr/local/percona/pmm2/exporters/node_exporter|g' /tmp/pmm-agent.yaml
$ sed -i 's|mysqld_exporter:.*|mysqld_exporter: /usr/local/percona/pmm2/exporters/mysqld_exporter|g' /tmp/pmm-agent.yaml
$ sed -i 's|mongodb_exporter:.*|mongodb_exporter: /usr/local/percona/pmm2/exporters/mongodb_exporter|g' /tmp/pmm-agent.yaml 
$ sed -i 's|postgres_exporter:.*|postgres_exporter: /usr/local/percona/pmm2/exporters/postgres_exporter|g' /tmp/pmm-agent.yaml
$ sed -i 's|proxysql_exporter:.*|proxysql_exporter: /usr/local/percona/pmm2/exporters/proxysql_exporter|g' /tmp/pmm-agent.yaml

Now replace the default configuration file with our modified one and restart the pmm-agent service.

$ cp /tmp/pmm-agent.yaml /usr/local/percona/pmm2/config/
$ systemctl restart pmm-agent

Check Monitored Services

So now we can verify the current state of monitored instances.

$ pmm-admin list

You can also check this on the PMM Server side.

Nov 22, 2019

Tips for Designing Grafana Dashboards


As Grafana powers our star product – Percona Monitoring and Management (PMM) – we have developed a lot of experience creating Grafana dashboards over the last few years.  In this article, I will share some of the considerations for designing Grafana dashboards. As usual, questions of design are quite subjective; I do not expect you to choose to apply all of them to your dashboards, but I hope they will help you think through your dashboard design.

Design Practical Dashboards

Grafana features many panel types, and even more are available as plugins. It may be very attractive to use many of them in your dashboards, with many different visualization options. Do not!  Stick to a few data visualization patterns and only add additional visualizations when they provide practical value, not because they are cool.  The Graph and Singlestat panel types probably cover 80% of use cases.

Do Not Place Too Many Graphs Side by Side

This will probably depend a lot on how your dashboards are used.  If your dashboard is designed for large screens placed on the wall, you may be able to fit more graphs side by side; if your dashboard needs to scale down to a lower-resolution laptop screen, I would suggest sticking to 2-3 graphs in a row.

Use Proper Units

Grafana allows you to specify a unit for the data type displayed. Use it! Without the unit set, values will not be properly shortened and will be very hard to read:

Grafana Dashboards

Compare this to

Grafana Dashboards2

Mind Decimals

You can specify the number of digits displayed after the decimal point, or leave it at the default.  I found the default does not always work very well, for example here:

Grafana Dashboards3

For some reason, on the panel axis we have far too many digits displayed after the decimal point.  Grafana also often picks three digits after the decimal point, as in the table below, which I find inconvenient – at a glance it is hard to tell whether we’re dealing with a decimal point or with “,” as a thousands separator, so I may be looking at 2462 GiB there.  While that is not plausible in this case, there are cases, such as data rates, where a 1000x difference in values is quite possible.  Instead, I prefer explicitly setting it to one or two decimals, which makes it clear that we’re not looking at thousands.

Label your Axis

You can label your axis, which especially makes sense if what is presented is not as obvious as in this example; here we’re using negative values to plot writes to the swap file.

Grafana Dashboards4

Use Shared Crosshair or Tooltip

In Dashboard Settings, you will find the “Graph Tooltip” option, which can be set to “Default”, “Shared Crosshair”, or “Shared Tooltip”.  This is how these will look:

Grafana Dashboards5

Grafana Dashboards 6


Shared Crosshair shows a line matching the same time on all panels, while Shared Tooltip shows the tooltip value on all panels at the same time.  You can pick what makes sense for you; my favorite is the tooltip setting because it allows me to visually compare the same point in time without making the dashboard too slow and busy.

Note there is a handy shortcut, CTRL-O, which allows you to cycle between these settings for any dashboard.

Pick Colors

If you’re displaying truly dynamic information you will likely have to rely on Grafana’s automatic color assignment, but if not, you can pick specific colors for all values being plotted.  This will prevent colors from potentially being re-assigned to different values without you planning to do so.

Grafana Dashboards 7

When picking colors, you also want to make sure they make logical sense. For example, I think “green” is a better color than “red” for free memory.  As you pick colors, use the same colors for the same type of information when you show it on different panels if possible, because it makes the dashboards easier to understand.

I would even suggest sticking to the same (or similar) colors for the same kind of data – if you have many panels which show disk input and output, using similar colors for them can be a good idea.

Fill Stacking Graphs

Grafana does not require it, but I would suggest using fill when you display stacked data, and not using fill when you’re plotting multiple independent values.  Take a look at these graphs:

In the first graph, I need to look at the actual plotted value to understand what I’m looking at. In the second graph, that value is meaningless by itself; what matters is the filled amount, and I can see on the second graph how much the Cache (the blue value) has shrunk.

I prefer using a fill factor of 6+ so it is easier to match the fill colors with colors in the table.   For the same reason, I prefer not to use the fill gradient on such graphs as it makes it much harder to see the color and the filled volume.

Do Not Abuse Double Axis

Graphs that use a double axis are much harder to understand.  I used to use them very often, but now I avoid them when possible, only using a double axis when I absolutely want to limit the number of panels.

Note that in this case I think the gradient fill works OK, because there is only one value displayed as a line, so you can’t get confused about whether to look at the total value or the “filled volume”.

Separate Data of Different Scales on Different Graphs

I used to plot InnoDB rows read and rows written on the same graph. It is quite common for reads to be 100x higher in volume than writes, crowding them out and making even significant changes in writes very hard to see.  Splitting them into different graphs solved this issue.

Consider Staircase Graphs

In monitoring applications, we often display average rates computed over a period of time.  If this is the case, we do not know how the rate was changing within that period, and it would be misleading to show that. This especially matters if you’re displaying only a few data points.

Let’s look at this graph which is being viewed with one-hour resolution:

This visually suggests that the number of rows read was falling from 16:00 to 18:00. Compare it to the staircase graph:

It simply shows us that the value at 18:00 was higher than at 17:00, but it does not make any claim about how the value changed within that period.

This display, however, has another issue. Let’s look at the same data set with 5min resolution:

We can see the average value from 16:00 to 17:00 was lower than from 17:00 to 18:00, which is NOT what the lower-resolution staircase graph shows – there, the value for 17:00 to 18:00 is actually lower!

The reason is that if we compute rate() over one hour on the Prometheus side at 17:00, it is returned as a data point for 17:00 even though the average rate really covers 16:00 to 17:00, while the staircase graph plots it forward from 17:00 to 18:00 until a new value is available.  It is off by one hour.

To fix it, you need to shift the data appropriately. In Prometheus, which we use in PMM, I can use an offset operator to shift the data to be displayed correctly:
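As an illustration only (the metric name and the one-hour window here are assumptions, and the exact shift you need depends on your aggregation interval), an offset applied inside a rate() expression looks like this when queried through the Prometheus HTTP API:

$ curl -s 'http://localhost:9090/api/v1/query' \
  --data-urlencode 'query=rate(mysql_global_status_innodb_rows_read[1h] offset 1h)'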

Provide Multiple Resolutions

I’m a big fan of being able to see the data on the same dashboard with different resolutions, which can be done through a special dashboard variable of type “Interval”.  High-resolution data can provide a great level of detail but can be very volatile.

While lower resolution can hide this level of detail, it does show trends better.

Multiple Aggregates for the Same Metrics

To get even more insights, you can consider plotting the same metrics with different aggregates applied to it:

In this case, we are looking at the same variable – threads_running – but at its average value over a period of time versus max (peak) value. Both of them are meaningful in a different way.
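For reference, two panels behind such a comparison could be driven by queries roughly like the following (the metric name and the five-minute window are assumptions for illustration, again shown via the Prometheus HTTP API):

$ curl -s 'http://localhost:9090/api/v1/query' \
  --data-urlencode 'query=avg_over_time(mysql_global_status_threads_running[5m])'
$ curl -s 'http://localhost:9090/api/v1/query' \
  --data-urlencode 'query=max_over_time(mysql_global_status_threads_running[5m])'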

You can also notice here that points are used for the max value instead of a line. This is in general good practice for highly volatile data, as plotting a line for something which changes wildly is messy and does not provide much value.

Use Help and Panel Links

If you fill out a description for the panel, it will be visible if you place your mouse over the tiny “i” sign. This is very helpful to explain what the panel shows and how to use this data.  You can use Markup for formatting.  You can also provide one or more panel links, that you can use for additional help or drill down.

With newer Grafana versions, you can even define a more advanced drill-down, which can contain different URLs based on the series you are looking at, as well as other templating variables:

Summary

This list of considerations for designing Grafana Dashboards and best practices is by no means complete, but I hope you pick up an idea or two which will allow you to create better dashboards!

Oct 29, 2019

Monitoring PostgreSQL Databases Using PMM


PostgreSQL is a widely used open source database and has been the DBMS of the year for the past two years in the DB-Engines rankings. As such, there is always a need for reliable and robust monitoring solutions. While there are some commercial monitoring tools, there is an equally good number of open source tools available for monitoring PostgreSQL. Percona Monitoring and Management (PMM) is one of those open source solutions, continuously improved and maintained by Percona. It is simple to set up and easy to use.

PMM can monitor not only PostgreSQL but also MySQL and MongoDB databases, so it is a simple monitoring solution for monitoring multiple types of databases. In this blog post, you will see all the steps involved in monitoring PostgreSQL databases using PMM.

This is what we will be discussing:

  1. Using the PMM docker image to create a PMM server.
  2. Installing PMM client on a Remote PostgreSQL server and connecting the PostgreSQL Client to PMM Server.
  3. Creating required users and permissions on the PostgreSQL server.
  4. Enabling PostgreSQL Monitoring with and without QAN (Query Analytics)

If you already know how to create a PMM Server, please skip the PMM server setup and proceed to the PostgreSQL client setup.

Using the PMM docker image to create a PMM server

PMM uses a client-server architecture where the clients are the PostgreSQL, MySQL, or MongoDB databases and the server is the PMM Server. We see the metrics on Grafana dashboards by connecting to the PMM Server UI. To demonstrate this setup, I have created two virtual machines, where one is the PMM Server and the other is the PostgreSQL database server.

192.168.80.10 is my PMM-Server
192.168.80.20 is my PG 11 Server

Step 1 : 

On the PMM Server, install and start docker.

# yum install docker -y
# systemctl start docker

Here are the installation instructions of PMM Server.

Step 2 :

Pull the pmm-server docker image. I am using the latest PMM2 docker image for this setup.

$ docker pull percona/pmm-server:2

You see a docker image of size 1.48 GB downloaded after the above step.

$ docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
docker.io/percona/pmm-server 2 cd30e7343bb1 2 weeks ago 1.48 GB

Step 3 :

Create a container for persistent PMM data.

$ docker create \
-v /srv \
--name pmm-data \
percona/pmm-server:2 /bin/true

Step 4 :

Create and launch the PMM Server. In the following step, you can see that we are binding the port 80 of the container to the port 80 of the host machine. Likewise for port 443.

$ docker run -d \
-p 80:80 \
-p 443:443 \
--volumes-from pmm-data \
--name pmm-server \
--restart always \
percona/pmm-server:2

At this stage, you can modify certain settings such as the memory you wish to allocate to the container or the CPU share, etc. You can also see more such configurable options using docker run --help. The following is just an example of how you can modify the above step with some memory or CPU allocations.

$ docker run -d \
-p 80:80 \
-p 443:443 \
--volumes-from pmm-data \
--name pmm-server \
--cpu-shares 100 \
--memory 1024m \
--restart always \
percona/pmm-server:2

You can list the containers started for validation using docker ps.

$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
bb6043082d3b percona/pmm-server:2 "/opt/entrypoint.sh" About a minute ago Up About a minute 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp pmm-server

Step 5 : 

You can now see the PMM Server dashboard in the browser using the host IP address. For my setup, the PMM Server’s IP address is 192.168.80.10. As soon as you put the IP in the browser, you will be asked to enter credentials, as seen in the image below. The default user and password are both admin.

create a PMM server

And then you will be asked to change the password or skip.

PMM Server setup is completed after this step.

Installing PMM client on a Remote PostgreSQL server

I have a PostgreSQL 11.5 server running on 192.168.80.20. The following steps demonstrate how we can install and configure the PMM client to enable monitoring from the PMM server (192.168.80.10).

Before you proceed further, you must ensure that ports 80 and 443 are both enabled on the PMM server for the PG 11 Server to connect. In order to test that, I have used telnet to validate whether ports 80 and 443 are open on the PMM Server for the pg11 server.

[root@pg11]$ hostname -I
192.168.80.20

[root@pg11]$ telnet 192.168.80.10 80
Trying 192.168.80.10...
Connected to 192.168.80.10.
Escape character is '^]'.

[root@pg11]$ telnet 192.168.80.10 443
Trying 192.168.80.10...
Connected to 192.168.80.10.
Escape character is '^]'.

Step 6 :

There are very few steps you need to perform on the PostgreSQL server to enable it as a client for the PMM server. The first step is to install the PMM Client on the PostgreSQL database server as follows. Based on the current PMM release, I am installing pmm2-client today, but this may change once we have a new PMM release.

$ sudo yum install https://repo.percona.com/yum/percona-release-latest.noarch.rpm
$ sudo yum install pmm2-client -y

Step 7 :

The next step is to connect the client (the PostgreSQL server) to the PMM Server. We could use pmm-admin config to achieve that. The following is the general syntax:

$ pmm-admin config [<flags>] [<node-address>] [<node-type>] [<node-name>]

The following are the flags and other options I could use with my setup.

flags        : --server-insecure-tls
               --server-url=https://admin:admin@192.168.80.10:443
               (--server-url should contain the PMM Server Host information)

node-address : 192.168.80.20
               (My PostgreSQL Server)

node-type    : generic
               (As I am running my PostgreSQL database on a Virtual Machine but not on a Container, it is generic.)

node-name    : pg-client
               (Can be any nodename you could use to uniquely identify this database server on your PMM Server Dashboard)

So the final syntax for my setup looks like the below. We can run this command as root or by using the sudo command.

Syntax : 7a

$ pmm-admin config --server-insecure-tls --server-url=https://admin:admin@192.168.80.10:443 192.168.80.20 generic pg-client

$ pmm-admin config --server-insecure-tls --server-url=https://admin:admin@192.168.80.10:443 192.168.80.20 generic pg-client
Checking local pmm-agent status...
pmm-agent is running.
Registering pmm-agent on PMM Server...
Registered.
Configuration file /usr/local/percona/pmm2/config/pmm-agent.yaml updated.
Reloading pmm-agent configuration...
Configuration reloaded.
Checking local pmm-agent status...
pmm-agent is running.

Syntax : 7b

You could also use a simpler syntax, without node-address, node-type, and node-name:

$ pmm-admin config --server-insecure-tls --server-url=https://admin:admin@192.168.80.10:443

But when you use such a simple syntax, node-address, node-type, and node-name are defaulted to certain values. If the defaults are incorrect for your server configuration, you may be better off passing these details explicitly, as I have done in syntax 7a. In order to validate whether the defaults are correct, you can simply use pmm-admin config --help. In the following output, you can see that node-address defaults to 10.0.2.15, which is incorrect for my setup; it should be 192.168.80.20.

# pmm-admin config --help
usage: pmm-admin config [<flags>] [<node-address>] [<node-type>] [<node-name>]

Configure local pmm-agent

Flags:
  -h, --help                   Show context-sensitive help (also try --help-long and --help-man)
      --version                Show application version
...
...
...
Args:
  [<node-address>]  Node address (autodetected default: 10.0.2.15)

Below is an example where the default settings were perfect because I had configured my database server the right way.

# pmm-admin config --help
usage: pmm-admin config [<flags>] [<node-address>] [<node-type>] [<node-name>]

Configure local pmm-agent

Flags:
  -h, --help                   Show context-sensitive help (also try --help-long and --help-man)
...
...
Args:
  [<node-address>]  Node address (autodetected default: 192.168.80.20)
  [<node-type>]     Node type, one of: generic, container (default: generic)
  [<node-name>]     Node name (autodetected default: pg-client)

Using steps 6 and 7a, I have finished installing the PMM client on the PostgreSQL server and also connected it to the PMM Server. If the above steps are successful, you should see the client listed under Nodes, as seen in the following image. Else, something went wrong.
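As an extra check from the client side (not part of the original screenshots), you can also run pmm-admin status, which reports whether the local agent is connected to the PMM Server and lists the agents running on the node:

$ pmm-admin status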

Creating required users and permissions on the PostgreSQL server

In order to monitor your PostgreSQL server using PMM, you need to create a user that the PMM agent can use to collect database stats. Starting from PostgreSQL 10, you do not need to grant SUPERUSER or use SECURITY DEFINER (to avoid granting SUPERUSER); you can simply grant the pg_monitor role to the monitoring user. In my next blog post, you will see how we could use SECURITY DEFINER to avoid granting SUPERUSER when monitoring PostgreSQL 9.6 or older.

Assuming that your PostgreSQL Version is 10 or higher, you can use the following steps.

Step 1 : 

Create a PostgreSQL user that can be used for monitoring. You could choose any username; pmm_user in the following command is just an example.

$ psql -c "CREATE USER pmm_user WITH ENCRYPTED PASSWORD 'secret'"

Step 2 : 

Grant the pg_monitor role to pmm_user.

$ psql -c "GRANT pg_monitor to pmm_user"

Step 3 : 

If you are not using localhost but the IP address of the PostgreSQL server while enabling monitoring in the next steps, you should add appropriate entries to the pg_hba.conf file to allow connections from that IP for pmm_user.

$ echo "host    all             pmm_user        192.168.80.20/32        md5" >> $PGDATA/pg_hba.conf
$ psql -c "select pg_reload_conf()"

In the above step, replace 192.168.80.20 with the appropriate PostgreSQL server’s IP address.

Step 4 : 

Validate that you are able to connect as pmm_user to the postgres database from the PostgreSQL server itself.

# psql -h 192.168.80.20 -p 5432 -U pmm_user -d postgres
Password for user pmm_user: 
psql (11.5)
Type "help" for help.

postgres=>

Enabling PostgreSQL Monitoring with and without QAN (Query Analytics)

Using PMM, we can monitor several metrics in PostgreSQL such as database connections, locks, checkpoint stats, transactions, temp usage, etc. However, you could additionally enable Query Analytics to look at the query performance and understand the queries that need some tuning. Let us see how we can simply enable PostgreSQL monitoring with and without QAN.

Without QAN

Step 1 :

In order to start monitoring PostgreSQL, we can simply use pmm-admin add postgresql. It accepts additional arguments such as the service name and the PostgreSQL address and port. As we are talking about enabling monitoring without QAN, we use the flag --query-source=none to disable QAN.

# pmm-admin add postgresql --query-source=none --username=pmm_user --password=secret postgres 192.168.80.20:5432
PostgreSQL Service added.
Service ID  : /service_id/b2ca71cf-a2a4-48e3-9c5b-6ecd1a596aea
Service name: postgres

Step 2 :

Once you have enabled monitoring, you can validate it using pmm-admin list.

# pmm-admin list
Service type  Service name         Address and port  Service ID
PostgreSQL    postgres             192.168.80.20:5432 /service_id/b2ca71cf-a2a4-48e3-9c5b-6ecd1a596aea

Agent type                  Status     Agent ID                                        Service ID
pmm-agent                   connected  /agent_id/13fd2e0a-a01a-4ac2-909a-cae533eba72e  
node_exporter               running    /agent_id/f6ba099c-b7ba-43dd-a3b3-f9d65394976d  
postgres_exporter           running    /agent_id/1d046311-dad7-467e-b024-d2c8cb7f33c2  /service_id/b2ca71cf-a2a4-48e3-9c5b-6ecd1a596aea

You can now access the PostgreSQL Dashboards and see several metrics being monitored.

With QAN

With PMM2, there is an additional step needed to enable QAN. You should create a database with the same name as the monitoring user (pmm_user here), and then create the pg_stat_statements extension in that database. This behavior is going to change in a future release so that you can avoid creating the database.

Step 1 : 

Create the database with the same name as the monitoring user, and create the pg_stat_statements extension in that database.

$ psql -c "CREATE DATABASE pmm_user"
$ psql -d pmm_user -c "CREATE EXTENSION pg_stat_statements"

Step 2 : 

If shared_preload_libraries has not been set to pg_stat_statements, we need to set it and restart PostgreSQL.

$ psql -c "ALTER SYSTEM SET shared_preload_libraries TO 'pg_stat_statements'"
$ pg_ctl -D $PGDATA restart -mf
waiting for server to shut down.... done
server stopped
...
...
 done
server started
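As a quick sanity check (not part of the original steps), you can confirm the setting took effect after the restart:

$ psql -c "SHOW shared_preload_libraries"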

Step 3 :

In the previous steps, we used the flag --query-source=none to disable QAN. In order to enable QAN, you can just remove this flag and run pmm-admin add postgresql without it.

# pmm-admin add postgresql --username=pmm_user --password=secret postgres 192.168.80.20:5432
PostgreSQL Service added.
Service ID  : /service_id/24efa8b2-02c2-4a39-8543-d5fd54314f73
Service name: postgres

Step 4 : 

Once the above step is completed, you can validate it again using pmm-admin list. This time, you should see an additional agent: qan-postgresql-pgstatements-agent.

# pmm-admin list
Service type  Service name         Address and port  Service ID
PostgreSQL    postgres             192.168.80.20:5432 /service_id/24efa8b2-02c2-4a39-8543-d5fd54314f73

Agent type                  Status     Agent ID                                        Service ID
pmm-agent                   connected  /agent_id/13fd2e0a-a01a-4ac2-909a-cae533eba72e  
node_exporter               running    /agent_id/f6ba099c-b7ba-43dd-a3b3-f9d65394976d  
postgres_exporter           running    /agent_id/7039f7c4-1431-4518-9cbd-880c679513fb  /service_id/24efa8b2-02c2-4a39-8543-d5fd54314f73
qan-postgresql-pgstatements-agent running    /agent_id/7f0c2a30-6710-4191-9373-fec179726422  /service_id/24efa8b2-02c2-4a39-8543-d5fd54314f73

After this step, you can see the queries and their statistics captured on the Query Analytics dashboard.

Meanwhile, have you tried Percona Distribution for PostgreSQL? It is a collection of finely tested and implemented open source tools and extensions along with PostgreSQL 11, maintained by Percona. PMM works for both community PostgreSQL and the Percona Distribution for PostgreSQL. Please subscribe to our blog to learn about more interesting features in PostgreSQL.

Oct 22, 2019

PMM Server + podman: Running a Container Without root Privileges

In this article, we will take a look at how to run Percona Monitoring and Management (PMM) Server in a container without root privileges.

Some of the concerns companies have about using Docker relate to the security risks that exist due to the requirement for root privileges in order to run the service and therefore the containers. If processes inside the container are running as root then they are also running as such outside of the container, which means that you should take measures to mitigate privilege escalation issues that could occur from container breakout. Recently, Docker added experimental support for running in rootless mode which requires a number of extra changes.

From a personal perspective, it seems that Docker tries to do it “all in one”, which has its issues. While I was looking to overcome some gripes with Docker, reduce requirements, and also run containers as a chosen user regardless of the process users inside the container, podman and buildah caught my attention, especially as podman also supports Kubernetes. To quote the docs:

Podman is a daemonless container engine for developing, managing, and running OCI Containers on your Linux System. Containers can either be run as root or in rootless mode.

Configuring user namespaces

In order to use containers without the need for root privileges, some initial configuration is likely to be needed to set the mapping of namespaces, as well as a suitable limit. Using the shadow-utils package on CentOS, or uidmap package on Debian, it is possible to assign subordinate user (man subuid) and group IDs (man subgid) to allow the mapping of IDs inside a container to a dedicated set of IDs outside the container. This allows us to resolve the issue where a process is running as root both inside and outside of a container. 

The following is an Ansible task snippet that, on a dedicated CentOS machine, makes the necessary changes, although you may need/want to adjust this if already using namespaces:

- name: install packages
  yum:
    name:
    - runc
    - podman
    - podman-docker
    - buildah
    - shadow-utils
    - slirp4netns
    update_cache: yes
    state: latest
  become: yes
  tags:
  - packages

- name: set user namespace
  copy:
    content: '{{ ansible_user }}:100000:65536'
    dest: '/etc/{{ item }}'
  become: yes
  loop:
  - subuid
  - subgid
  register: set_user_namespace_map
  tags:
  - namespace

- name: set sysctl user.max_user_namespaces
  sysctl:
    name: user.max_user_namespaces
    value: 100000
    state: present
    sysctl_file: /etc/sysctl.d/01-namespaces.conf
    sysctl_set: yes
    reload: yes
  become: yes
  register: set_user_namespace_sysctl
  tags:
  - namespace

- name: reboot to apply
  reboot:
    post_reboot_delay: 60
  become: yes
  when: set_user_namespace_map.changed or set_user_namespace_sysctl.changed
  tags:
  - namespace

This set of tasks will:

  1. Install some packages that are either required or useful when using podman.
  2. Configure the two files used to map IDs (/etc/subuid and /etc/subgid); we allow the user to have 65536 user IDs mapped, starting the mapping from 100000, and likewise for group IDs.
  3. Set user.max_user_namespaces to ensure that you can allocate sufficient IDs, making it persistent after a reboot. If you needed to configure the namespaces, then it is likely that you need to reboot to make sure that the changes are active.
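If you prefer to make the equivalent changes by hand rather than with Ansible, a minimal sketch (assuming your login user is named percona and that you are not already using subordinate IDs, since this overwrites the mapping files just as the copy task does) would be:

$ echo "percona:100000:65536" | sudo tee /etc/subuid /etc/subgid
$ echo "user.max_user_namespaces = 100000" | sudo tee /etc/sysctl.d/01-namespaces.conf
$ sudo sysctl -p /etc/sysctl.d/01-namespaces.conf
$ sudo reboot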

Look Ma, PMM — and no root!

OK, so if all is well, then the system is configured to support user namespaces and we can run a container without needing to be root (or a member of a special group). The commands for podman are identical, or very close, to those that you would use for Docker, and you can even set alias docker=podman if you wish! Here is how you could test PMM Server (v2) out, mapping port 8443 to the NGINX port inside the container:

$ whoami
percona

$ podman run -d --name pmm2-test -p 8443:443 docker.io/percona/pmm-server:2

In the previous command, the path to the registry is explicitly stated as being a Docker one, but if you were to simply specify percona/pmm-server:2 then by default a number of registries are checked and the first match will win. The list and order of the registries differ per distro, e.g.

  • Ubuntu: registries = ['quay.io', 'docker.io']
  • RedHat: registries = ['registry.redhat.io', 'quay.io', 'docker.io']

Checking the processes

The container for PMM server is now running as the user that executed the podman command, so we can take a look and see what the processes look like. First, let’s take a peek inside the container:

$ podman exec pmm2-test bash -c "ps -e -o pid,ppid,pgrp,user,comm --forest"
  PID  PPID  PGRP USER     COMMAND
  862     0   862 root     ps
    1     0     1 root     supervisord
    9     1     9 postgres postgres
   88     9    88 postgres  \_ postgres
   98     9    98 postgres  \_ postgres
  100     9   100 postgres  \_ postgres
  101     9   101 postgres  \_ postgres
  102     9   102 postgres  \_ postgres
  103     9   103 postgres  \_ postgres
  104     9   104 postgres  \_ postgres
  154     9   154 postgres  \_ postgres
  173     9   173 postgres  \_ postgres
  183     9   183 postgres  \_ postgres
   10     1    10 root     clickhouse-serv
   11     1    11 grafana  grafana-server
   12     1    12 root     nginx
   23    12    12 nginx     \_ nginx
   13     1    13 root     crond
   14     1    14 pmm      prometheus
   17     1    17 root     pmm-managed
   51    17    17 root      \_ supervisorctl
   18     1    18 root     pmm-agent
  172    18    18 root      \_ node_exporter
  174    18    18 root      \_ postgres_export
  153     1   153 pmm      percona-qan-api

We can see all of the processes that run as part of PMM, plus the ps command that was executed to look at the processlist.

Now, let’s take a look outside the container to see what these processes show up as:

$ pgrep -f 'postgres|supervisord|clickhouse-serv|pmm-managed' | xargs ps -o pid,ppid,pgrp,user,comm --forest
  PID  PPID  PGRP USER     COMMAND
32020 32003 32003 percona  node_exporter
23532 23358 23358 percona  postgres_export
23530 23358 23358 percona  node_exporter
23332 23321 23332 percona  supervisord
23349 23332 23349 100025    \_ postgres
23440 23349 23440 100025    |   \_ postgres
23456 23349 23456 100025    |   \_ postgres
23458 23349 23458 100025    |   \_ postgres
23459 23349 23459 100025    |   \_ postgres
23460 23349 23460 100025    |   \_ postgres
23461 23349 23461 100025    |   \_ postgres
23462 23349 23462 100025    |   \_ postgres
23512 23349 23512 100025    |   \_ postgres
23531 23349 23531 100025    |   \_ postgres
23541 23349 23541 100025    |   \_ postgres
23350 23332 23350 percona   \_ clickhouse-serv
23357 23332 23357 percona   \_ pmm-managed

Great! As we can see from the process list, none of the processes that we checked are running as root: the ones that run as root inside the container are running as percona outside it, and the remainder are using the subordinate user IDs generated by the mapping.

Using persistent volumes for your data

The documented way to use PMM with persistent volumes does not work in the same manner with podman. However, it is easily fixed as follows (plus we get to try out buildah):

$ cat <<EOS > PMM2.Dockerfile
FROM docker.io/percona/pmm-server:2

VOLUME ["/srv"]
EOS

$ buildah bud -f PMM2.Dockerfile --tag localhost/pmm-server:custom .
$ podman create --name pmm-data -v /srv localhost/pmm-server:custom 
$ podman run -d --name pmm-server -v pmm-data:/srv -p 8443:443 localhost/pmm-server:custom
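A quick podman ps afterwards should show the pmm-server container up with port 8443 published, much like the docker ps check earlier in this series:

$ podman ps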

Summary

It is easy to get PMM server running in a container with podman and buildah while also increasing the security from the outside by restricting the user privileges for the processes.

And while you may not be able to replace Docker usage in certain circumstances, taking a look at podman and buildah is definitely recommended… and you can still use the Docker registry if that is the only location for the image that you want to run.

If you haven’t tried PMM2 yet then you can easily test it out or take a look at the online demo.

Oct 14, 2019

Webinar 10/16: What’s New in Percona Monitoring and Management 2?


How can you ensure you are properly managing and optimizing the performance of your database environment?

Join Percona’s Product Manager Michael Coburn as he presents “What’s New in Percona Monitoring and Management 2?” and walks you through a practical demonstration. This will take place on Wednesday, October 16, 2019, at 11:00 AM EDT.

Register Now

Percona Monitoring and Management (PMM) is a free, open source platform that supports MySQL, MariaDB, MongoDB, and PostgreSQL environments, providing detailed time-based analysis of your data. PMM allows you to embrace multiple database options and can be used on-premises and in the cloud.

Our recent major upgrade to PMM2 gives you far greater Query Analytics performance and usability and enables you to monitor much larger environments. Key features of PMM2 include:

•    New performance and usability query improvements.
•    New query analytics for PostgreSQL.
•    New ability to tag queries.
•    New administrative API.
•    New service-level dashboards.
•    Enhanced security protocols to ensure your data is safe.

Michael Coburn, Product Manager, Percona will provide an overview of these new features and a working demonstration of PMM2. You will also have the opportunity to ask questions in the chat window.

If you can’t attend, sign up anyway and we’ll send you the slides and recording afterward.

Sep 25, 2019

Explore Percona Monitoring and Management on Your Own with Linode

As the saying goes, a picture is worth 1000 words – and this is why we don’t just talk about Percona Monitoring and Management (PMM) but also have a demo which allows you to explore many PMM features without having to install it.

Though there is only so much you can do in a public demo, if you want to play with customizing graphs or other functionality which requires read-write access, or otherwise just dive deeper into PMM operation, you may want to get your own instance.

As we’re releasing Percona Monitoring and Management 2,  we have created a very simple way to do it on Linode using its StackScripts framework.

This StackScript will deploy PMM Server as well as Percona Server for MySQL 8 and kick off some light query workload using Sysbench-TPCC.

You can deploy this stackscript using linode-cli in the following way:

linode-cli linodes create --type g6-standard-1 --label pmm2demo  --root_pass MySecret  --stackscript_id  587359

If you prefer to do it from the GUI instead, you can go to One-Click Apps Menu, click on “Community Stack Scripts”,  search for PMM, and pick PMM2-MySQL-Demo.

 

Choose Region, Node Size, and root password, and hit create. It will take a few minutes for it to be fully deployed, and you will have your own PMM Demo to play with.

I would suggest selecting the plan with at least 2GB of memory for best results.

 

Log in with admin:admin credentials and proceed to explore. I would especially suggest you check out the “Node Summary” dashboard.

 

The MySQL Instance Summary Dashboard

 

MySQL InnoDB Details

 

Also, check out how you can examine your query load by clicking on “Query Analytics”.

 

Enjoy PMM2, and let us know your feedback on the forums or through JIRA.

Sep 9, 2019

Which Indexes are Cached? Discover with PMM.


One of the great things about working at Percona is the constant innovation that occurs as a result of a deep level of technical expertise. A more recent conversation about the Information Schema table: innodb_cached_indexes led to the desire to produce this information in an easy to digest and useful format. Enter PMM.

Which Indexes are Cached

Our goal with creating this dashboard was to help bring further insight into how your MySQL database cache is being used. Why is this important? Data is accessed significantly faster when it is cached, so indexes that are cached will allow for an increase in query performance. Until now there has not been an easy way to see which indexes are cached and which are not. We want to take the guesswork out of the equation and present this information quickly in an easy to read format. What other information can we learn from using this dashboard?

  • Does your cache store the indexes you are expecting or is there interesting behavior that needs to be looked into?
  • Where are your largest indexes and which indexes take up the most space?
  • Which indexes are changing over time?
  • Which indexes are not in cache?

The MySQL InnoDB Cached Indexes dashboard displays the top 20 indexes that are memory resident and the sizes of each index. The pie chart displays the same information in a different view but combines all indexes that take up 3% or less of the total size of all memory-resident indexes, in a pie slice labeled Others. The naming convention is schema_name.table_name.index_name. You are able to display any combination of tables and indexes as you see fit. The five statistics displayed at the bottom are the total size of cached indexes on your MySQL instance, the size of the InnoDB buffer pool, the ratio of cached indexes to the size of the InnoDB buffer pool, the size of the selected schema’s cached indexes, and the size of the selected table’s cached indexes.
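If you would like to look at the underlying data by hand, a rough equivalent query against the Information Schema (MySQL 8.0 and compatible versions, where these tables exist; shown only as an illustration, not the exact query the dashboard uses) could be:

$ mysql -e "SELECT t.NAME AS table_name, i.NAME AS index_name, c.N_CACHED_PAGES \
    FROM information_schema.INNODB_CACHED_INDEXES c \
    JOIN information_schema.INNODB_INDEXES i USING (INDEX_ID) \
    JOIN information_schema.INNODB_TABLES t ON t.TABLE_ID = i.TABLE_ID \
    ORDER BY c.N_CACHED_PAGES DESC LIMIT 20"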

To install this dashboard, you can choose one of two options:

  1. Download the files here. The README.md has installation instructions.
  2. Download the queries-mysqld.yml file from here. The README.md has installation instructions for queries-mysqld.yml. Instead of importing the .json, paste the ID 10815 for the dashboard ID, and select Load.

Feel free to provide your thoughts and feedback in the comments below.

For more information, check out Custom Queries in PMM by Daniel Guzmán Burgos, one of our Managed Services Technical Leads.

Aug 30, 2019

PMM for PostgreSQL: Quick Start Guide


As a Solutions Engineer at Percona, one of my responsibilities is to support our customer-facing roles such as the sales and customer success teams. This affords me the opportunity to speak to many current and new customers who partner with Percona. I often find that many people are interested in Percona Monitoring and Management (PMM) as a free and open-source monitoring solution due to its robust monitoring capabilities when compared to many SaaS-based monitoring solutions. They are interested in installing PMM for PostgreSQL for the first time and want a “quick start guide” with a brief overview to get their feet wet. I have included the commands to get started for both PMM 1 and PMM 2 (PMM2 is still in beta).

For a brief overview of the PMM Architecture and how to install PMM Server, please see my previous post PMM for MongoDB: Quick Start Guide.

PMM for PostgreSQL

Demonstration Environment

When deploying PMM in this example, I am making the following assumptions about the environment:

  • PostgreSQL and the monitoring host are running on Debian based operating systems. (For information on installing as an RPM instead please read Deploying Percona Monitoring and Management.)
  • PostgreSQL is already installed and setup. The username and password for the PostgreSQL user are postgres:postgres.
  • The PMM server docker image has been installed and started on another host.

Installing PMM for PostgreSQL

Setting up DB permissions

  • Capturing read and write time statistics is possible only if PostgreSQL’s track_io_timing setting is enabled. This can be done either in the configuration file or with the following query executed on the running system:
ALTER SYSTEM SET track_io_timing=ON;
SELECT pg_reload_conf();

  • Percona recommends that a PostgreSQL user be configured for SUPERUSER level access, in order to gather the maximum amount of data with a minimum amount of complexity. This can be done with the following command for the standalone PostgreSQL installation:
CREATE USER postgres WITH SUPERUSER ENCRYPTED PASSWORD 'postgres';

If you are using RDS:

CREATE USER postgres WITH ENCRYPTED PASSWORD 'postgres';
GRANT rds_superuser TO postgres;

Download the Percona repo package

We must first enable the Percona package repository on our PostgreSQL instance and install the PMM Client. Please refer to PMM for MongoDB: Quick Start Guide to accomplish the first three steps below (but come back here before doing MongoDB-specific things):

  • Download PMM-Client
  • Install PMM-Client
  • Configure PMM-Client for monitoring

Now we provide the PMM Client credentials necessary for monitoring the PostgreSQL database.  Execute the following command to start monitoring and communicating with the PMM server:

sudo pmm-admin add postgresql --user=postgres --password=postgres

To start monitoring and communicating with the PMM 2 Server:

pmm-admin add postgresql --username=postgres --password=postgres

You should get a similar output as below if it was successful:

Great! We have successfully installed PMM for PostgreSQL and are ready to take a look at the dashboard.

Of Note: We’re launching PMM 2 Beta with just the PostgreSQL Overview dashboard, but we have others under development, so watch for new Dashboards to appear in subsequent releases!

PostgreSQL Overview Dashboard

Navigate to the IP address of your monitoring host. http://<pmm_server_ip>.

The PostgreSQL Overview Dashboard contains the following graphs:

  • PostgreSQL Connections
  • Active Connections
  • Tuples
  • Read Tuple Activity
  • Tuples Changed per (time resolution)
  • Transactions
  • Durations of Transactions
  • Number of Temp Files
  • Size of Temp Files
  • Conflicts/Deadlocks
  • Number of Locks
  • Operations with Blocks
  • Buffers
  • Canceled Queries
  • Cache Hit Ratio
  • Checkpoint Stats
  • PostgreSQL Settings
  • System Information

For more information about PMM 2 please read: Percona Monitoring and Management (PMM) 2 Beta Is Now Available

Aug 21, 2019

Cleaning Docker Disk Space Usage

While testing our PMM2 Beta and other Dockerized applications, you may want to clean up your docker install and start from scratch. If you’re not very familiar with Docker, it may not be that trivial. This post focuses on cleaning up everything in docker so you can “start from scratch”, which means removing all your containers and their data. Make sure this is what you really want!

First, you have running (and stopped) containers:

root@pmm2-test:~# docker ps -a
CONTAINER ID        IMAGE                               COMMAND                CREATED             STATUS              PORTS                                      NAMES
810fef9a2a05        perconalab/pmm-server:2.0.0-beta5   "/opt/entrypoint.sh"   2 weeks ago         Up 2 weeks          0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp   pmm-server-2.0.0-beta5
7e2595b2b7ad        perconalab/pmm-server:2.0.0-beta5   "/bin/true"            2 weeks ago         Created                                                        pmm-data-2-0-0-beta5

Which you can stop and remove.

root@pmm2-test:~# docker stop $(docker ps -a -q) && docker rm $(docker ps -a -q)
810fef9a2a05
7e2595b2b7ad
810fef9a2a05
7e2595b2b7ad

You may feel like you are done at this point, but not really. You may have a lot of disk space held up in other Docker objects:

root@pmm2-test:~# docker system df
TYPE                TOTAL               ACTIVE              SIZE                RECLAIMABLE
Images              1                   0                   1.331GB             1.331GB (100%)
Containers          0                   0                   0B                  0B
Local Volumes       1                   0                   52.61GB             52.61GB (100%)
Build Cache         0                   0                   0B                  0B

Now we can see that we have about 1GB still held up in images and the larger amount of 52GB in local volumes. To clean these up, we want to see their names, which you can do with the same “df” command:

root@pmm2-test:~# docker system df -v
Images space usage:

REPOSITORY              TAG                 IMAGE ID            CREATED             SIZE                SHARED SIZE         UNIQUE SIZE         CONTAINERS
perconalab/pmm-server   2.0.0-beta5         9b3111808907        2 weeks ago         1.331GB             0B                  1.331GB             0

Containers space usage:

CONTAINER ID        IMAGE               COMMAND             LOCAL VOLUMES       SIZE                CREATED             STATUS              NAMES

Local Volumes space usage:

VOLUME NAME                                                        LINKS               SIZE
a3e9e89f3c43bdaa817e014d08e73705562a9affe4dae54817d77a1b8ad0c771   0                   52.61GB

Build cache usage: 0B

CACHE ID            CACHE TYPE          SIZE                CREATED             LAST USED           USAGE               SHARED

From here you can, of course, remove volumes and images one by one, but you may choose to do this instead:

Remove All Docker Images:

root@pmm2-test:~# docker image prune -a -f
Deleted Images:
untagged: perconalab/pmm-server:2.0.0-beta5
untagged: perconalab/pmm-server@sha256:55968571b01e1f2b02cee77ef92afa05f93c1a0b8e506ceebc10d03bff9783be
deleted: sha256:9b3111808907a3cd02d4961b114471c8ec16dc0dc430cfd8c4a4f556804e57ec
deleted: sha256:4cb3b12fc91c29837374de51f6279d911c9e21d73d22a009841ac9a00e6a7e96
deleted: sha256:d69483a6face4499acb974449d1303591fcbb5cdce5420f36f8a6607bda11854

Total reclaimed space: 1.331GB

And Remove all Docker Local Volumes:

root@pmm2-test:~#  docker system prune --volumes -f
Deleted Volumes:
a3e9e89f3c43bdaa817e014d08e73705562a9affe4dae54817d77a1b8ad0c771

Total reclaimed space: 52.61GB
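For future cleanups, note that if you really do want to delete everything unused, the image and volume pruning can also be combined into a single command:

$ docker system prune -a --volumes -f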

We can run “docker system df” again to see if we really cleaned up everything or if anything else remains:

root@pmm2-test:~# docker system df
TYPE                TOTAL               ACTIVE              SIZE                RECLAIMABLE
Images              0                   0                   0B                  0B
Containers          0                   0                   0B                  0B
Local Volumes       0                   0                   0B                  0B
Build Cache         0                   0                   0B                  0B

All clear. We can now have a fresh start with docker!

Jul 23, 2019

PMM for MongoDB: Quick Start Guide


As a Solutions Engineer at Percona, one of my responsibilities is to support our customer-facing roles such as the sales and customer success teams, which affords me the opportunity to speak to many current and new customers who partner with Percona. I often find that many people are interested in Percona Monitoring and Management (PMM) as a free and open-source monitoring solution due to its robust monitoring capabilities when compared to many SaaS-based monitoring solutions. They are interested in installing PMM for MongoDB for the first time and want a “quick start guide” with a brief overview to get their feet wet. I have included the commands to get started for both PMM 1 and PMM 2 (PMM2 is still in beta).

PMM for MongoDB

Overview and Architecture

PMM is an open-source platform for out-of-the-box management and monitoring of MySQL, MongoDB, and PostgreSQL performance, on-premise and in the cloud. It is developed by Percona in collaboration with experts in the field of managed database services, support, and consulting. PMM is built off of Prometheus, a powerful open-source monitoring and alerting platform, and supports any other service that has an exporter. An exporter is an endpoint that collects data on the instance being monitored and is polled by Prometheus to collect metrics. For more information on how to use your own exporters, read the documentation here.

When deployed on-premises, the PMM platform is based on a client-server model that enables scalability. It includes the following modules:

  • PMM Client – installed on every database host that you want to monitor. It collects server metrics, general system metrics, and Query Analytics data for a complete performance overview.
  • PMM Server – the central part of PMM that aggregates collected data and presents it in the form of tables, dashboards, and graphs in a web interface.

PMM can also be deployed to support DBaaS instances for remote monitoring. Instructions can be found here, under the Advanced section. The drawback of this approach is that you will not have visibility of host-level metrics (CPU, memory, and disk activity will not be captured nor displayed in PMM). There are currently 3 different deployment options:

For a more detailed overview of the PMM Architecture please read the Overview of PMM Architecture.

Demonstration Environment

When deploying PMM in this example, I am making the following assumptions about the environment:

  • MongoDB and the monitoring host are running on Debian based operating systems. (For information on installing as an RPM instead please read Deploying Percona Monitoring and Management.)
  • MongoDB is already installed and setup. The username and password for the MongoDB user are percona:percona.
  • The PMM server will be installed within a docker container on a dedicated host.

Installing PMM Server

This process will consist of two steps:

  1. Create the docker container – docker will automatically pull the PMM Server image from the Percona docker repository.
  2. Start (or run) the docker container – docker will bring up the PMM Server in the container

Create the Docker Container

The code below illustrates the command for creating the docker container for PMM 1:

docker create \
  -v /opt/prometheus/data \
  -v /opt/consul-data \
  -v /var/lib/mysql \
  -v /var/lib/grafana \
  --name pmm-data \
  percona/pmm-server:1 /bin/true

The code below illustrates the command for creating the docker container for PMM 2:

docker create -v /srv --name pmm-data-2-0-0-beta1 perconalab/pmm-server:2.0.0-beta1 /bin/true

This is the expected output from the code:

Use the following command to start the PMM 1 docker container:

docker run -d \
   -p 80:80 \
   --volumes-from pmm-data \
   --name pmm-server \
   --restart always \
   percona/pmm-server:1

Use the following command to start the PMM 2 docker container:

docker run -d -p 80:80 -p 443:443 --volumes-from pmm-data-2-0-0-beta1 --name pmm-server-2.0.0-beta1 --restart always perconalab/pmm-server:2.0.0-beta1

This is the expected output from the code:

The PMM Server should now be installed! Yes, it IS that easy. In order to check that you can access PMM, navigate in a browser to the IP address of the monitoring host. If you are using PMM 2, the default username and password for viewing PMM is admin:admin. You should arrive at a page that looks like https://pmmdemo.percona.com.

Installing PMM Client for MongoDB

Setting up DB permissions

PMM Query Analytics for MongoDB requires the user of the mongodb_exporter to have the clusterMonitor role assigned for the admin database and the read role for the local database. If you do not have these set up already, please read Configuring MongoDB for Monitoring in PMM Query Analytics.
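If you still need to create such a user, a minimal sketch from the mongo shell (reusing the percona:percona credentials assumed in this example) could look like this:

$ mongo admin --eval 'db.createUser({user: "percona", pwd: "percona", roles: [{role: "clusterMonitor", db: "admin"}, {role: "read", db: "local"}]})'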

Download the Percona repo package

We must first enable the Percona package repository on our MongoDB instance and install the PMM Client. We can run the following commands in order to accomplish this:

$ wget https://repo.percona.com/apt/percona-release_latest.generic_all.deb
$ sudo dpkg -i percona-release_latest.generic_all.deb
$ sudo apt-get update

Since PMM 2 is still not GA, you’ll need to leverage our experimental release of the Percona repository. You’ll need to download and install the official percona-release package from Percona and use it to enable the Percona experimental component of the original repository. See percona-release official documentation for further details on this new tool. The following commands can be used for PMM 2:

$ wget https://repo.percona.com/apt/percona-release_latest.generic_all.deb
$ sudo dpkg -i percona-release_latest.generic_all.deb
$ sudo percona-release disable all
$ sudo percona-release enable original experimental
$ sudo apt-get update

Now that we have the MongoDB database server configured with the Percona software repository, we can download the agent software with the local package manager.  Enter the following command to automatically download and install the PMM Client package on the MongoDB server:

$ sudo apt-get install pmm-client

To download and install the PMM 2 Client:

$ apt-get install pmm2-client

Next, we will configure the PMM client by telling it where to find the PMM server.  Execute the following command to configure the PMM client:

$ sudo pmm-admin config --server=<pmm_server_ip>:80

To configure the PMM 2 Client:

$ pmm-admin config --server-insecure-tls --server-url=https://<pmm_server_ip>:443

You should get a similar output as below if it was successful:

Now we provide the PMM Client credentials necessary for monitoring the MongoDB database.  Execute the following command to start monitoring and communicating with the PMM server:

$ sudo pmm-admin add mongodb --uri mongodb://percona:percona@127.0.0.1:27017

To start monitoring and communicating with the PMM 2 Server:

$ sudo pmm-admin add mongodb --use-profiler  --server-insecure-tls --username=percona  --password=percona --server-url=https://<pmm_ip>:443

You should get a similar output as below if it was successful:

Great! We have successfully installed PMM for MongoDB and are ready to take a look at the dashboards.

PMM for MongoDB Dashboards Overview

Navigate to the IP address of your monitoring host. http://<pmm_server_ip>.

PMM Home Dashboard – The Home Dashboard for PMM gives an overview of your entire environment to include all the systems you have connected and configured for monitoring under PMM. It provides useful metrics such as CPU utilization, RAM availability, database connections, and uptime.

Percona Monitoring and Management Dashboard

Cluster Summary – it shows the statistics for the selected MongoDB cluster such as counts of sharded and un-sharded databases, shard and chunk statistics, and various mongos statistics.

MongoDB Cluster Summary

MongoDB Overview – this provides basic information about MongoDB instances such as connections, command operations, and document operations.

MongoDB Overview

ReplSet – provides information about replica sets and their members such as replication operations, replication lag, and member state uptime.

ReplSet

WiredTiger/MMAPv1/In-Memory/RocksDB – it contains metrics that describe the performance of the selected host storage engine.

WiredTiger/MMAPv1/In-Memory/RocksDB

Query Analytics – this allows you to analyze database queries over periods of time. This can help you optimize database performance by ensuring queries are executed as expected and within the shortest amount of time. If you are having performance issues, this is a great place to see which queries may be the cause of your performance issues and get detailed metrics for them.

PMM Query Analytics

What Now?

Now that you have PMM for MongoDB up and running, I encourage you to explore in-depth more of the graphs and features. A few other MongoDB PMM blog posts which may be of interest:

If you run into issues during the install process, a good place to start is Peter Zaitsev’s blog post on PMM Troubleshooting.
