Running PMM1 and PMM2 Clients on the Same Host

Want to try out Percona Monitoring and Management 2 (PMM 2) but you’re not ready to turn off your PMM 1 environment? This blog is for you! Keep in mind that the methods described are not intended to be a long-term migration strategy, but rather a way to deploy a few clients so you can sample PMM 2 before you commit to the upgrade.

Here are step-by-step instructions for deploying PMM 1 & 2 client functionality, i.e. pmm-client and pmm2-client, on the same host.

  1. Deploy PMM 1 on Server1 (you’ve probably already done this)
  2. Install and setup pmm-client for connectivity to Server1
  3. Deploy PMM 2 on Server2
  4. Install and setup pmm2-client for connectivity to Server2
  5. Remove pmm-client and switch completely to pmm2-client

The first few steps are already described in our PMM1 documentation so we are simply providing links to those documents.  Here we’ll focus on steps 4 and 5.

Install and Setup pmm2-client Connectivity to Server2

It’s not possible to install both clients from a repository at the same time. So you’ll need to download a tarball of pmm2-client. Here’s a link to the latest version directly from our site.

Download pmm2-client Tarball

* Note that depending on when you’re reading this, the commands below may not reference the latest version, so adjust them for the version you downloaded.

$ wget

Extract Files From pmm2-client Tarball

$ tar -zxvf pmm2-client-2.1.0.tar.gz 
$ cd pmm2-client-2.1.0

Register and Generate Configuration File

Now it’s time to set up a PMM 2 client. In our example, the PMM2 server IP is and the monitored host IP is

$ ./bin/pmm-agent setup --config-file=config/pmm-agent.yaml \
--paths-node_exporter="$PWD/pmm2-client-2.1.0/bin/node_exporter" \
--paths-mysqld_exporter="$PWD/pmm2-client-2.1.0/bin/mysqld_exporter" \
--paths-mongodb_exporter="$PWD/pmm2-client-2.1.0/bin/mongodb_exporter" \
--paths-postgres_exporter="$PWD/pmm2-client-2.1.0/bin/postgres_exporter" \
--paths-proxysql_exporter="$PWD/pmm2-client-2.1.0/bin/proxysql_exporter" \
--server-insecure-tls --server-address= \
--server-username=admin  --server-password="admin" generic

Start pmm-agent

Let’s run pmm-agent inside a screen session.  There’s no service manager integration when deploying alongside pmm-client, so if your server restarts, pmm-agent won’t automatically resume.

# screen -S pmm-agent

$ ./bin/pmm-agent --config-file="$PWD/config/pmm-agent.yaml"
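If you would rather not babysit a screen session, a minimal systemd unit could bridge the gap until you switch to the repository packages. This is only a sketch; the unit is not shipped with the tarball, and the /opt/pmm2-client-2.1.0 paths are an assumption based on where you extracted it.

```ini
# /etc/systemd/system/pmm-agent.service (sketch; paths are assumptions)
[Unit]
Description=pmm-agent (tarball install)
After=network.target

[Service]
WorkingDirectory=/opt/pmm2-client-2.1.0
ExecStart=/opt/pmm2-client-2.1.0/bin/pmm-agent --config-file=/opt/pmm2-client-2.1.0/config/pmm-agent.yaml
Restart=always

[Install]
WantedBy=multi-user.target
```

After `systemctl daemon-reload`, the agent can be started and enabled like any other service.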

Check the Current State of the Agent

$ ./bin/pmm-admin list
Service type  Service name         Address and port  Service ID

Agent type                  Status     Agent ID                                        Service ID
pmm-agent                   connected  /agent_id/805db700-3607-40a9-a1fa-be61c76fe755  
node_exporter               running    /agent_id/805eb8f6-3514-4c9b-a05e-c5705755a4be

Add MySQL Service

Detach the screen, then add the mysql service:

$ ./bin/pmm-admin add mysql --use-perfschema --username=root mysqltest
MySQL Service added.
Service ID  : /service_id/28c4a4cd-7f4a-4abd-a999-86528e38992b
Service name: mysqltest
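The example above registers MySQL using the root account. For anything beyond a quick trial, you may prefer a dedicated monitoring user; here is a sketch, where the user name, password, and privilege list are assumptions (consult the PMM documentation for the recommended grants):

```sql
-- Hypothetical dedicated monitoring user instead of root:
CREATE USER 'pmm'@'localhost' IDENTIFIED BY 'pass';
GRANT SELECT, PROCESS, REPLICATION CLIENT ON *.* TO 'pmm'@'localhost';
```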

Here is the state of pmm-agent:

$ ./bin/pmm-admin list
Service type  Service name         Address and port  Service ID
MySQL         mysqltest      /service_id/28c4a4cd-7f4a-4abd-a999-86528e38992b

Agent type                  Status     Agent ID                                        Service ID
pmm-agent                   connected  /agent_id/805db700-3607-40a9-a1fa-be61c76fe755   
node_exporter               running    /agent_id/805eb8f6-3514-4c9b-a05e-c5705755a4be   
mysqld_exporter             running    /agent_id/efb01d86-58a3-401e-ae65-fa8417f9feb2  /service_id/28c4a4cd-7f4a-4abd-a999-86528e38992b
qan-mysql-perfschema-agent  running    /agent_id/26836ca9-0fc7-4991-af23-730e6d282d8d  /service_id/28c4a4cd-7f4a-4abd-a999-86528e38992b

Confirm you can see activity in each of the two PMM Servers:


Remove pmm-client and Switch Completely to pmm2-client

Once you’ve decided to move over completely to PMM 2, it’s better to switch from the tarball version to an installation from the repository. This makes client updates much easier and registers pmm-agent as a service that starts automatically with the server. We will also show you how to make the switch without re-adding monitored instances.

Configure Percona Repositories

$ sudo yum install 
$ sudo percona-release disable all 
$ sudo percona-release enable original release 
$ yum list | grep pmm 
pmm-client.x86_64                    1.17.2-1.el6                  percona-release-x86_64
pmm2-client.x86_64                   2.1.0-1.el6                   percona-release-x86_64

Here is a link to the apt variant.

Remove pmm-client

$ sudo yum remove pmm-client

Install pmm2-client

$ yum install pmm2-client
Loaded plugins: priorities, update-motd, upgrade-helper
4 packages excluded due to repository priority protections
Resolving Dependencies
--> Running transaction check
---> Package pmm2-client.x86_64 0:2.1.0-5.el6 will be installed
  pmm2-client.x86_64 0:2.1.0-5.el6                                                                                                                                                           


Configure pmm2-client

Let’s copy the current pmm2-client configuration file so that we don’t have to re-add monitored instances.

$ cp pmm2-client-2.1.0/config/pmm-agent.yaml /tmp

We need to set the new location of the exporters (/usr/local/percona/pmm2/exporters/) in the file.

$ sed -i 's|node_exporter:.*|node_exporter: /usr/local/percona/pmm2/exporters/node_exporter|g' /tmp/pmm-agent.yaml
$ sed -i 's|mysqld_exporter:.*|mysqld_exporter: /usr/local/percona/pmm2/exporters/mysqld_exporter|g' /tmp/pmm-agent.yaml
$ sed -i 's|mongodb_exporter:.*|mongodb_exporter: /usr/local/percona/pmm2/exporters/mongodb_exporter|g' /tmp/pmm-agent.yaml 
$ sed -i 's|postgres_exporter:.*|postgres_exporter: /usr/local/percona/pmm2/exporters/postgres_exporter|g' /tmp/pmm-agent.yaml
$ sed -i 's|proxysql_exporter:.*|proxysql_exporter: /usr/local/percona/pmm2/exporters/proxysql_exporter|g' /tmp/pmm-agent.yaml
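The five sed commands above can also be collapsed into one loop. This is a sketch that assumes the config was copied to /tmp/pmm-agent.yaml as in the previous step:

```shell
# Rewrite every exporter path in one pass (same effect as the
# individual sed commands above).
new_dir=/usr/local/percona/pmm2/exporters
for exp in node mysqld mongodb postgres proxysql; do
  sed -i "s|${exp}_exporter:.*|${exp}_exporter: ${new_dir}/${exp}_exporter|g" /tmp/pmm-agent.yaml
done
grep '_exporter:' /tmp/pmm-agent.yaml   # verify the new paths
```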

Replace the default configuration file with our file and restart the pmm-agent service.

$ cp /tmp/pmm-agent.yaml /usr/local/percona/pmm2/config/
$ systemctl restart pmm-agent

Check Monitored Services

Now we can verify the current state of the monitored instances.

$ pmm-admin list

It can also be checked on the PMM Server side.


Tips for Designing Grafana Dashboards


As Grafana powers our star product – Percona Monitoring and Management (PMM) – we have developed a lot of experience creating Grafana dashboards over the last few years. In this article, I will share some of the considerations for designing Grafana dashboards. As usual, questions of design are quite subjective, and I do not expect you to choose to apply all of them to your dashboards, but I hope they will help you think through your dashboard design better.

Design Practical Dashboards

Grafana features many panel types, and even more are available as plugins. It may be very tempting to use many of them in your dashboards, with many different visualization options. Do not!  Stick to a few data visualization patterns and only add visualizations when they provide additional practical value, not because they are cool.  The Graph and Singlestat panel types probably cover 80% of use cases.

Do Not Place Too Many Graphs Side by Side

This will depend a lot on how your dashboards are used.  If your dashboard is designed for large screens placed on the wall, you may be able to fit more graphs side by side; if your dashboard needs to scale down to a low-resolution small laptop screen, I would suggest sticking to 2-3 graphs in a row.

Use Proper Units

Grafana allows you to specify a unit for the data type displayed. Use it! Without a unit set, values will not be properly shortened and are very hard to read:

Grafana Dashboards

Compare this to

Grafana Dashboards2

Mind Decimals

You can specify the number of digits displayed after the decimal point or leave it at the default.  I find the default picking does not always work very well, for example here:

Grafana Dashboards3

For some reason, on the panel axis we have way too many values displayed after the decimal point.  Grafana also often picks three digits after the decimal point, as in the table below, which I find inconvenient – at a glance, it is hard to tell whether we’re dealing with a decimal point or with “,” as a thousands separator, so I may be looking at 2462 GiB there.  While it is not feasible in this case, there are cases, such as data rates, where a 1000x value difference is quite possible.  Instead, I prefer setting it to zero decimals (or one if that is enough), which makes it clear that we’re not looking at thousands.

Label your Axis

You can label your axis, which especially makes sense if the presentation is not as obvious as in this example, where we’re using a negative value to plot writes to a swap file.

Grafana Dashboards4

Use Shared Crosshair or Tooltip

In Dashboard Settings, you will find the “Graph Tooltip” option, which you can set to “Default”,
“Shared Crosshair”, or “Shared Tooltip”.  This is how these will look:

Grafana Dashboards5

Grafana Dashboards 6



Shared Crosshair shows the line matching the same time on all panels, while Shared Tooltip shows the tooltip value on all panels at the same time.  You can pick what makes sense for you; my favorite is the tooltip setting because it allows me to visually compare the same point in time without making the dashboard too slow and busy.

Note there is a handy shortcut, CTRL-O, which allows you to cycle between these settings for any dashboard.

Pick Colors

If you’re displaying truly dynamic information you will likely have to rely on Grafana’s automatic color assignment, but if not, you can pick specific colors for all values being plotted.  This will prevent colors from potentially being re-assigned to different values without you planning to do so.

Grafana Dashboards 7

When picking colors, you also want to make sure they make logical sense. For example, I think “green” is a better color for free memory than “red”.  As you pick the colors, use the same colors for the same type of information when you show it on different panels, if possible, because it makes things easier to understand.

I would even suggest sticking to the same (or similar) colors for the same kind of data – if you have many panels which show disk input and output, using similar colors for them can be a good idea.

Fill Stacking Graphs

Grafana does not require it, but I would suggest you use filling when you display stacking data and don’t use filling when you’re plotting multiple independent values.  Take a look at these graphs:

In the first graph, I need to look at the actual plotted value to understand what I’m looking at. In the second graph, that value is meaningless; what is valuable is the filled amount. I can see on the second graph by what amount the Cache (the blue value) has shrunk.

I prefer using a fill factor of 6+ so it is easier to match the fill colors with the colors in the legend.  For the same reason, I prefer not to use a fill gradient on such graphs, as it makes it much harder to see the color and the filled volume.

Do Not Abuse Double Axis

Graphs that use a double axis are much harder to understand.  I used to use them very often, but now I avoid them when possible, only using them when I absolutely need to limit the number of panels.

Note that in this case I think the gradient fits OK because there is only one value displayed as a line, so you can’t get confused about whether to look at the total value or the “filled volume”.

Separate Data of Different Scales on Different Graphs

I used to plot InnoDB rows read and written on the same graph. It is quite common for reads to be 100x higher in volume than writes, crowding them out and making even significant changes in writes very hard to see.  Splitting them onto different graphs solved this issue.

Consider Staircase Graphs

In monitoring applications, we often display average rates computed over a period of time.  If this is the case, we do not know how the rate was changing within that period, and it would be misleading to show that. This especially matters if you’re displaying only a few data points.

Let’s look at this graph which is being viewed with one-hour resolution:

This visually suggests that the number of rows read was falling from 16:00 to 18:00. Compare it to the staircase graph:

It simply shows us that the value at 18:00 was higher than at 17:00, but it does not make any claim about the change in between.

This display, however, has another issue. Let’s look at the same data set with 5min resolution:

We can see the average value from 16:00 to 17:00 was lower than from 17:00 to 18:00; this, however, is NOT what the lower-resolution staircase graph shows – there, the value for 17:00 to 18:00 is actually lower!

The reason is that if we compute rate() over one hour on the Prometheus side, the value at 17:00 is returned as a data point for 17:00 even though this average rate really covers 16:00 to 17:00, while the staircase graph plots it from 17:00 to 18:00 until a new value is available.  It is off by one hour.

To fix it, you need to shift the data appropriately. In Prometheus, which we use in PMM, I can use an offset operator to shift the data to be displayed correctly:
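As a sketch of what such a query could look like (the metric name is an assumption, and shifting data forward requires a Prometheus version with negative offset support):

```promql
# Shift the series so each staircase segment is drawn over the interval
# the rate was actually computed for (it lands one hour late otherwise):
rate(mysql_global_status_innodb_rows_read[1h] offset -1h)
```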

Provide Multiple Resolutions

I’m a big fan of being able to see the data on the same dashboard with different resolutions, which can be done through a special dashboard variable of type “Interval”.  High-resolution data can provide a great level of detail but can be very volatile.

While lower resolution can hide this level of detail, it does show trends better.
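In practice this is done with a templating variable; a sketch of a query using an “Interval” variable named $interval (the metric name is an assumption):

```promql
# Grafana substitutes the selected interval (e.g. 1m, 5m, 1h) into the query:
rate(mysql_global_status_questions[$interval])
```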

Multiple Aggregates for the Same Metrics

To get even more insights, you can consider plotting the same metrics with different aggregates applied to it:

In this case, we are looking at the same variable – threads_running – but at its average value over a period of time versus max (peak) value. Both of them are meaningful in a different way.

You can also notice here that points are used for the max value instead of a line. This is in general a good practice for highly volatile data, as plotting a line for something which changes wildly is messy and does not provide much value.

Use Help and Panel Links

If you fill out a description for the panel, it will be visible when you place your mouse over the tiny “i” sign. This is very helpful to explain what the panel shows and how to use this data.  You can use Markdown for formatting.  You can also provide one or more panel links that you can use for additional help or drill-down.

With newer Grafana versions, you can even define a more advanced drill-down, which can contain different URLs based on the series you are looking at, as well as other templating variables:


This list of considerations for designing Grafana Dashboards and best practices is by no means complete, but I hope you pick up an idea or two which will allow you to create better dashboards!


Chronosphere launches with $11M Series A to build scalable, cloud-native monitoring tool

Chronosphere, a startup from two ex-Uber engineers who helped create the open-source M3 monitoring project to handle Uber-level scale, officially launched today with the goal of building a commercial company on top of the open-source project.

It also announced an $11 million investment led by Greylock, with participation from venture capitalist Lee Fixel.

While the founders, CEO Martin Mao and CTO Rob Skillington, were working at Uber, they recognized a gap in the monitoring industry, particularly around cloud-native technologies like containers and microservices. There weren’t any tools available on the market that could handle Uber’s scaling requirements — so like any good engineers, they went out and built their own.

“We looked around at the market at the time and couldn’t find anything in open source or commercially available that could really scale to our needs. So we ended up building and open sourcing our solution, which is M3. Over the last three to four years we’ve scaled M3 to one of the largest production monitoring systems in the world today,” Mao explained.

The essential difference between M3 and other open-source, cloud-native monitoring solutions like Prometheus is the ability to scale, he says.

One of the main reasons they left to start a company, with the blessing of Uber, was that the community began asking for features that didn’t really make sense for Uber. By launching Chronosphere, Mao and Skillington would be taking on the management of the project moving forward (although sharing governance for the time being with Uber), while building those enterprise features the community has been requesting.

The new company’s first product will be a cloud version of M3 to help reduce some of the complexity associated with managing an M3 project. “M3 itself is a fairly complex piece of technology to run. It is solving a fairly complex problem at large scale, and running it actually requires a decent amount of investment to run at large scale, so the first thing we’re doing is taking care of that management,” Mao said.

Jerry Chen, who led the investment at Greylock, saw a company solving a big problem. “They were providing such a high-resolution view of what’s going on in your cloud infrastructure and doing that at scale at a cost that actually makes sense. They solved that problem at Uber, and I saw them, and I was like wow, the rest of the market needs what guys built and I wrote the Series A check. It was as simple as that,” Chen told TechCrunch.

The cloud product is currently in private beta; they expect to open to public beta early next year.


New Relic snags early-stage serverless monitoring startup IOpipe

As we move from a world dominated by virtual machines to one of serverless, it changes the nature of monitoring, and vendors like New Relic certainly recognize that. This morning the company announced it was acquiring IOpipe, a Seattle-based early-stage serverless monitoring startup, to help beef up its serverless monitoring chops. Terms of the deal weren’t disclosed.

New Relic gets what it calls “key members of the team,” which at least includes co-founders Erica Windisch and Adam Johnson, along with the IOpipe technology. The new employees will be moving from Seattle to New Relic’s Portland offices.

“This deal allows us to make immediate investments in onboarding that will make it faster and simpler for customers to integrate their [serverless] functions with New Relic and get the most out of our instrumentation and UIs that allow fast troubleshooting of complex issues across the entire application stack,” the company wrote in a blog post announcing the acquisition.

It adds that initially the IOpipe team will concentrate on moving AWS Lambda features like Lambda Layers into the New Relic platform. Over time, the team will work on increasing support for serverless function monitoring. New Relic hopes that by combining the IOpipe team and solution with its own, it can accelerate its serverless monitoring capabilities.

Eliot Durbin, an investor at Bold Start, which led the company’s $2 million seed round in 2018, says both companies win with this deal. “New Relic has a huge commitment to serverless, so the opportunity to bring IOpipe’s product to their market-leading customer base was attractive to everyone involved,” he told TechCrunch.

The startup has been helping monitor serverless operations for companies running AWS Lambda. It’s important to understand that serverless doesn’t mean there are no servers, but the cloud vendor — in this case AWS — provides the exact resources to complete an operation, and nothing more.

IOpipe co-founders Erica Windisch and Adam Johnson

Photo: New Relic

Once the operation ends, the resources can simply get redeployed elsewhere. That makes building monitoring tools for such ephemeral resources a huge challenge. New Relic has also been working on the problem and released New Relic Serverless for AWS Lambda earlier this year.

As TechCrunch’s Frederic Lardinois pointed out in his article about the company’s $2.5 million seed round in 2017, Windisch and Johnson bring impressive credentials:

IOpipe co-founders Adam Johnson (CEO) and Erica Windisch (CTO), too, are highly experienced in this space, having previously worked at companies like Docker and Midokura (Adam was the first hire at Midokura and Erica founded Docker’s security team). They recently graduated from the Techstars NY program.

IOpipe was founded in 2015, which was just around the time that Amazon was announcing Lambda. At the time of the seed round the company had eight employees. According to PitchBook data, it currently has between 1 and 10 employees, and has raised $7.07 million since its inception.

New Relic was founded in 2008 and has raised more than $214 million, according to Crunchbase, before going public in 2014. Its stock price was $65.42 at the time of publication, up $1.40.


MySQL Workbench Review

MySQL Workbench is a great multi-purpose GUI tool for MySQL, which I think is not marketed enough by the MySQL team and not appreciated enough by the community for what it can do.

MySQL Workbench Licensing

Like MySQL Server, MySQL Workbench is an open-core product. There is the Community Edition, which has GPL-licensed source code on GitHub, as well as the “MySQL Workbench Standard Edition (SE)” and “MySQL Workbench Enterprise Edition (EE)”, which are proprietary. The differences between the releases can be found in this document.

In this MySQL Workbench review, I focus on the MySQL Workbench Community Edition, often referred to as MySQL Workbench CE.

Downloading MySQL Workbench

You can download the current version of MySQL Workbench here.

Installing MySQL Workbench

Installation, of course, will be OS-dependent. I installed MySQL Workbench CE on Windows and it was quite uneventful.

Installing MySQL Workbench


Starting MySQL Workbench for the first time

If you go through the default MySQL Workbench install process, it will start upon install completion.  As it starts, it checks for MySQL servers running locally; because I do not have anything running locally, it does not detect any servers.

Starting MySQL Workbench


You need to click on the little “+” sign next to the “MySQL Connection” text to add a connection.   I think a clearer “Add Connection” link next to “Rescan Servers” would be more helpful.

Connection options are great. Besides support for TCP/IP and Local Socket/Pipe, MySQL Workbench also has support for TCP/IP over SSH, which is fantastic if you want to connect to servers reachable via SSH but do not have MySQL port open. 

When you have the connection created, you can open the main screen of MySQL Workbench which looks like this:

main screen of MySQL Workbench

You can see there is a lot of stuff there!  Let’s look at some specific features.

MySQL Workbench Management Features


Server Status shows information about the running MySQL Server.  Of course, being an Oracle product, it is not designed to detect alternative implementations.  In this case, Percona Server has a Thread Pool feature, but it shows as N/A.

Server Performance information graphs are updated in real-time and provide some idea about server load.

Client Connections shows the current connections to the MySQL Server.   This view has some nice features: for example, you can hide sleeping connections and look at running queries only, set the view to refresh automatically, and kill queries and connections. You can also run EXPLAIN for Connection to see a query’s execution plan.

Client Connections MySQL Workbench

How EXPLAIN for Connection works is a bit complicated.  As you click on EXPLAIN for Connection, a notebook containing the query opens up; I would expect to see the explain output at this point:

EXPLAIN for Connection

You then need to click on the EXPLAIN icon to see the Query Explain output:

Query Explain Output

Note you can get both EXPLAIN for the given query and EXPLAIN FOR CONNECTION, which can be different, especially in the case of this particular query, where the execution plan was abnormal.
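For reference, both forms can also be run by hand from any client. This is a sketch; the table name and the connection id 42 are hypothetical (the id comes from SHOW PROCESSLIST):

```sql
-- Plan for a statement you type yourself:
EXPLAIN FORMAT=JSON SELECT * FROM sbtest1 WHERE k = 10;

-- Live plan of the statement currently running in connection 42:
EXPLAIN FORMAT=JSON FOR CONNECTION 42;
```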

There are multiple displays for EXPLAIN provided, including Tabular Explain and raw JSON explain, if you want to see those, although Visual Explain is a unique MySQL Workbench feature.

I also like that MySQL Workbench provides additional details on a connection, such as held locks as well as connection attributes, which can often help to find which particular application instance a query comes from.

Users and Privileges

This MySQL Workbench functionality allows you to view and manage your users:

MySQL Workbench Users

It is not very advanced, but it covers the basic needs of understanding user privileges.  It has built-in support for Administrative Roles, but there does not seem to be support for generic roles or some newer features such as locking accounts or requiring a password change after a certain period of time.

Status and System Variables

The Status and System Variables section in MySQL Workbench shows the formatted output of “SHOW GLOBAL STATUS” and “SHOW VARIABLES”:

Status and System Variables

I like the fact that the massive number of settings and variables are grouped into different categories and that some help is provided.  However, the fact that all values are provided only as raw numbers, without any formatting and not normalized per second where appropriate, makes it hard to work with this information.

Data Export and Data Import/Restore

As you may expect, these provide the functionality to export and import schemas and, possibly, data.  This basically provides a GUI for mysqldump, which is a great help for more advanced use cases.

Data Export

Instance Management

This is interesting: even though I set up the connection using SSH, MySQL Workbench does not automatically use it for host access. It needs to be configured separately by clicking the little wrench icon.

Instance Management

If you’re using Linux for Remote Management, you will need to provide quite a lot of details about the Linux version, packaging type, and even init scripts you use, which can easily be overwhelming.

I do wonder why there is no auto-detection of the system type implemented here.

If you configure Remote Management in MySQL Workbench properly, you should, in theory, be able to start/stop the server, look at server logs, and view the options file.   It did not work well in my case.

Remote Management in MySQL Workbench

Performance  – Dashboard

The MySQL Workbench Performance Dashboard section shows a selection of Performance Graphs.  It is not very deep and only shows stats while MySQL Workbench is running, but it covers some good basics.

MySQL Workbench Performance Dashboard

Performance – Reports

The Performance Reports section in MySQL Workbench is pretty cool; it shows a lot of reports based on MySQL’s sys schema. 

Performance - Reports 

This is pretty handy, but I think it would benefit from better formatting (so I do not have to count digits to see how much memory is used); also, numbers accumulated since the instance start often make little sense.

Performance Schema Setup

This is one of the hidden gems in MySQL Workbench.  Performance Schema configuration in MySQL can be rather complicated if you’re not familiar with it, and MySQL Workbench makes it a lot easier.   Its default Performance Schema controls are very basic.

Performance Schema Setup

However, if you enable the “Show Advanced” settings, it will give you this fantastic overview of Performance Schema: 

It also allows you to modify the configuration in detail:

modify configuration

Until this point, we have been operating in the Administration view.  If you want to work with the database schema, you need to switch MySQL Workbench to the Schema view.

Schema View

This view allows you to work with tables and other database schema objects.  The contextual menu provides different functions for different objects.

contextual menu


MySQL Workbench Query Editor

Finally, let’s take a look at the MySQL Workbench Query Editor. It has a lot of advanced features.   

First, I like that it is a multi-tab editor, so you can have multiple queries open at once and switch between them easily.  It also has support for helpful snippets – both a large library of built-in ones as well as ones created by the user. It also supports contextual help, which can be quite helpful for beginners.

I like the fact that MySQL Workbench adds LIMIT 1000 by default to queries it runs, and that it allows you to easily and conveniently edit the stored data.

Examine Field Types:

View Query execution statistics:

Query execution statistics

Though in this case, it seems to show only information derived from SHOW SESSION STATUS and not the more advanced details available in Performance Schema.

Visual Explain is quite a gem of MySQL Workbench too, but we covered it already.


In general, I’m quite impressed with the functionality offered by MySQL Workbench CE (Community Edition). If you are looking for a simple, free GUI for MySQL to run queries and provide basic help with administration, you need look no further. If you have more advanced needs, particularly in the monitoring or management space, you should look elsewhere. Oracle has MySQL Enterprise Monitor for this purpose, a fully commercial product that comes with a MySQL Enterprise subscription.  If you are looking for an open-source database monitoring-focused product, consider Percona Monitoring and Management.


PMM Server + podman: Running a Container Without root Privileges

In this article, we will take a look at how to run Percona Monitoring and Management (PMM) Server in a container without root privileges.

Some of the concerns companies have about using Docker relate to the security risks that stem from the requirement for root privileges to run the service and therefore the containers. If processes inside the container run as root, then they are also running as root outside of the container, which means that you should take measures to mitigate privilege escalation issues that could occur from a container breakout. Recently, Docker added experimental support for running in rootless mode, which requires a number of extra changes.

From a personal perspective, it seems that Docker tries to do it “all in one”, which has its issues. While I was looking to overcome some gripes with Docker, reduce requirements, and also run containers as a chosen user regardless of the process users inside the container, podman and buildah caught my attention, especially as podman also supports Kubernetes. To quote the docs:

Podman is a daemonless container engine for developing, managing, and running OCI Containers on your Linux System. Containers can either be run as root or in rootless mode.

Configuring user namespaces

In order to use containers without the need for root privileges, some initial configuration is likely needed to set the mapping of namespaces, as well as a suitable limit. Using the shadow-utils package on CentOS, or the uidmap package on Debian, it is possible to assign subordinate user (man subuid) and group (man subgid) IDs so that IDs inside a container map to a dedicated set of IDs outside the container. This resolves the issue where a process is running as root both inside and outside of a container.
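To make the mapping concrete, here is a small sketch of the arithmetic; the range start and the in-container UID are example values:

```shell
# Rootless mapping: in-container UID 0 maps to your own UID, and
# in-container UID N (N >= 1) maps to subuid_start + N - 1.
subuid_start=100000   # first subordinate UID granted in /etc/subuid
container_uid=26      # hypothetical UID of a process inside the container
host_uid=$((subuid_start + container_uid - 1))
echo "container UID ${container_uid} runs on the host as UID ${host_uid}"
```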

The following is an Ansible task snippet that, on a dedicated CentOS machine, makes the necessary changes, although you may need/want to adjust this if already using namespaces:

- name: install packages
  yum:
    name:
      - runc
      - podman
      - podman-docker
      - buildah
      - shadow-utils
      - slirp4netns
    update_cache: yes
    state: latest
  become: yes
  tags:
    - packages

- name: set user namespace mappings
  copy:
    content: '{{ ansible_user }}:100000:65536'
    dest: '/etc/{{ item }}'
  become: yes
  with_items:
    - subuid
    - subgid
  register: set_user_namespace_map
  tags:
    - namespace

- name: set sysctl user.max_user_namespaces
  sysctl:
    name: user.max_user_namespaces
    value: 100000
    state: present
    sysctl_file: /etc/sysctl.d/01-namespaces.conf
    sysctl_set: yes
    reload: yes
  become: yes
  register: set_user_namespace_sysctl
  tags:
    - namespace

- name: reboot to apply namespace changes
  reboot:
    post_reboot_delay: 60
  become: yes
  when: set_user_namespace_map.changed or set_user_namespace_sysctl.changed
  tags:
    - namespace

This set of tasks will:

  1. Install some packages that are either required or useful when using podman.
  2. Configure the two files used to map IDs (/etc/subuid and /etc/subgid); we allow the user to have 65536 user IDs mapped, starting the mapping from 100000, and likewise for group IDs.
  3. Set user.max_user_namespaces to ensure that you can allocate sufficient IDs, making the setting persistent after a reboot. If you needed to configure the namespaces, then it is likely that you need to reboot to make sure that the changes are active.
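If you prefer not to use Ansible, the lines that the tasks above generate can be sketched as follows (the user name percona is an assumption for illustration; writing the actual files requires root):

```shell
# Sketch of the configuration lines the Ansible tasks produce.
user=percona   # assumed user name, for illustration only
subid_line="${user}:100000:65536"
echo "$subid_line"   # would be appended to /etc/subuid and /etc/subgid
sysctl_line="user.max_user_namespaces = 100000"
echo "$sysctl_line"  # would go into /etc/sysctl.d/01-namespaces.conf
```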

Look Ma, PMM — and no root!

OK, so if all is well, then the system is configured to support user namespaces and we can run a container without needing to be root (or a member of a special group). The commands for podman are identical, or very close, to those that you would use for Docker, and you can even set

alias docker=podman

if you wish! Here is how you could test PMM Server (v2) out, mapping port 8443 to the NGINX port inside the container:

$ whoami

$ podman run -d --name pmm2-test -p 8443:443 docker.io/percona/pmm-server:2

In the previous command, the path to the registry is explicitly stated as being a Docker one, but if you were to simply specify percona/pmm-server:2 then by default a number of registries are checked and the first match will win. The list and order of the registries differ per distro, e.g.

  • Ubuntu:
    registries = ['', '']
  • RedHat:
    registries = ['', '', '']
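In other words, a short image name is expanded by prepending the first matching search registry, so fully qualifying the reference avoids any ambiguity. A trivial sketch (registry value illustrative):

```shell
# Short image names are resolved against the configured search registries;
# a fully qualified reference (registry/namespace/name:tag) is unambiguous.
registry="docker.io"          # assumed first matching search registry
short="percona/pmm-server:2"
echo "${registry}/${short}"
```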

Checking the processes

The container for PMM server is now running as the user that executed the podman command, so we can take a look and see what the processes look like. First, let’s take a peek inside the container:

$ podman exec pmm2-test bash -c "ps -e -o pid,ppid,pgrp,user,comm --forest"
  862     0   862 root     ps
    1     0     1 root     supervisord
    9     1     9 postgres postgres
   88     9    88 postgres  \_ postgres
   98     9    98 postgres  \_ postgres
  100     9   100 postgres  \_ postgres
  101     9   101 postgres  \_ postgres
  102     9   102 postgres  \_ postgres
  103     9   103 postgres  \_ postgres
  104     9   104 postgres  \_ postgres
  154     9   154 postgres  \_ postgres
  173     9   173 postgres  \_ postgres
  183     9   183 postgres  \_ postgres
   10     1    10 root     clickhouse-serv
   11     1    11 grafana  grafana-server
   12     1    12 root     nginx
   23    12    12 nginx     \_ nginx
   13     1    13 root     crond
   14     1    14 pmm      prometheus
   17     1    17 root     pmm-managed
   51    17    17 root      \_ supervisorctl
   18     1    18 root     pmm-agent
  172    18    18 root      \_ node_exporter
  174    18    18 root      \_ postgres_export
  153     1   153 pmm      percona-qan-api

We can see all of the processes that run as part of PMM, plus the ps command that was executed to look at the processlist.

Now, let’s take a look outside the container to see what these processes show up as:

$ pgrep -f 'postgres|supervisord|clickhouse-serv|pmm-managed' | xargs ps -o pid,ppid,pgrp,user,comm --forest
32020 32003 32003 percona  node_exporter
23532 23358 23358 percona  postgres_export
23530 23358 23358 percona  node_exporter
23332 23321 23332 percona  supervisord
23349 23332 23349 100025    \_ postgres
23440 23349 23440 100025    |   \_ postgres
23456 23349 23456 100025    |   \_ postgres
23458 23349 23458 100025    |   \_ postgres
23459 23349 23459 100025    |   \_ postgres
23460 23349 23460 100025    |   \_ postgres
23461 23349 23461 100025    |   \_ postgres
23462 23349 23462 100025    |   \_ postgres
23512 23349 23512 100025    |   \_ postgres
23531 23349 23531 100025    |   \_ postgres
23541 23349 23541 100025    |   \_ postgres
23350 23332 23350 percona   \_ clickhouse-serv
23357 23332 23357 percona   \_ pmm-managed

Great! As we can see from the processlist, none of the checked processes is running as root on the host: those that run as root inside the container appear as percona, and the remainder use the unfamiliar user IDs generated by the subordinate mapping.

Using persistent volumes for your data

The documented way to use PMM with persistent volumes does not work in the same manner with podman. However, it is easily fixed as follows (plus we get to try out buildah):

$ cat <<EOS > PMM2.Dockerfile
FROM docker.io/percona/pmm-server:2
VOLUME ["/srv"]
EOS
$ buildah bud -f PMM2.Dockerfile --tag localhost/pmm-server:custom .
$ podman create --name pmm-data -v /srv localhost/pmm-server:custom
$ podman run -d --name pmm-server -v pmm-data:/srv -p 8443:443 localhost/pmm-server:custom


It is easy to get PMM Server running in a container with podman and buildah, while also improving security from the outside by restricting the user privileges of its processes.

And while you may not be able to replace Docker usage in certain circumstances, taking a look at podman and buildah is definitely recommended… and you can still use the Docker registry if that is the only location for the image that you want to run.

If you haven’t tried PMM2 yet then you can easily test it out or take a look at the online demo.


Webinar 10/16: What’s New in Percona Monitoring and Management 2?

Percona Monitoring and Management 2

How can you ensure you are properly managing and optimizing the performance of your database environment?

Join Percona’s Product Manager Michael Coburn as he presents “What’s New in Percona Monitoring and Management 2?” and walks you through a practical demonstration. This will be taking place on Wednesday, October 16, 2019, at 11:00 AM EDT.

Register Now

Percona Monitoring and Management (PMM) is a free, open source platform that supports MySQL, MariaDB, MongoDB, and PostgreSQL environments, providing detailed time-based analysis of your data. PMM allows you to embrace multiple database options and can be used on-premises and in the cloud.

Our recent major upgrade to PMM2 gives you far greater Query Analytics performance and usability and enables you to monitor much larger environments. Key features of PMM2 include:

•    New performance and usability query improvements.
•    New query analytics for PostgreSQL.
•    New ability to tag queries.
•    New administrative API.
•    New service-level dashboards.
•    Enhanced security protocols to ensure your data is safe.

Michael Coburn, Product Manager at Percona, will provide an overview of these new features and a working demonstration of PMM2. You will also have the opportunity to ask questions in the chat window.

If you can’t attend, sign up anyway and we’ll send you the slides and recording afterward.


Percona Monitoring and Management (PMM) 2.0.1 Is Now Available

Percona Monitoring and Management 2.0.1

Percona Monitoring and Management (PMM) is a free and open-source platform for managing and monitoring your database performance. You can run PMM in your own environment for maximum security and reliability. It provides thorough time-based analysis for MySQL®, MariaDB®, MongoDB®, and PostgreSQL® servers to ensure that your data works as efficiently as possible.

In this release, we are introducing the following PMM enhancements:

  • Securely share Dashboards with Percona – let Percona engineers see what you see!
  • Improved navigation – Now PMM remembers which host and service, and applies these as filters when navigating to Query Analytics

Securely share Dashboards with Percona

A dashboard snapshot is a way to securely share what you’re seeing with Percona. When created, we strip sensitive data like queries (metrics, template variables, and annotations) along with panel links. The shared dashboard will only be available for Percona engineers to view, as it will be protected by Percona’s two-factor authentication system. The content on the dashboard will assist Percona engineers in troubleshooting your case.

Improved navigation

Now when you transition from looking at metrics into Query Analytics, PMM will remember the host and service, and automatically apply these as filters:


Improvements

  • PMM-4779: Securely share dashboards with Percona
  • PMM-4735: Keep one old slowlog file after rotation
  • PMM-4724: Alt+click on check updates button enables force-update
  • PMM-4444: Return “what’s new” URL with the information extracted from the pmm-update package changelog

Fixed bugs

  • PMM-4758: Remove Inventory rows from dashboards
  • PMM-4757: qan_mysql_perfschema_agent failed querying events_statements_summary_by_digest due to data types conversion
  • PMM-4755: Fixed a typo in the InnoDB AHI Miss Ratio formula
  • PMM-4749: Navigation from Dashboards to QAN when some Node or Service was selected now applies filtering by them in QAN
  • PMM-4742: General information links were updated to go to PMM 2 related pages
  • PMM-4739: Remove request instances list
  • PMM-4734: A fix was made for the collecting node_name formula at MySQL Replication Summary dashboard
  • PMM-4729: Fixes were made for formulas on MySQL Instances Overview
  • PMM-4726: Links to services in MongoDB singlestats didn’t show Node name
  • PMM-4720: machine_id could contain trailing \n
  • PMM-4640: It was not possible to add MongoDB remotely if password contained a # symbol

Help us improve our software quality by reporting any Percona Monitoring and Management bugs you encounter using our bug tracking system.


Netdata, a monitoring startup with 50-year-old founder, announces $17M Series A

Nearly everything about Netdata, makers of an open-source monitoring tool, defies standard thinking about startups. Consider that the founder is a polished, experienced 50-year-old executive who started his company several years ago when he became frustrated by what he was seeing in the monitoring tools space. Like any good founder, he decided to build his own, and today the company announced a $17 million Series A led by Bain Capital.

Marathon Ventures also participated in the round. The company received a $3.7 million seed round earlier this year, which was led by Marathon.

Costa Tsaousis, the company’s founder and CEO, was working as an executive for a company in Greece in 2014 when he decided he had had enough of the monitoring tools he was seeing. “At that time, I decided to do something about it myself — actually, I was pissed off by the industry. So I started writing a tool at night and on weekends to simplify monitoring significantly, and also provide a lot more insights,” Tsaousis told TechCrunch.

Mind you, he was a 45-year-old executive who hadn’t done much coding in years, but he was determined, as any startup founder tends to be, and he took two years to create his monitoring tool. As he tells it, he released it to open source in 2016 and it just took off. “In 2016, I released this project to the public, and it went viral, I wrote a single Reddit post, and immediately started building a huge community. It grew up about 10,000 GitHub stars in a matter of a week,” he said. Even today, he says that it gets a half million downloads every single day, and hundreds of people are contributing to the open-source version of the product, relieving him of the burden of supporting the product himself.

Panos Papadopoulos, who led the investment at Marathon, says Tsaousis is not your typical early-stage startup founder. “He is not following many norms. He is 50 years old, and he was a C-level executive. His presentation and the depth of his thinking, and even his core materials, are unlike anything else I have seen in an early-stage startup,” he said.

What he created was an open-source monitoring tool, one that he says simplifies monitoring significantly, and also provides a lot more insights, offering hundreds of metrics as soon as you install it. He says it is also much faster, providing those insights every second, and it’s distributed, meaning Netdata doesn’t actually collect the data, just provides insights on it wherever it lives.


Live dashboard on the Netdata website

Today, the company has 24 employees and Tsaousis has set up shop in San Francisco. In addition to the open-source version of the product, there is a SaaS version, which also has what he calls a “massively free plan.” He says the open-source monitoring agent is “a gift to the world.” The SaaS tool is about democratizing monitoring, and the pay version differs from most monitoring tools by charging by the seat instead of by the amount of infrastructure you are monitoring.

Tsaousis wants no less than to lead the monitoring space eventually, and believes that the free tiers will lead the way. “I think Netdata can change the way people perceive and understand monitoring, but in order to do this, I think that offering free services in a massive way is essential. Otherwise, it will not work. So my aim is to lead monitoring. This may sound arrogant, and Netdata is not there yet, but I think it can be,” he said.


Explore Percona Monitoring and Management on Your Own with Linode

Percona Monitoring Management Linode

As the saying goes, a picture is worth 1000 words – and this is why we don’t just talk about Percona Monitoring and Management (PMM) but also have a demo which allows you to explore many PMM features without having to install it.

Though there is only so much you can do in a public demo, if you want to play with customizing graphs or other functionality that requires read-write access, or otherwise just dive deeper into PMM operation, you may want to get an instance of your own.

As we’re releasing Percona Monitoring and Management 2, we have created a very simple way to deploy it on Linode using its StackScripts framework.

This StackScript will deploy PMM Server as well as Percona Server for MySQL 8 and kick off some light query workload using Sysbench-TPCC.

You can deploy this StackScript using linode-cli in the following way:

linode-cli linodes create --type g6-standard-1 --label pmm2demo  --root_pass MySecret  --stackscript_id  587359

If you prefer to do it from the GUI instead, you can go to One-Click Apps Menu, click on “Community Stack Scripts”,  search for PMM, and pick PMM2-MySQL-Demo.


Choose Region, Node Size, and root password, and hit create. It will take a few minutes for it to be fully deployed, and you will have your own PMM Demo to play with.

I would suggest selecting the plan with at least 2GB of memory for best results.


Login with admin:admin credentials and proceed to explore. I would especially suggest you check out the “Node Summary” dashboard.


The MySQL Instance Summary Dashboard


MySQL InnoDB Details


Also, see how you can examine your query load by clicking on “Query Analytics”.


Enjoy PMM2, and let us know your feedback on the forums or through JIRA.
