The Preview of Database as a Service (DBaaS) in Percona Monitoring and Management is Now Live!


This week we officially kick-off the Preview of Database as a Service (DBaaS) in Percona Monitoring and Management. We are still looking for users to test and provide feedback during this year-long program, and we would love you to participate! 



Our vision is to deliver a truly open source solution that won’t lock you in: a single pane of glass to easily manage your open source database infrastructure, and a self-service experience enabling fast and consistent open source database deployment.

Our goal is to deliver the enterprise benefits our customers are looking for, including:

  • A single interface to deploy and manage your open source databases on-premises, in the cloud, or across hybrid and multi-cloud environments.
  • The ability to configure a database once and deploy it anywhere. 
  • Critical database management operations, such as backup, recovery, and patching.
  • Enhanced automation and advisory services that allow you to find, eliminate, and prevent outages, security issues, and slowdowns. 
  • A viable alternative to public cloud and large enterprise database vendor DBaaS offerings, allowing you to eliminate vendor lock-in.

Percona applies a user-driven product development process. So, we hope our user community will get involved in the Preview of Database as a Service (DBaaS) in Percona Monitoring and Management and help inform the design and development of this new software functionality.

The Preview is a year-long program consisting of four phases. Each three-month phase will focus on a different area of participant feedback. Preview participants can be involved in as many phases as they like.


Phase One Details for Interested Participants

Phase one will focus on:

  1. Gathering feedback that helps us understand the applicable user personas and the goals and objectives of their day-to-day roles.
  2. Gathering feedback on the user experience, specifically involving creating, editing, and deleting database clusters and the databases within those clusters. We will also gather feedback on the management of those clusters and the monitoring of added database servers and nodes.

We are starting with a focus on database deployment and management features, as they help users improve their productivity. 

Other details to note…

  • Phase one of the Preview will run until April 2021.
  • Phase one requires around 10 hours of self-paced activities, facilitated through the Percona Forum:
    • All Community Preview participant feedback will be captured within the Percona Forum.
    • Community Preview participant questions will be facilitated through the Percona Forum.

So make sure to sign up to participate in the Preview of Database as a Service (DBaaS) in Percona Monitoring and Management and become a crucial participant in this initiative, helping shape future users’ experience as we develop and test this new software functionality! 

Register Now!


Observations on Better Resource Usage with Percona Monitoring and Management v2.12.0


Percona Monitoring and Management (PMM) v2.12.0 comes with a lot of improvements, and one of the most talked-about is the use of VictoriaMetrics. The reason we are doing this comparison is that PMM 2.12.0 is the release in which we integrated VictoriaMetrics, replacing Prometheus as the default metrics engine.

Another motivation for this change was improved performance for PMM Server, so here we will give an overview of why users who have been looking for a less resource-intensive PMM should definitely consider version 2.12.0. This post will try to address some of those concerns.

Benchmark Setup Details

The benchmark was performed using a virtualized setup with PMM Server running on an EC2 instance with 8 cores, 32 GB of memory, and SSD storage. The observation ran for 24 hours. For clients, we set up 25 virtualized client instances on Linode, each emulating 10 nodes running MySQL with real workloads generated by the sysbench TPC-C test.

Percona Monitoring and Management benchmark

Both PMM 2.11.1 and PMM 2.12.0 were set up in the same way, with client instances running the exact same load, and to monitor the difference in performance, we used the default metrics mode for 2.12.0 for this observation.

Sample Ingestion rate for the load was around 96.1k samples/sec, with around 8.5 billion samples received in 24 hours. 
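As a quick sanity check, the sustained rate and the total line up (96.1k samples/sec is itself a rounded average, so the result lands slightly below the ~8.5 billion observed):

```shell
# Sanity check: ~96,100 samples/sec sustained for 24 hours (86,400 seconds)
echo $((96100 * 86400))   # prints 8303040000, i.e. ~8.3 billion samples
```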

A more detailed benchmark of Prometheus vs. VictoriaMetrics was done by the VictoriaMetrics team, and it clearly shows how efficient VictoriaMetrics is and the performance gains that can be achieved with it.

Disk Space Usage

VictoriaMetrics is very efficient when it comes to disk usage on the host system. We found that 2.11.1 generates a lot of disk usage spikes, with maximum storage touching around 23.11 GB. If we compare the same for PMM 2.12.0, the disk usage spikes are not as high as in 2.11.1, and the maximum disk usage is around 8.44 GB.

It is clear that PMM 2.12.0 needs 2.7 times less disk space for monitoring the same number of services for the same duration as compared to PMM 2.11.1.
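That ratio comes straight from the two observed maxima:

```shell
# Ratio of the two maxima: 23.11 GB (PMM 2.11.1) vs. 8.44 GB (PMM 2.12.0)
awk 'BEGIN { printf "%.1f\n", 23.11 / 8.44 }'   # prints 2.7
```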


Disk Usage 1: PMM 2.11.1 


Disk Usage 2: PMM 2.12.0

Memory Utilization

Another parameter on which PMM 2.12.0 performs better is memory utilization. During our testing, we found that PMM 2.11.1 was using twice as much memory for monitoring the same number of services. This is indeed a significant improvement in terms of performance.

The memory usage clearly shows several spikes for PMM 2.11.1, which is not the case with PMM 2.12.0.


Memory Utilization: PMM 2.11.1

Free Memory: PMM 2.11.1

Memory Utilization: PMM 2.12.0

The memory utilization for 2.12.0 clearly shows that more than 55% of memory remained available across the 24 hours of our observation, which is a significant improvement over 2.11.1.


Free Memory PMM 2.12.0

CPU Usage

During the observation we noticed a slight increase in CPU usage for PMM 2.12.0: average CPU usage was about 2.6% higher than for PMM 2.11.1, while maximum CPU usage showed no significant difference between the two versions.


CPU Usage: PMM 2.11.1


CPU Usage: PMM 2.12.0



The overall performance improvements are around memory utilization and disk usage, and we also observed significantly lower disk I/O bandwidth, with far fewer spikes in write operations for PMM 2.12.0. This behavior is also observed and articulated in the VictoriaMetrics benchmarking. CPU and memory are two important resource factors when planning a PMM Server setup, and with PMM 2.12.0 we can safely say it will cost about half as much in memory and disk resources compared to previously released PMM versions. This should also encourage current users to add more instances for monitoring without worrying about the cost of extra infrastructure. 


Let’s Try Something New: Percona Open Source Expert Missions!


Hello everyone! I wanted to outline a new program we are trying out. We are calling it Percona Open Source Expert Missions. So, you might be asking… What the heck is that? 

An Expert Mission is a challenge or a contest of skill. Here at Percona, we are always looking for new ways to get people more involved in the open-source community. Our idea is to create small challenges for people to try and compete for fame, fortune (well maybe not), and cool SWAG! 

Speaking of SWAG, we have some very exclusive, limited-edition Percona items available for this first mission. In fact, we are limiting the total number of shirts and hats featuring one of these logos to just 10! If you are interested in a hat and shirt with one of these cool logos, I have a mission just for you! 

Mission #1: We are looking for some help in creating a few new dashboards for Percona Monitoring and Management. Specifically, we would like to have new dashboards to monitor and troubleshoot MySQL and PostgreSQL running in Azure Database and Google Cloud SQL. Although we already have dashboards and an exporter (that grabs data over the AWS API) for people to help monitor, observe, and troubleshoot their databases in AWS, we’d love to bring the same functionality to GCP and Azure users. For now, we are going to keep this Percona Expert Mission open-ended.

Everyone who submits a pull request will get their dashboards/exporters reviewed by engineering. Once you submit a pull request, send us an email at community-team@percona.com and we will make sure it is reviewed. The top 10 submissions, as judged by our engineering and product teams, will get a free “normal” shirt. However, for the people whose dashboards and exporters are chosen to be included officially, we will send you a shirt and a hat with your choice of one of the exclusive graphics! Finally, if you want the ultimate personalization, our own Peter Zaitsev will autograph that hat with a special thank you message for your contribution. 


Want to know where to begin? Our own Daniil Bazhenov wrote a great ‘getting started’ article on contributing dashboards to Percona Monitoring and Management (PMM).

So what are you waiting for? Let’s build some exporters and dashboards and get some SWAG! 


Percona Monitoring and Management Migration from Prometheus to VictoriaMetrics FAQ


Starting with Percona Monitoring and Management (PMM) 2.12, we use VictoriaMetrics instead of Prometheus as a metrics store. This blog is written as an FAQ to explain our reasoning for this change, as well as to provide answers to various questions on what it may mean for you.

Why is Percona Monitoring and Management moving from Prometheus to VictoriaMetrics?

The main reason to move to VictoriaMetrics is to be able to push metrics from clients to the PMM Server (see Foiled by the Firewall: A Tale of Transition From Prometheus to VictoriaMetrics). Additionally, VictoriaMetrics offers better performance and lower space utilization, can be deployed in a scale-out fashion, and supports MetricsQL, a more expressive variant of PromQL that is better suited for PMM needs. Note: while VictoriaMetrics may consume slightly more CPU on ingest, it has better query execution performance.

What is the first version of PMM to include VictoriaMetrics instead of Prometheus?

PMM 2.12 is the first version to ship VictoriaMetrics.

Can I continue running Prometheus instead of VictoriaMetrics in PMM 2.12+?

No. There is no option to use Prometheus in PMM v2.12+. This change is mostly transparent, though, so you should not need to.

What do I need to do to migrate my existing data from Prometheus to VictoriaMetrics?

You do not need to do anything, as Percona has implemented transparent migration: VictoriaMetrics will read from your existing Prometheus files and store new data in VictoriaMetrics format, so no manual steps are required.

Will Prometheus to VictoriaMetrics conversion require significant downtime for large data sets?

There is no downtime required. Over time, data will be transparently migrated from Prometheus to VictoriaMetrics. It is the closest thing to magic you’ve seen!

Are my older PMM clients compatible with VictoriaMetrics-based PMM?

Yes, existing 2.x clients are supported with PMM 2.12 until further notice.  For best compatibility, though, we recommend upgrading your clients too when you update the server.

Will my custom dashboards continue to work now that PMM uses VictoriaMetrics instead of Prometheus?

Yes, 99% of custom dashboards will continue to work. There are some MetricsQL differences from PromQL which can impact your dashboards in very rare cases, and these should be easy to fix.

Can I continue using my external exporters with Victoria Metrics-based PMM?

Yes. External exporters continue to be supported.

Will VictoriaMetrics require more resources than Prometheus?

VictoriaMetrics is generally more resource-efficient than Prometheus. It will use less CPU and memory, and consume less disk space.

How do I troubleshoot VictoriaMetrics inside PMM?

There is a “VictoriaMetrics” dashboard in the “Insights” category which provides a lot of diagnostic information for VictoriaMetrics. You can also find VictoriaMetrics logs in the Server Diagnostics bundle, which you can download from the “PMM Settings” page.

I was using the Prometheus built-in web interface for diagnostics, what should I use instead?

There is no exact feature equivalent for this internal Prometheus interface. However, there are ways to get most of the same things done. Check out this document for more information.

Will PMM 2.12 “push” metrics data from client to server?

No. In PMM 2.12 we replaced the time series engine but did not switch to pushing metrics from the client by default; this will happen in future releases. If you want to switch to a push model now, you can do so by using pmm-admin setup … --metrics-mode=auto when adding the node. For more information see this document.
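For illustration, a sketch of the client-side command (the server URL and credentials are placeholders; --metrics-mode also accepts auto and pull, so verify the flags against your pmm-admin version):

```shell
# Point this client at the PMM server and prefer pushing metrics to it
pmm-admin config --server-insecure-tls \
  --server-url=https://admin:admin@pmm-server.example.com:443 \
  --metrics-mode=push
```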

What will happen to my custom Prometheus configuration?

VictoriaMetrics supports most Prometheus configuration options, such as scrape configs. If you supplied custom settings in /srv/prometheus/prometheus.base.yml which are not supported by VictoriaMetrics, you need to remove them before upgrading from a PMM version that included Prometheus to one that uses VictoriaMetrics. Otherwise, the VictoriaMetrics process will fail to start.

Can I supply a custom configuration for the VictoriaMetrics engine as I could with Prometheus?

Just as you could supply a custom configuration for Prometheus, you can supply a custom VictoriaMetrics configuration in /srv/prometheus/prometheus.base.yml. For compatibility reasons, the file name was not changed.

Are custom Prometheus recording rules still supported with VictoriaMetrics?

Yes. While the technical implementation changed, recording rules will continue to work. See How to Use Prometheus Recording Rules With Percona Monitoring and Management for more details.

Will my alerting rules continue to work?

Yes. Prometheus alerting rules will continue to work using VictoriaMetrics’ native compatibility with Prometheus alerting functionality; vmalert is included inside PMM to support this.

Have more questions? Feel free to post them as comments on this blog post or check out our forums.

Not using Percona Monitoring and Management yet?

Give it a try today!


How to Use Prometheus Recording Rules With Percona Monitoring and Management


If you’re building custom dashboards for Percona Monitoring and Management (PMM), chances are you would like to use Prometheus recording rules for better performance.

In a nutshell, recording rules act somewhat like materialized views in relational databases, pre-computing and storing the results of complicated queries to limit the data crunching needed during dashboard load.

As an example, let’s use a common “problematic” data point: CPU usage. With Linux exposing some 10 CPU usage “mode” data points for every logical CPU core, on powerful systems we may be pushing 1,000 data points every scrape interval, meaning lots of data crunching when trending CPU usage even over a few hours.

Instead, we can compute how many “CPU Cores” are used on average with one-minute resolution and use it for high-performance long term trending.

To do this we can define the rule as follows:

  groups:
    - name: cpu.rules
      interval: 1m
      rules:
        - record: node:cpu_usage_seconds_total:sum_rate_1m
          expr: sum(rate(node_cpu_seconds_total{mode!="idle",mode!="iowait",mode!="steal"}[1m])) by (node_name)

This means: run an evaluation of the named rule every minute and store the result as the “node:cpu_usage_seconds_total:sum_rate_1m” time series. Because we aggregate with sum by (node_name), node_name is the only label that will be retained; the rest of the labels are dropped.

OK, now we have such a custom recording rule; how do we enable it in PMM?

The easiest way is to go to PMM Settings and paste the rules into the PMM Settings Alertmanager Integration Configuration:

PMM Settings Alertmanager Integration Configuration

I know the interface does not say so (as of PMM 2.12), but it really does accept recording rules, NOT just alerting rules (although it may not work this way forever, it does now!).

You can also use the API (documented through Swagger) to supply the recording rules configuration to PMM; check out the /v1/Settings/Change method.

The Web Interface (and API) are smart enough to check your recording/alerting rules for syntax, and if there is a syntax error, the changes will not be applied.
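For example, a hedged sketch of pushing a rules file with curl (the server address and credentials are placeholders, the rule shown is a simplified example, and the alert_manager_rules field name is taken from the Swagger-documented ChangeSettings request; double-check it against your PMM version):

```shell
# Supply recording rules through the PMM Settings API
curl -k -u admin:admin -X POST \
  https://pmm-server.example.com/v1/Settings/Change \
  -H 'Content-Type: application/json' \
  -d '{"alert_manager_rules": "groups:\n  - name: cpu.rules\n    interval: 1m\n    rules:\n      - record: node:cpu_count:sum\n        expr: count(node_cpu_seconds_total) by (node_name)\n"}'
```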

Prometheus Recording Rules

Now that we have set up our recording rule we can see values being recorded through the Explore Interface:
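If you prefer the command line to the Explore UI, the same series can also be read back over PMM’s Prometheus-compatible query API (server address and credentials are placeholders; the /prometheus/ prefix is how PMM exposes the metrics engine):

```shell
# Instant query for the pre-computed series
curl -k -u admin:admin \
  'https://pmm-server.example.com/prometheus/api/v1/query?query=node:cpu_usage_seconds_total:sum_rate_1m'
```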

Going Off the Deep End

Most of you should stop reading right here and go get the recording rules you need in place.  For the curious minority, though, let’s look under the covers and check out some additional implementation details.

Note: These details correspond to Percona Monitoring and Management 2.12 which includes VictoriaMetrics. Previous releases and possibly future releases can change details of implementation without notice.

If you’re running PMM as a Docker container, you can enter the container through this command:

root@PMM2Server:~# docker exec -it pmm-server bash

For other deployment types, connect to the Virtual Appliance through ssh:

[root@641c08864858 opt]# cd /srv/prometheus/rules/
[root@641c08864858 rules]# ls
[root@641c08864858 rules]# cat pmm.rules.yml
  groups:
    - name: cpu.rules
      interval: 1m
      rules:
        - record: node:cpu_usage_seconds_total:sum_rate_1m
          expr: sum(rate(node_cpu_seconds_total{mode!="idle",mode!="iowait",mode!="steal"}[1m])) by (node_name)

As you can see, the contents we passed through the API got stored in the pmm.rules.yml file.

With VictoriaMetrics, rules processing is done by the vmalert tool, which supports most of the Prometheus alerting syntax (using the slightly different MetricsQL language rather than PromQL for expressions), though in most cases you will not see the difference.

[root@641c08864858 rules]# ps aux | grep vmalert
pmm         18  0.0  0.6 712180  6652 ?        Sl   Dec06   5:43 /usr/sbin/vmalert --notifier.url= --notifier.basicAuth.password= --notifier.basicAuth.username= --external.url=http://localhost:9090/prometheus --datasource.url= --remoteRead.url= --remoteWrite.url= --rule=/srv/prometheus/rules/*.yml --httpListenAddr=

As you may guess from this configuration, /srv/prometheus/rules/*.yml is used as the source of rules, so you can add your own files to this directory too… though we would rather have you use the API (or Web Interface) instead.

You may also discover there is a VMAlert API which can be rather helpful for troubleshooting.

For example, you can get the Recording Rules Groups Status this way:

[root@641c08864858 rules]# curl
{"data":{"groups":[{"name":"cpu.rules","id":"10366635785632970132","file":"/srv/prometheus/rules/pmm.rules.yml","interval":"1m0s","concurrency":1,"alerting_rules":null,"recording_rules":[{"id":"2245083066170751034","name":"node:cpu_usage_seconds_total:sum_rate_1m","group_id":"10366635785632970132","expression":"sum(rate(node_cpu_seconds_total{mode!=\"idle\",mode!=\"iowait\",mode!=\"steal\"}[1m])) by (node_name)","last_error":"","last_exec":"2020-12-11T18:13:29.695994753Z","labels":null}]}]},"status":"success"}

This shows which rules have been loaded and whether they are executing successfully. This particular access point is also mapped to the http://<PMMSERVER>/prometheus/rules URL to make it easily accessible externally.

This should get you going with Percona Monitoring and Management (PMM) and recording rules, and if you need more help, feel free to ask a question on our forums.


Enabling HTTPS Connections to Percona Monitoring and Management Using Custom Certificates


Whichever way you installed Percona Monitoring and Management 2 (PMM2), using the Docker image or an OVF image for your supported virtualized environment, PMM2 enables by default two ports for web connections: 80 for HTTP and 443 for HTTPS. When using HTTPS, certificates are required to encrypt the connection for better security.

All the installation images contain self-signed certificates already configured, so every PMM2 deployment should work properly when using HTTPS.

This is cool, but sometimes self-signed certificates are not permitted by the security policy adopted by your company. If your company uses a Certificate Authority to sign certificates and keys for encryption, most probably you are required to use the files provided by the CA for all your services, even for PMM2 monitoring.

In this article, we’ll show how to use your custom certificates to enable HTTPS connections to PMM2, according to your security policy.

PMM2 Deployed as a Docker Image

If PMM Server is running as a Docker image, use docker cp to copy certificates. This example copies certificate files from the current working directory to a running PMM Server docker container.

docker cp certificate.crt pmm-server:/srv/nginx/certificate.crt
docker cp certificate.key pmm-server:/srv/nginx/certificate.key
docker cp ca-certs.pem pmm-server:/srv/nginx/ca-certs.pem
docker cp dhparam.pem pmm-server:/srv/nginx/dhparam.pem

If you’re deploying the container from scratch, you can use the following to mount your own certificates instead of the built-in ones. Let’s suppose your certificates are in /etc/pmm-certs:

docker run -d -p 443:443 --volumes-from pmm-data \
  --name pmm-server -v /etc/pmm-certs:/srv/nginx \
  --restart always percona/pmm-server:2

  • The certificates must be owned by root.
  • The mounted certificate directory must contain the files certificate.crt, certificate.key, ca-certs.pem and dhparam.pem.
  • For SSL encryption, the container must publish port 443 instead of 80.
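If you just want to verify the mount before wiring in CA-provided files, you can generate a throwaway set of files with the required names (a sketch only; in production use your CA’s files and a 2048-bit or larger DH parameter, 1024 is used here purely to keep the example fast):

```shell
# Throwaway self-signed certificate and key with the file names PMM expects
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=pmm2.mydomain.com" \
  -keyout certificate.key -out certificate.crt

# A real CA bundle would come from your CA; reuse the self-signed cert for testing
cp certificate.crt ca-certs.pem

# Diffie-Hellman parameters (1024 bits only to keep this sketch quick; use >= 2048)
openssl dhparam -out dhparam.pem 1024
```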

PMM2 Deployed Using a Virtual Appliance Image

In such cases, you need to connect to the virtual machine and replace the certificate files in /srv/nginx:

  • connect to the virtual machine:
    $> ssh root@pmm2.mydomain.com
  • place the CA, certificate, and key files into the /srv/nginx directory. The files must be named certificate.crt, certificate.key, ca-certs.pem, and dhparam.pem
  • if you would like to use different file names, you can modify the nginx configuration file /etc/nginx/conf.d/pmm.conf. The following variables must be set:
    ssl_certificate /srv/nginx/my_custom_certificate.crt;
    ssl_certificate_key /srv/nginx/my_custom_certificate.key;
    ssl_trusted_certificate /srv/nginx/my_custom_ca_certs.pem;
    ssl_dhparam /srv/nginx/my_dhparam.pem;
  • restart nginx:
    [root@pmm2]> supervisorctl restart nginx


Percona Monitoring and Management is widely used for monitoring MySQL, ProxySQL, MongoDB, PostgreSQL, and operating systems. Setting up custom certificates for connection encryption, according to the security policy adopted by your company, is quite simple. You can rely on PMM2 for troubleshooting your environments in a secure way.

Take a look at the demo site: https://pmmdemo.percona.com


Foiled by the Firewall: A Tale of Transition From Prometheus to VictoriaMetrics


When I was in the interview process here at Percona, I was told I’d be leading the team that delivered Percona Monitoring and Management (PMM), and so naturally, I wanted to know just what that meant.  I started researching the product and I got even more excited about this opportunity because as I read, PMM was actually designed…FOR ME! Well, a much younger and less gray-haired version of me, anyway.  A tool designed for the SysAdmin/DBA/DevOps-y types to find and fix issues with speed!  That evening I went to the website and pulled down the latest version of PMM, partly to ace the interview and partly to see how easy the new wave of engineers have it compared to “my day”.  Well, I struggled…BOY did I struggle! 

The installation was a breeze…basic RPM-based client with simple docker install for the server, decent instructions mostly copy/paste-able, run of the mill commands to get things connected…this was gonna be easy and I coasted all the way to the point of having registered my first MariaDB instance to my new monitoring suite… and then I slammed face-first into a brick wall!  EVERYTHING worked, every command resulted in an expected response, and I had glimmers of confirmation that I was on the right track…right there on the dashboard I could see awareness of my newly added host: The number of nodes monitored went from 1 to 2…but why did I have no data about my system…why could I not see anything on the MySQL summary pages…how would I ever know how awesome Query Analytics (QAN) was without the data I was promised…but wait…THERE WAS QAN DATA…how can that be, ”whyyyyyyyyyy???????……”  


Look, I’m a geek at heart…a nerd’s nerd, I embrace that.  I have a more robust home network than most small to midsize businesses…because why not!  The house is divided into a few VLANs to ensure the guest wifi is internet only and my kids’ devices are only given access to the web interfaces of the services they need (email, Plex, etc)…I wouldn’t call it bulletproof but it was designed with layers of security in mind. So when I installed the PMM server in my sandbox environment (its own VLAN) and tried to let it monitor my core DB, I knew I’d need to make a hole in my trusty Cisco PIX 501 (R.I.P.) for my database server to talk to this Docker image, allowing TCP on port 443 from the DB (client) to PMM (server), and it registered no problem.  But no stats…no great errors on the client side, no idea where to look for errors on the server side…stuck.  I’d like to tell you I pulled out my trusted troubleshooting guide and turned to page one where it says “check the firewall dummy”, but I cannot.  Surely this problem is FAR more complex and requires a complete dissection of every single component…and I was up for the challenge. 

Well, I was up until probably three in the morning, determined to emerge victorious, when I finally threw in the towel and went to bed defeated…visions of being a Perconian and joining an elite team of super-smart people were quickly fading, and I remember thinking… ”If I can’t figure this out, I sure as hell wouldn’t hire me”.  The next morning I woke up, went for my run, and finally ran through the troubleshooting basics in my mind and realized that I’ll bet there’s good info in the firewall logs!  Lo and behold…my single-purpose VM was trying to connect back to my DB on TCP ports 42000 and 42001.  That can’t be right…what on earth would be going on there…  Google to the rescue: it was NOT Percona trying to harvest my ‘oh, so valuable’ blog data or family picture repository. 

Turns out, this is by design.  

If you don’t know, Prometheus uses a “pull” model to get data from the exporters, whereby the server reaches out to the exporter to pull the data it needs.  PMM clients register with the server using a “push” model, initiating the connection from the client and pushing the registration data over TCP port 443, which it later uses to send QAN data.  So in a protected network, to register and then get both QAN data AND exporter metrics, you need to open TCP port 443 with communication originating from the PMM client and destined for the PMM Server, AND open up TCP ports 4200x originating from the PMM server destined for the client.  Why the “x”? Well, because you need to open up a port for EACH exporter you run on the client; so just monitoring MySQL, you’ll need to open 42000 for the node stats and 42001 for MySQL; add an external exporter, also open up 42002; if the same server has a ProxySQL instance, open up 42003, and so on.  Oh…and do this for EVERY server you want to have monitored behind that firewall.
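To make the port requirements concrete, here is a hypothetical iptables sketch for a firewall sitting between the two networks (the 10.0.x.x addresses are invented; the client runs only the node and MySQL exporters):

```shell
# Client-initiated: registration and QAN data, client -> PMM server on TCP 443
iptables -A FORWARD -p tcp -s 10.0.1.10 -d 10.0.2.20 --dport 443 -j ACCEPT

# Server-initiated: exporter scrapes, PMM server -> client, one port per exporter
# (42000 = node exporter, 42001 = mysqld exporter)
iptables -A FORWARD -p tcp -s 10.0.2.20 -d 10.0.1.10 --dport 42000:42001 -j ACCEPT
```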

So I opened up the ports and, like magic, the data just started flowing and it was glorious.  I chalked the issue up to me just being dumb and that the whole world probably knew the model of exporters.  Well, it turns out, I was wrong: the whole world did not know.  This ends up being THE single most asked question on our PMM Forums; I think I’ve answered it personally about 50 times in various ways.  The reality is, this is an extremely common network configuration and an extremely frustrating hurdle to overcome, but I guess we’re stuck with it, right?  NO!  Not even for a second…there HAS to be a better way. We’d kicked around a few ideas of how we’d implement it, but all of them were somewhat expensive from a time and manpower standpoint, and all we got from it was a little less configuration for a lot of effort; we could just make the documentation better for a fraction of the cost and move on.  UNTIL I was introduced to a product called VictoriaMetrics.  

VictoriaMetrics is an alternative to Prometheus and boasts compatibility with the vast majority of the Prometheus ecosystem (exporters, alertmanager, etc.) but adds some elements that are pretty cool.  To start with, VictoriaMetrics can use the VMAgent installed on a client to collect the metrics on a client machine and PUSH them to the server.  This instantly solves the problem of data flowing in a single, consistent direction regardless of the number of “things” being monitored per node, but we’d have to add another process on the server in the form of the Prometheus PushGateway to receive the data…it works, but it feels really clunky to add two brand new processes to solve one problem. Instead, we decided to replace Prometheus with VictoriaMetricsDB (VMDB), as its preferred method of data ingestion IS the push model (although it can still perform pulls, or scrapes, of exporters directly).  For us it’s not an insignificant change to implement, so it had better be worth it; well, we think it is.  The benchmarks they’ve done show that VMDB needs about 1/5th of the RAM and, with its compression, uses about 7x LESS disk space.  As an added bonus, VictoriaMetrics supports data replication for clustering/high availability, which is something very high on our list of priorities for PMM.  One of the biggest hurdles to making PMM highly available is the fact that there’s not a great solution for data replication in Prometheus; all of the other DBs in the product (PostgreSQL and ClickHouse) support clustering/high availability, so this paves the way to bring that to life in 2021! 


The best part is, there’s more!  I’ll stay light on the technical details but turns out our desire to have a single direction (from client to server) path of communication comes in handy elsewhere: Kubernetes!  As more companies are exploring what the world-famous orchestrator can do we’re seeing an increasing number of companies putting DB’s inside Kubernetes (K8s) as well (and we’re one of those companies in case you haven’t heard about our recently announced beta).  Well, one of K8s core design principles is not allowing entities outside K8s to communicate directly with running pods inside…if your customer or application needs an answer, talk to the Proxy layer and it will decide what pod is best to serve your request…that way if that pod should die..you’ll never know, the orchestrator will quietly destroy and recreate it and do all the things needed to get the replacement registered and the failed pod deregistered in the load balancer!  But when it comes to databases we NEED to see how each DB node in the DB cluster is performing because “healthy” from K8s perspective may not be “healthy” from a performance perspective and our “take action” thresholds will be different when it comes to increasing (or even decreasing) DB cluster sizes to meet the demands.  

[Diagram: Prometheus in Kubernetes]

[Diagram: VictoriaMetrics in Kubernetes]

Other things we’ll gain from this move and enable over time:

  • The VMAgent will cache exporter data if the PMM server is unavailable to receive it (i.e., NO GAPS IN DATA!)
  • Language extensions are available
  • Parallel processing of complex queries
  • Configurable downsampling, allowing for things like 30 days of high-resolution metrics and one year at, say, 1-hour resolution

The bottom line is, there are major reasons this effort needed to be undertaken, and the good news is we’ve been working on it for the past several months in close partnership with the VictoriaMetrics team and are close to releasing it!  As it stands right now, our next release of Percona Monitoring and Management will include the VMAgent (still defaulting to pull mode) along with VMDB, AND we’ll have a K8s-compatible pmm-client that works with Percona’s latest operator!  You can test push mode as you like with a new flag on the ‘pmm-admin config’ and ‘pmm-admin add’ commands.  Oh, and in case you’re wondering… I got the job!  Enjoy and let us know what you think!
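For example, once that release lands, switching an existing node over might look like the following (a sketch only; the exact flag name and syntax may differ in the shipped release, and the server URL and credentials here are placeholders):

```shell
# Re-register this node with the PMM server, asking its agents to push
# metrics rather than wait to be scraped (flag name assumed).
sudo pmm-admin config --metrics-mode=push \
  --server-url=https://admin:admin@pmm-server.example.com:443

# Newly added services can opt into push mode the same way.
sudo pmm-admin add mysql --metrics-mode=push \
  --username=pmm --password=secret
```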

Try Percona Monitoring and Management Today!


Impact of Percona Monitoring and Management “Get Latest” Command Change

In the first quarter of 2021 (expected late January), Percona is slated to release a version of Percona Monitoring and Management (PMM) v2 that includes all of the critical functionality users of PMM v1 have come to know and love over the years. While PMM v2 has some major improvements over its v1 sibling, it has long carried the stigma that there wasn’t feature parity between the versions in areas like external services, annotations, MongoDB Explain, and custom collectors per service, to name a few. By early 2021, we feel confident that users of PMM v1 will recognize that all the beloved functionality they relied on in v1 is now in v2, and we encourage you to try it for yourself. Many of the missing features have since been added; one item to note is that external services will be included in that early 2021 release. As with all external exporters, you’ll still need to create your own graphs, but gaining the rest of this functionality will make just about anything you can squeeze data out of “monitorable”.

So What’s the Big Deal?

We will be modifying our “latest” tag, which currently points to v1.x, so that it points to v2.x instead. PMM v1 users have historically just rerun their ‘docker run pmm-server’ command to update to the next PMM v1.x version. They could pin a specific version of pmm-server with

docker run -d --name pmm-server percona/pmm-server:1.17.3

or they could replace that with

docker run -d --name pmm-server percona/pmm-server:latest

and get whichever v1.x version is the latest released by Percona (as of this blog’s posting date, that is 1.17.4).  But when we make PMM v2 “latest” early in 2021, those of you who run the latter command will be impacted (both positively and negatively), so we wanted to give you a heads-up now so you can plan accordingly and make the appropriate modifications to your deployment code.
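If you rely on ‘:latest’ today, the simplest defense is to pin a tag before the switch happens, for example (the version numbers here are illustrative):

```shell
# Pin to an exact v1 release you have tested...
docker run -d --name pmm-server percona/pmm-server:1.17.4

# ...or pin to the major-version tag so you keep receiving v1.x
# maintenance releases without being jumped to v2 (assumes a '1' tag
# is published alongside 'latest').
docker run -d --name pmm-server percona/pmm-server:1
```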

First the positive news… PMM v2 has some very exciting and useful improvements over PMM v1 and we can’t wait for you to leverage this new functionality including:

  • A complete rewrite of the Query Analytics (QAN) tool, including improved speed, global sparkline hover, filtering, new dimensions to collect data, and rich searching capabilities
  • The Security Threat Tool (STT) so that you not only can monitor database performance but also database security
  • A robust expansion of MongoDB and PostgreSQL support (along with continued improvements for MySQL)
  • Integration with an external Alertmanager for creating and deploying alerts, with “integrated alerting” (native alerting inside PMM itself) expected by the end of December 2020
  • Global and local annotations across nodes and services to highlight key events for correlation

As has been stated in the past, there is no direct upgrade/migration path from PMM v1 to PMM v2 because of the complete re-architecting in PMM v2. In fact, these are basically two separate and distinct applications. So you will need to stand up and install PMM v2 as a brand new system with new clients on your endpoints. Additionally, we do not provide a data migration path to move your historical data to PMM v2. You can, however, choose to run both PMM v1 and PMM v2 on the same host using this approach to ease the transition. 
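A minimal sketch of that parallel setup, assuming the v1 server already occupies ports 80/443 on the host (the alternate ports and container name are arbitrary choices here, and the persistent-data volume setup is omitted for brevity):

```shell
# Leave the existing PMM v1 container untouched on 80/443 and publish
# the new PMM v2 server on alternate host ports.
docker run -d --name pmm2-server \
  --publish 8080:80 \
  --publish 8443:443 \
  percona/pmm-server:2

# PMM v1 remains at https://host/ while PMM v2 answers on https://host:8443/
```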

So, if you are one of those users who leverages the “:latest” tag to upgrade to the latest PMM version (note: this is not the recommended approach to upgrading your PMM implementation; Percona’s recommended approach is to use a specific version number such as “percona/pmm-server:2.11.1”), you need to start planning now to ensure a smooth transition to PMM v2. Here’s our recommendation for how to plan for this change now:

  1. Determine whether you currently upgrade PMM via
    docker run -d --name pmm-server percona/pmm-server:latest

    a. If “no”, you will NOT be impacted by the early 2021 change. We recommend you develop a plan for moving to PMM v2 in 2021 at your convenience, then proceed to step 2 below.
    b. If “yes”, you WILL be impacted by the early 2021 change and need to create a plan to minimize that impact.
      i. If you plan to keep the docker run command and move to PMM v2 by early 2021, continue to step 2 below.
      ii. If you will not be ready to move to PMM v2 by early 2021, disable the above docker run command and implement a temporary, manual approach to upgrading to future PMM v1.x releases. When you are ready to migrate to PMM v2, proceed to step 2 below.
  2. Will you require access to historical PMM v1 data after deploying PMM v2?
    a. If “yes”, you will need to run both PMM v1 and PMM v2 in parallel. Keep both instances running until you no longer require access to the PMM v1 data, as defined by your organization’s data retention policy.
    b. If “no”, you can install a clean deployment of PMM v2, accessible from the main Percona Monitoring and Management page. From then on, we recommend you upgrade using the
      docker run.../pmm-server:2

      command, and upgrades will be performed from the v2.x branch of PMM.

After you upgrade in early 2021, enjoy the move to PMM v2 and please let us know your thoughts on its new features as well as any ideas you have for improvement.

Please note that this does NOT mean that we are “sunsetting” PMM v1 and will no longer support that application. While we are not creating new features for PMM v1, we do continue to maintain it with critical bug fixes as needed as well as support for the product for those customers on a support contract. This maintenance and support will continue until PMM moves to version 3.x at a date to be determined in the future.


Download and Try Percona Monitoring and Management Today!


A Blog Shamelessly Bribing You to Review Percona Monitoring and Management!

We would love you to help us spread the word about Percona Monitoring and Management (PMM) to make sure even more people are aware of it and adopting it. And we are not afraid to offer (modest) bribes!

  • If you already use PMM please write an independent review of its pros and cons on the AWS and/or Azure product page.
  • If you don’t use PMM, please install and try this software to see how it can help you improve the monitoring of your database environment.

For those of you new to Percona Monitoring and Management, it is a best-of-breed open source database monitoring solution. It helps you reduce complexity, optimize performance, and improve the security of your business-critical database environments, no matter where they are located or deployed.

Percona Monitoring and Management can be used to monitor a wide range of open source database environments:

  • Amazon RDS MySQL
  • Amazon Aurora MySQL
  • MySQL
  • MongoDB
  • Percona XtraDB Cluster
  • PostgreSQL
  • ProxySQL

Percona Monitoring and Management is now available for fast installation on two marketplaces – AWS and Azure. We are keen to increase the number of PMM reviews on those pages so that potential users can get an independent view of how it will benefit their business.

We will send you special Percona Swag for every verified review you post before December 20, 2020. 

Just send us a link to your testimonial or a screenshot, and we will send you the latest in Percona gear – 100% free, and shipped to you anywhere in the world!

You can choose from any of these gift options:

[Image: Percona swag gift options]

Any meaningful review (i.e., not just a star rating) earns swag, whether it is positive, negative, or mixed. We believe in open source and learning from our users, so please write honestly about your experience using Percona Monitoring and Management.

To claim your swag, email the Percona community team and include:

  1. The screenshot or link to your review
  2. Your postal address
  3. Your phone number (for delivery use only, never for marketing)
  4. If you have chosen a sweatshirt or hoodie, the color (grey, black, or blue) and your size

Please note: we can only accept feedback from PMM users on the AWS and Azure marketplaces, and reviews must be submitted before December 20, 2020!

It’s that simple!

So, please visit the AWS and Azure Percona Monitoring and Management download pages to add your review today!




Deploying Percona Monitoring and Management 2 Without Access to the Internet

Normally it is quite easy to deploy Percona Monitoring and Management (PMM) Server as a Docker container as per the official documentation. However, when working in very restrictive environments, it is possible the server doesn’t have access to the public Internet, so pulling the image from the Docker hub is not possible. Fortunately, there are a few workarounds to get past this problem.

As previously described by Agustin for PMM 1, one way is to ‘docker pull’ and save the image on another machine with Internet access. Here I will show you another way to do it that doesn’t require a separate server running Docker, and also provide updated instructions for PMM 2.

1. Download the PMM Server image directly from the Percona website. Select the desired Version and choose ‘Server – Docker Image’ from the drop-down box, for example:

[Screenshot: selecting the Version and ‘Server – Docker Image’ on the Percona downloads page]

2. Copy the downloaded .docker file to the PMM server, for example via SCP:

scp -i my_private_key pmm-server-2.11.1.docker my_user@my_secure_server:

3. Load the image to the local Docker repository on your PMM server

sudo docker load < pmm-server-2.11.1.docker
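Before creating any containers, it is worth confirming the image actually landed in the local repository:

```shell
# The loaded tag (2.11.1 in this example) should appear in the list.
sudo docker images percona/pmm-server
```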

4. Create the persistent data container. Normally we would use percona/pmm-server:2 as the image tag, but since we loaded a specific version we need to specify it as follows:

sudo docker create \
  -v /srv \
  --name pmm-data \
  percona/pmm-server:2.11.1 /bin/true

5. If this is a production deployment, it is a good idea to move the data container to a dedicated volume.

6. Create the server container (again, specifying the image version we have loaded before):

sudo docker run \
  --detach \
  --restart always \
  --publish 80:80 \
  --publish 443:443 \
  --volumes-from pmm-data \
  --name pmm-server \
  percona/pmm-server:2.11.1
7. Verify PMM Server installation by visiting server_hostname:80 or server_hostname:443 and reset the admin password. The default user/password is admin/admin.
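If you prefer to check from the shell first, a plain HTTP probe works (the /ping endpoint is an assumption based on recent PMM 2 builds; fall back to the status-line check if your version does not serve it):

```shell
# Expect an HTTP status line back from the web interface.
curl -kIs https://server_hostname/ | head -n 1

# Recent PMM 2 servers also answer a lightweight readiness probe.
curl -ks https://server_hostname/ping
```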

All that is left now is to install the clients and start using your brand new Percona Monitoring and Management instance. If you have questions or run into trouble, feel free to reach out to us at the Percona Forums.
