Nov 29, 2022

PMM, Federated Tables, Table Stats, and Lots of Connections!

Percona Monitoring and Management Federated Tables

Earlier in the year, I was working on an issue where one of my clients reported a massive influx of connections on their hosts after enabling Percona Monitoring and Management (PMM). This was something I had not seen before, and after researching it for a couple of days I discovered that if you monitor a MySQL instance with PMM configured to collect table statistics, and the tables it gathers statistics from are Federated, PMM will generate a connection on the remote host for each Federated table in the instance. Let’s go over the details and provide some examples so we can understand this a bit better.

First, I’ll offer a reminder that a Federated table is simply a table that you can put in your MySQL instance that is empty locally and uses a network connection to get the data from another MySQL host when the table is queried. For example, if I have a normal table called peter_data on host mysql1, I can set up a Federated table on mysql2 that points to mysql1. Each time that mysql2 has a query on the peter_data table, it connects to mysql1, gets the data, and then returns it locally. This feature is a lot less common now than it once was given how MySQL replication has improved over time, but as you can see here in the MySQL reference guide, it’s still supported.
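To make that concrete, here is a minimal sketch of how a link and table like the ones in the lab below could be defined. It mirrors the fedlink entry and the t1 table you will see in a moment, but treat it as an illustration rather than the exact commands I ran (the FEDERATED engine also has to be enabled, for example with federated in my.cnf, before this works).

-- On the remote host (the one that really stores the data): a normal InnoDB table.
CREATE TABLE ftest.t1 (
  c1 INT NOT NULL,
  c2 INT,
  c3 INT,
  PRIMARY KEY (c1)
) ENGINE=InnoDB;

-- On the local host: define the link to the remote server once...
CREATE SERVER fedlink
  FOREIGN DATA WRAPPER mysql
  OPTIONS (USER 'root', PASSWORD 'password', HOST '10.0.2.12', PORT 3306, DATABASE 'ftest');

-- ...then create the Federated table that points at it. No rows are stored locally;
-- every query against it opens a connection to the remote host and fetches the data.
CREATE TABLE ftest.t1 (
  c1 INT NOT NULL,
  c2 INT,
  c3 INT,
  PRIMARY KEY (c1)
) ENGINE=FEDERATED CONNECTION='fedlink/t1';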

So how does this impact our issue where PMM was establishing so many connections? Let’s set this up in my lab and have a look!

Lab setup

Let’s start by setting up my first host centos7-1 as the “remote host”. This is the host that has the actual data on it.

[root@centos7-1 ~]# mysql
.....
mysql> use ftest;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> show create table t1 \G
*************************** 1. row ***************************
       Table: t1
Create Table: CREATE TABLE `t1` (
  `c1` int(11) NOT NULL,
  `c2` int(11) DEFAULT NULL,
  `c3` int(11) DEFAULT NULL,
  PRIMARY KEY (`c1`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1
1 row in set (0.00 sec)

mysql> select * from t1;
+----+------+------+
| c1 | c2   | c3   |
+----+------+------+
|  1 |    1 |    1 |
|  2 |    2 |    2 |
|  3 |    3 |    3 |
+----+------+------+
3 rows in set (0.00 sec)

With that done, I’ll set up my second host, centos7-2, to act as the host that has the Federated table.

[root@centos7-2 ~]# mysql -u root -ppassword
mysql> select * from mysql.servers;
+-------------+-----------+-------+----------+----------+------+--------+---------+-------+
| Server_name | Host      | Db    | Username | Password | Port | Socket | Wrapper | Owner |
+-------------+-----------+-------+----------+----------+------+--------+---------+-------+
| fedlink     | 10.0.2.12 | ftest | root     | password | 3306 |        | mysql   |       |
+-------------+-----------+-------+----------+----------+------+--------+---------+-------+
1 row in set (0.00 sec)

mysql> use ftest;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> show create table t1\G
*************************** 1. row ***************************
       Table: t1
Create Table: CREATE TABLE `t1` (
  `c1` int(11) NOT NULL,
  `c2` int(11) DEFAULT NULL,
  `c3` int(11) DEFAULT NULL,
  PRIMARY KEY (`c1`)
) ENGINE=FEDERATED DEFAULT CHARSET=latin1 CONNECTION='fedlink/t1'
1 row in set (0.00 sec)

mysql> select * from t1;
+----+------+------+
| c1 | c2   | c3   |
+----+------+------+
|  1 |    1 |    1 |
|  2 |    2 |    2 |
|  3 |    3 |    3 |
+----+------+------+
3 rows in set (0.01 sec)

Recreating the issue

Now that we have our Federated table set up, let’s test and see how querying table metadata on centos7-2, the instance with the Federated table, impacts connections on centos7-1, the remote host. I connected to centos7-2, queried the information_schema.tables table much the same way that PMM does, disconnected, and then connected a second time and ran the same query.

[root@centos7-2 ~]# mysql -u root -ppassword
...
mysql> select * from information_schema.tables where table_schema = 'ftest';
+---------------+--------------+------------+------------+-----------+---------+------------+------------+----------------+-------------+-----------------+--------------+-----------+----------------+-------------+---------------------+------------+-------------------+----------+----------------+---------------+
| TABLE_CATALOG | TABLE_SCHEMA | TABLE_NAME | TABLE_TYPE | ENGINE    | VERSION | ROW_FORMAT | TABLE_ROWS | AVG_ROW_LENGTH | DATA_LENGTH | MAX_DATA_LENGTH | INDEX_LENGTH | DATA_FREE | AUTO_INCREMENT | CREATE_TIME | UPDATE_TIME         | CHECK_TIME | TABLE_COLLATION   | CHECKSUM | CREATE_OPTIONS | TABLE_COMMENT |
+---------------+--------------+------------+------------+-----------+---------+------------+------------+----------------+-------------+-----------------+--------------+-----------+----------------+-------------+---------------------+------------+-------------------+----------+----------------+---------------+
| def           | ftest        | t1         | BASE TABLE | FEDERATED |      10 | Fixed      |          3 |           5461 |       16383 |               0 |            0 |         0 |           NULL | NULL        | 1969-12-31 19:33:42 | NULL       | latin1_swedish_ci |     NULL |                |               |
+---------------+--------------+------------+------------+-----------+---------+------------+------------+----------------+-------------+-----------------+--------------+-----------+----------------+-------------+---------------------+------------+-------------------+----------+----------------+---------------+
1 row in set (0.01 sec)

mysql> exit
Bye

[root@centos7-2 ~]# mysql -u root -ppassword
....
mysql> select * from information_schema.tables where table_schema = 'ftest';
+---------------+--------------+------------+------------+-----------+---------+------------+------------+----------------+-------------+-----------------+--------------+-----------+----------------+-------------+---------------------+------------+-------------------+----------+----------------+---------------+
| TABLE_CATALOG | TABLE_SCHEMA | TABLE_NAME | TABLE_TYPE | ENGINE    | VERSION | ROW_FORMAT | TABLE_ROWS | AVG_ROW_LENGTH | DATA_LENGTH | MAX_DATA_LENGTH | INDEX_LENGTH | DATA_FREE | AUTO_INCREMENT | CREATE_TIME | UPDATE_TIME         | CHECK_TIME | TABLE_COLLATION   | CHECKSUM | CREATE_OPTIONS | TABLE_COMMENT |
+---------------+--------------+------------+------------+-----------+---------+------------+------------+----------------+-------------+-----------------+--------------+-----------+----------------+-------------+---------------------+------------+-------------------+----------+----------------+---------------+
| def           | ftest        | t1         | BASE TABLE | FEDERATED |      10 | Fixed      |          3 |           5461 |       16383 |               0 |            0 |         0 |           NULL | NULL        | 1969-12-31 19:33:42 | NULL       | latin1_swedish_ci |     NULL |                |               |
+---------------+--------------+------------+------------+-----------+---------+------------+------------+----------------+-------------+-----------------+--------------+-----------+----------------+-------------+---------------------+------------+-------------------+----------+----------------+---------------+
1 row in set (0.01 sec)

mysql> exit
Bye

As you can see below, this resulted in two connections on centos7-1 that did not drop despite disconnecting and reconnecting on centos7-2.

mysql> show processlist;
+----+------+-----------------+-------+---------+------+----------+------------------+-----------+---------------+
| Id | User | Host            | db    | Command | Time | State    | Info             | Rows_sent | Rows_examined |
+----+------+-----------------+-------+---------+------+----------+------------------+-----------+---------------+
| 23 | root | localhost       | NULL  | Query   |    0 | starting | show processlist |         0 |             0 |
| 25 | root | 10.0.2.13:33232 | ftest | Sleep   |  112 |          | NULL             |         1 |             1 |
| 27 | root | 10.0.2.13:33236 | ftest | Sleep   |   71 |          | NULL             |         1 |             1 |
+----+------+-----------------+-------+---------+------+----------+------------------+-----------+---------------+
3 rows in set (0.00 sec)

This doesn’t sound like that big of a deal, especially considering that PMM usually remains connected to the host. So if you have one Federated table in your system and PMM is monitoring table stats, it will only add one connection on the remote host, right? That’s true, but in my lab I expanded this to 145 Federated tables, and despite only querying the information_schema.tables table, 145 connections were created on centos7-1.
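If you want to reproduce something similar, a rough, hypothetical sketch of how the extra Federated tables could be generated (not the exact commands I used) looks like this:

# On centos7-1 (the remote host): clone the source table 144 more times.
for i in $(seq 2 145); do
  mysql -u root -ppassword ftest -e "CREATE TABLE t${i} LIKE t1;"
done

# On centos7-2: create a matching Federated table for each one.
for i in $(seq 2 145); do
  mysql -u root -ppassword ftest -e "CREATE TABLE t${i} (c1 INT NOT NULL, c2 INT, c3 INT, PRIMARY KEY (c1)) ENGINE=FEDERATED CONNECTION='fedlink/t${i}';"
done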

[root@centos7-2 ~]# mysql -u root -ppassword
...
mysql> select * from information_schema.tables where table_schema = 'ftest';
....
145 rows in set (0.08 sec)

[root@centos7-1 ~]# mysql -u root -ppassword
mysql> select * from information_schema.processlist where substring_index(host,':',1) = '10.0.2.13' and user = 'root';
+------+------+------------------+-------+---------+------+-------+------+---------+-----------+---------------+
| ID   | USER | HOST             | DB    | COMMAND | TIME | STATE | INFO | TIME_MS | ROWS_SENT | ROWS_EXAMINED |
+------+------+------------------+-------+---------+------+-------+------+---------+-----------+---------------+
| 2120 | root  | 10.0.2.13:60728 | ftest | Sleep   |    7 |       | NULL |    6477 |         1 |             1 |
| 2106 | root  | 10.0.2.13:60700 | ftest | Sleep   |    7 |       | NULL |    6701 |         1 |             1 |
....
| 2117 | root  | 10.0.2.13:60722 | ftest | Sleep   |    7 |       | NULL |    6528 |         1 |             1 |
| 2118 | root  | 10.0.2.13:60724 | ftest | Sleep   |    7 |       | NULL |    6512 |         1 |             1 |
+------+------+------------------+-------+---------+------+-------+------+---------+-----------+---------------+
145 rows in set (0.00 sec)

This can be a big problem if you have a host that doesn’t support a lot of connections and you need those connections to be available for your app!
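A quick way to gauge how much headroom the remote host actually has is to compare its connection limit with current and peak usage, for example:

-- On the remote host: connection limit, connections open right now, and the historical peak.
SHOW GLOBAL VARIABLES LIKE 'max_connections';
SHOW GLOBAL STATUS LIKE 'Threads_connected';
SHOW GLOBAL STATUS LIKE 'Max_used_connections';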

Conclusion

Based on the lab testing above, we can see how PMM queries against the information_schema.tables table can cause a large number of connections to be created on a Federated remote host. This probably will not be a problem for most MySQL users considering that Federated tables aren’t that common, but if you have Federated tables and you’re considering adding PMM monitoring, or any other monitoring that collects table statistics, be warned! The maintenance of Federated connections on a remote host is not a bug; this is how the feature is supposed to behave.

If you have Federated tables and want to avoid this problem, make sure you use the --disable-tablestats flag when adding MySQL to your local PMM client with the “pmm-admin add mysql” command.
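As a rough sketch, the command could look like the following; the service name, address, and credentials are placeholders for your environment, and the important part is only the --disable-tablestats flag:

# Add the MySQL instance to PMM with table statistics collection turned off.
pmm-admin add mysql --disable-tablestats --username=pmm --password=<password> mysql-with-federated 127.0.0.1:3306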

Percona Monitoring and Management is a best-of-breed open source database monitoring solution. It helps you reduce complexity, optimize performance, and improve the security of your business-critical database environments, no matter where they are located or deployed.

Download Percona Monitoring and Management Today

Nov 15, 2022

PMM v2.32: Backup Management for MongoDB in GA, New Home Dashboard, and More!

Percona Monitoring and Management v2.32

We are pleased to announce the general availability of Backup Management for MongoDB and other improvements in Percona Monitoring and Management (PMM) v2.32, released in November 2022. Details are in this blog and also in the PMM 2.32 Release Notes.

PMM now ships with a new Home Dashboard where you can quickly and easily check your databases’ health at a glance and detect anomalies. While there’s no one-size-fits-all approach, we created and released the new Home Dashboard to make PMM more user-friendly, even for users new to it.

You can get started using PMM in minutes with PMM Demo to check out the latest version of PMM V2.32.

Let’s have a look at the highlights of PMM 2.32:

General availability of Backup Management for MongoDB

The Backup Management for MongoDB in PMM has reached General Availability and is no longer in Technical Preview.

Supported setups

MongoDB Backup Management now supports replica set setups for the following actions:

  • Create logical snapshot backups
  • Create logical Point In Time Recovery (PITR) backups
  • Create physical snapshot backups (available only with Percona Server for MongoDB)
  • Restore logical snapshot backups
  • Restore physical backups (requires additional manual operations after the restore and is available only with Percona Server for MongoDB)
  • Restore logical PITR backups from S3

Current limitations

  • Restoring logical PITR backups only supports S3 storage type
  • Restoring physical backups requires manual post-restore actions
  • Restoring a MongoDB backup on a new cluster is not yet supported
  • Restoring physical backups for containerized MongoDB setups is not supported
  • Local storage for MySQL is not supported
  • Sharded cluster setups are not supported
  • Backups that are stored locally cannot be removed automatically
  • Retention is not supported for PITR artifacts

 

Quicker and easier database health overview with the new Home Dashboard

As promised in previous release notes, we have been investigating better, more user-friendly ways to present database health on the Home Dashboard, which is also the entry point to PMM. We are proud to finally release this finalized dashboard as the new Home Dashboard. Thank you for your feedback and collaboration during all iterations of the experimental versions.

Monitor hundreds of nodes without any performance issues

If you have hundreds of nodes monitored by the same PMM instance, the original Home Dashboard, which repeated a set of panels for each node, could take a long time to load and even leave the page unresponsive. With these performance issues in mind, we re-designed the Home Dashboard with new logic that shows what is wrong or what is OK with your databases instead of showing every metric for every node.

PMM Home Dashboard_home_select multiple nodes


Anomaly detection

Many of you probably use dozens of tools for different purposes in your daily work, meetings, and projects. These tools should make your life easier, not more intensive. With monitoring tools, the sheer number of metrics can be daunting, so analyzing data and detecting anomalies that deviate from a database’s normal behavior should be easy and fast. Functional Anomaly Detection panels, as opposed to separate graphs for each node, are a much better way to visualize and recognize problems with your databases that may require action.

  • If you spot an anomaly, you can click the node name on the panel to open the Node Overview Dashboard for that node and see all of the node’s metrics you need to diagnose the problem.
  • All panels except Advisor Checks can be filtered by node and environment variables.
  • Graphs in the Anomaly Detection row show data for the top 20 nodes, e.g., the top 20 CPU anomalies.

PMM Anomaly Detection panels

Command Center panels

The primary motivation behind the new Home Dashboard is simplicity. While working on the new design, it was always hard to balance presenting the metrics everyone needs while keeping the dashboard clean, functional, and simple. So we decided to use Command Center panels, which are collapsed by default. If you see a Memory Usage anomaly above 90%, how do you know when it happened or started? The time-series graphs for the top 20 in the Command Center panels help you see when the anomalies occurred: in the last hour or in the last week.


PMM Command Center Panels on Home Dashboard

Enhanced main menu

We returned with two improvements we previously promised for easier access to dashboards from the Main Menu. Previously, every service type PMM can monitor was represented by an icon on the Main Menu, which made the menu crowded. In the latest version, you’ll only see the icons of the services you are currently monitoring. For example, if you’re monitoring MongoDB, you will see the MongoDB Dashboard’s icon on the main menu, as opposed to previous versions, which showed all database types PMM is capable of monitoring, whether you had them in your system or not. When you start monitoring other services, like MySQL, they will be automatically added to the Main Menu.

Another improvement on the Main Menu is the visibility of all other dashboards. PMM provides multiple dashboards with different levels of detail for each service. Only some dashboards appear in the main menu; the rest live in folders, so users can miss them. Custom dashboards created by other users in your organization can also stay invisible to you until you stumble upon them in the folders by chance. So, we added Other Dashboards links to the sub-menu of each service, so that you can easily click through and see all dashboards in the service’s folder.

Quick access to other dashboards from the menu


What’s next?

  • We’ll improve the Vacuum Dashboard with more metrics. If you’d like to enhance it with us, you can share your feedback in the comments.
  • A health dashboard for MySQL is on the way. Please share your suggestions in the comments or forum if you’d like to be part of the group shaping PMM. 
  • We have started to work on two new and significant projects: High Availability in PMM and advanced Role-Based Access Control (RBAC). We’d love to hear your needs, use cases, and suggestions. You can quickly book a short call with the product team to collaborate with us. 
  • For Backup Management, we are planning to continue to iterate on the current limitations listed above and make the restore processes as seamless as possible for all database types.

Install PMM 2.32 now or upgrade your installation to V2.32 by checking our documentation for more information about upgrading.

Learn more about Percona Monitoring and Management 2.32

Thanks to Community and Perconians

We love our community and team at Percona, who together shape the future of PMM and help us with all these changes.

You can also join us on our community forums to request new features, share your feedback, and ask for support.

Thank you for your collaboration on the new Home Dashboards:

Cihan Tunalı @SmartMessage

Tyson McPherson @Parts Authority

Paul Migdalen @IntelyCare

Sep 28, 2022

PMM v2.31: Enhanced Alerting, User-Friendly Main Menu, Prometheus Query Builder, Podman GA, and more!

Percona Monitoring and Management v2.31

Autumn brought cool new features and improvements to Percona Monitoring and Management (PMM) in v2.31. An enhanced user experience with the updated main menu, Alerting, and better PostgreSQL autovacuum observability with the new Vacuum dashboard are the major themes we focused on in this release. Check out the 2.31 Release Notes for the full list of new features, enhancements, and bug fixes.

You can get started with PMM in minutes with the PMM Demo to check out the latest version of PMM V2.31.

Some of the highlights in PMM V2.31 include:

General availability of Percona Alerting

We are excited to introduce a streamlined alert setup process in PMM with an overhauled, unified alerting system based on Grafana. 

All Alerting functionality is now consolidated in a single pane of glass on the Alerting page. From here, you can configure, create and monitor alerts based on Percona or Grafana templates. 

The Alert Rules tab has also evolved into a more intuitive interface with an added layer for simplifying complex Grafana rules. You’ll find that the new Percona templated alert option here offers the same functionality available in the Tech Preview of Integrated Alerting but uses a friendlier interface with very advanced alerting capabilities. 

This new Alerting feature is important and generally useful, so it is now enabled by default and ready to use in production!

For more information about Percona Alerting, check out the Alerting documentation.

Deprecated Integrated Alerting

The new Percona Alerting feature fully replaces the old Integrated Alerting Tech Preview available in previous PMM versions. The new alerting brings full feature parity with Integrated Alerting, along with additional benefits like Grafana-based alert rules and a unified alerting command center.

However, alert rules created with Integrated Alerting are not automatically migrated to Percona Alerting. After upgrading, make sure to manually migrate any custom alert rules that you want to transfer to PMM 2.31 using the provided script.

Easier query building, enhanced main menu in PMM 2.31

We have powered up PMM with Grafana 9.1, drawing on its latest features and improvements. Here is the list of features and enhancements shipped in this release:

Redesigned expandable main menu (side menu)

With the 2.31 release, we introduce a more user-friendly and accessible main menu inspired by Grafana’s expandable side menu. PMM dashboards are the heart of monitoring, and we aimed to provide quick and easy access to the most frequently used dashboards from the main menu. From this menu, you’ll be able to browse dashboards with one click, such as Operating System, MySQL, MongoDB, PostgreSQL, etc.

PMM new side menu


Pin your favorite dashboards to the main menu (side menu)

PMM provides many custom dashboards with dozens of metrics to monitor your databases. Most users in an organization use just a handful of dashboards regularly; now it is much easier to access them by saving the most frequently used dashboards to the main menu. You’ll see your saved dashboards under the Starred section on the main menu.

This feature is enabled by default in PMM. You can turn it off by disabling the savedItems feature flag if you have the Server Admin or Grafana Admin role.


Tip:

You can follow these steps to add your dashboard to Starred on the main menu:

  1. Open your dashboard.
  2. Mark it by clicking the star icon next to the dashboard name in the top right corner.
  3. Hover over the Starred icon on the main menu to see all saved dashboards.

Search dashboards by panel titles

While looking for a specific metric or panel inside dashboards, it is easy to forget which dashboard presents it. Now you can quickly find the dashboard you need on the Search dashboard page.

Percona Monitoring and Management v2.31

Command palette

A new shortcut, called the “command palette” in Grafana, is available in this PMM version. You can easily access main menu sections, dashboards, or other tasks using cmd+K (macOS) or ctrl+K (Linux/Windows). Run it in the Explore section to quickly run a query, or in the Preferences section to easily change theme preferences.

Command Palette


Visual Prometheus Query Builder in Explore (Beta)

A new Prometheus query builder was introduced in Grafana 9. This feature allows everyone, especially new users, to build PromQL queries without extensive expertise. The visual query builder UI in Explore lets anyone write queries and understand what they mean.

You can easily switch to the new Prometheus query builder by clicking Builder mode in the top-right corner. Builder mode allows you to build your queries by choosing the metric from a dropdown menu. If you want to continue in text mode, you can switch to Code mode with your text changes preserved. Please check this blog to learn more about Builder mode.

new visual query builder


Add your queries to a dashboard or create a new dashboard from Explore

You’ll probably like this news if you’re a fan of the Explore feature or use it frequently. Now you can create a panel or dashboard from Explore with one click, saving you from copying, pasting, or rewriting queries. You only need to click the “Add to dashboard” button after you run your query. Your panel will then be created automatically with the query and a default visualization, which you can change later by editing the panel on the dashboard. Note that you need the Editor/Admin/SuperAdmin role to save the panel to the chosen dashboard, and you must follow the usual dashboard save flow to keep the added panel; otherwise, you’ll lose the new panel on your dashboard.

add query from Explore


Experimental Vacuum Dashboard

The autovacuum process in PostgreSQL is designed to prevent table bloat by removing dead tuples. These dead tuples can accumulate because of the unique way that PostgreSQL handles MVCC. Because PostgreSQL’s architecture is so unique, the autovacuum process is sometimes not understood well enough to be able to tune its parameters for peak performance. After talking to many customers and realizing that this is a recurring problem, we decided to help our users by providing a dashboard that allows you to monitor metrics related to the vacuum process – thereby helping you tune these parameters for better performance.

Now, you can monitor PostgreSQL vacuum processes with a new experimental dashboard named PostgreSQL Vacuum Monitoring which is available in the Experimental folder. We’re still working on this dashboard to add more metrics. Please let us know your feedback about this dashboard in the comments.

Experimental Vacuum Dashboard


Tip

If you’d like to move the Vacuum experimental dashboard to the PostgreSQL folder or other folders that you internally use to gather all PostgreSQL dashboards, please check this document to see how you can move dashboards to a different folder.

General availability of Podman

We are excited to announce the General Availability (GA) of Podman support for deploying PMM in 2.31.0. We introduced it in 2.29.0 as a preview feature, and it is now production-ready.

Simplified deployment with Database as a Service (DBaaS)

In our constant endeavor and focus on an enhanced user experience, in PMM 2.31.0, we have simplified the deployment and configuration of DBaaS as follows:

  • With PMM 2.31.0, you can easily add a DB cluster from a newly created K8s cluster. All the DB cluster window fields are auto-populated with the values based on the existing K8s cluster. 
  • In PMM 2.31.0, while accessing DBaaS, if you have an existing Kubernetes cluster configured for DBaaS, you will be automatically redirected to the DB Cluster page. Otherwise, you will be redirected to the Kubernetes Cluster page.

What’s next?

  • New UX improvements are baking! We’re working on making our main menu easy to use and minimal. Next release, the main menu will present only monitored services, and you’ll have a clearer and less crowded main menu.
  • The experimental Home dashboard, available in the Experimental folder since v2.30, will replace the current Home dashboard. Please do not forget to share your feedback with us if you have tried it.
  • We’ll improve the vacuum dashboard with more metrics. If you’d like to enhance it with us, you can share your feedback in the comments.
  • We have started to work on two new and big projects: High Availability in PMM and advanced role-based access control (RBAC). We’d love to hear your needs, use cases, and suggestions. You can quickly book a short call with the product team to collaborate with us. 

Thanks to Community and Perconians

We love our community and team at Percona, who help shape PMM and make it better!

Thank you for your collaboration on the new main menu:

Pedro Fernandes, Fábio Silva, Matej Kubinec

Thank you for your collaboration on Vacuum Dashboards:

Anton Bystrov, Daniel Burgos, Jiri Ctvrtka, Nailya Kutlubaeva, Umair Shahid

Percona Monitoring and Management is a best-of-breed open source database monitoring solution. It helps you reduce complexity, optimize performance, and improve the security of your business-critical database environments, no matter where they are located or deployed.

Download Percona Monitoring and Management Today

Sep 08, 2022

Enabling ProcFS UDF in Percona Monitoring and Management

Enabling ProcFS UDF in Percona Monitoring and Management

In my previous blog post, ProcFS UDF: A Different Approach to Agentless Operating System Observability in Your Database, I wrote about the ProcFS UDF MySQL plugin, which allows you to get operating system stats through the MySQL database, without shell access to the server or any local agent installation.

Some of you wondered whether there is a way to use this goodness in Percona Monitoring and Management (PMM), and this blog post will show you exactly how to do that.

Unfortunately, at this point, Percona Monitoring and Management does not support the ProcFS UDF MySQL plugin out of the box. It is in the backlog, along with many other cool things. However, if this particular feature would be valuable to you, please let us know. 

That said, Percona Monitoring and Management is extensible, so you can actually make things work with a little bit of elbow grease using the external exporter functionality.

Here’s how:

1. Configure the MySQL host you wish to monitor with ProcFS UDF as described in this blog post.

2. Add this MySQL server as a remote instance using “Add Instance,” available in the PMM menu in the top right corner.

PMM Add Instance

3. Pick the host to capture metrics.

While you do not need any agent installed on the MySQL server you’re looking to monitor, you’ll need another server to capture metrics from it and pass them to the PMM server. This server will need the PMM client installed and configured. You will also need to install the Node Exporter with ProcFS UDF support:

docker/podman run -p 9100:9100 -d docker.io/perconalab/node_exporter:procfs --collector.mysqlprocfs="MYSQLUSER:MYSQLPASSWORD@tcp(MYSQLHOST:3306)"

If you do not want to use docker, instructions on how to compile patched Node Exporter are included in the previously mentioned ProcFS UDF Introduction blog post.

You can use one host to monitor multiple MySQL servers — just run multiple Node Exporters on different ports.
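For example, monitoring two MySQL servers from the same capture host could look something like this, with each patched Node Exporter mapped to its own host port (user names, passwords, and host names are placeholders):

# First MySQL server, exposed on port 9100 of the capture host.
docker run -d -p 9100:9100 docker.io/perconalab/node_exporter:procfs --collector.mysqlprocfs="USER1:PASSWORD1@tcp(MYSQLHOST1:3306)"

# Second MySQL server, exposed on port 9101 (the container still listens on 9100 internally).
docker run -d -p 9101:9100 docker.io/perconalab/node_exporter:procfs --collector.mysqlprocfs="USER2:PASSWORD2@tcp(MYSQLHOST2:3306)"

Each external exporter agent you add in step 4 then points at its own port via the --listen-port option.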

4. Configure passing information to PMM.

Now that we have MySQL metrics flowing to PMM as a remote instance and Node Exporter running on a different host, which is ready to provide us metrics, how do we establish a connection so that those “node metrics” are attached to the correct host?

First, we need to find the node_id of the remote node we’ve added:

root@client1:# pmm-admin inventory list nodes
Nodes list.

Node type     Node name            Address           Node ID
GENERIC_NODE  mysql1               66.228.62.195
GENERIC_NODE  client1              50.116.36.182     /node_id/9ba48cd4-a7c2-43f6-9fa6-9571d1f0aebf     
….
REMOTE_NODE   procfstest           173.230.136.197   /node_id/29800e10-53fc-43f7-bba6-27c22ab3a483

Second, we need to add the external service for this node:

root@client1:~# pmm-admin inventory add service external --name=procfstest-node --node-id=/node_id/29800e10-53fc-43f7-bba6-27c22ab3a483
External Service added.
Service ID     : /service_id/c477453f-29fb-41e1-aa64-84ee51c38cd8
Service name   : procfstest-node
Node ID        : /node_id/29800e10-53fc-43f7-bba6-27c22ab3a483
Environment    :
Cluster name   :
Replication set:
Custom labels  : map[]
Group          : external

Note: The Node ID we use here is the Node ID of the remote node we are monitoring.

This creates the external service, but it is really an orphan at this point: there is no agent to supply the data.

root@client1:~# pmm-admin inventory add agent external --runs-on-node-id=/node_id/9ba48cd4-a7c2-43f6-9fa6-9571d1f0aebf --service-id=/service_id/c477453f-29fb-41e1-aa64-84ee51c38cd8 --listen-port=9100
External Exporter added.
Agent ID              : /agent_id/93e3856a-6d74-4f62-8e8d-821f7de73977
Runs on node ID       : /node_id/9ba48cd4-a7c2-43f6-9fa6-9571d1f0aebf
Service ID            : /service_id/c477453f-29fb-41e1-aa64-84ee51c38cd8
Username              :
Scheme                : http
Metrics path          : /metrics
Listen port           : 9100

Disabled              : false
Custom labels         : map[]

This command specifies that the external service we’ve created for our remote node will get its data from the agent running on the other node, using the port specified by the --listen-port option, which is the port our ProcFS-enabled Node Exporter is listening on.
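Before checking PMM, it can be worth confirming from the capture host that the exporter really is serving metrics on that port; a quick sanity check could be:

# A non-zero count means the patched Node Exporter is serving node_ metrics on port 9100,
# so the external exporter agent has something to scrape.
curl -s http://localhost:9100/metrics | grep -c '^node_'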

After you’ve done these steps, you should see OS data for the remote host appear on the home dashboard.

PMM Remote Host Percona

And even more important, you’ll have OS metrics populated in the Node Summary dashboard.

Node Summary dashboard PMM

Summary

While this is much harder than it has to be, I think it serves as a great proof of concept (POC) of what is possible with the ProcFS UDF MySQL plugin — getting the full power of the operating system and MySQL observability without requiring any shell access to MySQL.

I think this can be extremely valuable for MySQL provided as a Database as a Service (DBaaS), as well as for enterprises practicing great separation between their teams responsible for operating system and database operations!

Learn more about ProcFS UDF

Sep 06, 2022

Percona Private DBaaS API in Action

Percona Private DBaaS API

Percona Monitoring and Management (PMM) comes with Database as a Service (DBaaS) functionality that allows you to deploy highly-available databases through a simple user interface and API. PMM DBaaS is sort of unique for various reasons:

  • It is fully open source and free
  • It runs on your premises – your data center or public cloud account
  • The databases are deployed on Kubernetes and you have full control over your data

PMM features a robust API, but while I have seen demos on the internet demonstrating the UI, I have never seen anything about the API. The PMM API is great for building tools that automate workflows and operations at scale. In this blog post, I’m going to play the role of a developer using the PMM API to deploy and manage databases. I also created an experimental CLI tool in Python to showcase a possible integration.

Percona Private DBaaS API in Action

Preparation

At the end of this step, you should have the following:

  • Percona Monitoring and Management up and running
  • Kubernetes cluster 
  • PMM API token generated

First off, you need a PMM server installed and reachable from your environment. See the various installation ways in our documentation.

For DBaaS to work you would also need a Kubernetes cluster. You can use minikube, or leverage the recently released free Kubernetes capability (in this case it will take you ~2 minutes to set everything up).

To automate various workflows, you will need programmatic access to Percona Monitoring and Management. The recommended way is API tokens. To generate one, please follow the steps described here. Please keep in mind that, for now, you need admin-level privileges to use DBaaS.

Using API

I will use our API documentation for all the experiments here. DBaaS has a dedicated section. In each step, I will provide an example with the cURL command, but keep in mind that our documentation has examples for cURL, Python, Golang, and many more. My PMM server address is 34.171.88.159 and my token is eyJrIjoiNmJ4VENyb0p0NWg1ODlONXRLT1FwN1N6YkU2SW5XMmMiLCJuIjoiYWRtaW5rZXkiLCJpZCI6MX0=. Just replace these with yours.

In the demo below, you can see me playing with the PMM API through the percona-dbaas-cli tool that I created to demonstrate a possible integration for your teams. The goal here is to deploy a database with the API and connect to it.

Below are the basic steps describing the path from setting up PMM to deploying your first database.

Connection check

To quickly check if everything is configured correctly, let’s try to get the PMM version.

API endpoint: /v1/version

CLI tool:

percona-dbaas-cli pmm version

 

cURL:

curl -k --request GET \
     --url https://34.171.88.159/v1/version \
     --header 'Accept: application/json' \
     --header 'Authorization: Bearer eyJrIjoiNmJ4VENyb0p0NWg1ODlONXRLT1FwN1N6YkU2SW5XMmMiLCJuIjoiYWRtaW5rZXkiLCJpZCI6MX0='

It should return information about the PMM server. If you get an error, there is no way we can proceed.

In the CLI tool, if you have not configured access to the PMM, it will ask you to do it first.

Enable DBaaS in PMM

Database as a Service in PMM is in the technical preview stage at the time of writing this blog post, so we are going to enable it if you have not already done so during installation.

API endpoint: /v1/Settings/Change

CLI tool:

percona-dbaas-cli dbaas enable

 

cURL:

curl -k --request POST \
     --url https://34.171.88.159/v1/Settings/Change \
     --header 'Accept: application/json' \
     --header 'Authorization: Bearer eyJrIjoiNmJ4VENyb0p0NWg1ODlONXRLT1FwN1N6YkU2SW5XMmMiLCJuIjoiYWRtaW5rZXkiLCJpZCI6MX0=' \
     --data '{"enable_dbaas": true}'

Now you should see the DBaaS icon in your PMM user interface and we can proceed with further steps. 

Register Kubernetes cluster

At this iteration, PMM DBaaS uses Percona Kubernetes Operators to run databases. You need to register the Kubernetes cluster in PMM by submitting its kubeconfig.

API endpoint: /v1/management/DBaaS/Kubernetes/Register

CLI tool:

percona-dbaas-cli dbaas kubernetes-register

 

Registering the Kubernetes cluster with cURL requires some magic. First, you need to put the kubeconfig into a variable, flattened onto a single line. We have an example in our documentation:

KUBECONFIG=$(kubectl config view --flatten --minify | sed -e ':a' -e 'N' -e '$!ba' -e 's/\n/\\n/g')

curl -k --request POST \
      --url "http://34.171.88.159/v1/management/DBaaS/Kubernetes/Register" \
     --header "accept: application/json" \
     --header "authorization: Bearer eyJrIjoiNmJ4VENyb0p0NWg1ODlONXRLT1FwN1N6YkU2SW5XMmMiLCJuIjoiYWRtaW5rZXkiLCJpZCI6MX0=" \
     --data "{ \"kubernetes_cluster_name\": \"my-k8s\", \"kube_auth\": { \"kubeconfig\": \"${KUBECONFIG}\" }}"

It is much more elegant in Python or other languages. We will think about how to simplify this in the following iterations.

Once the Kubernetes cluster is registered, PMM does the following:

  1. Deploys Percona Operators for MySQL and for MongoDB
  2. Deploys the Victoria Metrics Operator, so that we can get monitoring data from Kubernetes into PMM

Get the list of Kubernetes clusters

Mostly to check if the cluster was added successfully and if the Operators were installed.

API endpoint: /v1/management/DBaaS/Kubernetes/List

CLI tool:

percona-dbaas-cli dbaas kubernetes-list

 

cURL:

curl -k --request POST \
     --url https://34.171.88.159/v1/management/DBaaS/Kubernetes/List \
     --header 'Accept: application/json' \
     --header 'Authorization: Bearer eyJrIjoiNmJ4VENyb0p0NWg1ODlONXRLT1FwN1N6YkU2SW5XMmMiLCJuIjoiYWRtaW5rZXkiLCJpZCI6MX0='

In the CLI tool, I decided to have a nicely formatted list of the clusters, as it is possible to have many registered in a single PMM server.

Create the database

Right now our DBaaS solutions support MySQL (based on Percona XtraDB Cluster) and MongoDB, thus there are two endpoints to create databases:

API endpoints: 

CLI tool:

percona-dbaas-cli dbaas databases-create

 

cURL: 

curl -k --request POST \
     --url https://34.171.88.159/v1/management/DBaaS/PSMDBCluster/Create \
     --header 'Accept: application/json' \
     --header 'Authorization: Bearer eyJrIjoiNmJ4VENyb0p0NWg1ODlONXRLT1FwN1N6YkU2SW5XMmMiLCJuIjoiYWRtaW5rZXkiLCJpZCI6MX0=' \
     --header 'Content-Type: application/json' \
     --data '{"kubernetes_cluster_name": "my-k8s", \"expose\": true}'

In the experimental CLI tool, I decided to go with a single command, where the user can specify the engine with the --engine flag.

Notice that I also set the expose flag to true, which instructs the Operator to create a LoadBalancer Service for my cluster. The cluster is going to be publicly exposed to the internet, which is not a good idea for production.

There are various other parameters that you can use to tune your database when interacting with the API.

For now, there is a certain gap between the features that Operators provide and the API. We are heading towards more flexibility, stay tuned for future releases.

Get credentials and connect

It will take some time to provision the database: in the background, Persistent Volume Claims are provisioned, the cluster is formed, and networking gets ready. You can get the list of the databases and their statuses from the /v1/management/DBaaS/DBClusters/List endpoint.
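For completeness, here is a cURL sketch for that list call, following the same pattern as the other requests in this post; I am assuming it is scoped by the same kubernetes_cluster_name used above:

curl -k --request POST \
     --url https://34.171.88.159/v1/management/DBaaS/DBClusters/List \
     --header 'Accept: application/json' \
     --header 'Authorization: Bearer eyJrIjoiNmJ4VENyb0p0NWg1ODlONXRLT1FwN1N6YkU2SW5XMmMiLCJuIjoiYWRtaW5rZXkiLCJpZCI6MX0=' \
     --header 'Content-Type: application/json' \
     --data '{"kubernetes_cluster_name": "my-k8s"}'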

We finally have the cluster up and running. It is time to get the credentials:

API endpoints:

CLI tool:

percona-dbaas-cli dbaas get-credentials

 

cURL:

curl --request POST \
     --url https://34.171.88.159/v1/management/DBaaS/PXCClusters/GetCredentials \
     --header 'Accept: application/json' \
     --header 'Authorization: Bearer eyJrIjoiNmJ4VENyb0p0NWg1ODlONXRLT1FwN1N6YkU2SW5XMmMiLCJuIjoiYWRtaW5rZXkiLCJpZCI6MX0=' \
     --header 'Content-Type: application/json' \
     --data '{"kubernetes_cluster_name": "my-k8s","name": "my-mysql-0"}'

This will return the endpoint to connect to, the user, and the password. Use your favorite CLI tool or ODBC to connect to the database.
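For example, connecting could look something like this, with the placeholders replaced by the values returned from the GetCredentials call:

# MySQL (Percona XtraDB Cluster) cluster:
mysql -h <endpoint-host> -P 3306 -u <username> -p'<password>'

# MongoDB (Percona Server for MongoDB) cluster:
mongosh "mongodb://<username>:<password>@<endpoint-host>:27017"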

Conclusion

Automated database provisioning and management with a Database as a Service solution is becoming a minimal requirement for agile teams. Percona is committed to helping developers and operations teams run databases anywhere. You can deploy the fully open source Percona Monitoring and Management in the cloud or on-premises and provide a self-service experience to your teams, not only through the UI but through the API as well.

Right now PMM DBaaS is in technical preview and we encourage you to try it out. Feel free to tell us about your experience in our community forum.

Sep 02, 2022

PMM v2.30: New Dashboards, Improved DBaaS Experience, Free K8s Cluster for Testing, and More!

Percona Monitoring and Management v2.30

In this v2.30 release, we have focused on improving Percona Monitoring and Management (PMM) usability, refining the dashboards (for effortless root cause analysis) and Database as a Service (DBaaS) functionality, and making onboarding more seamless to align with your needs. Your valuable feedback, along with your contributions, is important to us. Please keep contributing to take PMM to the next level and make the world more open source.

Note that you can get started with PMM in minutes with the PMM Demo to check out the latest version of PMM V2.30.

Here are some of the highlights in PMM V2.30:

New experimental dashboards

We created new experimental dashboards based on your feedback and requests. The new Home Dashboard was designed to give you actionable insights while monitoring numerous nodes or services without excessive loading time. Our objective with the all-new MongoDB dashboards is to provide collection- and document-based metrics, as requested by PMM users.

Important: These experimental dashboards are subject to change. It is recommended to use these dashboards for testing purposes only.

New Home Dashboard (experimental)

The redesigned and improved Home Dashboard was introduced as a more user-friendly and insightful experimental dashboard in PMM version 2.30.0. The drivers behind the new Home Dashboard design were your requests and feedback about the long loading times of the existing Home Dashboard when monitoring a large number of nodes or services in PMM. Repeating a row of panels for each node can make the Home Dashboard UI take a considerable time to load, which some users reported as a performance problem. So we reworked the Home Dashboard to provide a performance boost when monitoring at large scale, and the new experimental dashboard ensures fast retrieval of data. For more information, read more on the experimental Home Dashboard metrics.

New Home Dashboard (experimental)

Please do not hesitate to give feedback by posting your comments here.

Tip: If you’d like to use the experimental Home dashboard as your default dashboard, you can follow these steps:
  • Open the new Home Dashboard and star it
  • Open the “Configuration” page by clicking it from the User Profile on the main menu
  • Select the new dashboard from the Home Dashboard dropdown under Preferences

New MongoDB Dashboard (experimental)

After dozens of calls with PMM users who monitor MongoDB metrics in PMM, we collected all the feedback and put it together in new experimental dashboards. (Special thanks to Kimberly Wilkins and Michael Patrick from Percona.) We have introduced the following experimental dashboards to collect more MongoDB metrics and data and present them in a more structured way:

1. Experimental MongoDB collection overview

This dashboard contains panels for data about the hottest collections in the MongoDB database. For more information, see the documentation.

Experimental MongoDB collection overview

2. Experimental MongoDB collection details

This experimental dashboard provides detailed information about the top collections by document count, size, and document read for MongoDB databases. For more information, see the documentation.

Experimental MongoDB collection details

3. Experimental MongoDB oplog details

This real-time dashboard contains oplog details such as Recovery Window, Processing Time, Buffer Capacity, and Oplog Operations. For more information, see the documentation.

oplog details dashboard

Tip: If you’d like to move the MongoDB experimental dashboards to the MongoDB folder, or to other folders that you use internally to gather all MongoDB dashboards, you can follow these steps:
Note: You need at least an Editor role to make this change.
  • Open the dashboard that you want to move to another folder
  • Click the gear icon to open Dashboard Settings
  • Select the destination folder from the Folder dropdown in the General section
  • Click the “Save Dashboard” button on the left to save the change

Improved user experience for DBaaS

Usability is one of the themes on our roadmap, and it is now more important than ever for us to deliver the best PMM experience in minutes. So we applied a couple of improvements to the configuration and deployment of DBaaS database clusters:

  • In the 2.30.0 release of PMM, we simplified the process of registering the K8s cluster to suggest defaults and populated all possible defaults on the database creation screen.
  • If you have configured the DBaaS feature with K8s registering in it, the creation of the database will be a “one click of the button” action.
  • For more information, see the documentation.
  • We have simplified the DBaaS and Percona Platform connection configuration by suggesting and automatically setting the public address of the PMM server if it’s not set up. This happens when connecting to Percona Platform, adding a K8s cluster, or adding a database cluster. 
  • If you are not a K8s user for now but want to test the DBaaS functionality of PMM, Percona is offering a cluster for free via the Percona Platform portal. Read more in the Private DBaaS with Free Kubernetes Cluster blog post.

Operators support

PMM now supports Percona Operator for MySQL based on Percona XtraDB Cluster.

What’s next?

We’re working on a new main menu on top of the Grafana 9.1 menu structure and on Vacuum Monitoring with a new experimental dashboard. If you have any feedback or suggestions, please get in touch with us.

Also, another topic on our next release agenda is Podman GA. Please share your experience/feedback with Podman. 

Run PMM Server with Podman now!

We know that you’re waiting for news about Alerting in PMM, and we’ll get back to you with good news in v2.31: we are combining our Integrated Alerting with Grafana Alerting to make alerts easier to set up and manage. Please follow our emails and release notes to be informed about upcoming releases.

Thanks to the Community and Perconians

We love our Community and team at Percona who help us to shape PMM and make it better! 

Thank you for your collaboration on the new Home Dashboard:

  • Daniel Burgos
  • Anton Bystrov

Thank you for your collaboration on MongoDB dashboards:

  • Anton Bystrov
  • Kimberly Wilkins
  • Michael Patrick

We appreciate your feedback:

Learn more about Percona Monitoring and Management

Aug 23, 2022

Installing PMM Server from RHEL Repositories

Installing PMM Server from RHEL Repositories

We currently provide multiple ways to install Percona Monitoring and Management (PMM) Server, with the primary way being to use Docker:

Install Percona Monitoring and Management

or Podman:

Podman – Percona Monitoring and Management

We implemented it this way to simplify deployments, as the PMM server uses multiple components like Nginx, Grafana, PostgreSQL, ClickHouse, VictoriaMetrics, etc. So we want to avoid the pain of configuring each component individually, which is labor intensive and error-prone.

For this reason, we also provide bundles as a virtual appliance:

Virtual Appliance – Percona Monitoring and Management

or as Amazon AMI:

AWS Marketplace – Percona Monitoring and Management

Recently, for even simpler and effort-free deployments, we partnered with HostedPMM. They will install and manage the PMM server for you with a 1-Click experience.

However, we still see some interest in having PMM Server installed from repositories, using, for example, rpm packages. For this reason, we have crafted Ansible scripts that you can use on a RedHat 7-compatible system.

The scripts are located here:

Percona-Lab/install-repo-pmm-server (github.com)

Please note these scripts are ONLY for EXPERIMENTATION and quick evaluation of PMM-server, and this way IS NOT SUPPORTED by Percona.

The commands to install PMM Server:

git clone https://github.com/Percona-Lab/install-repo-pmm-server
cd install-repo-pmm-server/ansible
ansible-playbook -v -i 'localhost,' -c local pmm2-docker/main.yml
ansible-playbook -v -i 'localhost,' -c local /usr/share/pmm-update/ansible/playbook/tasks/update.yml

You should run these on an empty RedHat 7 system without any database or web-server packages installed.

Let us know if you have interest in this way of deploying PMM server!

Jul 13, 2022

How Percona Monitoring and Management Helps You Find Out Why Your MySQL Server Is Stalling

Percona Monitoring and Management MySQL Server Is Stalling

In this blog, I will demonstrate how to use Percona Monitoring and Management (PMM) to find out the reason why the MySQL server is stalling. I will use only one typical situation for the MySQL server stall in this example, but the same dashboards, graphs, and principles will help you in all other cases.

Nobody wants it, but database servers may stop handling connections at some point. As a result, the application will slow down and then stop responding.

It is always better to know about the stall from a monitoring instrument rather than from your own customers.

PMM is a great help in this case. If you look at its graphs and notice that many of them have started showing unusual behavior, you need to react. In the case of a stall, you will see that some activity either dropped to zero or spiked to a high number, and in both cases it stays there, unchanged.

Let’s review the dashboard “MySQL Instance Summary” and its graph “MySQL Client Thread Activity” during normal operation:

PMM MySQL Instance Summary

As you can see, the number of active threads fluctuates, and this is normal for any healthy application: even if all connections request data, MySQL puts some threads into an idle state while they wait for the storage engine to prepare their data, or while the client application processes the retrieved data.

The next screenshot was taken when the server was stalling:

Percona monitoring dashboard

In this picture, you see that the number of active threads is near the maximum. At the same time, the number of “MySQL Temporary Objects” dropped to zero. This by itself shows that something unusual happened, but to understand the picture better, let’s examine the storage engine graphs.

I, like most MySQL users, used InnoDB for this example. Therefore the next step for figuring out what is going on would be to examine graphs in the “MySQL InnoDB Details” dashboard.

MySQL InnoDB Details

First, we are seeing that the number of rows that InnoDB reads per second went down to zero, as well as the number of rows written. This means that something prevents InnoDB from performing its operations.

MySQL InnoDB Details PMM

More importantly, we see that all I/O operations were stopped. This is unusual even on a server that does not handle any user connection: InnoDB always performs background operations and is never completely idle.

Percona InnoDB

You may see this in the “InnoDB Logging Performance” graph: InnoDB still uses log files but only for background operations.

InnoDB Logging Performance

InnoDB Buffer Pool activity is also stopped. What is interesting here is that the number of dirty pages went down to zero. This is visible on the “InnoDB Buffer Pool Data” graph: dirty pages are colored in yellow. This actually shows that InnoDB was able to flush all dirty pages from the buffer pool when InnoDB stopped processing user queries.

At this point we can make the first conclusion that our stall was caused by some external lock, preventing MySQL and InnoDB from handling user requests.

MySQL and InnoDB

The “Transaction History” graph confirms this guess: there are no new transactions and InnoDB was able to flush all transactions that were waiting in the queue before the stall happened.

We can conclude that we are NOT experiencing hardware issues.

MySQL and InnoDB problems

This group shows why we are experiencing the stall. As you can see in the “InnoDB Row Lock Wait Time” graph, the wait time rose to its maximum value around 14:02, then dropped to zero. There are no row lock waits registered during the stall time.

This means that at some point all InnoDB transactions were waiting for a row lock and then failed with a timeout. Still, they had to be waiting for something. Since there are no hardware issues and InnoDB functions healthily in the background, this means that all threads are waiting for a global MDL lock, created by the server.

If we have Query Analytics (QAN) enabled, we can find such a command easily.

Query Analytics (QAN)

For the selected time frame, we can see that many queries were running until a certain point when the query with id 2 was issued; the other queries then stopped running and restarted a few minutes later. The query with id 2 is FLUSH TABLES WITH READ LOCK, which prevents any write activity once the tables are flushed.

This is the command that caused a full server stall.

Once we know the reason for the stall we can perform actions to prevent similar issues in the future.
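As an illustration of one such action, here is a hedged sketch of how the session holding the global read lock could be identified from SQL, assuming the performance_schema metadata lock instrumentation is available; you can then review the offending connection and, if appropriate, kill it.

-- Enable MDL instrumentation if it is not already on (locks taken before enabling will not show up).
UPDATE performance_schema.setup_instruments
   SET enabled = 'YES', timed = 'YES'
 WHERE name = 'wait/lock/metadata/sql/mdl';

-- Find the session that holds the GLOBAL metadata lock taken by FLUSH TABLES WITH READ LOCK.
SELECT ml.object_type, ml.lock_type, ml.lock_status,
       t.processlist_id, t.processlist_user, t.processlist_info
  FROM performance_schema.metadata_locks ml
  JOIN performance_schema.threads t ON t.thread_id = ml.owner_thread_id
 WHERE ml.object_type = 'GLOBAL';

-- If that session is safe to terminate: KILL <processlist_id>;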

Conclusion

PMM is a great tool that helps you not only identify that your database server is stalling but also figure out the reason for the stall. I used only one scenario in this blog, but you may use the same dashboards and graphs to find other reasons for a stall, such as a DoS attack, hardware failure, a high number of I/O operations caused by poorly optimized queries, and many more.

Jun 06, 2022

Percona Platform and Percona Account Benefits

Percona Platform and Percona Account

On the 15th of April, Percona introduced the general availability of Percona Platform, and with that step, our company started a new, interesting period.

Popular questions we have received after this date include: 

  • What is Percona Platform, and how does it differ from Percona Account? 
  • What is the difference between Percona Platform and Percona Portal? 
  • Why should Percona users connect to Percona Platform?

In this blog post, I will answer these questions and provide a summary of the benefits of a Percona Account. 

What is Percona Platform, and how does it differ from Percona Account? 

First, let’s make sure we understand the following concepts:

Percona Platform – A unified experience for developers and database administrators to monitor, manage, secure, and optimize database environments on any infrastructure. It includes open source software and tools, access to Percona Expert support, automated insights, and self-service knowledge base articles.

Percona Account – A single login provides users with seamless access across all Percona properties, as well as insights into your database environment, knowledge base, support tickets, and guidance from Percona Experts. By creating a Percona Account, you get access to Percona resources like Forums, Percona Portal, ServiceNow (for current customers), and, in the future, other Percona resources (www.percona.com, Jira, etc.).

Percona Portal – One location for your Percona relationship, with a high-level overview of your subscription and key resources like the knowledge base, support tickets, and the ability to access your account team from anywhere.

Next, let’s walk through the flow users go through when they decide to start using Percona Platform. A Percona Account is foundational to Percona Platform. Users can create their own account here on Percona Portal. There are two options for Percona Account creation: either via an email and password or by using a social login from Google or Github.

What is the difference between Percona Platform and Percona Portal? 

As the descriptions above suggest, Percona Platform is the combined experience of open source software, services, tools, and resources that Percona offers to all users. Percona Portal is just one part of Percona Platform and can be thought of as its core component, the place where registered users discover the value Percona offers.

Why should Percona users connect to Percona Platform?

Percona Platform brings together enterprise-ready distributions of MySQL, PostgreSQL, and MongoDB and a range of open source tools for database monitoring, backup, and management, making it easier to run complex database environments.

Percona Platform consists of several essential units: 

  • Percona Account,
  • Percona Monitoring & Management, which provides alerting, Advisor insights for users’ environments, backups, private DBaaS, and more,
  • Percona Portal,
  • Support and services.

Percona Monitoring and Management, DBaaS 

The heart of Percona Platform is Percona Monitoring and Management (PMM), which provides the observability required to understand database health while offering actionable insights to remediate database incidents or performance issues. 

PMM has already won the hearts of millions of DBAs around the world with its Query Analytics (QAN) and Advisors, and now, with the Percona Platform release, we have made it even cooler!

The new technical preview feature that I’m talking about is Private DBaaS. 

Taking less than 20 minutes to configure, Percona Private DBaaS enables you to provide self-service databases to your internal teams in any environment.

Percona Private DBaaS enables developers to create and manage database clusters through a familiar UI and a well-documented API, providing completely open source, ready-to-use MySQL or MongoDB clusters that contain all the necessary Kubernetes settings. Each cluster is set up with high availability, load balancing, and essential monitoring and management.

The Private DBaaS feature of PMM is free from vendor lock-in and does not enforce any cloud or on-premises infrastructure usage restrictions.

Learn more about DBaaS functionality in our Documentation.

Percona Portal 

Percona Portal is a simple UI to check and validate your relationship with Percona software and services. Stay tuned for updates. 

percona platform

To start getting value from Percona Portal, just create an organization in the Portal and connect Percona Monitoring & Management using a Percona Account. Optionally, you can also invite other members to this group in order to share a common view of your organization, any PMM connection details, a list of current organization members, etc.

When existing Percona customers create their accounts with corporate email, Percona Portal already knows about their organization. They are automatically added as members and can see additional details about their account, like opened support tickets, entitlements, and contact details.

Percona Portal

Percona Account and SSO

Percona Platform is a combination of open source software and services. You can start discovering the software advantages right away by installing PMM and monitoring and managing your environments. You might ask yourself what role the Platform connection plays here. Let’s briefly clarify this.

SSO is the first advantage you will see upon connecting your PMM instance to Percona Platform. All users in the organization are able to sign in to PMM using their Percona Account credentials. They will also be granted the same roles in PMM and Portal. For example, if a user is an admin on Percona Portal, they do not need to create a new user in PMM but will instead automatically be an admin in PMM. Portal Technical users are granted the Viewer role in PMM.

Let’s assume a Percona Platform user has already created an account, has their own organization with a couple of members in it, and has also connected PMM to the Platform with a Percona Account. They are then also able to use SSO for that organization.

Percona Advisors

Advisors provide automated insights and recommendations within Percona Monitoring and Management, ensuring your database performs at its best. Constantly evolving with technology and user feedback, the Advisors check for:

  • Availability at risk,
  • Replication inconsistencies,
  • Durability at risk,
  • Passwordless users,
  • Insecure connections,
  • Unstable OS configuration,
  • Available performance improvements and more.

The Advisors workflow consists of the following:

  1. If the user has just started PMM and set up monitoring of some databases and environments, we provide them with a set of basic Advisors. To check the list, the user should go to the Advisors section in PMM.
    Percona Advisors
  2. After the user connects PMM and Platform on Percona Portal, this list gets bigger and the set of advisors is updated. 
  3. The final step happens with a Percona Platform subscription. If a user subscribes to Percona Platform, we provide the most advanced list of Advisors, which covers many edge cases and checks database environments in depth.

See the full list of Advisors and the various tiers in our documentation here.

As an open source company, we are always eager to engage the community and invite them to innovate with us. 

Here you can find the developer guides on how to create your own Advisors.

How to register a Percona Account?

There are several options to create a Percona Account. The first is to specify your basic data: email, last name, first name, and password.

  1. Go to “Don’t have an account? Create one” on the landing page for Percona Portal.
  2. Enter a minimal set of information (work email, password, first and last name), confirm that you agree with Percona’s Terms of Service and the Privacy Policy agreement, and hit Create.
  3. To complete your registration, check your email and follow the steps provided there. Click the confirmation link in order to activate your account. You may see messages coming from Okta; this is because Percona Platform uses Okta as an identity provider.
    register percona account
  4. After you have confirmed your account creation, you can log in to Percona Portal.

You can also register a Percona Account if you already have a Google or Github account created.

When you create a Percona Account with Google or Github, Percona will store only your email and authenticate you using data from the given identity provider.

  1. Make sure you have an account on Google/Github.
  2. On the login page, there are dedicated buttons that will instruct you to confirm account information usage by Percona.
  3. Confirm you are signing in to your account if 2FA is enabled for your Google account.
  4. Hurray! Now you have an active account on Percona Portal.

The next option for registering a Percona Account is to continue with Github. The process is similar to Google.

  1. An important precondition has to be met prior to registration and account usage: set your email address in Github to Public.
  2. From Settings, go to the Emails section and uncheck the Keep my email address private option. These settings are saved automatically. If you do not want to change this option in Github, you can use another account, such as Google, or create an account with your email instead.
  3. If you already registered an account, you are ready to go with Percona Portal. Use your Google/Github credentials every time you log in to your account.

Conclusions

Start your Percona Platform experience with Percona Monitoring & Management, an open source database monitoring, management, and observability solution for MySQL, PostgreSQL, and MongoDB.

Set up Percona Monitoring and Management (version >2.27.0) by following the PMM Documentation.
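For reference, a typical Docker-based PMM Server deployment looks roughly like the following. Treat this as a minimal sketch and check the PMM documentation for the currently recommended image tag and options:

docker pull percona/pmm-server:2
docker volume create pmm-data
docker run --detach --restart always \
  --publish 443:443 \
  --volume pmm-data:/srv \
  --name pmm-server \
  percona/pmm-server:2

Once the container is running, the PMM web interface is available over HTTPS on port 443 of the host, and you can register your database nodes with the pmm-admin client.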

The next step is to start exploring Percona Platform and get more value and experience from Percona database experts by creating a Percona Account on Percona Portal. The key benefits of Percona Platform include:

  • A Percona Account as a single authentication mechanism to use across all Percona resources,
  • Access to Percona content like blogs, forums, and the knowledge base,
  • Percona Advisors that help optimize, manage, and monitor database environments.

Once you create a Percona Account, you get the basic Advisors, open source software, and access to community support and the forums.

When you connect Percona Monitoring and Management to your Percona Account, you get a more advanced set of Advisors covering your systems’ security, configuration, performance, and data design.

A Percona Platform subscription offers even more advanced Advisors for Percona customers, as well as a fully supported software experience, private DBaaS, and an assigned Percona Expert to ensure your success with the Platform.

Please also consider sharing your feedback and experience with Percona on our Forums.

Visit Percona Platform online help to view Platform-related documentation, and follow our updates on the Platform what’s new page.

Apr
08
2022
--

A Story of a Failed Percona Monitoring and Management Update

Failed Percona Monitoring and Management Update

Just the other day, one of my Percona Monitoring and Management (PMM) instances failed to update and ended up in a funny state:

percona monitoring and management

 

No matter how many times I clicked the “check for updates” button in the lower right, PMM still showed “no updates available”. Basically, PMM forgot what version it was, and because of that it thought there were no updates available.

If you run into this (or possibly other PMM update problems), you may find the following helpful to troubleshoot or resolve the problem.

I deployed PMM through Docker, so the first step is to get inside the Docker container to be able to investigate and troubleshoot the installation.

docker exec -it pmm-server bash

Percona Monitoring and Management (as of version 2.26) learns its own version from the version of the “pmm-update” package. In my case, this did not work due to a corrupted RPM database:

[root@812fb43b3a96 opt]# yum info pmm-update
error: rpmdb: BDB0113 Thread/process 13841/140435865503552 failed: BDB1507 Thread died in Berkeley DB library
error: db5 error(-30973) from dbenv->failchk: BDB0087 DB_RUNRECOVERY: Fatal error, run database recovery
error: cannot open Packages index using db5 -  (-30973)
error: cannot open Packages database in /var/lib/rpm
CRITICAL:yum.main:

Error: rpmdb open failed

It is not clear to me why the database became corrupted in my case, but unfortunately, it did, and it needed to be repaired:

mv /var/lib/rpm/__db* /tmp/
rpm --rebuilddb
rm -rf /var/cache/yum

These commands back up the current RPM database files to /tmp/ (just in case), rebuild the database, and remove the already downloaded packages (the last step has nothing to do with the database corruption, but it reduces reliance on local state).
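Before triggering the update, it is worth confirming that the package database is readable again; re-running the earlier command should now print package information instead of the rpmdb error:

yum info pmm-update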

# supervisorctl start pmm-update-perform
pmm-update-perform: started

This command starts the update process. You could probably also trigger the update through the web GUI at this point, but I want you to be aware of this option, too.

As this simply starts the background update process, you do not get much information about what is going on. To find that out, you need to check the log:

tail /srv/logs/pmm-update-perform.log
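If you prefer to watch the progress live rather than re-running the command, following the same file works as well:

tail -f /srv/logs/pmm-update-perform.log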

Note that while the update is running, the PMM instance may not be available at all. You could see this message:

Do not freak out – wait for the update to finish before trying to use PMM again. 

The update log does not really have a clear message stating that the update completed successfully. Here is, however, an example of what the end of the log from a successful update may look like:

PLAY RECAP *********************************************************************
localhost                  : ok=79   changed=31   unreachable=0    failed=0    skipped=16   rescued=0    ignored=0

time="2022-04-01T15:36:39Z" level=info msg="Waiting for Grafana dashboards update to finish..."

With that, the update has been completed and my PMM instance has successfully updated to the latest version.

I hope this blog post will help you troubleshoot your PMM update problems, too, if you happen to run into one!
