Oct
14
2021
--

Percona Is a Finalist for Best Use of Open Source Technologies in 2021!

Percona Finalist Open Source

Percona has been named a finalist in the Computing Technology Product Awards for Best Use of Open Source Technologies. If you’re a customer, partner, or just a fan of Percona and what we stand for, we’d love your vote.

With Great Power…

You know the phrase. We’re leaving it to you and your peers in the tech world to push us to the top.

Computing’s Technology Product Awards are open to a public vote until October 29. Vote Here!

percona Best Use of Open Source Technologies

Thank you for supporting excellence in the open source database industry. We look forward to the awards ceremony on Friday, November 26, 2021.

Why We’re an Open Source Finalist

A contributing factor to our success has been Percona Monitoring and Management (PMM), an open source database monitoring solution. It helps you reduce complexity, optimize performance, and improve the security of your business-critical MySQL, MongoDB, PostgreSQL, and MariaDB database environments, no matter where they are located or deployed. It’s impressing customers, and even competitors, in the industry.

To see what earned Percona a finalist spot, learn more about Percona Monitoring and Management, and be sure to follow @Percona on all platforms.

Vote Today!

Oct
06
2021
--

How to Hide Credentials from Percona Monitoring and Management Client Commands

Hide Credentials from Percona Monitoring and Management Client Commands

In this short blog post, we are going to review how to avoid using credentials in the Percona Monitoring and Management (PMM) client command line when adding new exporters. We will use an example with the MySQL exporter, but it is extensible to others (PostgreSQL, MongoDB, etc.).

In the online documentation we can see the basic steps for adding a new MySQL exporter:

  1. Configure the PMM client:
     pmm-admin config ...
  2. Add the MySQL exporter:
     pmm-admin add mysql --username=pmm --password=pass

The issue with this approach is that the user and password are there in plain sight for anyone to see, be it through the shell history or via commands like ps aux.
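For illustration only (output and timing depend on your system), anyone on the host can catch the credentials while the command runs, and they also persist in the shell history:

shell> pmm-admin add mysql --username=pmm --password=pass
shell> ps aux | grep '[p]mm-admin'      # while pmm-admin runs, its full command line (password included) is visible
shell> history | grep 'pmm-admin add'   # ...and the command is recorded in the shell history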

The PMM client uses kingpin to parse its arguments, so we can use kingpin's ability to read arguments from a file to handle credentials more securely. We just need to create files containing the arguments we want to keep out of the command line, like:

shell> cat <<EOF >/home/agustin/pmm-admin-config.conf
--server-insecure-tls
--server-url=https://admin:admin@X.X.X.X:443
EOF

shell> cat <<EOF >/home/agustin/pmm-admin-mysql.conf
--username=pmm
--password=pmmpassword
EOF

Note that the above commands were used for simplicity in showing how the files can be created. If you are worried about leaving traces in the shell command history, use vim (or your editor of choice) to create the files and their contents.

We can use these files in the following way, instead:

shell> pmm-admin config @/home/agustin/pmm-admin-config.conf

shell> pmm-admin add mysql @/home/agustin/pmm-admin-mysql.conf

We can still use other arguments in the command directly. For example, for the MySQL command:

shell> pmm-admin add mysql --port=6033 @/home/agustin/pmm-admin-mysql.conf

PMM clients will not store database credentials within themselves, but will instead request this data from the PMM server. After the exporters are added and running, remove the pmm-admin conf files.
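For example, assuming the file paths used above (plain rm works too; shred additionally overwrites the contents before unlinking):

shell> shred -u /home/agustin/pmm-admin-config.conf /home/agustin/pmm-admin-mysql.conf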

Using Shell Variables

Another way of achieving this is to use “hidden” variables, like:

shell> read -s pmm_mysql_pass
[type_the_password_here]
shell> pmm-admin add mysql --username=pmm --password=${pmm_mysql_pass}

You can then even wipe the variable out if you want:

shell> pmm_mysql_pass=""
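Alternatively, you can drop the variable entirely:

shell> unset pmm_mysql_pass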

Complete the 2021 Percona Open Source Data Management Software Survey

Have Your Say!

Oct
05
2021
--

Configuring a MongoDB Sharded Cluster with PMM2 – Part 2

Configure MongoDB Sharded Cluster

As a DBA, it is important to monitor your databases to help troubleshoot issues and understand the health of an instance. Percona Monitoring and Management (PMM v2) is open source and does a great job of monitoring databases like MongoDB, MySQL, PostgreSQL, etc.

In this blog post, we will see how to configure a sharded cluster in PMM2. This is part two of the series; part one, Configuring PMM Monitoring for MongoDB Cluster, covered the same setup with PMM v1. The steps to configure the sharded cluster in PMM2 are listed below:

Prepare DB for Monitoring

Before configuring PMM2, we need to create a user for monitoring on the database side. If you want to enable QAN (Query Analytics), you will also need to enable the profiler and grant the user some additional permissions, such as the custom "explainRole" below. Enabling the profiler adds a small extra load to the database, so it is better to test beforehand if you want to assess that load.

  1. Add PMM Users to the DB

// Change role name / user / password as required

db.getSiblingDB("admin").createRole({
    role: "explainRole",
    privileges: [{
        resource: {
            db: "",
            collection: ""
            },
        actions: [
            "listIndexes",
            "listCollections",
            "dbStats",
            "dbHash",
            "collStats",
            "find"
            ]
        }],
    roles:[]
})


db.getSiblingDB("admin").createUser({
   user: "pmm_mongodb",
   pwd: "password",
   roles: [
      { role: "explainRole", db: "admin" },
      { role: "clusterMonitor", db: "admin" },
      { role: "read", db: "local" }
   ]
})

  2. Enabling Profiler

This is optional. Run the instance with the profiler or add profiling at the database level to monitor queries in QAN (not applicable for mongos).

To start at the instance level (enables profiling for all databases):

mongod <other options> --profile 2 --slowms 200 --rateLimit 100

or in mongod.conf:

operationProfiling:
  mode: all
  slowOpThresholdMs: 200
# (Below variable is available only with Percona Server for MongoDB.)
  rateLimit: 100

To enable profiling at the database level:

use dbname
db.setProfilingLevel(2)
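To confirm the profiling level that is in effect, you can query it back (a quick check; replace the port and database name with your own):

shell> mongo --port 37051 --eval 'db.getSiblingDB("dbname").getProfilingStatus()'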

  3. Add MongoDB Instance to the pmm-client

Here, use the same --cluster option name for all members of the same cluster and provide a service name to identify each one:

sudo pmm-admin add mongodb \
--username=pmm_mongodb --password=password \
--query-source=profiler \
--cluster=mycluster \
--service-name=myc_mongoc2 \
--host=127.0.0.1 --port=37061

  4. Check the Inventory Service

Then check whether the service was added successfully or not:

$ sudo pmm-admin list
Service type        Service name                   Address and port        Service ID
MongoDB             myc_mongoc2                    127.0.0.1:37061         /service_id/02e261a1-e8e0-4eb4-8043-8616424500de

Agent type                    Status           Metrics Mode        Agent ID                                              Service ID
pmm_agent                     Connected                            /agent_id/281b4046-4f4b-4897-bd2e-b771d3e97922         
node_exporter                 Running          push                /agent_id/5e9b17a8-ecb9-47c3-8477-ce322047c4d9         
mongodb_exporter              Running          push                /agent_id/0067dd85-9a0a-47dd-976e-ae779deb982b        /service_id/5c92f132-3005-45ab-84df-7541c286c34a
mongodb_profiler_agent        Running                              /agent_id/18d3d87a-9bb9-48c1-8e3e-d8bae3f043bb        /service_id/02e261a1-e8e0-4eb4-8043-8616424500de

From My Test

I used localhost to deploy the sharded cluster for testing purposes, as below:

Members list:

1 mongos (37050),
3 shards, each a 3-member replicaSet (37051-37059),
3 config members (37060-37062)

Listing one mongod instance from the ps command:

balaguru@vinodh-UbuntuPC:~/mongodb/testshard$ ps -ef | grep mongod -w | head -1
balaguru   41883    2846  1 13:01 ?        00:04:04 mongod --replSet configRepl --dbpath /home/balaguru/mongodb/testshard/data/configRepl/rs1/db --logpath /home/balaguru/mongodb/testshard/data/configRepl/rs1/mongod.log --port 37060 --fork --configsvr --wiredTigerCacheSizeGB 1 --profile 2 --slowms 200 --rateLimit 100 --logappend

Adding mongodb services to pmm-admin:

balaguru@vinodh-UbuntuPC:~/mongodb/testshard$ pmm-admin add mongodb --username=pmm_mongodb --password=password \
--query-source=profiler --cluster=mycluster --service-name=myc_s11 --host=127.0.0.1 --port=37051
MongoDB Service added.
Service ID  : /service_id/cc6b3fed-ee16-494e-93f0-0d2e8f60a136
Service name: myc_s11--host=127.0.0.1

balaguru@vinodh-UbuntuPC:~/mongodb/testshard$ pmm-admin add mongodb --username=pmm_mongodb --password=password \
--query-source=profiler --cluster=mycluster --service-name=myc_s12 --host=127.0.0.1 --port=37052
MongoDB Service added.
Service ID  : /service_id/235494d8-aaee-4ca0-bd3a-bf2259e87ecc
Service name: myc_s12

balaguru@vinodh-UbuntuPC:~/mongodb/testshard$ pmm-admin add mongodb --username=pmm_mongodb --password=password \
--query-source=profiler --cluster=mycluster --service-name=myc_s13 --host=127.0.0.1 --port=37053
MongoDB Service added.
Service ID  : /service_id/55261675-41e7-40f1-95c9-08cac25c4f64
Service name: myc_s13

balaguru@vinodh-UbuntuPC:~/mongodb/testshard$ pmm-admin add mongodb --username=pmm_mongodb --password=password \
--query-source=profiler --cluster=mycluster --service-name=myc_s21 --host=127.0.0.1 --port=37054
MongoDB Service added.
Service ID  : /service_id/5c92f132-3005-45ab-84df-7541c286c34a
Service name: myc_s21

balaguru@vinodh-UbuntuPC:~/mongodb/testshard$ pmm-admin add mongodb --username=pmm_mongodb --password=password \
--query-source=profiler --cluster=mycluster --service-name=myc_s22 --host=127.0.0.1 --port=37055
MongoDB Service added.
Service ID  : /service_id/4de07a5b-5a47-4126-8824-80570bd72cef
Service name: myc_s22--host=127.0.0.1

balaguru@vinodh-UbuntuPC:~/mongodb/testshard$ pmm-admin add mongodb --username=pmm_mongodb --password=password \
--query-source=profiler --cluster=mycluster --service-name=myc_s23 --host=127.0.0.1 --port=37056
MongoDB Service added.
Service ID  : /service_id/7bdaaa72-6e00-4f46-a2a9-5205d5f3fff5
Service name: myc_s23

balaguru@vinodh-UbuntuPC:~/mongodb/testshard$ pmm-admin add mongodb --username=pmm_mongodb --password=password \
--query-source=profiler --cluster=mycluster --service-name=myc_s31 --host=127.0.0.1 --port=37057
MongoDB Service added.
Service ID  : /service_id/2028e075-bc65-4aae-bcdd-ec616b36e81b
Service name: myc_s31

balaguru@vinodh-UbuntuPC:~/mongodb/testshard$ pmm-admin add mongodb --username=pmm_mongodb --password=password \
--query-source=profiler --cluster=mycluster --service-name=myc_s32 --host=127.0.0.1 --port=37058
MongoDB Service added.
Service ID  : /service_id/7659231c-f48f-4a65-b651-585ac1f058cd
Service name: myc_s32

balaguru@vinodh-UbuntuPC:~/mongodb/testshard$ pmm-admin add mongodb --username=pmm_mongodb --password=password \
--query-source=profiler --cluster=mycluster --service-name=myc_s33 --host=127.0.0.1 --port=37059
MongoDB Service added.
Service ID  : /service_id/2c224eaf-c0f1-482b-b23c-8ea4b914c8e5
Service name: myc_s33

balaguru@vinodh-UbuntuPC:~/mongodb/testshard$ pmm-admin add mongodb --username=pmm_mongodb --password=password \
--query-source=profiler --cluster=mycluster --service-name=myc_mongoc1 --host=127.0.0.1 --port=37060
MongoDB Service added.
Service ID  : /service_id/09e95cc5-40b7-4a53-9e35-2937ca23395f
Service name: myc_mongoc1

balaguru@vinodh-UbuntuPC:~/mongodb/testshard$ pmm-admin add mongodb --username=pmm_mongodb --password=password \
--query-source=profiler --cluster=mycluster --service-name=myc_mongoc2 --host=127.0.0.1 --port=37061
MongoDB Service added.
Service ID  : /service_id/02e261a1-e8e0-4eb4-8043-8616424500de
Service name: myc_mongoc2

balaguru@vinodh-UbuntuPC:~/mongodb/testshard$ pmm-admin add mongodb --username=pmm_mongodb --password=password \
--query-source=profiler --cluster=mycluster --service-name=myc_mongoc3 --host=127.0.0.1 --port=37062
MongoDB Service added.
Service ID  : /service_id/421449d9-8ada-46dd-9c8a-84c0847a8742
Service name: myc_mongoc3
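With this many members, a small shell loop can save typing; here is a sketch that assumes the same naming convention and the consecutive shard ports 37051-37059 used above:

for i in 1 2 3; do
  for j in 1 2 3; do
    port=$((37050 + (i - 1) * 3 + j))   # myc_s11 -> 37051 ... myc_s33 -> 37059
    pmm-admin add mongodb --username=pmm_mongodb --password=password \
      --query-source=profiler --cluster=mycluster \
      --service-name=myc_s${i}${j} --host=127.0.0.1 --port=${port}
  done
done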

Listing the services added:

balaguru@vinodh-UbuntuPC:~/mongodb/testshard$ pmm-admin list
Service type        Service name                   Address and port        Service ID
MongoDB             myc_mongoc2                    127.0.0.1:37061         /service_id/02e261a1-e8e0-4eb4-8043-8616424500de
MongoDB             myc_mongoc1                    127.0.0.1:37060         /service_id/09e95cc5-40b7-4a53-9e35-2937ca23395f
MongoDB             myc_s31                        127.0.0.1:37057         /service_id/2028e075-bc65-4aae-bcdd-ec616b36e81b
MongoDB             myc_s12                        127.0.0.1:37052         /service_id/235494d8-aaee-4ca0-bd3a-bf2259e87ecc
MongoDB             myc_s33                        127.0.0.1:37059         /service_id/2c224eaf-c0f1-482b-b23c-8ea4b914c8e5
MongoDB             myc_mongos                     127.0.0.1:37050         /service_id/3f4f56be-6259-4579-88b7-bb4d0c29204b
MongoDB             myc_mongoc3                    127.0.0.1:37062         /service_id/421449d9-8ada-46dd-9c8a-84c0847a8742
MongoDB             myc_s22                        127.0.0.1:37055         /service_id/4de07a5b-5a47-4126-8824-80570bd72cef
MongoDB             myc_s13                        127.0.0.1:37053         /service_id/55261675-41e7-40f1-95c9-08cac25c4f64
MongoDB             myc_s21                        127.0.0.1:37054         /service_id/5c92f132-3005-45ab-84df-7541c286c34a
MongoDB             myc_s32                        127.0.0.1:37058         /service_id/7659231c-f48f-4a65-b651-585ac1f058cd
MongoDB             myc_s23                        127.0.0.1:37056         /service_id/7bdaaa72-6e00-4f46-a2a9-5205d5f3fff5
MongoDB             myc_s11                        127.0.0.1:37051         /service_id/cc6b3fed-ee16-494e-93f0-0d2e8f60a136

Agent type                    Status           Metrics Mode        Agent ID                                              Service ID
pmm_agent                     Connected                            /agent_id/281b4046-4f4b-4897-bd2e-b771d3e97922         
node_exporter                 Running          push                /agent_id/5e9b17a8-ecb9-47c3-8477-ce322047c4d9         
mongodb_exporter              Running          push                /agent_id/0067dd85-9a0a-47dd-976e-ae779deb982b        /service_id/5c92f132-3005-45ab-84df-7541c286c34a 
mongodb_exporter              Running          push                /agent_id/071ec1ae-ff35-4fa1-a4c9-4d5bca705131        /service_id/09e95cc5-40b7-4a53-9e35-2937ca23395f 
mongodb_exporter              Running          push                /agent_id/5e045290-36c2-410b-86e9-b4945cd7ecfb        /service_id/3f4f56be-6259-4579-88b7-bb4d0c29204b 
mongodb_exporter              Running          push                /agent_id/6331b519-da6e-47c0-be7e-92f2ac142fa5        /service_id/2c224eaf-c0f1-482b-b23c-8ea4b914c8e5 
mongodb_exporter              Running          push                /agent_id/6ce78e1c-be6a-4ffd-844b-8afdc0ee5700        /service_id/235494d8-aaee-4ca0-bd3a-bf2259e87ecc 
mongodb_exporter              Running          push                /agent_id/6ed1bcc2-3561-4c65-95e1-11b3cc051194        /service_id/cc6b3fed-ee16-494e-93f0-0d2e8f60a136 
mongodb_exporter              Running          push                /agent_id/7721bd24-7408-431d-abcb-3239459df75a        /service_id/7659231c-f48f-4a65-b651-585ac1f058cd 
mongodb_exporter              Running          push                /agent_id/999c0152-656e-4941-a1fb-003df2dbfbf6        /service_id/2028e075-bc65-4aae-bcdd-ec616b36e81b 
mongodb_exporter              Running          push                /agent_id/9e63f2d9-7e75-45ee-927d-b1406d4797e0        /service_id/55261675-41e7-40f1-95c9-08cac25c4f64 
mongodb_exporter              Running          push                /agent_id/ca3ab511-29eb-4c68-b037-23ab13fa92ff        /service_id/4de07a5b-5a47-4126-8824-80570bd72cef 
mongodb_exporter              Running          push                /agent_id/cd1066eb-f917-4d7e-b284-8d8a8bc7c652        /service_id/7bdaaa72-6e00-4f46-a2a9-5205d5f3fff5 
mongodb_exporter              Running          push                /agent_id/e2ef230a-d84b-428c-921b-b6da7c3180f3        /service_id/421449d9-8ada-46dd-9c8a-84c0847a8742 
mongodb_exporter              Running          push                /agent_id/e3f7ba25-6592-4cb4-aae6-7431b3b6a6da        /service_id/02e261a1-e8e0-4eb4-8043-8616424500de 
mongodb_profiler_agent        Running                              /agent_id/18d3d87a-9bb9-48c1-8e3e-d8bae3f043bb        /service_id/02e261a1-e8e0-4eb4-8043-8616424500de 
mongodb_profiler_agent        Running                              /agent_id/1cf5ee8a-b5b5-4133-896c-fafccc164f54        /service_id/5c92f132-3005-45ab-84df-7541c286c34a 
mongodb_profiler_agent        Running                              /agent_id/4b13cc24-fbd2-47cc-955d-c2a65624d2be        /service_id/55261675-41e7-40f1-95c9-08cac25c4f64 
mongodb_profiler_agent        Running                              /agent_id/4de795cf-f047-49e6-a3bc-dc2ab1b2bc86        /service_id/cc6b3fed-ee16-494e-93f0-0d2e8f60a136 
mongodb_profiler_agent        Running                              /agent_id/89ae83c7-e62c-48f6-9e8c-597ce978c8ce        /service_id/4de07a5b-5a47-4126-8824-80570bd72cef 
mongodb_profiler_agent        Running                              /agent_id/98343388-a246-4767-8838-ded8f8de5191        /service_id/235494d8-aaee-4ca0-bd3a-bf2259e87ecc 
mongodb_profiler_agent        Running                              /agent_id/a5df9e6b-037e-486a-bc95-afe20095cf98        /service_id/7bdaaa72-6e00-4f46-a2a9-5205d5f3fff5 
mongodb_profiler_agent        Running                              /agent_id/a6bda9b4-989a-427b-ae64-5deffc2b9ba2        /service_id/7659231c-f48f-4a65-b651-585ac1f058cd 
mongodb_profiler_agent        Running                              /agent_id/c59c40ca-63ee-4497-b297-403faa9d4ec0        /service_id/2c224eaf-c0f1-482b-b23c-8ea4b914c8e5 
mongodb_profiler_agent        Running                              /agent_id/c7f84a08-4823-455b-93a3-168eee19329b        /service_id/3f4f56be-6259-4579-88b7-bb4d0c29204b 
mongodb_profiler_agent        Running                              /agent_id/e85d0757-7542-4b38-bfed-81ded8bf309c        /service_id/421449d9-8ada-46dd-9c8a-84c0847a8742 
mongodb_profiler_agent        Running                              /agent_id/ed81849a-6fc9-46f3-a5dc-e6c288409009        /service_id/09e95cc5-40b7-4a53-9e35-2937ca23395f 
mongodb_profiler_agent        Running                              /agent_id/f9d26161-4827-4bed-a85f-cbe3ce9478ab        /service_id/2028e075-bc65-4aae-bcdd-ec616b36e81b 
vmagent                       Running          push                /agent_id/a662e1f6-31d3-4514-8f83-ea31e0165d61

PMM Dashboards

From PMM Dashboards, you can then view the replSet summary as well as the sharded cluster summary.

Cluster Summary

This dashboard gives information about the sharded/unsharded databases, shards, chunks, cursor details, etc.

Cluster Summary

 

ReplSet Summary:

This dashboard shows replication information such as replica lag, operations, heartbeat, and ping time.

ReplSet Summary

 

MongoDB Instance Overview:

This is the general dashboard for a MongoDB instance, providing generic information about connections, memory usage, latency, etc.

MongoDB Instance Overview

 

WiredTiger Details:

This is the dashboard you will need most when analyzing problems, as it shows the WiredTiger information. The main metrics to monitor here are WT cache utilization, evictions of modified or unmodified pages, write/read ticket utilization, index/object scans, etc.

WiredTiger Details

 

QAN:

If you enable profiling, you can see the queries run against the database here and filter them easily, as shown in the screenshot below. You can also get the explain plan to check whether a query uses a COLLSCAN (disk reads) or an IXSCAN (index), and check the counts, load, etc.

QAN

 

Conclusion

As you can see, Percona Monitoring and Management 2 is very easy to configure for database monitoring, and we recommend it; it is better to set up monitoring now rather than later. PMM2 is maintained by Percona and is totally free. You can report any bugs at https://jira.percona.com/, and if you have questions, you can ask them at https://forums.percona.com.

Complete the 2021 Percona Open Source Data Management Software Survey

Have Your Say!

Sep
29
2021
--

New Experimental Environment Dashboards for Percona Monitoring and Management

Environment Dashboards Percona Monitoring and Management

As Technical Product Manager, I get a lot of user feedback, both positive and negative. In recent months many people have complained about the Home dashboard. It went something like this:

“Hey, the Home page is useless! We have several hundred services monitored by a single Percona Monitoring and Management (PMM), so this list of metrics is not providing any value when I have this many servers”.

We were happy to note that people were using PMM for big deployments, and we decided to create new types of dashboards for these users; I describe them below. But what exactly is the problem with the current Home dashboard? Let's take a look.

Percona Monitoring and Management Dashboard

The red box represents the visible part of the screen on my laptop. So to see any data, I need to scroll to the bottom of the page. But even if I do, I will see a set of small graphs not related to one another.

If I have more than three nodes, I get something like the example below: (https://pmmdemo.percona.com/)

Percona Monitoring and Management Nodes
Remember, on the laptop screen, you can only see this much:

PMM Dashboard
You can’t see the complete picture and compare the performance of individual servers; there is nothing actionable.

This dashboard is clearly not meeting the goal of giving the user a high-level overview of their infrastructure. To achieve this goal, we have created two new experimental dashboards – Environment Overview and Environment Summary.

The Environment label is already present in the current version of PMM. It lets users assign Services to different groups, and there is a dedicated flag for setting the environment label when you add a Service to PMM.

# pmm-admin add mongodb … --environment=environment1 ..

Here are some ideas of what you can use as an environment:

  • “production”, “development”, “testing”
  • “departmentA”, “departmentB”, “subDepartmentC”
  • “Datacenter1”, “datacenter2”, “cloud-region-east2”
  • <Your ideas here>

Setting the environment for your services will simplify the search/selection on all dashboards. The new dashboards let you group your nodes by their environment label.
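For example (service names and ports here are purely illustrative), two services could be tagged with different environments like this:

pmm-admin add mysql --username=pmm --password=pass \
  --environment=production --service-name=prod-db1 --host=127.0.0.1 --port=3306
pmm-admin add mysql --username=pmm --password=pass \
  --environment=staging --service-name=stage-db1 --host=127.0.0.1 --port=3307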

Environment Overview Dashboard

The Environment Overview Dashboard is designed to be a possible replacement for the default Home dashboard. This dashboard aims to give the user a high-level view of all Environments and how they are behaving.

This dashboard presents the main parameters of all environments. It shows six graphs – three for Node metrics (CPU, Memory, Disk) per environment and three for Services metrics (Used Connections, QPS, Latency) per environment. The Service metrics are inspired by RED Method for MySQL Performance Analyses – Percona Database Performance Blog.

You get some valuable data on the first screen: a summary of the parameters of your environments that highlights apparent problems. The second screen shows the relationships between the main metrics and how they've changed over a selected period of time, and allows you to click on graphs and drill down for more detailed information. This will help you spot the environments where you might have a problem and know where to dig deeper to investigate it.

PMM Environment Overview Dashboard

PMM Dashboard

Environment Summary Dashboard

The Environment Summary Dashboard is designed to give you information about one specific Environment and an overview of the activities and behaviors of the Services and Nodes inside that particular environment.

Environment Summary Dashboard PMM

You can also drill down to the specific Services, Nodes, and their parameters for more details from this dashboard. This will help you see the unhealthy Service or Node.

Percona Monitoring and Management Disk Space Usage

How to Install Dashboards

  1. Get dashboards:
  2. Import dashboards to PMM2 https://grafana.com/docs/grafana/latest/dashboards/export-import/
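If you prefer the command line over the Grafana import UI, a dashboard JSON can also be pushed through PMM's Grafana API (a sketch that assumes jq is installed, admin credentials, and a hypothetical file name environment-overview.json; PMM 2 serves Grafana under /graph):

jq -n --argjson dash "$(cat environment-overview.json)" \
  '{dashboard: $dash, overwrite: true, folderId: 0}' \
| curl -k -u admin:<ADMIN_PASSWORD> -H 'Content-Type: application/json' \
    -X POST https://<PMM_SERVER_IP>/graph/api/dashboards/db -d @-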

What’s Next?

These two experimental dashboards are not yet ready to be a part of the standard PMM release, but we would LOVE to make them the default. We would appreciate your feedback on making them not just a step forward from the current home dashboard but a HUGE step forward.

If you have many servers, please test the dashboards and let us know if these dashboards provide better visibility over your infrastructure. If not, we would love to hear what sort of data you want to see in these dashboards to speed up decision-making.

You can leave your feedback on our Percona Community Forum. Please help us make Percona Monitoring and Management more useful!

Percona Monitoring and Management is a best-of-breed open source database monitoring solution. It helps you reduce complexity, optimize performance, and improve the security of your business-critical database environments, no matter where they are located or deployed.

Download Percona Monitoring and Management Today

Sep
09
2021
--

Q&A on Webinar “Using Open Source Software to Optimize and Troubleshoot Your MySQL Environment”

Optimize and Troubleshoot Your MySQL Environment

Thanks to everyone who attended last week's webinar on Using Open Source Software to Optimize and Troubleshoot Your MySQL Environment; hopefully you found the time we spent in Percona Monitoring and Management (PMM) useful.

We had a record-breaking number of questions during the talk and unfortunately weren’t able to answer them all live, so we decided to answer them separately. Also, there were several requests for best practices around installation and configuration. This is something we are considering for the next webinar in this series, so stay tuned!

If you weren’t able to attend, the recording is available for viewing. But now, without further ado, here are the questions that we didn’t have time to cover during the presentation.

 

Q: Can PMM also be used for a web hosting server (cPanel, DirectAdmin, etc.)?

PMM by default can monitor a node to provide vital statistics on the health of the host.  From there, you can use external exporters to monitor other applications and send the data to PMM to visualize and create alerts.
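As a sketch (PMM 2.15 or later; verify the flags with pmm-admin add external --help), registering an already-running exporter for another application looks roughly like the following, where the port and service name are hypothetical:

# an apache_exporter is assumed to be serving metrics on port 9117 of this node
pmm-admin add external --listen-port=9117 --service-name=apache-on-webhost1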

 

Q: Does it provide any query optimization suggestions if my query is bad? 

Not at present; that's planned for the future Query Advisor.

 

Q: How soon will we be able to use the alerting manager in production?

We are looking at late Sept to early Oct. When it’s ready, you will hear about it!

 

Q: Capturing Queries Data for performance checking can be costly and some monitoring systems capture data every few seconds. At what level of data is captured here and analyzed…live systems with lots of database traffic? What percentage (all of it,  2 seconds, 1 second, etc.)?

We adhere to 'do no harm', so the impact of PMM is typically 1-4% on the busiest systems. We offer custom resolutions to adjust the scrape frequency and balance the need for information with the need for performance.

 

Q: Are long-running queries that potentially slow down the system over time captured and shown as a graph/alert? Also, can you see whether more than one instance of these query types is being run repeatedly by a user?

This is something we are going to include in our Alerting capabilities (coming soon, see above).

 

Q: Can more than one of the metrics be compared against each other to gain more insight into a problem in graphical form? Can you in effect play with these graphs?

Yes, you can, this is in fact how most of the dashboards are designed, where we connect different metric series together to drive graphs that explain system performance.  While you may be able to edit the existing graphs, Percona recommends that you instead make a copy of the dashboard you’d like to modify and make your changes on the copy.  The reason for this is if you modify a dashboard distributed by PMM, it will be overwritten on the next upgrade, and you’ll lose your changes.

 

Q: Could you list what can be monitored using PMM? And explain what recommended plugins are available and what they are used for? 

Natively, any Linux system and pretty much all flavors of MySQL, MariaDB, MongoDB, and PostgreSQL. You can use external exporters to gather even more data than the default, and because PMM uses Grafana as the basis for visualization, you can create custom dashboards and use a wealth of community plugins.

 

Q: Can you choose to monitor a particular set of users? Set of queries? Set of schema? 

You can filter it down to view based on username, particular schema, and then filter those results by particular query strings.  We can monitor as much or as little about your database as the user you define to pull data.

 

Q: How can we work on optimization when using cloud-based services like RDS where we have limited access?

PMM can monitor RDS instances and has simplified the connection and selection process of its remote monitoring capabilities. We can provide nearly the same data as for an on-prem database; however, we don't have access to node-level statistics.

 

Q: For Oracle MySQL 5.7.29, if you have many tables/objects in the database, will the PMM query information_schema and load the DB?

We have a predefined limit of 1,000 tables above which polling of information_schema is disabled, but you can configure this to your liking, both with the client and with remote monitoring. This CAN have a more significant impact on your system, though, especially with large table and row counts.

 

Q: At what point do I know I’ve done enough optimization? 

HA! It's a never-ending game of cat and mouse, considering the sheer volume of variables in play. It's in these situations that before-and-after monitoring data becomes vital.

 

Q: Can a database monitoring package be the source of database performance issues? In particular, mysqld_exporter is installed as a docker container, as I’m seeing “out of resources” on a trace on mysqld_exporter.

Of course, there are plenty of ways to generate database performance issues, and it's possible monitoring can add some overhead. For an extreme example, here's one way to replicate it: start the pmm-client on a MySQL database and restore a mysqldump into a blank DB. A few million rows at a time should generate LOTS of chaos and load between QAN and the exporters. Our pmm-client runs the exporter natively, so there is no need to use a container.

 

Q: Is Query Analytics somehow slowing down the database server as well? Or is it safe to enable/use it without further impact?

The impact is minimal. Most of the Query Analytics processing is done on the PMM Server; the only impact on the client is retrieving the queries from the slow log or Performance Schema, so this can have a bigger impact on the most extremely active DBs but should still remain below a 5% CPU hit.
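For reference, if you use the slow log as the query source, a minimal (illustrative) way to enable it on the MySQL side looks like this; the threshold value is only an example:

mysql -e "SET GLOBAL slow_query_log = ON;
          SET GLOBAL long_query_time = 0.5;
          SET GLOBAL log_output = 'FILE';"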

 

Q: Did I understand correctly that PMM is not for RDS users and that AWS tools are available?

PMM certainly is for RDS! Since RDS is managed by AWS, PMM cannot collect CPU/Disk/Memory metrics but all MySQL metrics are still available even in RDS.

 

Q: Do you have any instructions/steps to install PMM to monitor MySQL RDS? 

  • Gear icon → PMM Inventory → Add Instance
  • Choose AWS/RDS Add Remote Instance
  • Use your AWS credentials to view your available RDS & Aurora nodes
  • Ensure that performance_schema is enabled

 

Watch the Recording

Aug
31
2021
--

My Favorite Percona Monitoring and Management Additional Dashboards

Percona Monitoring and Management Dashboards

Percona Monitoring and Management (PMM) has dashboards that cover a lot of ground, yet PMM Superpowers come from the fact you do not need to stick to dashboards that are included with the product! You also can easily install additional dashboards provided by the Community, as well as implement your own.

In this blog post, we will cover some of the additional dashboards which I find particularly helpful.

Node Processes Dashboard

Node Processes Dashboard

Get insights into the processes on the system to better understand resource usage by your database server vs. everything else running on the system. Unexpected resource-hog processes are quite a common cause of downtime and performance issues. More information is in the Understanding Processes on your Linux Host blog post.

MySQL Memory Usage Details

MySQL Memory Usage Details

Ever wondered where MySQL memory usage comes from? This dashboard can shed some light on this dark place, showing the top global memory consumers as well as which users and client hosts contribute to memory usage. More details are in the Understanding MySQL Memory Usage with Performance Schema blog post.

MySQL Query Performance Troubleshooting

MySQL Query Performance Troubleshooting

Want to understand which queries are responsible for CPU, Disk, Memory, or Network Usage and get some other advanced MySQL Query Troubleshooting tools? Check out this dashboard.  Read more about it in the  MySQL Query Performance Troubleshooting blog post.

RED Method for MySQL Dashboard

RED Method for MySQL Dashboard

Want to apply the RED (Rate-Errors-Duration)  method to MySQL?  Check out this dashboard, and check out RED Method for MySQL Performance Analyses for more details.

OK, so let's say you're convinced and want to get those dashboards into your PMM install, but manual installation does not excite you. Here is how you can use custom dashboard provisioning to install all of them:

curl -LJOs https://github.com/Percona-Lab/pmm-dashboards/raw/main/misc/import-dashboard-grafana-cloud.sh --output import-dashboard-grafana-cloud.sh
curl -LJOs https://github.com/Percona-Lab/pmm-dashboards/raw/main/misc/cleanup-dash.py --output cleanup-dash.py

chmod a+x import-dashboard-grafana-cloud.sh
chmod a+x cleanup-dash.py

./import-dashboard-grafana-cloud.sh -s <PMM_SERVER_IP> -u admin:<ADMIN_PASSWORD> -f Custom -d 13266 -d 12630 -d 12470 -d 14239

Note:  Node Processes and MySQL Memory Usage Details dashboards also require additional configuration on the client-side. Check out the blog posts mentioned for specifics.

Enjoy!

Percona Monitoring and Management is a best-of-breed open source database monitoring solution. It helps you reduce complexity, optimize performance, and improve the security of your business-critical database environments, no matter where they are located or deployed.

Download Percona Monitoring and Management Today

Jul
06
2021
--

Move Percona Monitoring and Management Server Data From One Instance Type to Another

Move Percona Monitoring and Management Server Data

Percona Monitoring and Management (PMM2) Server runs as a Docker container, a virtual appliance, or as an instance on Amazon or Azure cloud services. Here I'll show how to move the PMM Server and its data from one type to another.

Note, this is only for PMM2 to PMM2—you can’t migrate data from PMM Server version 1 to version 2 because of significant architectural differences.

For this exercise, imagine that your PMM server:

  • Is running on an Amazon EC2 instance (Server A) launched from an AMI,
  • Needs to move to a dedicated server (Server B), where it will run as a Docker container, and
  • Currently monitors one client instance (node1) with a MongoDB service (mongodb1).

Here’s the output of pmm-admin status for this instance.

pmm-admin status

Export Data

PMM2 data is stored in the /srv folder for all types of installations. So first make a backup archive of it.

tar -cv /srv | gzip > pmm-data.tar.gz

Copy this archive to Server B.

scp pmm-data.tar.gz user1@172.17.0.2:~/

Prepare New Server

Connect to Server B and run all further commands on this server. Prepare the Docker container.

docker create -v /srv --name pmm-data percona/pmm-server:2 /bin/true

Next extract exported data from the archive.

tar -zxvf pmm-data.tar.gz -C /tmp

Create a container for the new PMM Server with a /srv partition on a separate container (pmm-data).

docker run -d -p 443:443 --volumes-from pmm-data --name pmm-server --restart always percona/pmm-server:2

Stop all services and copy the exported data into the container.

docker exec -it pmm-server supervisorctl stop all
docker exec -it pmm-server sh -c 'cd /; rm -rf /srv/victoriametrics/data'
docker cp /tmp/srv pmm-data:/

Restore permissions for migrated data folders.

docker exec -it pmm-server chown -R root:pmm /srv/clickhouse /srv/ia /srv/nginx /srv/pmm-distribution /srv/update
docker exec -it pmm-server chown -R pmm:pmm /srv/logs /srv/victoriametrics /srv/alertmanager /srv/prometheus
docker exec -it pmm-server chown -R grafana:grafana /srv/grafana
docker exec -it pmm-server chown -R postgres:postgres /srv/postgres /srv/logs/postgresql.log

Restart PMM Server so that it reloads files with the correct permissions.

docker restart pmm-server
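Optionally, verify that all PMM services came back up after the restart (output will vary):

docker exec -it pmm-server supervisorctl status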

Switch Services to New Server

That’s it! Now you can switch your monitored node1 to use the new server (Server B).

Edit the PMM agent configuration file /usr/local/percona/pmm2/config/pmm-agent.yaml.

Set the IP address of Server B (172.17.0.2) and restart pmm-agent.

systemctl restart pmm-agent

Check Status

Check the status of pmm-agent and monitored services with pmm-admin status.

Check Status PMM
The agent is now connected to your new server.

In the Grafana UI, you can see the migrated data of Server B. (The time gap in the data is how long it took to run the import and switch node1 to the new server.)

Grafana UI

If the historical data is there, then we are done. Otherwise, follow the commands provided in the next section.

Export/Import VictoriaMetrics Data

Copy the metrics from the VictoriaMetrics time-series database using API requests to export and import the data. (You can do the export remotely and run all further commands on Server B.)

curl -k -G -u admin:admin https://3.86.222.201/prometheus/api/v1/export/native -d 'match={__name__!=""}' > exported_data.dump

Next import the VictoriaMetrics data.

curl -k -u admin:admin -X POST https://172.17.0.2/prometheus/api/v1/import/native -T exported_data.dump

By default, the maximum allowed size of the client request body for PMM Server's Nginx service is 10 MB. If exported_data.dump is bigger than this, you must increase the limit and repeat the import.

docker exec -it pmm-server bash -c "sed -i 's/client_max_body_size 10m;/client_max_body_size 1000m;/g' /etc/nginx/conf.d/pmm.conf"
docker exec -it pmm-server bash -c "supervisorctl restart nginx"

Conclusion

You can use the same process to move from any instance type to another. We also have a separate blog post about how to migrate when the pmm-data container isn't used. Check it out!

Jul
01
2021
--

Percona Monitoring and Management – MySQL Semi-Sync Summary Dashboard

Percona Monitoring and Management - MySQL Semi-Sync Summary Dashboard

Some of you may use MySQL's Semisynchronous Replication feature (aka semi-sync), and now with the MySQL Semi-Sync Summary Dashboard plus Percona Monitoring and Management (PMM), you can see the most important metrics! Refer to the Installation & Usage steps below for deployment details (note you need a Replication Set defined!).

What is Semisynchronous Replication

When enabled, Semisynchronous Replication instructs the Primary to wait until at least one replica has received and logged the event to the replica’s local relay log before completing the COMMIT on a transaction. This provides a higher level of data integrity because now it is known that the data exists in two places. This feature ensures a balance between data integrity (number of replicas acknowledging receipt of a transaction) vs the speed of commits, which will be slower since they need to wait on replica acknowledgment. Also, keep in mind that semi-sync does not wait for COMMIT on the replica; it only waits until the transaction is queued in the relay log. The actual execution of the transaction from the relay log is still asynchronous.
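As a reference, a minimal sketch of enabling semi-sync looks like the following; these are the classic plugin and variable names used before MySQL 8.0.26 renamed them, so adjust for your server version and set the timeout to taste:

# On the Primary:
mysql -e "INSTALL PLUGIN rpl_semi_sync_master SONAME 'semisync_master.so';
          SET GLOBAL rpl_semi_sync_master_enabled = ON;
          SET GLOBAL rpl_semi_sync_master_timeout = 10000;"   # milliseconds

# On each Replica:
mysql -e "INSTALL PLUGIN rpl_semi_sync_slave SONAME 'semisync_slave.so';
          SET GLOBAL rpl_semi_sync_slave_enabled = ON;"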

Dashboard Layout

Now that we know we can improve data integrity but pay a penalty on writes, I want to display the following information:

  1. Replica semi-sync status – enabled or not
  2. Waits by type, on Network or on Transactions
  3. How much total time was spent waiting on Transactions – what’s my penalty due to writes slowing down
  4. How much average time was spent waiting per transaction
  5. Commit acknowledgments – what’s my replication throughput

Replica Status

This lists the states each replica has been in, i.e., whether semi-sync was enabled or disabled on the Replica:

Percona Monitoring and Management

Waits by Net & TX

How many waits on the Network and on Transactions. Since the Primary is only waiting on one successful acknowledgment even though there could be multiple semi-sync replicas (the fastest one wins), your count of TX waits should be the same as TX commits on the Primary, but the wait on Network can be much higher.

Waits by Net & TX

Time Spent Waiting on Transactions

This is the contribution to query latency that semi-sync incurs on the Primary related to waits on transactions.

Time spent waiting on Transactions

Average Wait Time per Transaction

This is the overhead of waiting on a single transaction acknowledgment.

Average wait time per transaction

Commit Acknowledgments

The semi-synchronous replication feature considers the possibility that Replicas may be unavailable; this is controlled by rpl_semi_sync_master_timeout, which sets how long the Primary will wait on a commit for acknowledgment from a Replica before timing out and reverting to asynchronous replication. The graph shows whether the commit was acknowledged by semi-sync (Yes), or whether the Primary lost all semi-sync replicas and the commit was not acknowledged (No), i.e., replication was running in asynchronous mode. You should only see acknowledged commits when things are working smoothly.

Commit Acknowledgements

Installation & Usage

  1. Download the dashboard definition in JSON from https://grafana.com/grafana/dashboards/14636/
  2. Import into PMM Server (tested on 2.18 but should work on older 2.x versions)

I built the dashboard to leverage the Replication Set (--replication-set) variable (which can be set to any string you want), so you will need this set for all servers whose statistics you want to view. For example, your pmm-admin add mysql statement should look like:

pmm-admin add mysql … --replication-set=semi-sync

You can check to see whether the Replication Set variable is defined by referencing the PMM Inventory dashboard, in the last column called Other Details:

When you have the dashboard loaded, select your Replication Set from the drop-down:

New to Percona Monitoring and Management (PMM)?

Check out the PMM Quickstart guide, which helps you deploy the PMM Server with Docker and the pmm2-client package from the Percona repositories, to have you up and monitoring in minutes!

I hope you find this dashboard useful! Feel free to let me know if there are missing fields or other features you’d like to see included!

May
19
2021
--

Percona Monitoring and Management DBaaS Overview and Technical Details

Percona Monitoring and Management DBaaS Overview

Database-as-a-Service (DBaaS) is a managed database that doesn't need to be installed and maintained but is instead provided as a service to the user. The Percona Monitoring and Management (PMM) DBaaS component allows users to CRUD (Create, Read, Update, Delete) Percona XtraDB Cluster (PXC) and Percona Server for MongoDB (PSMDB) managed databases in Kubernetes clusters.

The PXC and PSMDB operators implement DBaaS on top of Kubernetes (k8s), and PMM DBaaS provides a nice interface and API to manage them.

Deploy Playground with minikube

The easiest way to play with and test PMM DBaaS is to use minikube. Please follow the minikube installation guideline. It is possible that your OS distribution provides native packages for it, so check that with your package manager as well.

In the examples below, Linux is used with the kvm2 driver, so KVM and libvirt should additionally be installed. Other operating systems and drivers can be used as well. Also install the kubectl tool; it is convenient to use, and minikube will configure kubeconfig so the k8s cluster can be accessed from the host easily.

Let’s create a k8s cluster and adjust resources as needed. The minimum requirements can be found in the documentation.

  • Start minikube cluster
$ minikube start --cpus 12 --memory 32G --driver=kvm2

  • Download PMM Server deployment for minikube and deploy it in k8s cluster
$ curl -sSf -m 30 https://raw.githubusercontent.com/percona-platform/dbaas-controller/main/deploy/pmm-server-minikube.yaml \
| kubectl apply -f -

  • The first time, it can take a while for the PMM Server to initialize the volume, but it will eventually start
  • Here’s how to check that PMM Server deployment is running:
$ kubectl get deployment
NAME             READY   UP-TO-DATE   AVAILABLE   AGE
pmm-deployment   1/1     1            1           3m40s

$ kubectl get pods
NAME                             READY   STATUS    RESTARTS   AGE
pmm-deployment-d688fb846-mtc62   1/1     Running   0          3m42s

$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM              STORAGECLASS   REASON   AGE
pmm-data                                   10Gi       RWO            Retain           Available                                              3m44s
pvc-cb3a0a18-b6dd-4b2e-92a5-dfc0bc79d880   10Gi       RWO            Delete           Bound       default/pmm-data   standard                3m44s

$ kubectl get pvc
NAME       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pmm-data   Bound    pvc-cb3a0a18-b6dd-4b2e-92a5-dfc0bc79d880   10Gi       RWO            standard       3m45s

$ kubectl get service
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP                      6m10s
pmm          NodePort    10.102.228.150   <none>        80:30080/TCP,443:30443/TCP   3m5

  • Expose PMM Server ports on the host, as this also opens links to the PMM UI as well as to the API endpoint in the default browser.
$ minikube service pmm

NOTE:

Without going into too much detail: the PV (kubectl get pv) and PVC (kubectl get pvc) are essentially the storage for PMM data (the /srv directory), and the Service defines how PMM is reached on the network.

Attention: this PMM Server deployment is not supposed to be used in production, but just as a sandbox for testing and playing around, as it always starts with the latest version of PMM and k8s is not yet a supported environment for it.

Configure PMM DBaaS

Now the PMM DBaaS Dashboard can be used: a k8s cluster can be added, and databases can be added and configured.

DBaaS Dashboard

NOTE:

To enable the PMM DBaaS feature, you need to either pass a special environment variable (ENABLE_DBAAS=1) to the container or enable it in the settings (next screenshot).

To allow PMM to manage the k8s cluster, it needs to be configured. Check the documentation, but here are the short steps:

  • Set the Public Address to pmm on the Configuration -> Settings -> Advanced Settings page

PMM Advanced settings

  • Get k8s config (kubeconfig) and copy it for registration:
kubectl config view --flatten --minify

  • Register configuration that was copied on DBaaS Kubernetes Cluster dashboard:

DBaaS Register k8s Cluster

 

Let’s get into details on what that all means.

The Public Address is propagated to the pmm-client containers that run as part of PXC and PSMDB deployments to monitor DB services. The pmm-client containers run pmm-agent, which needs to connect to the PMM Server and uses the Public Address to do so. The DNS name pmm is set by the Service in the pmm-server-minikube.yaml file for our PMM Server deployment.

So far, PMM DBaaS uses kubeconfig to access the k8s API and manage the PXC and PSMDB operators. The kubeconfig file and k8s cluster information are stored securely in the PMM Server's internal DB.

PMM DBaaS cannot deploy operators into the k8s cluster yet, but that feature will be implemented very soon. That is why the Operator status on the Kubernetes Cluster dashboard shows hints on how to install them.

What are the operators and why are they needed? This is defined very well in the documentation. Long story short, they are the heart of DBaaS: they deploy and configure DBs inside the k8s cluster.

Operators themselves are complex pieces of software that need to be correctly started and configured to deploy DBs. That is where PMM DBaaS comes in handy: it handles much of the configuration for the end user and provides a UI to choose which DB needs to be created, configured, or deleted.

Deploy PSMDB with DBaaS

Let’s deploy the PSMDB operator and DBs step by step and check them in detail.

  • Deploy PSMDB operator
curl -sSf -m 30 https://raw.githubusercontent.com/percona/percona-server-mongodb-operator/v1.7.0/deploy/bundle.yaml \
| kubectl apply -f -

  • Here's how to check that the operator was created:
$ kubectl get deployment
NAME                              READY   UP-TO-DATE   AVAILABLE   AGE
percona-server-mongodb-operator   1/1     1            1           46h
pmm-deployment                    1/1     1            1           24h


$ kubectl get pods
NAME                                               READY   STATUS    RESTARTS   AGE
percona-server-mongodb-operator-586b769b44-hr7mg   1/1     Running   2          46h
pmm-deployment-7fcb579576-hwf76                    1/1     Running   1          24h

The PMM DBaaS Kubernetes Cluster dashboard now shows that the MongoDB operator is installed.

Cluster with PSMDB

PMM API

All REST APIs can be discovered via Swagger; it is exposed on both ports (30080 and 30443 in the case of minikube) and can be accessed by appending /swagger to the PMM Server address. It is recommended to use https (port 30443); for example, the URL could look like this: https://192.168.39.202:30443/swagger.

As DBaaS is a feature under active development, replace /swagger.json with /swagger-dev.json and push the Explore button.

Swagger API

Now all APIs can be seen and even executed.

Let's try it out. First Authorize, then find /v1/management/DBaaS/Kubernetes/List, push Try it out, and Execute. There will be an example of curl as well as the response to the REST API POST request. The curl example can be used from the command line as well:

$ curl -kX POST "https://192.168.39.202:30443/v1/management/DBaaS/Kubernetes/List" -H  "accept: application/json" -H  "authorization: Basic YWRtaW46YWRtaW4=" -H  "Content-Type: application/json" -d "{}"
{
  "kubernetes_clusters": [
    {
      "kubernetes_cluster_name": "minikube",
      "operators": {
        "xtradb": {
          "status": "OPERATORS_STATUS_NOT_INSTALLED"
        },
        "psmdb": {
          "status": "OPERATORS_STATUS_OK"
        }
      },
      "status": "KUBERNETES_CLUSTER_STATUS_OK"
    }
  ]
}

PMM Swagger API Example

Create DB and Deep Dive

PMM Server consists of different components, and for the DBaaS feature, here are the main ones:

  • The Grafana UI with DBaaS dashboards talks to pmm-managed through the REST API to show the current state and provides the user interface
  • pmm-managed acts as a REST gateway, holds the kubeconfig, and talks to dbaas-controller through gRPC
  • dbaas-controller implements the DBaaS features, talks to k8s, and exposes a gRPC interface for pmm-managed

The Grafana UI is what users see, and now that the operators are installed, the user can create a MongoDB instance. Let's do this.

  • Go to DBaaS -> DB Cluster page and push Create DB Cluster link
  • Choose your options and push Create Cluster button

Create MongoDB cluster

It has more advanced options to configure resources allocated for the cluster:

Advanced settings for cluster creation

As you can see, the cluster was created and can be managed. Now let's see in detail what happened underneath.

PSMDB Cluster created

When the user pushes the Create Cluster button, the Grafana UI POSTs a /v1/management/DBaaS/PSMDBCluster/Create request to pmm-managed. pmm-managed handles the request and sends it via gRPC to dbaas-controller together with the kubeconfig.

dbaas-controller handles the request and, with knowledge of the operator structure (Custom Resources/CRDs), prepares a CR with all the parameters needed to create a MongoDB cluster. After filling in all the needed structures, dbaas-controller converts the CR to a YAML file and applies it with the kubectl apply -f command. kubectl is pre-configured with a kubeconfig file (passed by pmm-managed from its DB) to talk to the correct cluster; the kubeconfig file is created temporarily and deleted immediately after the request.

The same happens when some parameters change or when dbaas-controller retrieves parameters from the k8s cluster.

Essentially, dbaas-controller automates all the stages of filling the CRs with correct parameters, checks that everything works correctly, and returns details about the clusters created. The kubectl interface is used for simplicity, but it is subject to change before GA, most probably to the k8s Go API.
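Conceptually, what dbaas-controller does for each request is equivalent to the following manual flow (illustrative only; the file names are hypothetical):

export KUBECONFIG=/tmp/dbaas-kubeconfig.yaml   # temporary copy of the kubeconfig stored by pmm-managed
kubectl apply -f /tmp/psmdb-cr.yaml            # the Custom Resource generated from the user's choices
rm -f /tmp/dbaas-kubeconfig.yaml               # the kubeconfig is removed right after the request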

Summary

Altogether, PMM Server DBaaS provides a seamless experience for deploying DB clusters on top of Kubernetes with a simple and pleasant UI, without the need to know the operators' internals. When deploying PXC and PSMDB clusters, it also configures PMM agents and exporters, so all monitoring data is present in PMM Server right away.

PMM PSMDB overview

Go to PMM Dashboard -> MongoDB -> MongoDB Overview to see MongoDB monitoring data, and explore node and service monitoring too, which comes pre-configured with the help of the DBaaS feature.

Give it a try, submit feedback, and chat with us, we would be happy to hear from you!

P.S.

Don’t forget to stop and/or delete your minikube cluster if it is not used:

  • Stop the minikube cluster so it does not use resources (it can be started again with start)
$ minikube stop

  • If the cluster is not needed anymore, delete the minikube cluster
$ minikube delete

 

May
03
2021
--

Changes to Percona Monitoring and Management on AWS Marketplace

Percona Monitoring and Management AWS Marketplace

Percona Monitoring and Management has been available for single-click deployment from AWS Marketplace for several years now, and we have hundreds of instances concurrently active and growing rapidly due to the unparalleled ease of deployment.

Today we’re announcing we are changing pricing for Percona Monitoring and Management on AWS Marketplace. Currently, Percona Monitoring and Management (PMM) is available on AWS Marketplace at no added cost, and effective June 1, 2021, we will add a surcharge equal to 10% of the PMM AWS EC2 Costs.

Why are we making this change?

Making Percona Monitoring and Management available as a one-click deployment on AWS Marketplace is a considerable resource investment, yet, with the current model, only AWS directly benefits from the value which we jointly provide to the users choosing to run PMM on AWS. With the addition of this surcharge, both companies will benefit.

How does this reflect on Percona’s Open Source Commitment?

Percona Monitoring and Management remains a fully Open Source Project.  We’re changing how commercial offerings jointly provided by AWS and Percona will operate.  

I do not want to pay this surcharge, are there free options?

Using AWS Marketplace is not the only way to deploy PMM on AWS. Many deploy PMM on Amazon EC2 using Docker, and this option continues to require no additional spend other than your infrastructure costs.

What are the benefits of running Percona Monitoring and Management through AWS Marketplace compared to alternative deployment methods?

The main benefit of running Percona Monitoring and Management through the AWS Marketplace is convenience; you can easily change the instance type or add more storage as your PMM load grows. You also have an easy path to high availability with CloudWatch Alarm Actions.

 

Register for Percona Live ONLINE
A Virtual Event about Open Source Databases

 

How does the 10% surcharge compare?

We believe 10% extra for software on top of the infrastructure costs is a very modest charge.  Amazon RDS, for example, has a surcharge starting at 30% to more than 70%, depending on the instance type.

How will I know the exact amount of such a surcharge?

Your bill from AWS will include a separate line item for this charge, in addition to the infrastructure costs consumed by PMM.

What does it mean for Percona Monitoring and Management on AWS Marketplace?

Having a revenue stream that is directly tied to AWS Marketplace deployment will increase the amount of resources we can spend on making Percona Monitoring and Management work with AWS even better. If you're using PMM with AWS, deploying it through AWS Marketplace will be a great way to support PMM development.

Will Percona Monitoring and Management started through AWS Marketplace be entitled to any additional Support options?

No, Percona Monitoring and Management commercial support is available with Percona Support for Open Source Databases.  If you do not have a commercial support subscription, you can get help from the community at the Percona Forums.

What will happen to Percona Monitoring Instances started from AWS Marketplace which are already up and running?

As new pricing goes into effect on June 1st, AWS will give you 90 days’ notice before applying new prices.  If you want to avoid the surcharge, you can move your installation to a Docker-based EC2 install.

What Could AWS Do Better?

It would be great if AWS would develop some sort of affiliate program for Open Source projects, which would allow them to get a share from the value they create for AWS by driving additional infrastructure spend without having to resort to added costs. I believe this would be a win-win for Open Source projects, especially smaller ones, and AWS.
