Apr 16, 2021

Integrated Alerting Design in Percona Monitoring and Management


Percona Monitoring and Management 2.13 (PMM) introduced the Integrated Alerting feature as a technical preview. It adds a user-friendly way to set up and manage alerts for your databases. You can read more about using this feature in our announcement blog post and in our documentation; in this article, we will focus on design and implementation details.

Entities

There are four basic entities used for Integrated Alerting (IA): Alert Rule Template, Alert Rule, Alert, and Notification Channel.

Everything starts from the alert rule template. You can see its YAML representation below:

---
templates:
 - name: pmm_mongodb_high_memory_usage
   version: 1
   summary: Memory used by MongoDB
   expr: |-
     sum by (node_name) (mongodb_ss_mem_resident * 1024 * 1024)
     / on (node_name) (node_memory_MemTotal_bytes)
     * 100
     > [[ .threshold ]]
   params:
     - name: threshold
       summary: A percentage from configured maximum
       unit: "%"
       type: float
       range: [0, 100]
       value: 80
   for: 5m
   severity: warning
   labels: 
     custom_label: demo
   annotations:
     summary: MongoDB high memory usage ({{ $labels.service_name }})
     description: |-
       {{ $value }}% of memory (more than [[ .threshold ]]%) is used
       by {{ $labels.service_name }} on {{ $labels.node_name }}.

A template serves as the base for alert rules. Let’s look at the fields it defines:

  • name: uniquely identifies template (required)
  • version: defines template format version (required)
  • summary: a template description (required)
  • expr: a MetricsQL query string with parameter placeholders. MetricsQL is backward compatible with PromQL and provides some additional features. (required)
  • params: contains parameter definitions required for the query. Each parameter has a name, type, and summary. It also may have a unit, available range, and default value.
  • for: the duration for which the expression must be true; the alert query should return a positive result for this period of time, at which point the alert will be fired (required)
  • severity: specifies default alert severity level (required)
  • labels: are additional labels to be added to generated alerts (optional)
  • annotations: are additional annotations to be added to generated alerts. (optional)

A template is designed to be re-used as the basis for multiple alert rules, so from a single pmm_node_high_cpu_load template you can have alerts for production vs. non-production, warning vs. critical, etc.


Users can create alert rules from templates. An alert rule is what’s actually executed against metrics and what produces an alert. The rule can override default values specified in the template, add filters to apply the rule only to the required services/nodes/etc., and specify target notification channels, such as email, Slack, PagerDuty, or webhooks. If the rule has no associated notification channels, its alerts will be available only via the PMM UI. It’s useful to note that after creation the rule keeps its relation to the template, and any change in the template will affect all related rules.

Here is an alert rule example:

---
groups:
 - name: PMM Integrated Alerting
   rules:
     - alert: /rule_id/c8e5c559-ffba-43ed-847b-921f69c031a9
       rule: test
       expr: |-
         sum by (node_name) (mongodb_ss_mem_resident * 1024 * 1024)
         / on (node_name) (node_memory_MemTotal_bytes)
         * 100
         > 40
       for: 5s
       labels:
         ia: "1"
         rule_id: /rule_id/c8e5c559-ffba-43ed-847b-921f69c031a9
         severity: error
         template_name: pmm_mongodb_high_memory_usage
          custom_label: demo
       annotations:
          description: |-
            {{ $value }}% of memory (more than 40%) is used
            by {{ $labels.service_name }} on {{ $labels.node_name }}.
          summary: MongoDB high memory usage ({{ $labels.service_name }})

It follows the Prometheus alert rule format.

How it Works

The Integrated Alerting feature is built on top of Prometheus Alertmanager, the VictoriaMetrics time series database (TSDB), and VMAlert.

VictoriaMetrics TSDB is the main metrics storage in PMM, VMAlert is responsible for alert rule execution, and Prometheus Alertmanager is responsible for alert delivery. VMAlert runs queries against the VictoriaMetrics TSDB, checks if they are positive for the specified amount of time (for example: MySQL is down for 5 minutes), and triggers alerts. All alerts are forwarded to the internal PMM Alertmanager, but they can also be duplicated to an external Alertmanager (which can be set up on the PMM Settings page).

There are four available template sources:

  1. Built-in templates, shipped with the PMM distribution. They are embedded into the managed binary (the core component of PMM).
  2. Percona servers. This source is not available yet, but it will be similar to the STT checks delivery mechanism (HTTPS + file signatures).
  3. Templates created by the user via PMM UI. We persist them in PMM’s database.
  4. Templates created by the user as files in the /srv/ia/templates directory.

During PMM startup, managed loads templates from all sources into memory.
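For example, a custom template can be added by dropping a file into that directory. Here is a minimal sketch, assuming PMM Server runs as a Docker container named pmm-server and my_template.yml follows the template format shown earlier:

# Copy the template file into the templates directory inside the PMM Server container.
docker cp my_template.yml pmm-server:/srv/ia/templates/
# Templates from files are loaded during startup, so restart the container to pick the file up.
docker restart pmm-server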

Alert rules can be created via the PMM UI or just by putting rule files in the /srv/prometheus/rules directory. Alert rules created via the UI are persisted in PMM’s internal PostgreSQL database. For each alert rule from the DB, the managed binary creates a YAML file in /etc/ia/rules/ and asks VMAlert to reload the configuration and reread the rule files. VMAlert executes the query from each loaded alert rule every minute; once the rule condition is met (the query is positive for the specified amount of time), VMAlert produces an alert and passes it to Alertmanager. Please note that /etc/ia/rules/ is controlled by managed, and any manual changes in that directory will be lost.

Managed generates configuration for Alertmanager and updates it once any related entity changes.

Managed goes through the list of the existing rules and collects unique notification channel combinations. For example, if we have two rules and each of them is assigned channels a, b, and c, that is one unique channel combination. For each rule, managed generates a route, and for each unique channel combination, it generates a receiver in the Alertmanager configuration file. Each route has a target receiver and a filter by rule ID, and it can also contain user-defined filters. If a rule has no assigned notification channels, then a special empty receiver is used. Users can redefine the empty receiver with Alertmanager’s base configuration file /srv/alertmanager/alertmanager.base.yml. When a notification channel is disabled, managed recollects the unique channel combinations excluding disabled channels and regenerates the receivers and routing rules. If a rule has only one specified channel and it was disabled, then a special disabled receiver is used instead. Unlike the empty receiver, the disabled receiver can’t be redefined by the user and always means “do nothing”. This prevents unexpected behavior after a channel is disabled. After each Alertmanager configuration update, managed asks Alertmanager to reload it.
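To illustrate, here is a rough sketch of what such a generated configuration could look like. The receiver names, channel settings, and exact field layout are assumptions for illustration only, not the literal output of managed:

route:
  receiver: empty    # used when a rule has no channels; can be redefined in alertmanager.base.yml
  routes:
    - match:
        rule_id: /rule_id/c8e5c559-ffba-43ed-847b-921f69c031a9
      receiver: slack-email    # receiver generated for this rule's channel combination
receivers:
  - name: empty
  - name: disabled    # used when all of a rule's channels are disabled; cannot be redefined
  - name: slack-email
    slack_configs:
      - channel: "#alerts"
        api_url: https://hooks.slack.com/services/XXX    # placeholder
    email_configs:
      - to: dba-team@example.com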

When Alertmanager receives an alert from VMAlert, it uses the routes to find an appropriate receiver and forwards the alert to the destination channels. The user can also observe alerts via the PMM UI; in that case, managed gets all available alerts from the Alertmanager API and applies the required filters before showing them.

Conclusion

The Integrated Alerting feature has many moving parts, and functionally it’s mostly about managing configuration for different components and making them work together. It provides a really nice way to be aware of important events in your system. While it’s still in the tech preview state, it’s already helpful. With built-in templates, it’s easy to try without diving into documentation about Prometheus queries and other details. So please try it and tell us about your experience. What parameters of a system would you like to have covered with templates? What use cases do you have for alerting? We will be happy to receive any feedback.

Apr 16, 2021

Add Microsoft Azure Monitoring Within Percona Monitoring and Management 2.16.0


The Microsoft Azure SQL Database is among the most popular databases of 2020, according to DB-Engines’ DBMS of the Year award. Also, it’s steadily climbing up in the DB-Engines Ranking. The ranking is updated monthly and places database management systems according to their popularity. In case you didn’t know, DB-Engines is an initiative to collect and present information on database management systems (DBMS).

So we are excited to share that you can now monitor Azure instances in the Percona Monitoring and Management 2.16.0 (PMM) release. PMM can collect Azure DB metrics as well as available system metrics.

Only basic metrics are provided by the Azure Portal.


No disk, virtual CPU, or RAM data is available in PMM dashboards. Here is an example of a home page with a monitored Azure service; it’s shown in the middle row.

Microsoft Azure Monitoring Within Percona Monitoring and Management
DB metrics are collected by exporters from services directly. It allows you to have all possible metrics. You can find some screenshots of MySQL and PostgreSQL dashboards at the end of this blog post.

Simple Steps to Add an Azure DB Service and Get Metrics in PMM

  • The feature is a technical preview and has to be enabled on the Settings page. Turning this feature OFF will not remove added services from monitoring; it will just hide the ability to discover and add new Microsoft Azure services.

Add an Azure DB Service PMM
This feature is a technical preview because we are releasing it as soon as possible to get feedback from users. We expect to do more work on this feature to make it more API- and resource-efficient.

  • Go to page “Add Instance” (Configuration … PMM Inventory … Add instance)

  • Press the button “Microsoft Azure MySQL or PostgreSQL” and fill in the requested Azure and DB credentials.

Microsoft Azure MySQL or PostgreSQL

Please follow the link “Where do I get the security credentials for my Azure DB instance” if some credential parameters are missing.

Also, please keep in mind that a separate node will be created for each service. It’s named after the service hostname and can’t be changed, but you may specify a service name when adding service details. By default, the node and service names are equal.

That’s it. You may go to the list of dashboards and observe collected data.

If you are a Microsoft Azure user or are going to become one, please give Percona Monitoring and Management a test run. We are always open to suggestions and proposals. Please contact us, leave a message on our forum, or join our Slack channel.

Here are screenshots of the “MySQL Instance Summary” and “PostgreSQL Instance Summary” dashboards for Azure instances.

 

Read more about the release of Percona Monitoring and Management 2.16 and all the exciting new features included with it!


Apr 16, 2021

Percona Monitoring and Management 2.16 Brings Microsoft Azure Monitoring via a Technical Preview


This week we released Percona Monitoring and Management 2.16 (PMM), which brings some exciting new additions we’d like to highlight!

Amazon RDS PostgreSQL Monitoring

AWS monitoring in PMM now covers the PostgreSQL RDS and PostgreSQL Aurora instance types. PMM will include them in the Discovery UI where they can be added, which will result in node-related metrics as well as PostgreSQL database performance metrics. Before this release, this was available only for MySQL-related instances from Amazon RDS.

Security Threat Tool Scheduling

Security Threat Tool users are now able to control the Security Check execution time intervals for groups of checks, move checks between groups, and disable individual checks if necessary, allowing for an even more configurable experience for users.

Microsoft Azure Discovery and Node Metrics Extraction

Percona Monitoring and Management now monitors Azure instances and can collect Azure DB metrics as well as available System metrics. (Please note that only basic metrics are provided by Azure Portal.)

This means that as of today our Technical Preview has PMM providing the same level of support for Microsoft Azure Database as a Service (DBaaS) as we have for AWS’s DBaaS (RDS/Aurora on MySQL or PostgreSQL). Users are able to easily discover and add Azure databases for monitoring by PMM complete with node-level monitoring. This feature is available only if you explicitly activate it on the PMM Settings page. Deactivating it will not remove added services from monitoring, but will just hide the ability to discover and add new Microsoft Azure Services. Read more about Microsoft Azure monitoring within Percona Monitoring and Management.

Percona Monitoring and Management 2.16


Improvements to Integrated Alerting within PMM

The PMM 2.16 release also brings numerous improvements to the Technical Preview of Integrated Alerting within Percona Monitoring and Management. You can read more on the design and implementation details of this work at that link.

Additional PMM 2.16 release highlights include…

Support for pg_stat_monitor v0.8

Technical Preview: Added compatibility with the pg_stat_monitor plugin v0.8.0. This does not yet expose the plugin’s new features in PMM, but it ensures Query Analytics metrics are collected to the same degree they were with version 0.6.0 of the plugin.

[DBaaS] Resource planning and prediction (Resource calculator)

The Preview of DBaaS in PMM: While creating a DB cluster, a user can see a prediction of the resources this cluster will consume with all components, as well as the current total and available resources in the Kubernetes cluster. Users will be warned if the DB cluster they attempt to create may fail because of the resources available in the Kubernetes cluster.

[DBaaS] Percona Server for MongoDB 1.7.0 Operator Support

The Preview of DBaaS in PMM will be using the recently-released Percona Kubernetes Operator for Percona Server for MongoDB 1.7.0 to create MongoDB clusters.

Conclusion

The release of PMM 2.16 includes many impressive enhancements AND brand new features for our user base. We hope as always that you will continue to let us know your thoughts on these new PMM v2 features as well as any ideas you have for improvement!

Download and try Percona Monitoring and Management today! Read the PMM 2.16 full release notes.

 

Apr 16, 2021

Monitoring OPNSense Firewall with Percona Monitoring and Management


I try to use open source software when a good solution exists, so for my home firewall I’m using OPNSense – a very powerful FreeBSD-based firewall appliance with great features and a powerful GUI.

One of the plugins available with OPNSense is node_exporter, which exposes a lot of operating system metrics through the Prometheus protocol.

Installing this plugin will allow you to monitor your OPNSense-based firewall with any Prometheus-compatible system including, as you may have guessed, Percona Monitoring and Management (PMM).

For best results, you will need PMM 2.14 or later, as it has improved support for external exporters.

Adding OPNSense to PMM for monitoring requires just one simple command:

 

pmm-admin add external-serverless --url=http://10.11.13.1:9100/metrics --external-name fw01 --group opnsense

 

Let’s break down what this command does:

  • We are adding this as a “serverless” exporter because there are no pmm-agent processes running on that node, and the only access we have to it is through the Prometheus protocol.
  • 10.11.13.1 is the IP of the firewall.  Port 9100 is what OPNSense uses by default.
  • I chose to name this firewall “fw01” for monitoring purposes; this is how it will be identified in PMM.
  • We put it in the group “opnsense”, which will allow us to easily have dashboards focused on OPNSense firewalls only, without accidentally picking up data from other services.

If you prefer, you can also add it through your PMM installation’s UI instead (see PMM -> Add Instance menu) and pick “External Service”.

PMM External Service

 

After this step, we will already have some information available in our PMM installation.

 

PMM dashboard

 

The Node Summary Dashboard will pick up some of the OS metrics; however, as this dashboard is built with a focus on Linux rather than FreeBSD, not all data will be populated or tested to be correct, and this should be seen as a lucky coincidence rather than an expected outcome.

The next step you can take is to check whether there are any dashboards available for the system you’re looking to monitor. A quick search located this dashboard on the Grafana website.

While this dashboard was a good start, it relied on very particular naming of the hosts in order to work and had some bugs which needed fixing. If a given dashboard was not designed to work with PMM, you also often need to make some adjustments because PMM applies different labels to the metrics compared to a vanilla Prometheus installation.

I uploaded an updated dashboard back to the Grafana website.

This makes installing it with PMM very easy; just go to Import Dashboard and enter Dashboard ID 14150.

 

import dashboard percona monitoring and management

 

Once the dashboard is imported, you will see a variety of data the OPNSense built-in node_exporter provides:

 

OpnSense built-in node_exporter

 

That’s it!


Percona Monitoring and Management is free to download and use. Try it today!

Mar 31, 2021

Webinar April 14: Optimize and Troubleshoot MySQL Using Percona Monitoring and Management


Optimizing MySQL performance and troubleshooting MySQL problems are two of the most critical and challenging tasks for MySQL DBAs. The databases powering applications need to be able to handle changing traffic workloads while remaining responsive and stable in order to deliver an excellent user experience. Further, DBAs are also expected to find cost-efficient means of solving these issues.

In this webinar, we will demonstrate the advanced options of Percona Monitoring and Management v2, built on free and open source software, that enable you to solve these challenges. We will look at specific, common MySQL problems and review them.

Please join Peter Zaitsev on Wednesday, April 14th, 2021, at 11 am EDT for his webinar Optimize and Troubleshoot MySQL using Percona Monitoring and Management (PMM).

Register for Webinar

If you can’t attend, sign up anyway, and we’ll send you the slides and recording afterward.

Mar 30, 2021

How To Automate Dashboard Importing in Percona Monitoring and Management


In this blog post, I’ll look at how to import custom dashboards into Percona Monitoring and Management (PMM) 2.x and give some tips on how to automate it.

The number of dashboards in PMM2 is constantly growing. For example, we recently added a new HAProxy dashboard to the latest 2.15.0 release. Even though the PMM server has more than fifty dashboards, it’s not possible to cover all common server applications.

The greatest source of dashboards is the official Grafana site. Here, anyone can share their own dashboards with the community or find already uploaded ones. Percona has its own account and publishes as-yet-unreleased or unique (non-PMM) dashboards.

Each dashboard has its own number which can be used to refer to it. For example, 12630 is assigned to the dashboard “MySQL Query Performance Troubleshooting”.
Percona Monitoring and Management Dashboard

You can download dashboards as JSON files and import them into your PMM2 installation using the UI.

PMM2
This is easy, but we are forgetting that dashboards can be updated by publishers as new revisions. So it’s possible that the dashboard has a bunch of useful changes that were published after you downloaded it, but you keep using an old version of the dashboard.

So the only way to use the latest dashboard version is to check the site from time to time. It can really be a pain in the neck, especially if you have to track more than one dashboard.

This is why it’s time to take a look at automation. Grafana has a very powerful API that I used to create this shell script. Let’s take a peek at it. It’s based on the api/dashboards/import API function. The function requires a POST request with a dashboard body.

The first step is to download a dashboard.

curl -s https://grafana.com/api/dashboards/12630/revisions/1/download --output 12630_rev1.json

Note how I used dashboard number 12630 and revision 1 in the command. By increasing the revision number I can find out the latest available dashboard version. This is exactly the approach used in the script.
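Here is a minimal sketch of that approach in shell, assuming grafana.com returns a non-200 HTTP status for a revision that does not exist:

# Probe increasing revision numbers until the download stops returning HTTP 200.
dash=12630
rev=1
while [ "$(curl -s -o /dev/null -w '%{http_code}' https://grafana.com/api/dashboards/${dash}/revisions/$((rev + 1))/download)" = "200" ]; do
  rev=$((rev + 1))
done
echo "Latest available revision of dashboard ${dash}: ${rev}"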

In the next example, I’ll use a dashboard from our dashboard repository. (I will explain why later.)

curl -L -k https://github.com/percona/grafana-dashboards/raw/PMM-2.0/dashboards/Disk_Details.json --output Disk_Details.json

Now I have a file and can form a POST request to import the dashboard into a PMM installation.

$ curl -s -k -X POST -H "Content-Type: application/json" -d "{\"dashboard\":$(cat Disk_Details.json),\"overwrite\":true}" -u admin:admin https://18.218.63.13/graph/api/dashboards/import


The dashboard has been uploaded. If you take a look at the output you may notice the parameter folderId. With this, it’s possible to specify a Grafana folder for my dashboards.

Here is the command for fetching a list of existing folders.

curl -s -k -u admin:admin https://172.20.0.1/graph/api/folders

I now have folder IDs and can use them in the importing command. The Folder ID should be specified in a POST request as shown in the next example.
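Here is a sketch of the earlier import request with a target folder added (folder ID 8 is just an illustrative value taken from the folders output of your own installation):

$ curl -s -k -X POST -H "Content-Type: application/json" -d "{\"dashboard\":$(cat Disk_Details.json),\"folderId\":8,\"overwrite\":true}" -u admin:admin https://18.218.63.13/graph/api/dashboards/import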


Now that you are familiar with the API import commands, I’ll give you a closer look at community dashboards.

Most of them have the parameter “Data Sources”. This means that to import a dashboard, you have to specify the data source names assigned by your installation.


This makes it impossible to import any downloaded dashboard with the API without modifying it. If I execute the import command used earlier (with the 12630_rev1.json file downloaded from Grafana.com), I will get an error.


So, here’s another script (cleanup-dash.py) that replaces the datasource fields in dashboards and allows the importing command to succeed. The script takes a dashboard file name as a parameter.
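As a rough illustration of the idea only (this is not the actual cleanup-dash.py), a similar substitution could be done with jq, assuming “Metrics” is the name of the metrics data source in your PMM installation:

# Point every "datasource" field in the dashboard JSON at the PMM data source name.
jq '(.. | objects | select(has("datasource")) | .datasource) = "Metrics"' 12630_rev1.json > 12630_rev1_fixed.json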


The importing script calls cleanup-dash.py automatically if an initial importing attempt was unsuccessful.

Note the parameters of the importing script. Here you should set the details of your PMM installation; dashboards is an array of dashboard IDs that you want to import into PMM2.

#!/bin/bash
dashboards=(13266 12630 12470)
pmm_server="172.20.0.1"
user_pass="admin:admin"
folderName="General"

Now, you should download both scripts and try to import dashboards. Make sure that both scripts are executable and in the same folder. Here are the commands to do it.

curl -Ls https://github.com/Percona-Lab/pmm-dashboards/raw/master/misc/import-dashboard-grafana-cloud.sh --output import-dashboard-grafana-cloud.sh
curl -Ls https://github.com/Percona-Lab/pmm-dashboards/raw/master/misc/cleanup-dash.py --output cleanup-dash.py

chmod a+x import-dashboard-grafana-cloud.sh
chmod a+x cleanup-dash.py

./import-dashboard-grafana-cloud.sh

You can next find the imported dashboards in your PMM installation. They were put into the ‘Insight’ folder and can be found by the keyword ‘PMM2’.

imported PMM dashboards

By default, the script imports all dashboards designed for PMM2 from the Percona account. Also, folder names and dashboard IDs can be specified as parameters for the script.

Here are some usage examples:

import-dashboard-grafana-cloud.sh – the default list of dashboards will be uploaded into the General folder
import-dashboard-grafana-cloud.sh Insight – the default list of dashboards will be uploaded into the Insight folder
import-dashboard-grafana-cloud.sh 13266 12630 12470 – dashboards 13266, 12630, and 12470 will be uploaded into the General folder
import-dashboard-grafana-cloud.sh Insight 13266 12630 12470 – dashboards 13266, 12630, and 12470 will be uploaded into the Insight folder

You can define any number of dashboards in the script parameters and run the script periodically to always have the most recent dashboard versions.


Percona Monitoring and Management is free to download and use. Try it today!

Mar 10, 2021

A Peek at Percona Kubernetes Operator for Percona Server for MongoDB New Features


The latest 1.7.0 release of Percona Kubernetes Operator for Percona Server for MongoDB came out just recently and enables users to:

  • scale horizontally with multiple shards
  • extend pods with custom sidecar containers
  • automatically clean up Persistent Volume Claims (PVCs) after cluster deletion

Today we will look into these new features and their use cases, and highlight some architectural and technical decisions we made when implementing them.

Sharding

The 1.6.0 release of our Operator introduced single shard support, which we highlighted in this blog post and explained why it makes sense. But horizontal scaling is not possible without support for multiple shards.

Adding a Shard

A new shard is just a new ReplicaSet which can be added under spec.replsets in cr.yaml:

spec:
  ...
  replsets:
  - name: rs0
    size: 3
  ....
  - name: rs1
    size: 3
  ...

Read more on how to configure sharding.

In the Kubernetes world, a MongoDB ReplicaSet is a StatefulSet with the number of pods specified in the spec.replsets.[].size variable.
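For example, once a cluster is deployed, the resulting StatefulSets can be listed directly (a quick check, assuming the default namespace and kubectl pointed at the right cluster):

kubectl get statefulsets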

Once pods are up and running, the Operator does the following:

  • Initiates ReplicaSet by connecting to newly created pods running mongod
  • Connects to mongos and adds a shard with sh.addShard() command

    adding a shard mongodb operator

Then the output of db.adminCommand({ listShards:1 }) will look like this:

        "shards" : [
                {
                        "_id" : "replicaset-1",
                        "host" : "replicaset-1/percona-cluster-replicaset-1-0.percona-cluster-replicaset-1.default.svc.cluster.local:27017,percona-cluster-replicaset-1-1.percona-cluster-replicaset-1.default.svc.cluster.local:27017,percona-cluster-replicaset-1-2.percona-cluster-replicaset-1.default.svc.cluster.local:27017",
                        "state" : 1
                },
                {
                        "_id" : "replicaset-2",
                        "host" : "replicaset-2/percona-cluster-replicaset-2-0.percona-cluster-replicaset-2.default.svc.cluster.local:27017,percona-cluster-replicaset-2-1.percona-cluster-replicaset-2.default.svc.cluster.local:27017,percona-cluster-replicaset-2-2.percona-cluster-replicaset-2.default.svc.cluster.local:27017",
                        "state" : 1
                }
        ],


Deleting a Shard

Percona Operators are built to simplify the deployment and management of the databases on Kubernetes. Our goal is to provide resilient infrastructure, but the operator does not manage the data itself. Deleting a shard requires moving the data to another shard before removal, but there are a couple of caveats:

  • Sometimes data is not moved automatically by MongoDB – unsharded collections or jumbo chunks
  • We may hit a storage problem – what if another shard does not have enough disk space to hold the data?

shard does not have enough disk space to hold the data

There are a few choices:

  1. Do not touch the data. The user needs to move the data manually and then the operator removes the empty shard.
  2. The operator decides where to move the data and deals with storage issues by upscaling if necessary.
    • Upscaling the storage can be tricky, as it requires certain capabilities from the Container Storage Interface (CSI) and the underlying storage infrastructure.

For now, we decided to pick option #1 and won’t touch the data, but in future releases, we would like to work with the community to introduce fully-automated shard removal.

When the user wants to remove the shard now, we first check if there are any non-system databases present on the ReplicaSet. If there are none, the shard can be removed:

func (r *ReconcilePerconaServerMongoDB) checkIfPossibleToRemove(cr *api.PerconaServerMongoDB, usersSecret *corev1.Secret, rsName string) error {
  systemDBs := map[string]struct{}{
    "local":  {},
    "admin":  {},
    "config": {},
  }

delete a shard

Custom Sidecars

The sidecar container pattern allows users to extend the application without changing the main container image. Sidecars leverage the fact that all containers in the pod share storage and network resources.

Percona Operators have built-in support for Percona Monitoring and Management to gain monitoring insights for the databases on Kubernetes, but sometimes users may want to expose metrics to other monitoring systems. Let’s see how mongodb_exporter can expose metrics while running as a sidecar along with ReplicaSet containers.

1. Create the monitoring user that the exporter will use to connect to MongoDB. Connect to mongod in the container and create the user:

> db.getSiblingDB("admin").createUser({
    user: "mongodb_exporter",
    pwd: "mysupErpassword!123",
    roles: [
      { role: "clusterMonitor", db: "admin" },
      { role: "read", db: "local" }
    ]
  })

2. Create the Kubernetes secret with this login and password. Encode both the username and password with base64:

$ echo -n mongodb_exporter | base64
bW9uZ29kYl9leHBvcnRlcg==
$ echo -n 'mysupErpassword!123' | base64
bXlzdXBFcnBhc3N3b3JkITEyMw==

Put these into the secret and apply:

$ cat mongoexp_secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: mongoexp-secret
data:
  username: bW9uZ29kYl9leHBvcnRlcg==
  password: bXlzdXBFcnBhc3N3b3JkITEyMw==

$ kubectl apply -f mongoexp_secret.yaml

3. Add a sidecar for mongodb_exporter into cr.yaml and apply:

replsets:
- name: rs0
  ...
  sidecars:
  - image: bitnami/mongodb-exporter:latest
    name: mongodb-exporter
    env:
    - name: EXPORTER_USER
      valueFrom:
        secretKeyRef:
          name: mongoexp-secret
          key: username
    - name: EXPORTER_PASS
      valueFrom:
        secretKeyRef:
          name: mongoexp-secret
          key: password
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
    - name: MONGODB_URI
      value: "mongodb://$(EXPORTER_USER):$(EXPORTER_PASS)@$(POD_IP):27017"
    args: ["--web.listen-address=$(POD_IP):9216"]

$ kubectl apply -f deploy/cr.yaml

All it takes now is to configure the monitoring system to fetch the metrics for each mongod Pod. For example, prometheus-operator will start fetching metrics once annotations are added to ReplicaSet pods:

replsets:
- name: rs0
  ...
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/port: '9216'

PVCs Clean Up

Running CI/CD pipelines that deploy MongoDB clusters on Kubernetes is a common thing. Once these clusters are terminated, the Persistent Volume Claims (PVCs) are not removed. We have now added automation that removes PVCs after cluster deletion. We rely on Kubernetes Finalizers – asynchronous pre-delete hooks. In our case, we hook the finalizer to the Custom Resource (CR) object which is created for the MongoDB cluster.

PVCs Clean Up

A user can enable the finalizer through cr.yaml in the metadata section:

metadata:
  name: my-cluster-name
  finalizers:
     - delete-psmdb-pvc
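With the finalizer in place, deleting the cluster removes its PVCs as well. For example, assuming the psmdb short name for the custom resource and the cluster name above:

kubectl delete psmdb my-cluster-name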

Conclusion

Percona is committed to providing production-grade database deployments on Kubernetes. Our Percona Kubernetes Operator for Percona Server for MongoDB is a feature-rich tool to deploy and manage your MongoDB clusters with ease. Our Operator is free and open source. Try it out by following the documentation here, or help us make it better by contributing your code and ideas to our GitHub repository.

Mar 9, 2021

3 Percona Software Products Take the SourceForge Leader Award!

Percona Software SourceForge Award

We are so grateful to all users of our software. Thanks to you, some of them have just been recognized as a Winter 2021 category leader by SourceForge!

The SourceForge Leader Award is only awarded to select products that have attained the highest levels of praise from user reviews on SourceForge.

This is a huge achievement, as Percona Monitoring and Management and Percona Server for MongoDB have been selected as best-in-class from over 60,000 products on SourceForge. SourceForge gets over 30 million visitors per month looking for business software and solutions.


Thank you, all the users of the open source software products, for your trust and support. We highly appreciate it.

The best reviews help others by adding technical details and solutions. If you haven’t left a review for our software on SourceForge yet, we look forward to reading yours.

Percona Products Take SourceForge Leader Award

Mar
02
2021
--

Percona Monitoring and Management 2.15 Brings Even MORE Reasons to Upgrade to PMM v2!


In November of 2020, we announced that in early 2021 Percona was slated to release a version of Percona Monitoring and Management (PMM) v2 that would include all of the critical functionality users of PMM v1 have come to know and love over the years. In our initial blog, we also addressed some of the specifics related to features for which we had not yet achieved parity, such as external services, annotations, MongoDB Explain, and custom collectors per service, to name a few.

Well, friends, the time has come, and we’re happy to announce that any remaining critical parity items have been completed… but even MORE importantly, the enhancements to Percona Monitoring and Management v2 are ones you won’t want to miss out on. This means one thing: if you haven’t already — IT’S TIME TO UPGRADE!

Some of the most recent work included in the PMM 2.15 release includes:

Disable collectors while adding node/service to monitoring:

  • PMM users can disable any collector PMM utilizes to gather metrics. In certain situations, disabling the collector(s) prevents PMM from flooding logs or saves infrastructure resources if the given metrics simply aren’t needed. This is an early step towards providing our users full management capabilities when it comes to the metrics they collect. We will continue to expand this effort in future releases.

External services monitoring:

  • Prior to this release, PMM v2 did not support external services monitoring on systems that couldn’t also run the PMM client. BUT as of this week, any non-native services supported by PMM can now be monitored with external services monitoring. You can see the list of possible exporters to be used here: https://prometheus.io/docs/instrumenting/exporters/.

Provide summary information for systems (pt-*-summary actions):

  • With the addition of “pt-*-summary” in PMM v2, users can now view summary information pertaining to services and nodes within their PMM dashboard. Summary information is provided in the format of pt-*-summary tools output, in order to simplify the portability of this data. This format will also be preserved when summary information is shared with the Percona Support team, simplifying their investigations of issues.

Note: “pt-*-summary” includes formats for: 

  • pt-mysql-summary
  • pt-mongodb-summary
  • pt-pg-summary
  • pt-summary

 


 

HAProxy support by PMM

  • Users are now able to add HAProxy services to be monitored in PMM v2. This allows users who use HAProxy in their HA configuration to have this component also monitored by PMM.

As a refresher, PMM v2 users also benefit from other valuable enhancements over PMM v1, including:

  • A complete rewrite of the Query Analytics (QAN) tool, including improved speed, global sparkline hover, filtering, new dimensions to collect data, and rich searching capabilities.
  • Our Security Threat Tool (STT) so that you not only can monitor database performance but also database security vulnerabilities.
  • A robust expansion of MongoDB and PostgreSQL support (along with continued improvements for MySQL).
  • Integration with external AlertManager to create and deploy alerting and “integrated alerting” to provide native alerting inside PMM itself.
  • Global and local annotations across nodes and services to highlight key events for correlation. Get to the “WHY” and easily see changes occurring in your environment(s).

There is no better time than now to upgrade to Percona Monitoring and Management v2!

One last reminder, we are flipping the latest version flag to the PMM v2 series from PMM v1 with this release.

Please note that this does NOT mean that we are “sunsetting” PMM v1 and will no longer support that application. While we are not creating new features for PMM v1, we do continue to maintain it with critical bug fixes as needed as well as support for the product for those customers on a support contract. This maintenance and support will continue until PMM moves to version 3.x at a date to be determined in the future.

Let us know your thoughts on these new PMM v2 features as well as any ideas you have for improvement.

Download and Try Percona Monitoring and Management Today!

Read the PMM 2.15 full release notes here

Feb 25, 2021

MySQL Monitoring and Reporting Using the MySQL Shell


MySQL Shell is the advanced MySQL client, which has many excellent features. In this blog, I am going to explain the MySQL Shell commands “\show” and “\watch”. Both commands are very useful for monitoring the MySQL process, and they provide insights into the foreground and background threads as well.

Overview

“\show” and “\watch” are MySQL Shell commands that can be executed using the JavaScript (JS), Python (Py), and SQL interfaces. Both commands provide the same information; the difference is that the results are refreshed when using the “\watch” command. The default refresh interval is two seconds.

  • \show: Run the specified report using the provided options and arguments.
  • \watch: Run the specified report using the provided options and arguments, and refresh the results at regular intervals.

Below are the available options you can use with the “\show” or “\watch” command to retrieve the data.

MySQL  localhost:33060+ ssl  percona  JS > \show
Available reports: query, thread, threads.

MySQL  localhost:33060+ ssl  percona  JS > \watch
Available reports: query, thread, threads.

  • Query
  • Thread
  • Threads

“\show” with “query”

It will just execute the query provided as an argument within the double quotes and print the result. 

MySQL  localhost:33060+ ssl  percona  JS > \show query "select database()"
+------------+
| database() |
+------------+
| percona    |
+------------+
MySQL  localhost:33060+ ssl  percona  JS > \show query --vertical "select database()"
*************************** 1. row ***************************
database(): percona

You can also use the same option with the “\watch” command. Let’s say you want to monitor the processlist every two seconds; you can use a command like:

\watch query “show processlist”
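If two seconds is too frequent or not frequent enough, the refresh interval can be changed; for example, assuming the --interval option of “\watch”:

\watch query --interval=5 “show processlist”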


“\show” with “thread”

This option is designed to provide various information about a specific thread. Below are some of the important details you can retrieve for a specific thread.

  • InnoDB details ( --innodb )
  • Locks details ( --locks )
  • Prepared statement details ( --prep-stmts )
  • Client connection details ( --client )
  • Session status ( --status ) and session variables details ( --vars )

Example:

I am going to show an example for the below scenario.

At session 1:

My connection id is 121. I have started a transaction and updated the row where “id=3”, but have not yet committed or rolled back the transaction.

mysql> \r
Connection id:    121
Current database: percona

mysql> select * from herc;
+------+--------+
| id   | name   |
+------+--------+
|    1 | jc     |
|    2 | herc7  |
|    3 | sakthi |
+------+--------+
3 rows in set (0.00 sec)

mysql> begin;
Query OK, 0 rows affected (0.00 sec)

mysql> update herc set name='xxx' where id=3;
Query OK, 1 row affected (0.00 sec)
Rows matched: 1  Changed: 1  Warnings: 0

At session 2:

My connection id is 123. I have started a transaction and tried to update the same row where “id=3”. The query is still executing because the transaction from session 1 is blocking the row (id = 3).

mysql> \r
Connection id:    123
Current database: percona

mysql> begin;
Query OK, 0 rows affected (0.00 sec)

mysql> update herc set name='hercules' where id=3;

Now let’s use the command “\show thread” for both connection IDs (121, 123) and see what information we can get.

General information ( connection id = 123 ):

MySQL  localhost:33060+ ssl  JS > \show thread --cid=123 --general
GENERAL
Thread ID:                161
Connection ID:            123
Thread type:              FOREGROUND
Program name:             mysql
User:                     root
Host:                     localhost
Database:                 percona
Command:                  Query
Time:                     00:08:49
State:                    updating
Transaction state:        LOCK WAIT
Prepared statements:      0
Bytes received:           282
Bytes sent:               131
Info:                     update herc set name='hercules' where id=3
Previous statement:       NULL

From the general information, you can find some basic details about your connection.

InnoDB information:

MySQL  localhost:33060+ ssl  JS > \show thread --cid=123 --innodb
INNODB STATUS
State:                    LOCK WAIT
ID:                       28139179
Elapsed:                  00:10:23
Started:                  2021-02-23 17:40:06.000000
Isolation level:          REPEATABLE READ
Access:                   READ WRITE
Locked tables:            1
Locked rows:              1
Modified rows:            0

Using the “--innodb” option, you can find InnoDB information such as the transaction state, thread start time, elapsed time, locked tables, locked rows, and modified rows.

Locks information:

For connection id 123:

MySQL  localhost:33060+ ssl  JS > \show thread --cid=123 --locks
LOCKS
Waiting for InnoDB locks
+---------------------+----------+------------------+--------+-----+-------+----------------+---------------------+----------+
| Wait started        | Elapsed  | Locked table     | Type   | CID | Query | Account        | Transaction started | Elapsed  |
+---------------------+----------+------------------+--------+-----+-------+----------------+---------------------+----------+
| 2021-02-23 17:40:06 | 00:12:27 | `percona`.`herc` | RECORD | 121 | NULL  | root@localhost | 2021-02-23 17:39:32 | 00:13:01 |
+---------------------+----------+------------------+--------+-----+-------+----------------+---------------------+----------+

Waiting for metadata locks
N/A

Blocking InnoDB locks
N/A

Blocking metadata locks
N/A

Connection id 123 is from session 2, which is currently waiting for connection id 121 (session 1) to release the lock. Let’s see the “--locks” status for connection id 121.

MySQL  localhost:33060+ ssl  JS > \show thread --cid=121 --locks
LOCKS

Waiting for InnoDB locks
N/A

Waiting for metadata locks
N/A

Blocking InnoDB locks
+---------------------+----------+------------------+--------+-----+--------------------------------------------+
| Wait started        | Elapsed  | Locked table     | Type   | CID | Query                                      |
+---------------------+----------+------------------+--------+-----+--------------------------------------------+
| 2021-02-23 17:40:06 | 00:14:23 | `percona`.`herc` | RECORD | 123 | update herc set name='hercules' where id=3 |
+---------------------+----------+------------------+--------+-----+--------------------------------------------+

Blocking metadata locks
N/A

Here, you can find the details under “Blocking InnoDB locks”: it is blocking connection id 123 (session 2).

As in the above example, you can explore the other options as well, which are equally helpful.

“\show” with “threads”

This is very helpful for knowing the details about your ongoing threads. It provides details about both “FOREGROUND” and “BACKGROUND” threads. There are many columns which are very useful for understanding thread status. You can filter the needed columns with the option “-o”. By executing the command “\show threads --help”, you can find all the available options and their purposes.

  • It supports the WHERE clause for generating the report
  • It supports ORDER BY for generating the report
  • It supports LIMIT for generating the report. 

Below, I am sharing some examples, which will help you to understand how we can use the “threads” command with the MySQL shell.

  • How to find the running “FOREGROUND” threads details
  • How to find the running “BACKGROUND” threads details
  • How to find the top five threads, which are consuming more memory from a particular user
  • How to find the Query digest details from ongoing threads
  • How to find the top five threads which consumed huge IO operations
  • How to find the top five blocked and blocking threads

I am running sysbench against the server to put load on my database.

sysbench /usr/share/sysbench/oltp_read_write.lua --events=0 --time=30000 --mysql-host=localhost --mysql-user=root --mysql-password=Course@321 --mysql-port=3306 --delete_inserts=10 --index_updates=10 --non_index_updates=10 --report-interval=1 --threads=100 run

How to Find the Running “FOREGROUND” Threads Details

You can use the option “--foreground” to see all the running foreground threads.

MySQL  localhost:33060+ ssl  JS > \show threads --foreground
+-----+-----+-----------------+-----------+---------+---------+----------+------------------------+-----------+-------------------------------------------------------------------+-----------+
| tid | cid | user            | host      | db      | command | time     | state                  | txstate   | info                                                              | nblocking |
+-----+-----+-----------------+-----------+---------+---------+----------+------------------------+-----------+-------------------------------------------------------------------+-----------+
| 27  | 114 | root            | localhost | NULL    | Query   | 00:00:00 | executing              | NULL      | SELECT json_object('cid',t.PRO ... READ_ID = io.thread_id WHERE t | 0         |
| 42  | 5   | event_scheduler | localhost | NULL    | Daemon  | 17:42:20 | Waiting on empty queue | NULL      | NULL                                                              | 0         |
| 46  | 7   | NULL            | NULL      | NULL    | Daemon  | 17:42:20 | Suspending             | NULL      | NULL                                                              | 0         |
| 158 | 120 | root            | localhost | NULL    | Sleep   | 00:32:24 | NULL                   | NULL      |                                                                   | 0         |
. . . (rows omitted) . . .
| 260 | 222 | root            | localhost | sbtest  | Execute | 00:00:00 | updating               | LOCK WAIT | NULL                                                              | 1         |
| 261 | 223 | root            | localhost | sbtest  | Execute | 00:00:00 | updating               | LOCK WAIT | NULL                                                              | 0         |
+-----+-----+-----------------+-----------+---------+---------+----------+------------------------+-----------+-------------------------------------------------------------------+-----------+

How to Find the Running “BACKGROUND” Threads Details

This will give detailed information about the background threads, mostly InnoDB. You can use the flag “--background” to get these details. These details are really helpful for debugging performance issues.

MySQL  localhost:33060+ ssl  JS > \show threads --background
+-----+--------------------------------------+---------+-----------+------------+------------+------------+
| tid | name                                 | nio     | ioltncy   | iominltncy | ioavgltncy | iomaxltncy |
+-----+--------------------------------------+---------+-----------+------------+------------+------------+
| 1   | sql/main                             | 92333   | 192.51 ms | 229.63 ns  | 96.68 us   | 1.42 ms    |
| 3   | innodb/io_ibuf_thread                | NULL    | NULL      | NULL       | NULL       | NULL       |
| 4   | innodb/io_log_thread                 | NULL    | NULL      | NULL       | NULL       | NULL       |
| 5   | innodb/io_read_thread                | NULL    | NULL      | NULL       | NULL       | NULL       |
| 6   | innodb/io_read_thread                | NULL    | NULL      | NULL       | NULL       | NULL       |
| 7   | innodb/io_read_thread                | NULL    | NULL      | NULL       | NULL       | NULL       |
| 8   | innodb/io_read_thread                | NULL    | NULL      | NULL       | NULL       | NULL       |
| 9   | innodb/io_write_thread               | 37767   | 45.83 s   | 1.26 us    | 1.21 ms    | 17.81 ms   |
| 10  | innodb/io_write_thread               | 36763   | 44.57 s   | 1.23 us    | 1.21 ms    | 30.11 ms   |
| 11  | innodb/io_write_thread               | 37989   | 45.87 s   | 1.26 us    | 1.21 ms    | 24.03 ms   |
| 12  | innodb/io_write_thread               | 37745   | 45.78 s   | 1.23 us    | 1.21 ms    | 28.93 ms   |
| 13  | innodb/page_flush_coordinator_thread | 456128  | 2.19 min  | 5.27 us    | 419.75 us  | 29.98 ms   |
| 14  | innodb/log_checkpointer_thread       | 818     | 479.84 ms | 2.62 us    | 710.63 us  | 9.26 ms    |
| 15  | innodb/log_flush_notifier_thread     | NULL    | NULL      | NULL       | NULL       | NULL       |
| 16  | innodb/log_flusher_thread            | 1739344 | 41.71 min | 1.46 us    | 1.44 ms    | 30.22 ms   |
| 17  | innodb/log_write_notifier_thread     | NULL    | NULL      | NULL       | NULL       | NULL       |
| 18  | innodb/log_writer_thread             | 5239157 | 10.23 min | 1.14 us    | 117.16 us  | 29.02 ms   |
| 19  | innodb/srv_lock_timeout_thread       | NULL    | NULL      | NULL       | NULL       | NULL       |
| 20  | innodb/srv_error_monitor_thread      | NULL    | NULL      | NULL       | NULL       | NULL       |
| 21  | innodb/srv_monitor_thread            | NULL    | NULL      | NULL       | NULL       | NULL       |
| 22  | innodb/buf_resize_thread             | NULL    | NULL      | NULL       | NULL       | NULL       |
| 23  | innodb/srv_master_thread             | 270     | 4.02 ms   | 6.75 us    | 14.90 us   | 41.74 us   |
| 24  | innodb/dict_stats_thread             | 3088    | 429.12 ms | 3.22 us    | 138.96 us  | 5.93 ms    |
| 25  | innodb/fts_optimize_thread           | NULL    | NULL      | NULL       | NULL       | NULL       |
| 26  | mysqlx/worker                        | NULL    | NULL      | NULL       | NULL       | NULL       |
| 28  | mysqlx/acceptor_network              | NULL    | NULL      | NULL       | NULL       | NULL       |
| 32  | innodb/buf_dump_thread               | 1060    | 7.61 ms   | 2.74 us    | 7.18 us    | 647.18 us  |
| 33  | innodb/clone_gtid_thread             | 4       | 689.86 us | 4.46 us    | 172.46 us  | 667.95 us  |
| 34  | innodb/srv_purge_thread              | 7668    | 58.21 ms  | 3.34 us    | 336.20 us  | 1.64 ms    |
| 35  | innodb/srv_worker_thread             | 30      | 278.22 us | 5.57 us    | 9.27 us    | 29.69 us   |
| 36  | innodb/srv_purge_thread              | NULL    | NULL      | NULL       | NULL       | NULL       |
| 37  | innodb/srv_worker_thread             | NULL    | NULL      | NULL       | NULL       | NULL       |
| 38  | innodb/srv_worker_thread             | 24      | 886.23 us | 5.24 us    | 36.93 us   | 644.75 us  |
| 39  | innodb/srv_worker_thread             | NULL    | NULL      | NULL       | NULL       | NULL       |
| 40  | innodb/srv_worker_thread             | 22      | 223.92 us | 5.84 us    | 10.18 us   | 18.34 us   |
| 41  | innodb/srv_worker_thread             | NULL    | NULL      | NULL       | NULL       | NULL       |
| 43  | sql/signal_handler                   | NULL    | NULL      | NULL       | NULL       | NULL       |
| 44  | mysqlx/acceptor_network              | NULL    | NULL      | NULL       | NULL       | NULL       |
+-----+--------------------------------------+---------+-----------+------------+------------+------------+

How to Find the Top Five Threads, Which are Consuming More Memory From a Particular User

In the example below, I am finding the top five threads consuming the most memory for the user “root”.

MySQL  localhost:33060+ ssl  JS > \show threads --foreground -o tid,user,memory,started --order-by=memory --desc --where "user = 'root'" --limit=5
+-----+------+----------+---------------------+
| tid | user | memory   | started             |
+-----+------+----------+---------------------+
| 247 | root | 9.47 MiB | 2021-02-23 18:30:29 |
| 166 | root | 9.42 MiB | 2021-02-23 18:30:29 |
| 248 | root | 9.41 MiB | 2021-02-23 18:30:29 |
| 186 | root | 9.39 MiB | 2021-02-23 18:30:29 |
| 171 | root | 9.38 MiB | 2021-02-23 18:30:29 |
+-----+------+----------+---------------------+

How to Find the Query Digest Details From Ongoing Threads

You can use the options “digest” and “digesttxt” to find the digest output of the running threads.

MySQL  localhost:33060+ ssl  JS > \show threads -o tid,cid,info,digest,digesttxt --where "digesttxt like 'UPDATE%'" --vertical
*************************** 1. row ***************************
      tid: 161
      cid: 123
     info: update herc set name='hercules' where id=3
   digest: 7832494e46eee2b28a46dc1fdae2e1b18d1e5c00d42f56b5424e5716d069fd39
digesttxt: UPDATE `herc` SET NAME = ? WHERE `id` = ?

How to Find the Top Five Threads Which Consumed Huge IO Operations

MySQL  localhost:33060+ ssl  JS > \show threads -o tid,cid,nio --order-by=nio --desc --limit=5
+-----+-----+-------+
| tid | cid | nio   |
+-----+-----+-------+
| 27  | 114 | 36982 |
| 238 | 200 | 2857  |
| 215 | 177 | 2733  |
| 207 | 169 | 2729  |
| 232 | 194 | 2724  |
+-----+-----+-------+

nio – The total number of IO events for the thread.

How to Find the Top Five Blocked and Blocking Threads

  • nblocked – The number of other threads blocked by the thread
  • nblocking – The number of other threads blocking the thread
  • ntxrlckd – The approximate number of rows locked by the current InnoDB transaction

Blocking threads:

MySQL  localhost:33060+ ssl  JS > \show threads -o tid,cid,nblocked,nblocking,ntxrlckd,txstate --order-by=nblocking --desc --limit 5
+-----+-----+----------+-----------+----------+-----------+
| tid | cid | nblocked | nblocking | ntxrlckd | txstate   |
+-----+-----+----------+-----------+----------+-----------+
| 230 | 192 | 0        | 7         | 5        | LOCK WAIT |
| 165 | 127 | 0        | 6         | 2        | LOCK WAIT |
| 215 | 177 | 0        | 5         | 9        | LOCK WAIT |
| 221 | 183 | 0        | 4         | NULL     | NULL      |
| 233 | 195 | 1        | 4         | NULL     | NULL      |
+-----+-----+----------+-----------+----------+-----------+

Blocked threads:

MySQL  localhost:33060+ ssl  JS > \show threads -o tid,cid,nblocked,nblocking,ntxrlckd,txstate --order-by=nblocked --desc --limit 5
+-----+-----+----------+-----------+----------+-----------+
| tid | cid | nblocked | nblocking | ntxrlckd | txstate   |
+-----+-----+----------+-----------+----------+-----------+
| 203 | 165 | 15       | 0         | 8        | LOCK WAIT |
| 181 | 143 | 10       | 1         | 5        | LOCK WAIT |
| 223 | 185 | 9        | 0         | 8        | LOCK WAIT |
| 209 | 171 | 9        | 1         | 5        | LOCK WAIT |
| 178 | 140 | 6        | 0         | 7        | LOCK WAIT |
+-----+-----+----------+-----------+----------+-----------+

As you can see, there are many options to explore, and you can generate reports based on your requirements. I hope this blog post is helpful for understanding the “\show” and “\watch” commands in the MySQL Shell!
