Attaching a Percona Monitoring and Management Graph Image Along with an Alerting Notification

This article will be helpful if you use a Percona Monitoring and Management (PMM) instance with alert notifications, as it is nice to capture an image of the graph when you receive an alert. We will see how to capture and attach the image of the graph when receiving the alert notification (email, Telegram, Slack, […]

Setting Up Percona Monitoring and Management Alerts for External Channels (Telegram, Slack, WebHook)

Setting up Percona Monitoring and Management (PMM) alerts for multiple channels can significantly enhance your monitoring strategy. In this blog post, we will walk through the steps to configure alerts for some well-known communication platforms like Telegram, Slack, and WebHook.
Please note that I am not covering the basic alerting and configuration setup. For that, you can refer to the official manual: https://docs.percona.com/percona-monitoring-and-management/get-started/alerting.html#percona-alerting
Before using these channels, it is essential to first create some alerts for notification purposes.


Setting up PMM alerts for Slack
1) Go to the Slack API website – https://api.slack.com/apps
2) Then, click on “Create New App” and follow the prompts to create a new app for your workspace.

- Choose the Slack workspace and define the app name. You can also create/add a new workspace in Slack as required instead of using an existing one.

- Then, select the “Bots” section.

- And, click the “Review Scopes to Add” section.

- Here, we need to add the different “Scopes” levels or types of access for the Slack app.

- Then, hit the “Install to Workspace” button.

3) Now, you will see the “OAuth Token,” which we will use later in PMM Slack configurations.

4) Next, we need to add the app “pmm_alerts” to one or more Slack channels in order to get the notifications.



5) Finally, we can add the details below in the PMM -> Alerting -> Contact Points section.
Token: xoxb-5840606778679-6229228252662-teofbDQUiFCnrp3cZT08epyL   ## Bot API token
Recipient: test                                                    ## channel name
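Before relying on PMM's built-in test button, you can optionally verify the bot token and channel with a direct call to Slack's chat.postMessage API. This is just a quick sketch; substitute your own token and channel, and make sure the chat:write scope was granted to the app above.
# Send a test message with the bot token; a response containing "ok": true
# means the token and channel are usable from PMM as well.
curl -s -X POST https://slack.com/api/chat.postMessage \
  -H "Authorization: Bearer xoxb-5840606778679-6229228252662-teofbDQUiFCnrp3cZT08epyL" \
  -H "Content-Type: application/json; charset=utf-8" \
  -d '{"channel": "test", "text": "PMM contact point check"}'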

- On successful testing, we should see some test alerts.

- The actual alert message, based on the trigger condition, will look like the one below.

Setting up PMM Alerts for Telegram
For the Telegram app, we also need the “BOT API Token” and “Chat ID” details to set up alert notifications.
1) Create a new bot in Telegram. Select “@BotFather” in the search tab and send the command “/newbot” in the chat section.

2) Provide a “name” for your bot and then a “username” for the bot. The username needs to end in “bot”.

Here, we have received the bot API token: 6396505873:AAEQT5DCFAlzpqqdh9p69YwiQermTespfDA
3) Now, we need to change the “group privacy policy” in order to allow the bot to read messages sent to any group it is a member of.
- In the “@BotFather” chat, type “/setprivacy”, choose the user “@pmm_test_alerts_bot”, and select “Disable” to turn off the group privacy setting as shown below:

4) After disabling group privacy, create a group “pmm_alerts” and add the new bot: “pmm_test_alerts” to that group.



So here, we have successfully created the group. Now, we need to send at least one message in order to activate the group.

5) Next, we need the second input, “Chat ID,” that PMM requires. This can be obtained either with curl or by opening the URL directly in a web browser.
Here, we are using the complete token “6396505873:AAEQT5DCFAlzpqqdh9p69YwiQermTespfDA” we got in step 2. In order to use this with the API, we need the prefix “bot” in the string.
curl https://api.telegram.org/bot6396505873:AAEQT5DCFAlzpqqdh9p69YwiQermTespfDA/getUpdates
Output:
{"ok":true,"result":[{"update_id":815074387,
"message":{"message_id":3,"from":{"id":6452928862,"is_bot":false,"first_name":"Anil","last_name":"Joshi"},"chat":{"id":-4013864418,"title":"pmm_alerts","type":"group","all_members_are_administrators":true},"date":1700735471,"text":"hi"}}]}
So finally, we got the Chat ID “-4013864418” as well.
6) Now it’s time to use the above details in the PMM -> Alerting -> Contact Points section.
Bot API Token: 6396505873:AAEQT5DCFAlzpqqdh9p69YwiQermTespfDA
Chat ID: -4013864418
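As with Slack, you can sanity-check the token and chat ID outside of PMM by sending a message through the Bot API directly (a quick sketch using the values above):
# Send a test message to the group; a response containing "ok":true confirms
# the bot can post to this chat ID.
curl -s "https://api.telegram.org/bot6396505873:AAEQT5DCFAlzpqqdh9p69YwiQermTespfDA/sendMessage" \
  -d chat_id=-4013864418 \
  -d text="PMM contact point check"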

- On successful testing, we should see some test alerts.

- If we trigger a real alert, it will appear like this.

Setting up PMM Alerts for WebHook
Webhooks are a powerful tool for building integrations between different applications or services, enabling them to work together seamlessly. They are widely used in web development, APIs, and cloud services to create more dynamic and responsive systems.
URL/API:
https://xxx.mn/v1/main/update/status
In simple terms, it’s just an HTTP endpoint that can be written in any programming language (PHP, Java, Node.js, etc.) to receive a request, return a response, and integrate with third-party applications.
Here, we simply integrate the URL with PMM as a webhook contact point in order to get the response shown below.

Response from the URL
{"Info":"{"receiver":"grafana-default-email","status":"firing","alerts":[{"status":"firing","labels":{"alertname":"pmm_mysql_down Alerting Rule","grafana_folder":"MySQL","node_name":"localhost.localdomain","percona_alerting":"1","service_name":"localhost.localdomain-mysql","severity":"critical","template_name":"pmm_mysql_down"},"annotations":{"description":"MySQL localhost.localdomain-mysql on localhost.localdomain is down.","summary":"MySQL down (localhost.localdomain-mysql)"},"startsAt":"2023-11-24T03:45:10Z","endsAt":"0001-01-01T00:00:00Z","generatorURL":"https://localhost/graph/alerting/grafana/1E1kb3SSz/view","fingerprint":"3be1993cc9a48420","silenceURL":"https://localhost/graph/alerting/silence/new?alertmanager=grafana&matcher=alertname%3Dpmm_mysql_down+Alerting+Rule&matcher=grafana_folder%3DMySQL&matcher=node_name%3Dlocalhost.localdomain&matcher=percona_alerting%3D1&matcher=service_name%3Dlocalhost.localdomain-mysql&matcher=severity%3Dcritical&matcher=template_name%3Dpmm_mysql_down","dashboardURL":null,"panelURL":null,"valueString":"[ var='A' labels={node_name=localhost.localdomain, service_name=localhost.localdomain-mysql} value=1 ]"}],"groupLabels":{"alertname":"pmm_mysql_down Alerting Rule","grafana_folder":"MySQL"},"commonLabels":{"alertname":"pmm_mysql_down Alerting Rule","grafana_folder":"MySQL","node_name":"localhost.localdomain","percona_alerting":"1","service_name":"localhost.localdomain-mysql","severity":"critical","template_name":"pmm_mysql_down"},"commonAnnotations":{"description":"MySQL localhost.localdomain-mysql on localhost.localdomain is down.","summary":"MySQL down (localhost.localdomain-mysql)"},"externalURL":"https://localhost/graph/","version":"1","groupKey":"{}:{alertname="pmm_mysql_down Alerting Rule", grafana_folder="MySQL"}","truncatedAlerts":0,"orgId":1,"title":"[FIRING:1] pmm_mysql_down Alerting Rule MySQL (localhost.localdomain 1 localhost.localdomain-mysql critical pmm_mysql_down)","state":"alerting","message":"**Firing**nnValue: [ var='A' labels={node_name=localhost.localdomain, service_name=localhost.localdomain-mysql} value=1 ]nLabels:n - alertname = pmm_mysql_down Alerting Rulen - grafana_folder = MySQLn - node_name = localhost.localdomainn - percona_alerting = 1n - service_name = localhost.localdomain-mysqln - severity = criticaln - template_name = pmm_mysql_downnAnnotations:n - description = MySQL localhost.localdomain-mysql on localhost.localdomain is down.n - summary = MySQL down (localhost.localdomain-mysql)nSource: https://localhost/graph/alerting/grafana/1E1kb3SSz/viewnSilence: https://localhost/graph/alerting/silence/new?alertmanager=grafana&matcher=alertname%3Dpmm_mysql_down+Alerting+Rule&matcher=grafana_folder%3DMySQL&matcher=node_name%3Dlocalhost.localdomain&matcher=percona_alerting%3D1&matcher=service_name%3Dlocalhost.localdomain-mysql&matcher=severity%3Dcritical&matcher=template_name%3Dpmm_mysql_downn"}"} [
The above response can also be generated by calling the URL directly or by using tools like Postman or cURL, which are widely used to interact with HTTP-based APIs.
curl -X POST -d @/tmp/file.json -H "Content-Type: application/json" https://xxx.mn/v1/main/update/status
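On the receiving side, what PMM sends is a standard Grafana-style webhook payload, so it can be inspected with any JSON tooling. As a small sketch (assuming the alert payload has been saved to /tmp/file.json, as in the curl example above), jq can pull out the fields most integrations care about:
# Print each alert's status, name, and summary from a saved webhook payload
jq -r '.alerts[] | "\(.status)  \(.labels.alertname)  \(.annotations.summary)"' /tmp/file.json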
There are a few use cases where webhooks are particularly useful:
- Webhooks allow systems to receive real-time updates when certain events occur. For example, in a messaging application, a webhook can be used to notify a third-party service whenever a new message is received.
- Webhooks are commonly employed for sending notifications. This could include alerts for system events, status changes, or important updates. For example, a monitoring system can use webhooks to notify administrators when there’s a critical issue.
Percona Monitoring and Management is a best-of-breed open source database monitoring solution. It helps you reduce complexity, optimize performance, and improve the security of your business-critical database environments, no matter where they are located or deployed.
Download Percona Monitoring and Management Today
How to Filter or Customize Alert Notifications in Percona Monitoring and Management (Subject and Body)

In many scenarios, the standard alert notification template in Percona Monitoring and Management (PMM), while comprehensive, may not align perfectly with specific operational needs. This often leads to an excess of details in the notification’s “Subject” and “Body”, cluttering your inbox with information that may not be immediately relevant.
The focus today is on tailoring these notifications to fit your unique requirements. We’ll guide you through the process of editing the “Subject” and “Body” in the PMM UI, ensuring that the alerts you receive are filtered and relevant to your specific business context.
Please note: This post assumes a foundational understanding of basic alerting and configuration in PMM. For those new to these concepts, we recommend consulting the documentation on “SMTP” and “PMM Integrated/Grafana alert” for a primer.
Customizing the “Subject” section of alert notification
1) The default “Subject” will look something like below.

2) Now, let’s proceed to edit the “subject” content.
I) First, we need to create a new message template called “email.subject” in Alerting -> Contact points with the following content.
Template_name: email.subject
{{ define "email.subject" }}
{{ range .Alerts }} Percona Alert | {{ .Labels.alertname }} | {{ .Labels.node_name }} | {{ .Labels.DB }} {{ end }}
{{ end }}

Here, we simply use range to iterate over the alerts and pull the alert name, node name, and DB label from each alert’s labels.
The provided template is written in Go’s templating language. For a more detailed understanding of the syntax and usage of templates, please refer to the official manual.
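With the labels from the MySQL-down alert used later in this post, the rendered subject would look roughly like the line below (illustrative only; DB is a custom label that must exist on your alert rule, and the mysql_prod value here is made up):
Percona Alert | pmm_mysql_down Alerting Rule | localhost.localdomain | mysql_prod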
II) Then, we need to edit the default contact point inside “Alerting -> Contact points”.

And define the below “Subject” under “Optional Email Settings”.
{{ template "email.subject". }}
III) After successfully testing, we can save the changes.

That’s it. Now, if the alert triggers, we will observe a customized subject in the email.
Example:

Customizing the “Body” section of alert notification
1) Let’s first see how the notifications appear with the native alerting. This is a basic notification alert that triggers when the database/MySQL is down. As we can see, it includes additional information, such as various labels and a summary.

2) Now, suppose we want to get rid of some content and keep only a few relevant details. This can be achieved by following the steps outlined below.
I) Go to Alerting -> Contact points and add new “Message templates”.

II) Next, create a notification template named “email” with two templates in the content: “email.message_alert” and “email.message”.
The “email.message_alert” template is used to display the labels and values for each firing and resolved alert, while the “email.message” template contains the email’s structure.
Template name: email.message
{{/* These are the key-value pairs that we want to display in our alerts. */}}
{{- define "email.message_alert" -}}
AlertName = {{ index .Labels "alertname" }}{{ "\n" }}
Database = {{ index .Labels "DB" }}{{ "\n" }}
Node_name = {{ index .Labels "node_name" }}{{ "\n" }}
Service_name = {{ index .Labels "service_name" }}{{ "\n" }}
Service Type = MySQL {{ "\n" }}
Severity = {{ index .Labels "severity" }}{{ "\n" }}
TemplateName = {{ index .Labels "template_name" }}{{ "\n" }}
{{- end -}}
{{/* Next, we define the main section that governs the structure of the message. */}}
{{ define "email.message" }}
There are {{ len .Alerts.Firing }} firing alert(s), and {{ len .Alerts.Resolved }} resolved alert(s){{ "\n" }}
{{/* Finally, the per-alert template is invoked for each firing and resolved alert. */}}
{{ if .Alerts.Firing -}}
Firing alerts:{{ "\n" }}
{{- range .Alerts.Firing }}
- {{ template "email.message_alert" . }}
{{- end }}
{{- end }}
{{ if .Alerts.Resolved -}}
Resolved alerts:{{ "\n" }}
{{- range .Alerts.Resolved }}
- {{ template "email.message_alert" . }}
{{- end }}
{{- end }}
{{ end }}
The above template is written in Go’s templating language. To learn more about the syntax and template usage, you can refer to the manual.
III) Lastly, simply save the template


3) Next, we will edit the default “Contact points” and define the below content under “Update contact point -> Optional Email settings->Message” for email. Similarly, you can add other channels as well, like Telegram, Slack, etc.
Execute the template from the “message” field in your contact point integration.

{{ template "email.message" . }}
Percona Alerting comes with a pre-configured default notification policy. This policy utilizes the grafana-default-email contact point and is automatically applied to all alerts that do not have a custom notification policy assigned to them.
Reference:- https://docs.percona.com/percona-monitoring-and-management/use/alerting.html#notification-policies
After verifying a successful test message, we can save the updated contact point.

4) Finally, once the alert is triggered, you will be able to see the customized notification reflecting only the defined key/values.
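With the MySQL-down alert from earlier, the customized body produced by this template looks roughly like the following (approximate rendering; the Database value depends on your custom DB label):
There are 1 firing alert(s), and 0 resolved alert(s)

Firing alerts:
- AlertName = pmm_mysql_down Alerting Rule
Database = mysql_prod
Node_name = localhost.localdomain
Service_name = localhost.localdomain-mysql
Service Type = MySQL
Severity = critical
TemplateName = pmm_mysql_down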


Moreover, we can also use “label loops” instead of defining separate key/value pairs as we did in the above steps. This way, all of the labels are included in the iteration without explicitly defining each of them.
Here, we use a range to iterate over the alerts such that dot refers to the current alert in the list of alerts, and then use a range on the sorted labels so dot is updated to refer to the current label. Inside the range, use “.Name” and “.Value” to print the name and value of each label.
### applying label loop option ###
{{- define "email.message_alert" -}}
Label Loop:
{{ range .Labels.SortedPairs }}
{{ .Name }} => {{ .Value }}
{{ end }}
{{- end -}}
{{ define "email.message" }}
There are {{ len .Alerts.Firing }} firing alert(s), and {{ len .Alerts.Resolved }} resolved alert(s){{ "\n" }}
{{ if .Alerts.Firing -}}
Firing alerts:{{ "\n" }}
{{- range .Alerts.Firing }}
- {{ template "email.message_alert" . }}
{{- end }}
{{- end }}
{{ if .Alerts.Resolved -}}
Resolved alerts:{{ "\n" }}
{{- range .Alerts.Resolved }}
- {{ template "email.message_alert" . }}
{{- end }}
{{- end }}
{{ end }}
To add more options, such as the summary and description annotations, to the customized alerts, the template changes below can be applied.
I) First, you can add/update the “Summary and annotations” section inside the “alert rule” based on your preference.

II) Then, edit the message template (“email.message”) in Alerting -> Contact points with the updated content below.
Template name: email.message
{{- define "email.message_alert" -}}
AlertName = {{ index .Labels "alertname" }}{{ "\n" }}
Database = {{ index .Labels "DB" }}{{ "\n" }}
Node_name = {{ index .Labels "node_name" }}{{ "\n" }}
Service_name = {{ index .Labels "service_name" }}{{ "\n" }}
Service Type = {{ index .Labels "service_type" }}{{ "\n" }}
Severity = {{ index .Labels "severity" }}{{ "\n" }}
TemplateName = {{ index .Labels "template_name" }}{{ "\n" }}
{{- end -}}
{{ define "email.message" }}
There are {{ len .Alerts.Firing }} firing alert(s), and {{ len .Alerts.Resolved }} resolved alert(s){{ "\n" }}
{{ if .Alerts.Firing -}}
Firing alerts:{{ "\n" }}
{{- range .Alerts.Firing }}
- {{ template "email.message_alert" . }}
- {{ template "alerts.summarize" . }}
{{- end }}
{{- end }}
{{ if .Alerts.Resolved -}}
Resolved alerts:{{ "\n" }}
{{- range .Alerts.Resolved }}
- {{ template "email.message_alert" . }}
- {{ template "alerts.summarize" . }}
{{- end }}
{{- end }}
{{ end }}
{{ define "alerts.summarize" -}}
{{ range .Annotations.SortedPairs}}
{{ .Name }} = {{ .Value }}
{{ end }}
{{ end }}
Reference:- https://grafana.com/blog/2023/04/05/grafana-alerting-a-beginners-guide-to-templating-alert-notifications/
Sometimes, the alert notifications might appear on a single line instead of on separate lines for each key. Although this is not the regular behavior, it can be fixed with the changes below.
I) Access the PMM Server:
sudo docker exec -it pmm-server bash
II) Thereafter, you can edit the file “/usr/share/grafana/public/emails/ng_alert_notification.html” and replace the text between lines 288 and 290 as below.
Replace:
{{ if gt (len .Message) 0 }}
<div style="white-space: pre-line;" align="left">{{ .Message }}
{{ else }}
With:
{{ if gt (len .Message) 0 }}
<span style="white-space: pre-line;">{{ .Message }}</span>
{{ else }}
Note: Please make sure to take a backup before making any changes to the PMM Server files. Moreover, these changes could be lost during a PMM upgrade, especially when Grafana is upgraded as part of PMM, so a backup of the edited version would also be needed for later restoration.
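One simple way to keep such a backup outside the container, so it can be restored after an upgrade, is docker cp (a sketch using the path from above):
# Copy the template out of the pmm-server container before editing it
sudo docker cp pmm-server:/usr/share/grafana/public/emails/ng_alert_notification.html \
    ./ng_alert_notification.html.bak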
III) Finally, you can restart the Grafana service.
supervisorctl restart grafana
Summary
Filtering in alert notifications proves useful in concealing extraneous information from the relevant users. Only the specified elements are displayed in the notification email, thereby preventing unnecessary clutter in the alert content.
MongoDB Integrated Alerting in Percona Monitoring and Management

Percona Monitoring and Management (PMM) recently introduced the Integrated Alerting feature as a technical preview. This was a very eagerly awaited feature, as PMM doesn’t need to integrate with an external alerting system anymore. Recently we blogged about the release of this feature.
PMM includes some built-in templates, and in this post, I am going to show you how to add your own alerts.
Enable Integrated Alerting
The first thing to do is navigate to the PMM Settings by clicking the wheel on the left menu, and choose Settings:

Next, go to Advanced Settings, and click on the slider to enable Integrated Alerting down in the “Technical Preview” section.

While you’re here, if you want to enable SMTP or Slack notifications you can set them up right now by clicking the new Communications tab (which shows up after you hit “Apply Changes” turning on the feature).
The example below shows how to configure email notifications through Gmail:

You should now see the Integrated Alerting option in the left menu under Alerting, so let’s go there next:

Configuring Alert Destinations
After clicking on the Integrated Alerting option, go to the Notification Channels to configure the destination for your alerts. At the time of this writing, email via your SMTP server, Slack and PagerDuty are supported.

Creating a Custom Alert Template
Alerts are defined using MetricsQL, which is backward compatible with PromQL. As an example, let’s configure an alert to let us know if MongoDB is down.
First, let’s go to the Explore option from the left menu. This is the place to play with the different metrics available and create the expressions for our alerts:

To identify MongoDB being down, one option is using the up metric. The following expression would give us the alert we need:
up{service_type="mongodb"}
To validate this, I shut down a member of a 3-node replica set and verified that the expression returns 0 when the node is down:
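If you prefer the command line to the Explore page, the same expression can be checked against the PMM server's Prometheus-compatible query API. This is only a sketch: the /prometheus path, the -k flag for a self-signed certificate, and the admin:admin credentials are assumptions that may differ in your deployment.
# Query the expression through the PMM server; a value of "0" for an instance means it is down
curl -skG -u admin:admin 'https://<pmm-server>/prometheus/api/v1/query' \
  --data-urlencode 'query=up{service_type="mongodb"}'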

The next step is creating a template for this alert. I won’t go into a lot of detail here, but you can check Integrated Alerting Design in Percona Monitoring and Management for more information about how templates are defined.
Navigate to the Integrated Alerting page again, and click on the Add button, then add the following template:
---
templates:
  - name: MongoDBDown
    version: 1
    summary: MongoDB is down
    expr: |-
      up{service_type="mongodb"} == 0
    severity: critical
    annotations:
      summary: MongoDB is down ({{ $labels.service_name }})
      description: |-
        MongoDB {{ $labels.service_name }} on {{ $labels.node_name }} is down
This is how it looks:

Next, go to the Alert Rules and create a new rule. We can use the Filters section to add comma-separated “key=value” pairs to filter alerts per node, per service, per agent, etc.
For example: node_id=/node_id/123456, service_name=mongo1, agent_id=/agent_id/123456

After you are done, hit the Save button and go to the Alerts dashboard to see if the alert is firing:

From this page, you can also silence any firing alerts.
If you configured email as a destination, you should have also received a message like this one:

For now, a single notification is sent. In the future, it will be possible to customize the behavior.
Creating MongoDB Alerts
In addition to the obvious “MongoDB is down” alert, there are a couple more things we should monitor. For starters, I’d suggest creating alerts for the following conditions:
- Replica set member in an unusual state
mongodb_replset_member_state != 1 and mongodb_replset_member_state != 2
- Connections higher than expected
avg by (service_name) (mongodb_connections{state="current"}) > 5000
- Cache evictions higher than expected
avg by(service_name, type) (rate(mongodb_mongod_wiredtiger_cache_evicted_total[5m])) > 5000
- Low WiredTiger tickets
avg by(service_name, type) (max_over_time(mongodb_mongod_wiredtiger_concurrent_transactions_available_tickets[1m])) < 50
The values listed above are just for illustrative purposes; you need to decide the proper thresholds for your specific environment(s).
As another example, let’s add the alert template for the low WiredTiger tickets:
---
templates:
  - name: MongoDB Wiredtiger Tickets
    version: 1
    summary: MongoDB Wiredtiger Tickets low
    expr: avg by(service_name, type) (max_over_time(mongodb_mongod_wiredtiger_concurrent_transactions_available_tickets[1m])) < 50
    severity: warning
    annotations:
      description: "WiredTiger available tickets on (instance {{ $labels.node_name }}) are less than 50"
Conclusion
Integrated Alerting is a really nice feature to have. While it is still in its tech preview state, there are already a few built-in alerts you can test, and you can also define your own. Make sure to check the Integrated Alerting official documentation for more information about this topic.
Do you have any specific MongoDB alerts you’d like to see? Given the feature is still in technical preview, any contributions and/or feedback about the functionality are welcome as we’re looking to release this as GA very soon!
Using Security Threat Tool and Alertmanager in Percona Monitoring and Management

With version 2.9.1 of Percona Monitoring and Management (PMM) we delivered some new improvements to its Security Threat Tool (STT).
Aside from an updated user interface, you now have the ability to run STT checks manually at any time, instead of waiting for the normal 24 hours check cycle. This can be useful if, for example, you want to see an alert gone after you fixed it. Moreover, you can now also temporarily mute (for 24 hours) some alerts you may want to work on later.

But how do these actions work?
Alertmanager
In a previous article, we briefly explained how the STT back end publishes alerts to Alertmanager so they appear in the STT section of PMM.
Now, before we uncover the details of that, please bear in mind that PMM’s built-in Alertmanager is still under development. We do not recommend you use it directly for your own needs, at least not for now.
With that out of the way, let’s see the details of the interaction with Alertmanager.
To retrieve the current alerts, the interface calls an Alertmanager’s API, filtering for non-silenced alerts:
GET /alertmanager/api/v2/alerts?silenced=false[...]
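For example, the same call can be made with curl against a local PMM server (a sketch; the -k flag and admin:admin credentials are assumptions for a default installation with a self-signed certificate):
curl -sk -u admin:admin 'https://localhost/alertmanager/api/v2/alerts?silenced=false'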
This call returns a list of active alerts, which looks like this:
[
  {
    "annotations": {
      "description": "MongoDB admin password does not meet the complexity requirement",
      "summary": "MongoDB password is weak"
    },
    "endsAt": "2020-09-30T14:39:03.575Z",
    "startsAt": "2020-04-20T12:08:48.946Z",
    "labels": {
      "service_name": "mongodb-inst-rpl-1",
      "severity": "warning",
      ...
    },
    ...
  },
  ...
]
Active alerts have a startsAt timestamp at the current time or in the past, while the endsAt timestamp is in the future. The other properties contain descriptions and the severity of the issue the alert is about. labels, in particular, uniquely identify a specific alert and are used by Alertmanager to deduplicate alerts. (There are also other “meta” properties, but they are out of the scope of this article.)
Force Check
Clicking on “Run DB checks” will trigger an API call to the PMM server, which will execute the checks workflow on the PMM back end (you can read more about it here). At the end of that workflow, alerts are sent to Alertmanager through a POST call to the same endpoint used to retrieve active alerts. The call payload has the same structure as shown above.
Note that while you could create alerts manually this way, that’s highly discouraged, since it could negatively impact STT alerts. If you want to define your own rules for Alertmanager, PMM can integrate with an external Alertmanager, independent of STT. You can read more in Percona Monitoring and Management, Meet Prometheus Alertmanager.
Silences
Alertmanager has the concept of Silences. To temporarily mute an alert, the front end generates a “silence” payload starting from the metadata of the alert the user wants to mute and calls the silence API on Alertmanager:
POST /alertmanager/api/v2/silences
An example of a silence payload:
{
  "matchers": [
    { "name": "service_name", "value": "mongodb-inst-rpl-1", "isRegex": false },
    { "name": "severity", "value": "warning", "isRegex": false },
    ...
  ],
  "startsAt": "2020-09-14T20:24:15Z",
  "endsAt": "2020-09-15T20:24:15Z",
  "createdBy": "someuser",
  "comment": "reason for this silence",
  "id": "a-silence-id"
}
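Posting such a payload manually would look like this (again a sketch; the silence.json file name and the credentials are assumptions, and the id field can be omitted when creating a new silence):
curl -sk -u admin:admin -X POST -H 'Content-Type: application/json' \
  -d @silence.json 'https://localhost/alertmanager/api/v2/silences'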
As a confirmation of success, this API call will return a silenceID:
{ "silenceID": "1fcaae42-ec92-4272-ab6b-410d98534dfc" }
Conclusion
From this quick overview, you can hopefully understand how simple it is for us to deliver security checks. Alertmanager helps us a lot in simplifying the final stage of delivering security checks to you in a reliable way. It allows us to focus more on the checks we deliver and the way you can interact with them.
We’re constantly improving our Security Threat Tool, adding more checks and features to help you protect your organization’s valuable data. While we’ll try to make our checks as comprehensive as possible, we know that you might have very specific needs. That’s why for the future we plan to make STT even more flexible, adding scheduling of checks (since some need to run more/less frequently than others), disabling of checks, and even the ability to let you add your own checks! Keep following the latest releases as we continue to iterate on STT.
For now, let us know in the comments: what other checks or features would you like to see in STT? We love to hear your feedback!
Check out our Percona Monitoring and Management Demo site or download Percona Monitoring and Management today and give it a try!
PMM Alerting with Grafana: Working with Templated Dashboards

In this blog post, we will look into more intricate details of PMM alerting. More specifically, we’ll look at how to set up alerting based on templated dashboards.
Percona Monitoring and Management (PMM) 1.0.7 includes Grafana 4.0, which comes with the Alerting feature. Barrett Chambers shared how to enable alerting in general. This blog post looks at the specifics of setting up alerting based on templated dashboards, which Grafana 4.0 does not support out of the box.
This means if I try to set up an alert on the number of MySQL threads running, I get the error “Template variables are not supported in alert queries.”
What is the solution?
Until Grafana provides a better option, you need to do alerting based on graphs (which don’t use templating). This is how to do it.
Click on “Create New” in the Dashboards list to create a basic dashboard for your alerts:
Click on “Add Panel” and select “Graph”:
Click on the panel title of the related panel on the menu sign, and then click on “Panel JSON”.
This shows you the JSON of the panel, which will look like something like this:
Now you need to go back to the other browser window, and the dashboard with the graph you want to alert on. Show the JSON panel for it. In our case, we go to “MySQL Overview” and show the JSON for “MySQL Active Threads” panel.
Copy the JSON from the “MySQL Active Threads” panel and paste it into the new panel in the dashboard created for alerting.
Once we have done the copy/paste, click on the green Update button, and we’ll see the broken panel:
It’s broken because we’re using templating variables in dashboard expressions. None of them are set up in this dashboard, so the expressions won’t work. We must replace the template variables in the formulas with the actual hosts, instances, mount points, etc., that we want to alert on:
We need to change $host to the name of the host we want to alert on, and $interval should align with the data capture interval (here we’ll set it to 5 seconds):
If correctly set up, you should see the graph showing the data.
Finally, we can go to edit the graph. Click on the “Alert” and “Create Alert”.
Specify “Evaluate Every” to create an alert. This sets the evaluation interval for the alert rule. Obviously, the more often the alert evaluates its condition, the more quickly you get alerted if something goes wrong. You also need to define the alert conditions themselves.
In our case, we want to get an alert if the number of running threads is sustained at a high level. To do this, we check that the minimum number of threads over the last minute is above 30:
Note that our query has two parameters: “A” is the number of threads connected, and “B” is the number of threads running. We’re choosing to Alert on “B”.
The beautiful thing Grafana does is show the alert threshold clearly on the graph, and allows you to edit the alert just by moving this alert line with a mouse:
You may want to click on the floppy drive at the top to save dashboard (giving it whatever identifying name you want).
At this point, you should see the alert working. A little heart sign appears by the graph title, colored green (indicating it is not active) or red (indicating it is active). Additionally, you will see the red and green vertical lines in the alert history. These show when this alert gets triggered and when the system went back to normal.
You probably want to set up notifications as well as see alerts on the graphs.
To set up notifications, go to the Grafana Configuration menu and configure Alerting. Grafana supports Email, Slack, PagerDuty, and general webhook notification options (with more on the way, I’m sure).
The same way you added the “Graph” panel to set up an alert, you can add the “Alert List” panel to see all the alerts you have set up (and their status):
Summary
As you can see, it is possible to set up alerts in PMM using the new Grafana 4.0 alerting feature. It is not very convenient or easy to do. This is the first release of alerting support in Grafana and PMM. As such, I’m sure it will become much easier and more convenient over time.
NGINX’s Amplify monitoring tool is now in public beta
NGINX today launched Amplify, its new application monitoring tool, out of private beta. While the cloud-based tool is still officially in beta, it is now available to all NGINX users, both those who run the paid NGINX Plus edition and those on the free open-source version. As NGINX CEO Gus Robertson and CMO Peter Guagenti told me, the company’s users told the team that they wanted to…