Sep 05, 2018

Modifying List of Collected Metrics on PMM Linux Exporter

Changing metrics collection with PMM for Linux

Do you need to modify the metrics collected from Linux by Percona Monitoring and Management (PMM)? In this blog post, we will see how to enable, disable, and update collected metrics on PMM's linux:metrics exporter.

We will assume that the PMM client packages are already installed and configured.
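
If you want to double-check that the client is set up before changing anything, listing the currently monitored services is a quick way to do it (the exact output depends on your setup):

pmm-admin list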

Using a custom list of metrics

Let’s now suppose we are not yet collecting any metrics on our desired client server, and we want to enable only the following: diskstats, meminfo, netdev and vmstat. We can use the following command:

pmm-admin add linux:metrics -- -collectors.enabled=diskstats,meminfo,netdev,vmstat

So, in order to enable or disable the functionality we want, we need to modify the collectors.enabled list accordingly. All supported collectors are listed on the following documentation page:

https://www.percona.com/doc/percona-monitoring-and-management/section.exporter.node.html

In this way, by adding or removing items from the collectors.enabled list, we choose exactly which functionality our PMM linux:metrics exporter will collect.

Checking list of metrics used

We can use the ps aux command to check the collectors list that applies at any given time. Let’s see this in practical terms. After running the previous command, we should expect to see the following:

shell> ps aux | grep node_exporter
...
root     20450 3.2  0.0 1660248 15924 ?       Sl 13:48 0:13 /usr/local/percona/pmm-client/node_exporter -collectors.enabled=diskstats,filefd,filesystem,loadavg,meminfo,netdev,netstat,stat,time,uname,vmstat,meminfo_numa -web.listen-address=10.10.8.141:42000 -web.auth-file=/usr/local/percona/pmm-client/pmm.yml -web.ssl-cert-file=/usr/local/percona/pmm-client/server.crt -web.ssl-key-file=/usr/local/percona/pmm-client/server.key -collectors.enabled=diskstats,meminfo,netdev,vmstat

Note: there is currently a bug where you will see duplicate entries when using custom collectors. We are tracking this under issue PMM-2857. For now, if you see two different -collectors.enabled lists in the ps aux output, the one that applies is the last one (as seen above).
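
As a quick workaround, one way to print only the list that wins is to grab the last -collectors.enabled occurrence from the process arguments. This is a sketch, assuming the exporter process is named node_exporter as shown above:

shell> ps -o args= -C node_exporter | grep -o -- '-collectors.enabled=[^ ]*' | tail -n 1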

Updating the list of metrics

There is currently no way to enable (or disable) metrics dynamically (however, we are working on it). We will need to stop the collector, manually edit the metrics list, and re-add it with the updated collectors in place. For this, we need to know the current list of collectors in use, which we can get from the ps aux output.

Continuing with the same example, suppose we are not happy with what the netdev metrics offer, and we wish to disable them to avoid unnecessary overhead. Knowing that the current list is:

-collectors.enabled=diskstats,meminfo,netdev,vmstat

We need to run the following commands to remove netdev from the collectors list:

pmm-admin stop linux:metrics
pmm-admin rm linux:metrics
pmm-admin add linux:metrics -- -collectors.enabled=diskstats,meminfo,vmstat


Aug 28, 2018

Extend Metrics for Percona Monitoring and Management Without Modifying Code

PMM Extended Metrics

Percona Monitoring and Management (PMM) provides an excellent solution for system monitoring. Sometimes, though, you’ll have the need for a metric that’s not present in the list of node_exporter metrics out of the box. In this post, we introduce a simple method and show how to extend the list of available metrics without modifying the node_exporter code. It’s based on the textfile collector.

Enable the textfile collector in pmm-client

This collector is not enabled by default in the latest version of pmm-client. So, first let’s enable the textfile collector.

# pmm-admin rm linux:metrics
OK, removed system pmm-client-hostname from monitoring.
# pmm-admin add linux:metrics -- --collectors.enabled=diskstats,filefd,filesystem,loadavg,meminfo,netdev,netstat,stat,time,uname,vmstat,textfile --collector.textfile.directory="/tmp"
OK, now monitoring this system.
# pmm-admin ls
pmm-admin 1.13.0
PMM Server      | 10.178.1.252  
Client Name     | pmm-client-hostname
Client Address  | 10.178.1.252  
Service Manager | linux-upstart
-------------- -------------------- ----------- -------- ------------ --------
SERVICE TYPE   NAME                 LOCAL PORT  RUNNING  DATA SOURCE  OPTIONS  
-------------- -------------------- ----------- -------- ------------ --------
linux:metrics  pmm-client-hostname  42000       YES      -

Notice that the whole list of default collectors has to be re-enabled. Also, don't forget to specify the directory the exporter should read metric files from (--collector.textfile.directory="/tmp"). The exporter reads files with the .prom extension.
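
For reference, a .prom file simply contains metrics in the Prometheus text exposition format, one metric per line; the file name and metric below are made up for illustration:

# /tmp/example.prom
# HELP node_example_metric An example metric exposed through the textfile collector
# TYPE node_example_metric gauge
node_example_metric 42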

Add a crontab task

The second step is to add a crontab task to collect metrics and place them into a file.

Here are the cron commands for collecting the total number of Docker containers and the number of running containers.

*/1 * * * *     root   echo -n "" > /tmp/docker_all.prom; /usr/bin/docker ps -a | sed -n '1!p'| /usr/bin/wc -l | sed -ne 's/^/node_docker_containers_total /p' >> /tmp/doc
ker_all.prom;
*/1 * * * *     root   echo -n "" > /tmp/docker_running.prom; /usr/bin/docker ps | sed -n '1!p'| /usr/bin/wc -l | sed -ne 's/^/node_docker_containers_running_total /p' >>
/tmp/docker_running.prom;

The output of these commands is written to the files /tmp/docker_all.prom and /tmp/docker_running.prom, which the exporter then reads.

Look - we got a new metric!

Adding the crontab tasks by using a script

Also, we have a few bash scripts that make it much easier to add crontab tasks.

The first one allows you to collect the number of logged-in users and the size of the InnoDB data files.

Modifying the cron job - a script

You may use the suggested names of files and metrics or set new ones.
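
For illustration, here is a minimal sketch of what such a cron task could look like. The metric names (node_logged_in_users, node_innodb_ibdata1_size_bytes), the output file, and the default ibdata1 path are assumptions for the example, not taken from the original script:

#!/bin/bash
# users_and_ibdata.sh -- minimal sketch; adjust metric names, paths, and the output file to taste.
OUT=/tmp/users_and_ibdata.prom
TMP="$OUT.$$"
{
  # number of logged-in users
  echo "node_logged_in_users $(who | wc -l)"
  # size of the InnoDB system tablespace in bytes (0 if the file is missing)
  echo "node_innodb_ibdata1_size_bytes $(stat -c%s /var/lib/mysql/ibdata1 2>/dev/null || echo 0)"
} > "$TMP"
# rename atomically so the exporter never reads a half-written file
mv "$TMP" "$OUT"

It can then be registered under /etc/cron.d the same way as the other cron tasks shown in this post.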

The second script is more universal: it reports the size of any directories or files. It can be invoked directly from a crontab task; you just specify the list of monitored objects (e.g. /var/log /var/cache/apt /var/lib/mysql/ibdata1):

echo  "*/5 * * * * root bash  /root/object_sizes.sh /var/log /var/cache/apt /var/lib/mysql/ibdata1"  > /etc/cron.d/object_size
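
A minimal sketch of what /root/object_sizes.sh might contain is shown below; the metric name node_object_size_bytes and the output file are assumptions for the example:

#!/bin/bash
# object_sizes.sh -- minimal sketch: report the size in bytes of each path passed as an argument.
OUT=/tmp/object_sizes.prom
TMP="$OUT.$$"
: > "$TMP"
for obj in "$@"; do
  # du -sb reports the total size of a file or directory in bytes
  size=$(du -sb "$obj" 2>/dev/null | cut -f1)
  echo "node_object_size_bytes{object=\"$obj\"} ${size:-0}" >> "$TMP"
done
# rename atomically so the exporter never reads a half-written file
mv "$TMP" "$OUT"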

So, I hope this has provided useful insight into how to set up the collection of new PMM metrics without needing to modify the exporter code. Please feel free to use the scripts, or configure commands similar to the ones provided above.


Oct 25, 2016

Monitoring OS metrics for Amazon RDS with Grafana

OS Metrics

This article describes how to visualize and monitor OS metrics for Amazon RDS instances using Grafana. We use CloudWatch as the data source, with Grafana querying the CloudWatch API in real time to fetch the data. The setup takes just four quick steps (assuming you already use Grafana).

Setup

1. Create an IAM user on the AWS panel, and attach the managed policy “CloudWatchReadOnlyAccess”.

2. Put the credentials in the file “~grafana/.aws/credentials”:

[default]
aws_access_key_id = youraccesskeyid
aws_secret_access_key = yoursecretaccesskey

3. Add CloudWatch as the data source to Grafana, with the name “CloudWatch” (the pre-defined graphs rely on this name) and choose any region. Other fields can be left blank.

4. Import the dashboard JSON file from our collection of MySQL dashboards, or enter dashboard id “702” in the “Grafana.net Dashboard” field of the importing form in Grafana.

Finally, choose the AWS region, select the DB instance, and see the graphs. I created my db.t2.micro instance in the us-west-2 region and named it "blackbox" for the screenshots in this post.

Here are the disk performance graphs (I am using a magnetic drive, and we shouldn’t expect many iops):

Available RAM, storage space and network traffic:

The dashboard uses 60s resolution and shows an average for each datapoint. The exception is the "CPU Credit Usage" graph, which uses a five-minute average and interval length. All data is fetched in real time and not stored anywhere.

This dashboard can be used with any Amazon RDS DB engine, including MySQL, Aurora, etc.

Cost

Amazon provides one million CloudWatch API requests each month at no additional charge. Past this, it costs $0.01 per 1,000 requests. The pre-defined dashboard performs 15 requests on each refresh and an extra two on initial loading.
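
To put that in perspective, refreshing the dashboard once a minute around the clock comes to roughly 15 × 60 × 24 × 30 ≈ 648,000 requests over a 30-day month, which still fits comfortably within the free tier.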

More monitoring for RDS

The "Amazon RDS OS Metrics" dashboard will also be part of the next release of Percona Monitoring and Management (PMM). If you are already using it, you can follow the same instructions as in this blog post, but in step two you have to create a local credentials file (say /root/aws_credentials) and share it with the pmm-server container by adding "-v /root/aws_credentials:/usr/share/grafana/.aws/credentials -e HOME=/usr/share/grafana" to the create command.
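
As a sketch only (the exact create/run command depends on your PMM Server version and setup; the image tag, port mapping, and container names below are illustrative), the extra options slot into the usual docker invocation like this:

docker run -d -p 80:80 --volumes-from pmm-data --name pmm-server --restart always \
  -v /root/aws_credentials:/usr/share/grafana/.aws/credentials \
  -e HOME=/usr/share/grafana \
  percona/pmm-server:latest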

To enable MySQL metrics and query analytics for Amazon RDS using the PMM platform, follow this guide.

Example of query analytics on the same instance:

MySQL metrics using the Prometheus data source (persistent storage):
