ProcFS UDF: A Different Approach to Agentless Operating System Observability in Your Database

When you look at database performance, it depends not only on internal database operations but also on what is going on in the operating system, network, storage, hardware, and so on. So if you're building software to help analyze and optimize database performance, such as Percona Monitoring and Management (PMM), you need access to information from both the database software level and the operating system level. The operating system also tends to provide an interface to the best available "hardware" metrics.

A classical approach to accessing both is to install some kind of agent on every monitored host to provide the required data. The problem is that this option may not always be available.

One of the most common such cases is Database as a Service (DBaaS) deployments, where the user can't run additional software on the database hosts. It could also be internal politics and separation of duties: the database team is interested in the best database observability but does not decide which software can be run on the database hosts. It also might be a question of additional complexity at scale.

There is an OS Metrics plugin for MySQL, as well as a PGNodeMX plugin for PostgreSQL (the two most popular open source databases), which do a great job of exposing the most important metrics in their own formats.

This approach creates two problems: first, the set of metrics exposed by the Linux kernel keeps growing, and second, it may not even be 100 percent compatible between different Linux distributions, so those plugins would always need to play catch-up. You either have to settle for only some of the available information or wait for the plugins to catch up, and most likely both.

Introducing ProcFS UDF

We are taking another approach with ProcFS UDF, a plugin available with Percona Server for MySQL 8 (currently in Tech Preview).

Instead of writing a plugin that can parse all varieties of ProcFS and SysFS files (the most standard way to access system metrics on Linux), we have a plugin that simply exposes the content of those files:

*************************** 1. row ***************************
FILE: /proc/version
CONTENTS: Linux version 5.4.0-125-generic (buildd@lcy02-amd64-083) (gcc version 9.4.0 (Ubuntu 9.4.0-1ubuntu1~20.04.1)) #141-Ubuntu SMP Wed Aug 10 13:42:03 UTC 2022

1 row in set (0.00 sec)

*************************** 1. row ***************************
FILE: /proc/vmstat
CONTENTS: nr_free_pages 9722272
nr_zone_inactive_anon 90
nr_zone_active_anon 22786408
nr_zone_inactive_file 168228
nr_zone_active_file 52298
nr_zone_unevictable 4650
thp_deferred_split_page 0
thp_split_pmd 0
thp_split_pud 0
thp_zero_page_alloc 0
thp_zero_page_alloc_failed 0
thp_swpout 0
thp_swpout_fallback 0
balloon_inflate 0
balloon_deflate 0
balloon_migrate 0
swap_ra 0
swap_ra_hit 0

1 row in set (0.00 sec)

If you want to extract particular values, you can write a stored function, which is probably easier to maintain than a bunch of C code.
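As a sketch of that idea, here is a hypothetical stored function that pulls a single counter out of "name value" text such as the /proc/vmstat contents shown above. The function name and the SUBSTRING_INDEX approach are illustrative, not part of the plugin:

```sql
-- Hypothetical helper: extract one counter from "name value" lines,
-- the format of /proc/vmstat contents returned by the plugin.
CREATE FUNCTION vmstat_value(contents TEXT, metric VARCHAR(64))
RETURNS BIGINT UNSIGNED DETERMINISTIC
RETURN CAST(SUBSTRING_INDEX(SUBSTRING_INDEX(
    contents, CONCAT(metric, ' '), -1), '\n', 1) AS UNSIGNED);
```

For the /proc/vmstat output shown earlier, SELECT vmstat_value(contents, 'nr_free_pages') would return 9722272.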

You can also use well-maintained code from libraries specializing in parsing the ProcFS format, such as the GoLang ProcFS Package.

This package is great because it can read ProcFS and SysFS contents mounted at any location, meaning you can materialize the content provided by the MySQL ProcFS plugin on an in-memory file system and parse it there, quick and dirty, with reasonable overhead.


You may say, "Wow, this is great, but what about security? Does this plugin provide a backdoor to read any file on my filesystem?" Of course not!

First, only files that you specifically allow by including them in the procfs_files_spec configuration variable are accessible. For additional security, this is NOT a dynamic variable, so you can't change it without restarting MySQL, and it only allows files with names starting with /proc and /sys.

The default configuration as of the time of this writing is this:

mysql> show variables like "procfs_files_spec" \G
*************************** 1. row ***************************
Variable_name: procfs_files_spec
Value: /proc/cpuinfo;/proc/irq/*/*;/proc/loadavg;/proc/net/dev;/proc/net/sockstat;/proc/net/sockstat_rhe4;/proc/net/tcpstat;/proc/self/net/netstat;/proc/self/stat;/proc/self/io;/proc/self/numa_maps;/proc/softirqs;/proc/spl/kstat/zfs/arcstats;/proc/stat;/proc/sys/fs/file-nr;/proc/version;/proc/vmstat

Note: Additionally, you may need to check your SELinux/AppArmor configuration to ensure the MySQL process can access those files.

Second, accessing data produced by the plugin requires the ACCESS_PROCFS dynamic privilege, so by default even the "root" user is not able to access this data.
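For example, to allow a dedicated monitoring account to read the plugin's data, you would grant that dynamic privilege globally. The 'exporter'@'%' account below is a placeholder for illustration:

```sql
-- ACCESS_PROCFS is a dynamic privilege, so it is granted ON *.*.
-- 'exporter'@'%' is a placeholder monitoring account.
GRANT ACCESS_PROCFS ON *.* TO 'exporter'@'%';
```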

(More information about security and how to grant access)

Using Prometheus Node Exporter with the MySQL ProcFS Plugin

As I mentioned, Prometheus Node Exporter relies on the fantastic ProcFS Library, which is rather easy to hack to use information retrieved through the MySQL ProcFS plugin instead.

Here is a prototype, created by one of the wonderful Percona engineers, Nickolay Ihalainen.

You can build and run it this way:

mkdir node_exporter
export GOPATH=$PWD
mkdir -p bin pkg src/
git clone src/
cd src/
go build .
mkdir rootfs/
cd rootfs/
mkdir sys proc
../node_exporter  --collector.mysqlprocfs="MYSQLUSER:MYSQLPASSWORD@tcp(MYSQLHOST:3306)"  --path.sysfs="$PWD/sys" --path.procfs="$PWD/proc"

Or alternatively, you could run it in a container that we pre-built for you:

docker/podman run -p 9100:9100 -d --collector.mysqlprocfs="MYSQLUSER:MYSQLPASSWORD@tcp(MYSQLHOST:3306)"

The --path.sysfs and --path.procfs options are already pre-configured in the image, so you shouldn't worry about them.

And you can see the same Node Exporter stats you are used to, but coming from a remote MySQL host rather than the local operating system!

As you query metrics from the exporter, you may see some errors:

INFO[0000] Starting HTTP server for http://:9100/metrics ...  source="server.go:140"
ERRO[0006] ERROR: entropy collector failed after 0.000033s: couldn't get entropy_avail: open /tmp/rootfs/proc/sys/kernel/random/entropy_avail: no such file or directory  source="collector.go:135"
ERRO[0006] ERROR: diskstats collector failed after 0.000010s: couldn't get diskstats: open /tmp/rootfs/proc/diskstats: no such file or directory  source="collector.go:135"
ERRO[0006] ERROR: netclass collector failed after 0.000028s: could not get net class info: error obtaining net class info: cannot access /tmp/rootfs/sys/class/net dir open /tmp/rootfs/sys/class/net: no such file or directory  source="collector.go:135"
ERRO[0006] ERROR: arp collector failed after 0.000009s: could not get ARP entries: open /tmp/rootfs/proc/net/arp: no such file or directory  source="collector.go:135"
ERRO[0006] ERROR: filesystem collector failed after 0.000013s: open /tmp/rootfs/proc/mounts: no such file or directory  source="collector.go:135"
ERRO[0006] ERROR: netstat collector failed after 0.000025s: couldn't get netstats: open /tmp/rootfs/proc/net/netstat: no such file or directory  source="collector.go:135"
ERRO[0006] ERROR: meminfo collector failed after 0.000007s: couldn't get meminfo: open /tmp/rootfs/proc/meminfo: no such file or directory  source="collector.go:135"

You can go ahead and add the files you consider appropriate to the ProcFS UDF configuration file:

cat /etc/mysql/mysql.conf.d/procfs.cnf
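For instance, to silence the collector errors above, the list could be extended with the exact files those collectors were looking for. This is only a sketch showing the additional entries; in practice you would append them to the existing procfs_files_spec value, and include only the files you are comfortable exposing:

```ini
[mysqld]
# Additional files for the meminfo, diskstats, filesystem, netstat,
# arp, and entropy collectors (taken from the error log above).
procfs_files_spec=/proc/meminfo;/proc/diskstats;/proc/mounts;/proc/net/netstat;/proc/net/arp;/proc/sys/kernel/random/entropy_avail
```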

Next steps

This is a Tech Preview solution, so your feedback is extremely valuable. Let us know if we're on the right track here or whether we should approach this problem in an entirely different way. Give the plugin and the prototype exporter a try and let us know how they work for you!

From our side, we are looking to release a more polished version of Node Exporter that can fetch data from the MySQL ProcFS plugin for agentless monitoring, which will make this solution useful with any dashboards and any software that uses Prometheus Node Exporter as a data source. We will make our patches available to the upstream team, though I'm not sure they will align with Node Exporter's focus.

We're also considering seamlessly integrating it with Percona Monitoring and Management: in "agentless" mode, if the ProcFS plugin is detected and accessible, operating system stats will become available together with database stats, with no action needed.

And if this approach proves useful for MySQL, we're looking to provide the same functionality for PostgreSQL and MongoDB, as well as to work with communities looking to integrate such an approach into other open source database technologies Percona does not currently support.

Finally, if this approach proves to be generally useful for many monitoring applications, we will advocate for cloud vendors to make such plugins optionally available with their DBaaS services. Currently, the only way to get OS data from those systems is CloudWatch, StackDriver, or Azure Monitor, which support only a limited number of metrics at limited resolution, at an extra cost.

What do you think?

Learn more about ProcFS UDF

