Percona Monitoring and Management 1.2.2 is Now Available

Percona announces the release of Percona Monitoring and Management 1.2.2 on August 23, 2017.

For install and upgrade instructions, see Deploying Percona Monitoring and Management.

This release contains bug fixes related to performance and introduces various improvements. It also contains an updated version of Grafana.

Changes in PMM Server

We introduced the following changes in PMM Server 1.2.2:

Bug fixes

  • PMM-927: The error “Cannot read property ‘hasOwnProperty’ of undefined” was displayed on the QAN page for MongoDB.

    After enabling monitoring and generating data for MongoDB, the PMM client showed the following error message on the QAN page: “Cannot read property ‘hasOwnProperty’ of undefined”. This bug is now fixed.

  • PMM-949: Percona Server was not detected properly, and as a result the log_slow_* variables were not detected properly either. This bug is now fixed.

  • PMM-1081: Performance Schema Monitor treated queries that didn’t show up in every snapshot as new queries, reporting a wrong count between snapshots. This bug is now fixed.

  • PMM-1272: MongoDB: the query abstract was empty. This bug is now fixed.

  • PMM-1277: The QPS graph used an inappropriate Prometheus query. This bug is now fixed.

  • PMM-1279: The MongoDB summary did not work in QAN2 if MongoDB authentication was activated. This bug is now fixed.

  • PMM-1284: Dashboards pointed to QAN2 instead of QAN. This bug is now fixed.


Improvements

  • PMM-586: The wsrep_evs_repl_latency parameter is now monitored in Grafana dashboards

  • PMM-624: The Grafana User ID remains the same in the pmm-server docker image

  • PMM-1209: OpenStack support is now enabled during the OVA image creation

  • PMM-1211: It is now possible to configure a static IP for an OVA image

    The root password can only be set from the console. If it is not changed from the default, a warning message appears on the console asking the user to change the root password at the first root login from the console. Web/SSH users can neither use the root account password nor detect whether the root password is still set to the default value.

  • PMM-1221: Grafana updated to version 4.4.3

About Percona Monitoring and Management

Percona Monitoring and Management (PMM) is an open-source platform for managing and monitoring MySQL and MongoDB performance. Percona developed it in collaboration with experts in the field of managed database services, support and consulting.

PMM is a free and open-source solution that you can run in your own environment for maximum security and reliability. It provides thorough time-based analysis for MySQL and MongoDB servers to ensure that your data works as efficiently as possible.

A live demo of PMM is available at

We’re always happy to help! Please provide your feedback and questions on the PMM forum.

If you would like to report a bug or submit a feature request, please use the PMM project in JIRA.


Percona Monitoring and Management 1.2.1 is Now Available

Percona announces the release of Percona Monitoring and Management 1.2.1 on August 16, 2017.

For install and upgrade instructions, see Deploying Percona Monitoring and Management.

This hotfix release improves memory consumption.

Changes in PMM Server

We’ve introduced the following changes in PMM Server 1.2.1:

Bug fixes

  • PMM-1280: PMM Server was affected by the NGINX vulnerability CVE-2017-7529. An integer overflow exploit could result in a denial of service (DoS) for the affected nginx service when the max_ranges directive is not set. This problem is solved by setting the max_ranges directive to 1 in the NGINX configuration.


  • PMM-1232: The default value of the METRICS_MEMORY configuration setting has been updated. Previous versions of PMM Server allowed Prometheus to use up to 768 MB of memory. PMM Server 1.2.0 unintentionally reduced the default value of METRICS_MEMORY to 256 MB, which limited the amount of memory available to Prometheus and hurt its performance. To restore the performance of Prometheus, the default value of METRICS_MEMORY has been set back to 768 MB.
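
For reference, the NGINX mitigation applied for PMM-1280 amounts to a single directive. A minimal illustrative fragment (the actual PMM Server configuration contains much more than this):

```nginx
# Mitigation for CVE-2017-7529: allow at most one byte range per request,
# which avoids the integer-overflow condition in multi-range responses.
http {
    max_ranges 1;
}
```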



Saturation Metrics in PMM 1.2.0

Saturation Metrics 5 small

One of the new graphs added to Percona Monitoring and Management (PMM) is saturation metrics. This blog post explains how to use the information provided by these graphs.

You might have heard about Brendan Gregg’s USE Method  (Utilization-Saturation-Errors) as a way to analyze the performance of any system. Our goal in PMM is to support this method fully over time, and these graphs take us one step forward.

When it comes to utilization, there are many graphs available in PMM. There is the CPU Usage graph:

Saturation Metrics 1

There is also Disk IO Utilization:

Saturation Metrics 2

And there is Network Traffic:

Saturation Metrics 3

If you would like to look at saturation-type metrics, there is the classical Load Average graph:

Saturation Metrics 4

While Load Average is helpful for understanding system saturation in general, it does not really distinguish whether it is the CPU or the disk that is saturated. Load Average, as the name says, is already averaged, so we can’t really observe short saturation spikes with it: it is averaged over at least one minute. Finally, Load Average does not take the number of CPU cores/threads into account. Suppose I have a CPU-bound Load Average of 16, for example. That is quite a load, and it will cause high saturation and queueing if you have two CPU threads. But if you have 64 threads, then 16 becomes a trivial load with no saturation at all.

Let’s take a look at the Saturation Metrics graph:

Saturation Metrics 5

It provides us with two metrics: one showing the CPU load and the other showing the IO load. These values roughly correspond to the “r” and “b” columns in vmstat output:

Saturation Metrics 6

These are sampled every second and then averaged over the reporting interval.

We also normalize the CPU load by dividing the raw number of runnable processes by the number of threads available. “Rocky” has 56 threads, which is why its normalized CPU load is about one even though the number of runnable processes shown by vmstat is around 50.

We do not normalize the IO load, as systems can have multiple IO devices, and the number of requests they can handle in parallel is largely unknown. If you want to understand specific IO device performance, you should check out the Disk Performance dashboard.
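
The normalization logic described above is simple enough to sketch in a few lines of Python (the numbers below are illustrative, reusing the examples from this post):

```python
# Normalized CPU load: runnable processes (vmstat's "r" column)
# divided by the number of CPU threads available.
def normalized_cpu_load(runnable_processes, cpu_threads):
    """Values near or below 1.0 indicate no CPU saturation;
    values well above 1.0 indicate queueing for the CPU."""
    return runnable_processes / cpu_threads

# "Rocky" from the example: ~50 runnable processes on 56 threads
print(normalized_cpu_load(50, 56))  # about 0.89: close to one, no heavy queueing

# The same Load Average of 16 means very different things
# depending on the thread count, which is why we normalize:
print(normalized_cpu_load(16, 2))   # 8.0: heavy saturation
print(normalized_cpu_load(16, 64))  # 0.25: trivial load
```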

Testing Saturation Metrics in Practice

Let’s see if saturation graphs indeed show us when CPU saturation is the issue. I will use a sysbench CPU test for illustration, run as:

sysbench cpu --cpu-max-prime=100000 --threads=1 --time=300 run

This will use the specified number of threads to execute compute jobs, each of which computes prime numbers up to the specified limit. If we have enough CPU resources available, with no saturation, the latency of executing such requests should stay about the same. When we overload the system, so that there are not enough CPU execution units to process everything in parallel, the average latency should increase.

root@ts140i:/mnt/data# sysbench cpu --cpu-max-prime=100000 --threads=1 --time=300 run
sysbench 1.0.7 (using bundled LuaJIT 2.1.0-beta2)
Running the test with following options:
Number of threads: 1
Initializing random number generator from current time
Prime numbers limit: 100000
Initializing worker threads...
Threads started!
General statistics:
   total time:                          300.0234s
   total number of events:              12784
Latency (ms):
        min:                                 23.39
        avg:                                 23.47
        max:                                 28.07
        95th percentile:                     23.52
        sum:                             300018.06

As we can see with one thread working, the average time it takes to handle a single request is 23ms. Obviously, there is no saturation happening in this case:

Saturation Metrics 7

“Ts140i” has four CPU cores, and as you can see the normalized CPU load stays below one. You may wonder why it isn’t closer to 0.25 in this case, with one active thread and four cores available. The reason is that at exactly the time the metrics are captured, there often happen to be an additional two to three threads active to facilitate the process. They are only active for a few milliseconds at a time, so they do not produce much load, but they tend to skew the number a little bit.

Let’s now run with four threads. The number of threads matches the number of CPU cores available (and these are true cores in this case, no hyperthreading). In this case, we don’t expect much increase in the event processing time.

root@ts140i:/mnt/data# sysbench cpu  --cpu-max-prime=100000 --threads=4 --time=300 run
sysbench 1.0.7 (using bundled LuaJIT 2.1.0-beta2)
Running the test with following options:
Number of threads: 4
Initializing random number generator from current time
Prime numbers limit: 100000
Initializing worker threads...
Threads started!
General statistics:
   total time:                          300.0215s
   total number of events:              48285
Latency (ms):
        min:                                 24.19
        avg:                                 24.85
        max:                                 43.61
        95th percentile:                     24.83
        sum:                            1200033.93

As you can see, the test confirms the theory: average latency increased by only about 6%, with the normalized CPU load in the saturation metrics mostly hovering between 1 and 2:

Saturation Metrics 8

Let’s now do the test with 16 threads, which is four times more than the available CPU cores. We should see the latency increase dramatically due to CPU overload (or saturation). The same will happen to your CPU-bound MySQL queries if you have more concurrency than available CPUs.

root@ts140i:/mnt/data# sysbench cpu  --cpu-max-prime=100000 --threads=16 --time=300 run
sysbench 1.0.7 (using bundled LuaJIT 2.1.0-beta2)
Running the test with following options:
Number of threads: 16
Initializing random number generator from current time
Prime numbers limit: 100000
Initializing worker threads...
Threads started!
General statistics:
   total time:                          300.0570s
   total number of events:              48269
Latency (ms):
        min:                                 27.83
        avg:                                 99.44
        max:                                189.05
        95th percentile:                    121.08
        sum:                            4799856.52

We can see it takes about four times longer to process each request due to CPU overload and queueing. Let’s see what saturation metrics tell us:

Saturation Metrics 9

As you can see, Normalized CPU Load floats between four and five on the graph, consistent with the saturation we’re observing.

You may ask whether the CPU utilization graph helps us here. Not really. You will see 100% CPU usage for both the four-thread and the 16-thread runs, while the request latencies are completely different.


As we can see from our test, Normalized CPU Load is very helpful for understanding when the CPU is overloaded. An overloaded CPU causes response times to increase and performance to degrade. Furthermore, you can use it to (roughly) see how serious the overload is. As a rule of thumb, if you see Normalized CPU saturation over two, it indicates your CPUs are overloaded.


How much disk space should I allocate for Percona Monitoring and Management?


I heard a frequent question at last week’s Percona Live conference regarding Percona Monitoring and Management (PMM): How much disk space should I allocate for PMM Server?

First, let’s review the three components of Percona Monitoring and Management that consume non-negligible disk space:

  1. Prometheus data source for the time series metrics
  2. Query Analytics (QAN), which uses Percona Server with XtraDB (Percona’s enhanced version of the InnoDB storage engine)
  3. Orchestrator, also backed by Percona Server with XtraDB

Of these, you’ll find that Prometheus is generally your largest consumer of disk space. Prometheus hits a steady state of disk utilization once you reach the defined storage.local.retention period. If you deploy Percona Monitoring and Management 1.1.3 (the latest stable version), you’ll be using a retention period of 30 days. “Steady state” in this case means you’re not adding or removing nodes frequently, since each node comes with its own 1k-7k metrics to be scraped. Prometheus stores one time series per metric scraped, and automatically trims chunks (like InnoDB pages) from the tail of the time series once they exceed the retention period, so the disk requirement per static list of metrics remains “fixed” for the retention period.
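
If the default retention does not fit your environment, PMM Server 1.x lets you override it when the container is created. A sketch assuming the Docker deployment (adjust ports, volume options, and the image tag to your setup):

```shell
# Shorten metrics retention from the default 30 days (720h) to 8 days (192h).
# METRICS_RETENTION is passed through to Prometheus' storage.local.retention.
docker run -d \
  -e METRICS_RETENTION=192h \
  -p 80:80 \
  --name pmm-server \
  percona/pmm-server:1.1.3
```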

However, if you’re in a dynamic environment with nodes being added and removed frequently, or you’re on the extreme end like these guys who re-deploy data centers every day, steady state for Prometheus may remain an elusive goal. The guidance you find below may help you establish at least a minimum disk provisioning threshold.


QAN is a web application and uses Percona Server 5.7.17 as its datastore. The Percona QAN agent runs one instance per monitored MySQL server, and obtains queries from either the slow log or Performance Schema. It performs analysis locally to generate a list of unique queries and their corresponding metrics: min, max, avg, med, and p95. There are also dimensions based on Tmp table, InnoDB, Query time, Lock time, etc. Check the schema for a full listing, as there are actually 149 columns in this table (show create table pmm.query_class_metrics\G). While the table is wide, it isn’t too long: the PMM Demo has ~9mil rows, approximately one row per distinct query per minute per host.

Finally, there is Orchestrator. While the disk requirements for Orchestrator are not zero, they are certainly dwarfed by Prometheus and QAN.  As you’ll read below, Percona’s Orchestrator footprint is a measly ~250MB, which is a rounding error. I’d love to hear other experiences with Orchestrator and how large your InnoDB footprint is for a large or active cluster.

For comparison, here is the resource consumption from Percona’s PMM Demo site:

  • ~47k time series
  • 25 hosts, which is on average ~1,900 time series/host, some are +4k
  • 8-day retention for metrics in Prometheus
  • Prometheus data is ~40GB
    • Which should not increase until we add more hosts, as this isn’t a dynamic Kubernetes environment
  • QAN db is 6.5GB
    • We don’t currently prune records, so this will continue to grow
    • 90% of space consumed is in query_class_metrics, which is ~9mil rows
    • Our first record is from ~September 2016, but most of the data is from the past three months
    • This is MySQL QAN only, the MongoDB nodes don’t write anything into QAN (yet… we’re working on QAN for MongoDB and hope to ship this quarter!!)
  • Orchestrator db is ~250MB
    • audit table is 97% of the space consumed, ~2mil rows

So back to the original question: How much space should I allocate for Percona Monitoring and Management Server? The favorite answer at Percona is “It Depends®,” and this case is no different. Using the PMM Demo as our basis: 46GB / 25 hosts / 8 days = ~230MB/host/day, or ~6.9GB/host for a 30-day retention period. For those of you running 50 instances in PMM, you should provision ~400GB of disk.
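
The arithmetic above can be wrapped in a small sizing helper; a sketch using the PMM Demo numbers (note that 46GB / 25 hosts / 8 days works out to ~235MB/host/day, which the text rounds to ~230):

```python
# Back-of-the-envelope Prometheus disk sizing from observed usage.
def mb_per_host_per_day(total_gb, hosts, days):
    """Observed data rate per host, in MB/day."""
    return total_gb * 1024 / hosts / days

def provisioning_gb(hosts, retention_days, rate_mb):
    """Disk needed to hold retention_days of data for this many hosts."""
    return hosts * retention_days * rate_mb / 1024

rate = mb_per_host_per_day(46, 25, 8)   # ~235 MB/host/day
need = provisioning_gb(50, 30, rate)    # ~345 GB; provision ~400 GB for headroom
print(round(rate), round(need))
```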

Of course, your environment is likely to be different and directly related to what you do and don’t enable. For example, a fully verbose Percona Server 5.7.17 configuration file like this:

## PMM Enhanced options

with none of the mysqld_exporter features disabled:


can lead to an instance that has +4k metrics and will push you above 230MB/host/day. This is what the top ten metrics and hosts by time series count from the PMM Demo look like:

Percona Monitoring and Management

What does the future hold related to minimizing disk space consumption?

  1. The PMM development team is working on the ability to purge a node’s data without access to the instance
    • Today you need to call pmm-admin purge from the instance – which becomes impossible if you’ve already terminated or decommissioned the instance!
  2. We are following Prometheus’ efforts on the 3rd Gen storage re-write in Prometheus 2.0, where InfluxDB will do more than just indices
  3. Again we are following Prometheus’ efforts on Remote Read / Remote Write so we can provide a longer-term storage model for users seeking > 30 days (another popular topic at PL2017)
    • Allows us to store less granular data (every 5s vs. every 1s)
    • Usage of Graphite, OpenTSDB, and InfluxDB as secondary data stores on the Remote end

I’d love to hear about your own experiences using Percona Monitoring and Management, and specifically the disk requirements you’ve faced! Please share them with us via the comments below, or feel free to drop me a line directly. Thanks for reading!


Prophet: Forecasting our Metrics (or Predicting the Future)


In this blog post, we’ll look at how Prophet can forecast metrics.

Facebook recently released a forecasting tool called Prophet. Prophet can forecast a particular metric in which we have an interest. It works by fitting time-series data to get a prediction of how that metric will look in the future.

For example, it could be used to:

  • Predict how much HTTP traffic we will get, and scale accordingly when needed
  • See if a particular feature of our application will have success or if its usage will decline
  • Get an approximate date when our database server’s resources will be exhausted
  • Forecast new customer sign-ups and resize the staff accordingly
  • See what next year’s Black Friday or Cyber Monday will look like, and if we have the resources to handle them
  • Predict how many animals will enter a shelter in the coming years, as I did in a personal project I will show here

At its core, it uses a Generalized Additive Model. This is basically the merging of two models: first, a generalized linear model that, in the case of Prophet, can be a linear or logistic regression (depending on what we choose); and second, an additive model applied to that regression. The final graph represents the combination of the two: the smoothed regression area of the variable to predict. For more technical details of how it works, check out Prophet’s paper.

Most of the previous points can be summarized in a simple concept, capacity planning. Let’s see how it works.

Usage Example

Prophet provides either a Python or R library. The following example will use the Python one. You can install it using:

pip install prophet

Prophet expects the metrics with a particular structure: a Pandas DataFrame with two columns, ds and y:

   ds          y
0  2013-10-01  34
1  2013-10-02  43
2  2013-10-03  20
3  2013-10-04  12
4  2013-10-05  46


The data I am going to use here is from Kaggle Competition Shelter Animal Outcomes. The idea is to find out how Austin Animal Center‘s workload will evolve in the future by trying to predict the number of animal outcomes per day for the next three years. I am using this dataset because it has enough data, shows a very simple trend and it is a non-technical metric (no previous knowledge on the topic is needed). The same method can be applied to most of the services or business metrics you could have.
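
Getting the data into that shape is a few lines of pandas. The CSV content below is a hypothetical stand-in for the Kaggle export, aggregated to outcomes per day; the real dataset obviously has far more rows:

```python
import io

import pandas as pd

# Hypothetical daily outcome counts; with the real dataset you would
# read the exported CSV file instead of this inline sample.
csv_data = io.StringIO("""date,outcomes
2013-10-01,34
2013-10-02,43
2013-10-03,20
""")

# Prophet expects exactly two columns named "ds" (datestamp) and "y" (value).
series = pd.read_csv(csv_data, parse_dates=["date"])
series = series.rename(columns={"date": "ds", "outcomes": "y"})
print(series)
```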

At this point, we have the metric stored in a local variable, called “series” in this particular example. Now we only need to fit it into our model:

m = Prophet()
m.fit(series)

and define how far into the future we want to predict (three years in this case):

future = m.make_future_dataframe(periods=365*3)

Now, just plot the data:

forecast = m.predict(future)
m.plot(forecast)
plt.title("Outcomes forecast per Year", fontsize=20)
plt.ylabel("Number of outcomes", fontsize=20)


The graph shows a smoothed regression surface. We can see that the data provided covers from the last months of 2013 to the first months of 2016. From that point on, the values are predictions.

We can already find some interesting data. Our data shows a large increase during the summer months and predicts it to continue in the future. But this representation also has some problems. As we can see, there are at least three outliers with values > 65. The fastest way to deal with outliers is simply to remove them:


   ds           y
0  2014-07-12  129
1  2015-07-18   97
2  2015-07-19   81



Now the graph looks much better. Let’s also add a horizontal line that will help to see the trend:


From that forecast, Austin Animal Center should expect an increase in the next few years but not a large one. Therefore, the increase trend year-over-year won’t cause problems in the near future. But there could be a moment when we reach the shelter’s maximum capacity.


  • If you want to forecast a metric, we recommend having at least one year of data to fit the model. With less data, we could miss seasonal effects, such as the large increase in work during the summer months in our model above.
  • In some cases, you might only want information about particular holidays (for example Black Fridays or Christmas). In that case, it is possible to create a model for those particular days. The documentation explains how to do this. But in summary, you need to create a new Pandas DataFrame that includes all previous Black Friday dates, and those from the future that you want to predict. Then, create the model as before, but specify that you are interested in a holiday effect:
    m = Prophet(holidays=holidays)
  • We recommend you use daily data. The graph could show strange results if we ask for daily forecasts from non-daily data. If the metric contains monthly information, freq='M' can be used (as shown in the documentation).
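
The holiday-specific model mentioned above can be sketched as follows; the dates are illustrative, and the Prophet calls are shown commented out since they require the fitted series:

```python
import pandas as pd

# One named holiday effect grouping past Black Fridays (observed)
# and the future one we want the model to predict.
holidays = pd.DataFrame({
    "holiday": "black_friday",
    "ds": pd.to_datetime([
        "2014-11-28", "2015-11-27", "2016-11-25",  # observed
        "2017-11-24",                              # to forecast
    ]),
})

# m = Prophet(holidays=holidays)
# m.fit(series)
print(holidays)
```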


When we want to predict the future of a particular metric, we can use Prophet to make that forecast, and then plan for it based on the information we get from the model. It can be used on very different types of problems, and it is very easy to use. Do you want to know how loaded your database will be in the future? Ask Prophet!


Percona Monitoring and Management (PMM) Graphs Explained: Custom MongoDB Graphs and Metrics

This blog post is another in the series on the Percona Server for MongoDB 3.4 bundle release. In this blog post, we will cover how to add custom MongoDB graphs to Percona Monitoring and Management (PMM) and (for the daring Golang developers out there) how to add custom metrics to PMM’s percona/mongodb_exporter metric exporter.

To get to adding new graphs and metrics, we first need to go over how PMM gets metrics from your database nodes and how they become graphs.

Percona Monitoring and Management (PMM)

Percona Monitoring and Management (PMM) is an open-source platform for managing and monitoring MySQL and MongoDB. It was developed by Percona on top of open-source technology. Behind the scenes, the graphing features this article covers use Prometheus (a popular time-series data store), Grafana (a popular visualisation tool), mongodb_exporter (our MongoDB database metric exporter) plus other technologies to provide database and operating system metric graphs for your database instances.


As mentioned, Percona Monitoring and Management uses Prometheus to gather and store database and operating system metrics. Prometheus works on an HTTP(s) pull-based architecture, where Prometheus “pulls” metrics from “exporters” on a schedule.

To provide a detailed view of your database hosts, you must enable two PMM monitoring services for MongoDB graphing capabilities:

  1. linux:metrics
  2. mongodb:metrics

See this link for more details on adding monitoring services to Percona Monitoring and Management:
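
As a quick sketch (after installing pmm-client and pointing it at your PMM Server with pmm-admin config), both services are enabled from the monitored host like this; additional connection flags may be needed depending on your MongoDB authentication setup:

```shell
# Enable OS-level and MongoDB metrics collection for this host
pmm-admin add linux:metrics
pmm-admin add mongodb:metrics

# Verify which monitoring services are running
pmm-admin list
```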

It is important to note that not all metrics gathered by Percona Monitoring and Management are graphed. This is by design: storing more metrics than are graphed is very useful when more advanced insight is necessary. We also aim for PMM to be simple and straightforward to use, which explains why we don’t graph all of the nearly 1,000 metrics we collect per MongoDB node on each polling.

My personal monitoring philosophy is “monitor until it hurts and then take one step back.” In other words, try to get as much data as you can without impacting the database or adding monitoring resources/cost. Also, Prometheus stores large volumes of metrics efficiently due to compression and highly optimized data structures on disk. This offsets a lot of the cost of collecting extra metrics.

Later in this blog, I will show how to add a graph for an example metric that is currently gathered by PMM but is not graphed in PMM as of today. To see what metrics are available on PMM’s Prometheus instance, visit “http://&lt;pmm-server&gt;/prometheus/graph”.

prometheus/node_exporter (linux:metrics)

PMM’s OS-level metrics are provided to Prometheus via the 3rd-party exporter: prometheus/node_exporter.

This exporter provides 712 metrics per “pull” on my test CentOS 7 host that has:

  • 2 x CPUs
  • 3 x disks with 6 x LVMs
  • 2 x network interfaces

Note: more physical or virtual devices add to the number of node_exporter metrics.

The inner workings and addition of metrics to this exporter will not be covered in this blog post. Generally, the current metrics offered by node_exporter are more than enough.
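
Once scraped, these counters become graphable via PromQL expressions in Prometheus. For example (illustrative queries over the node_cpu metric shown below, not the exact dashboard definitions):

```
# Per-CPU, per-mode busy rate over the last 5 minutes, excluding idle time
rate(node_cpu{mode!="idle"}[5m])

# Summed per instance: total busy CPU-seconds per second for the host
sum by (instance) (rate(node_cpu{mode!="idle"}[5m]))
```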

Below is a full example of a single “pull” of metrics from node_exporter on my test host:

$ curl -sk|grep node
# HELP node_boot_time Node boot time, in unixtime.
# TYPE node_boot_time gauge
node_boot_time 1.488904144e+09
# HELP node_context_switches Total number of context switches.
# TYPE node_context_switches counter
node_context_switches 1.0407839e+07
# HELP node_cpu Seconds the cpus spent in each mode.
# TYPE node_cpu counter
node_cpu{cpu="cpu0",mode="guest"} 0
node_cpu{cpu="cpu0",mode="idle"} 6437.71
node_cpu{cpu="cpu0",mode="iowait"} 24
node_cpu{cpu="cpu0",mode="irq"} 0
node_cpu{cpu="cpu0",mode="nice"} 0
node_cpu{cpu="cpu0",mode="softirq"} 1.38
node_cpu{cpu="cpu0",mode="steal"} 0
node_cpu{cpu="cpu0",mode="system"} 117.65
node_cpu{cpu="cpu0",mode="user"} 33.6
node_cpu{cpu="cpu1",mode="guest"} 0
node_cpu{cpu="cpu1",mode="idle"} 6443.68
node_cpu{cpu="cpu1",mode="iowait"} 15.03
node_cpu{cpu="cpu1",mode="irq"} 0
node_cpu{cpu="cpu1",mode="nice"} 0
node_cpu{cpu="cpu1",mode="softirq"} 7.15
node_cpu{cpu="cpu1",mode="steal"} 0
node_cpu{cpu="cpu1",mode="system"} 118.08
node_cpu{cpu="cpu1",mode="user"} 28.84
# HELP node_disk_bytes_read The total number of bytes read successfully.
# TYPE node_disk_bytes_read counter
node_disk_bytes_read{device="dm-0"} 2.09581056e+08
node_disk_bytes_read{device="dm-1"} 1.114112e+06
node_disk_bytes_read{device="dm-2"} 5.500416e+06
node_disk_bytes_read{device="dm-3"} 2.98752e+06
node_disk_bytes_read{device="dm-4"} 3.480576e+06
node_disk_bytes_read{device="dm-5"} 4.0983552e+07
node_disk_bytes_read{device="dm-6"} 303104
node_disk_bytes_read{device="sda"} 2.40276992e+08
node_disk_bytes_read{device="sdb"} 4.5549568e+07
node_disk_bytes_read{device="sdc"} 8.967168e+06
node_disk_bytes_read{device="sr0"} 49152
# HELP node_disk_bytes_written The total number of bytes written successfully.
# TYPE node_disk_bytes_written counter
node_disk_bytes_written{device="dm-0"} 4.1160192e+07
node_disk_bytes_written{device="dm-1"} 0
node_disk_bytes_written{device="dm-2"} 2.413568e+06
node_disk_bytes_written{device="dm-3"} 9.0589184e+07
node_disk_bytes_written{device="dm-4"} 4.631552e+06
node_disk_bytes_written{device="dm-5"} 2.140672e+06
node_disk_bytes_written{device="dm-6"} 0
node_disk_bytes_written{device="sda"} 4.3257344e+07
node_disk_bytes_written{device="sdb"} 6.772224e+06
node_disk_bytes_written{device="sdc"} 9.3002752e+07
node_disk_bytes_written{device="sr0"} 0
# HELP node_disk_io_now The number of I/Os currently in progress.
# TYPE node_disk_io_now gauge
node_disk_io_now{device="dm-0"} 0
node_disk_io_now{device="dm-1"} 0
node_disk_io_now{device="dm-2"} 0
node_disk_io_now{device="dm-3"} 0
node_disk_io_now{device="dm-4"} 0
node_disk_io_now{device="dm-5"} 0
node_disk_io_now{device="dm-6"} 0
node_disk_io_now{device="sda"} 0
node_disk_io_now{device="sdb"} 0
node_disk_io_now{device="sdc"} 0
node_disk_io_now{device="sr0"} 0
# HELP node_disk_io_time_ms Milliseconds spent doing I/Os.
# TYPE node_disk_io_time_ms counter
node_disk_io_time_ms{device="dm-0"} 6443
node_disk_io_time_ms{device="dm-1"} 41
node_disk_io_time_ms{device="dm-2"} 319
node_disk_io_time_ms{device="dm-3"} 61024
node_disk_io_time_ms{device="dm-4"} 1159
node_disk_io_time_ms{device="dm-5"} 772
node_disk_io_time_ms{device="dm-6"} 1
node_disk_io_time_ms{device="sda"} 6965
node_disk_io_time_ms{device="sdb"} 2004
node_disk_io_time_ms{device="sdc"} 60718
node_disk_io_time_ms{device="sr0"} 5
# HELP node_disk_io_time_weighted The weighted # of milliseconds spent doing I/Os. See
# TYPE node_disk_io_time_weighted counter
node_disk_io_time_weighted{device="dm-0"} 9972
node_disk_io_time_weighted{device="dm-1"} 46
node_disk_io_time_weighted{device="dm-2"} 369
node_disk_io_time_weighted{device="dm-3"} 147704
node_disk_io_time_weighted{device="dm-4"} 1618
node_disk_io_time_weighted{device="dm-5"} 968
node_disk_io_time_weighted{device="dm-6"} 1
node_disk_io_time_weighted{device="sda"} 10365
node_disk_io_time_weighted{device="sdb"} 3213
node_disk_io_time_weighted{device="sdc"} 143822
node_disk_io_time_weighted{device="sr0"} 5
# HELP node_disk_read_time_ms The total number of milliseconds spent by all reads.
# TYPE node_disk_read_time_ms counter
node_disk_read_time_ms{device="dm-0"} 7037
node_disk_read_time_ms{device="dm-1"} 46
node_disk_read_time_ms{device="dm-2"} 312
node_disk_read_time_ms{device="dm-3"} 56
node_disk_read_time_ms{device="dm-4"} 115
node_disk_read_time_ms{device="dm-5"} 906
node_disk_read_time_ms{device="dm-6"} 1
node_disk_read_time_ms{device="sda"} 7646
node_disk_read_time_ms{device="sdb"} 1192
node_disk_read_time_ms{device="sdc"} 412
node_disk_read_time_ms{device="sr0"} 5
# HELP node_disk_reads_completed The total number of reads completed successfully.
# TYPE node_disk_reads_completed counter
node_disk_reads_completed{device="dm-0"} 11521
node_disk_reads_completed{device="dm-1"} 133
node_disk_reads_completed{device="dm-2"} 1139
node_disk_reads_completed{device="dm-3"} 130
node_disk_reads_completed{device="dm-4"} 199
node_disk_reads_completed{device="dm-5"} 2077
node_disk_reads_completed{device="dm-6"} 42
node_disk_reads_completed{device="sda"} 13429
node_disk_reads_completed{device="sdb"} 2483
node_disk_reads_completed{device="sdc"} 1376
node_disk_reads_completed{device="sr0"} 12
# HELP node_disk_reads_merged The number of reads merged. See
# TYPE node_disk_reads_merged counter
node_disk_reads_merged{device="dm-0"} 0
node_disk_reads_merged{device="dm-1"} 0
node_disk_reads_merged{device="dm-2"} 0
node_disk_reads_merged{device="dm-3"} 0
node_disk_reads_merged{device="dm-4"} 0
node_disk_reads_merged{device="dm-5"} 0
node_disk_reads_merged{device="dm-6"} 0
node_disk_reads_merged{device="sda"} 12
node_disk_reads_merged{device="sdb"} 0
node_disk_reads_merged{device="sdc"} 0
node_disk_reads_merged{device="sr0"} 0
# HELP node_disk_sectors_read The total number of sectors read successfully.
# TYPE node_disk_sectors_read counter
node_disk_sectors_read{device="dm-0"} 409338
node_disk_sectors_read{device="dm-1"} 2176
node_disk_sectors_read{device="dm-2"} 10743
node_disk_sectors_read{device="dm-3"} 5835
node_disk_sectors_read{device="dm-4"} 6798
node_disk_sectors_read{device="dm-5"} 80046
node_disk_sectors_read{device="dm-6"} 592
node_disk_sectors_read{device="sda"} 469291
node_disk_sectors_read{device="sdb"} 88964
node_disk_sectors_read{device="sdc"} 17514
node_disk_sectors_read{device="sr0"} 96
# HELP node_disk_sectors_written The total number of sectors written successfully.
# TYPE node_disk_sectors_written counter
node_disk_sectors_written{device="dm-0"} 80391
node_disk_sectors_written{device="dm-1"} 0
node_disk_sectors_written{device="dm-2"} 4714
node_disk_sectors_written{device="dm-3"} 176932
node_disk_sectors_written{device="dm-4"} 9046
node_disk_sectors_written{device="dm-5"} 4181
node_disk_sectors_written{device="dm-6"} 0
node_disk_sectors_written{device="sda"} 84487
node_disk_sectors_written{device="sdb"} 13227
node_disk_sectors_written{device="sdc"} 181646
node_disk_sectors_written{device="sr0"} 0
# HELP node_disk_write_time_ms This is the total number of milliseconds spent by all writes.
# TYPE node_disk_write_time_ms counter
node_disk_write_time_ms{device="dm-0"} 2935
node_disk_write_time_ms{device="dm-1"} 0
node_disk_write_time_ms{device="dm-2"} 57
node_disk_write_time_ms{device="dm-3"} 146868
node_disk_write_time_ms{device="dm-4"} 1503
node_disk_write_time_ms{device="dm-5"} 62
node_disk_write_time_ms{device="dm-6"} 0
node_disk_write_time_ms{device="sda"} 2797
node_disk_write_time_ms{device="sdb"} 2096
node_disk_write_time_ms{device="sdc"} 143569
node_disk_write_time_ms{device="sr0"} 0
# HELP node_disk_writes_completed The total number of writes completed successfully.
# TYPE node_disk_writes_completed counter
node_disk_writes_completed{device="dm-0"} 2701
node_disk_writes_completed{device="dm-1"} 0
node_disk_writes_completed{device="dm-2"} 70
node_disk_writes_completed{device="dm-3"} 415913
node_disk_writes_completed{device="dm-4"} 2737
node_disk_writes_completed{device="dm-5"} 17
node_disk_writes_completed{device="dm-6"} 0
node_disk_writes_completed{device="sda"} 3733
node_disk_writes_completed{device="sdb"} 3189
node_disk_writes_completed{device="sdc"} 421341
node_disk_writes_completed{device="sr0"} 0
# HELP node_disk_writes_merged The number of writes merged. See
# TYPE node_disk_writes_merged counter
node_disk_writes_merged{device="dm-0"} 0
node_disk_writes_merged{device="dm-1"} 0
node_disk_writes_merged{device="dm-2"} 0
node_disk_writes_merged{device="dm-3"} 0
node_disk_writes_merged{device="dm-4"} 0
node_disk_writes_merged{device="dm-5"} 0
node_disk_writes_merged{device="dm-6"} 0
node_disk_writes_merged{device="sda"} 147
node_disk_writes_merged{device="sdb"} 19
node_disk_writes_merged{device="sdc"} 2182
node_disk_writes_merged{device="sr0"} 0
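The write-side counters support the same per-operation calculation. Note in the sample above how sdc (and its dm-3 mapping) carries almost all of the write activity; a rough sketch over those values:

```python
# Sample values copied from the exporter output above.
write_time_ms = {"sda": 2797, "sdb": 2096, "sdc": 143569}
writes_completed = {"sda": 3733, "sdb": 3189, "sdc": 421341}

# Average time per completed write, in milliseconds, per device.
avg_write_latency_ms = {
    dev: write_time_ms[dev] / writes_completed[dev] for dev in write_time_ms
}
for dev, ms in sorted(avg_write_latency_ms.items()):
    print(f"{dev}: {ms:.3f} ms/write")
```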
# HELP node_exporter_build_info A metric with a constant '1' value labeled by version, revision, branch, and goversion from which node_exporter was built.
# TYPE node_exporter_build_info gauge
node_exporter_build_info{branch="master",goversion="go1.7.4",revision="2d78e22000779d63c714011e4fb30c65623b9c77",version="1.1.1"} 1
# HELP node_exporter_scrape_duration_seconds node_exporter: Duration of a scrape job.
# TYPE node_exporter_scrape_duration_seconds summary
node_exporter_scrape_duration_seconds{collector="diskstats",result="success",quantile="0.5"} 0.00018768700000000002
node_exporter_scrape_duration_seconds{collector="diskstats",result="success",quantile="0.9"} 0.000199063
node_exporter_scrape_duration_seconds{collector="diskstats",result="success",quantile="0.99"} 0.000199063
node_exporter_scrape_duration_seconds_sum{collector="diskstats",result="success"} 0.0008093360000000001
node_exporter_scrape_duration_seconds_count{collector="diskstats",result="success"} 3
node_exporter_scrape_duration_seconds{collector="filefd",result="success",quantile="0.5"} 2.6501000000000003e-05
node_exporter_scrape_duration_seconds{collector="filefd",result="success",quantile="0.9"} 0.000149764
node_exporter_scrape_duration_seconds{collector="filefd",result="success",quantile="0.99"} 0.000149764
node_exporter_scrape_duration_seconds_sum{collector="filefd",result="success"} 0.0004463360000000001
node_exporter_scrape_duration_seconds_count{collector="filefd",result="success"} 3
node_exporter_scrape_duration_seconds{collector="filesystem",result="success",quantile="0.5"} 0.000350556
node_exporter_scrape_duration_seconds{collector="filesystem",result="success",quantile="0.9"} 0.000661542
node_exporter_scrape_duration_seconds{collector="filesystem",result="success",quantile="0.99"} 0.000661542
node_exporter_scrape_duration_seconds_sum{collector="filesystem",result="success"} 0.006669826000000001
node_exporter_scrape_duration_seconds_count{collector="filesystem",result="success"} 3
node_exporter_scrape_duration_seconds{collector="loadavg",result="success",quantile="0.5"} 4.1976e-05
node_exporter_scrape_duration_seconds{collector="loadavg",result="success",quantile="0.9"} 4.2433000000000005e-05
node_exporter_scrape_duration_seconds{collector="loadavg",result="success",quantile="0.99"} 4.2433000000000005e-05
node_exporter_scrape_duration_seconds_sum{collector="loadavg",result="success"} 0.0006603080000000001
node_exporter_scrape_duration_seconds_count{collector="loadavg",result="success"} 3
node_exporter_scrape_duration_seconds{collector="meminfo",result="success",quantile="0.5"} 0.000244919
node_exporter_scrape_duration_seconds{collector="meminfo",result="success",quantile="0.9"} 0.00038740000000000004
node_exporter_scrape_duration_seconds{collector="meminfo",result="success",quantile="0.99"} 0.00038740000000000004
node_exporter_scrape_duration_seconds_sum{collector="meminfo",result="success"} 0.003157154
node_exporter_scrape_duration_seconds_count{collector="meminfo",result="success"} 3
node_exporter_scrape_duration_seconds{collector="netdev",result="success",quantile="0.5"} 0.000152886
node_exporter_scrape_duration_seconds{collector="netdev",result="success",quantile="0.9"} 0.00048569400000000006
node_exporter_scrape_duration_seconds{collector="netdev",result="success",quantile="0.99"} 0.00048569400000000006
node_exporter_scrape_duration_seconds_sum{collector="netdev",result="success"} 0.0033770220000000004
node_exporter_scrape_duration_seconds_count{collector="netdev",result="success"} 3
node_exporter_scrape_duration_seconds{collector="netstat",result="success",quantile="0.5"} 0.0007874710000000001
node_exporter_scrape_duration_seconds{collector="netstat",result="success",quantile="0.9"} 0.001612906
node_exporter_scrape_duration_seconds{collector="netstat",result="success",quantile="0.99"} 0.001612906
node_exporter_scrape_duration_seconds_sum{collector="netstat",result="success"} 0.0055138800000000005
node_exporter_scrape_duration_seconds_count{collector="netstat",result="success"} 3
node_exporter_scrape_duration_seconds{collector="stat",result="success",quantile="0.5"} 9.3368e-05
node_exporter_scrape_duration_seconds{collector="stat",result="success",quantile="0.9"} 0.00014552100000000002
node_exporter_scrape_duration_seconds{collector="stat",result="success",quantile="0.99"} 0.00014552100000000002
node_exporter_scrape_duration_seconds_sum{collector="stat",result="success"} 0.00039008900000000004
node_exporter_scrape_duration_seconds_count{collector="stat",result="success"} 3
node_exporter_scrape_duration_seconds{collector="time",result="success",quantile="0.5"} 1.0485e-05
node_exporter_scrape_duration_seconds{collector="time",result="success",quantile="0.9"} 2.462e-05
node_exporter_scrape_duration_seconds{collector="time",result="success",quantile="0.99"} 2.462e-05
node_exporter_scrape_duration_seconds_sum{collector="time",result="success"} 6.4423e-05
node_exporter_scrape_duration_seconds_count{collector="time",result="success"} 3
node_exporter_scrape_duration_seconds{collector="uname",result="success",quantile="0.5"} 1.811e-05
node_exporter_scrape_duration_seconds{collector="uname",result="success",quantile="0.9"} 6.731300000000001e-05
node_exporter_scrape_duration_seconds{collector="uname",result="success",quantile="0.99"} 6.731300000000001e-05
node_exporter_scrape_duration_seconds_sum{collector="uname",result="success"} 0.00030852200000000004
node_exporter_scrape_duration_seconds_count{collector="uname",result="success"} 3
node_exporter_scrape_duration_seconds{collector="vmstat",result="success",quantile="0.5"} 0.00035561100000000003
node_exporter_scrape_duration_seconds{collector="vmstat",result="success",quantile="0.9"} 0.00046660900000000004
node_exporter_scrape_duration_seconds{collector="vmstat",result="success",quantile="0.99"} 0.00046660900000000004
node_exporter_scrape_duration_seconds_sum{collector="vmstat",result="success"} 0.002186003
node_exporter_scrape_duration_seconds_count{collector="vmstat",result="success"} 3
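`node_exporter_scrape_duration_seconds` is a Prometheus summary: each collector exposes pre-computed quantiles plus `_sum` and `_count` series, and the mean scrape duration is simply the sum divided by the count. Using the diskstats values from the output above:

```python
# diskstats collector, values from the exporter output above.
scrape_sum = 0.000809336  # node_exporter_scrape_duration_seconds_sum
scrape_count = 3          # node_exporter_scrape_duration_seconds_count

# Mean scrape duration for the collector so far.
mean_scrape_seconds = scrape_sum / scrape_count
print(f"mean diskstats scrape: {mean_scrape_seconds * 1e6:.1f} microseconds")
```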
# HELP node_filefd_allocated File descriptor statistics: allocated.
# TYPE node_filefd_allocated gauge
node_filefd_allocated 1696
# HELP node_filefd_maximum File descriptor statistics: maximum.
# TYPE node_filefd_maximum gauge
node_filefd_maximum 382428
# HELP node_filesystem_avail Filesystem space available to non-root users in bytes.
# TYPE node_filesystem_avail gauge
node_filesystem_avail{device="/dev/mapper/centos-root",fstype="xfs",mountpoint="/"} 2.977058816e+09
node_filesystem_avail{device="/dev/mapper/data-home",fstype="xfs",mountpoint="/home"} 2.3885000704e+10
node_filesystem_avail{device="/dev/mapper/data-opt",fstype="xfs",mountpoint="/opt"} 6.520070144e+09
node_filesystem_avail{device="/dev/mapper/tmpdata-docker",fstype="xfs",mountpoint="/var/lib/docker"} 8.374427648e+09
node_filesystem_avail{device="/dev/mapper/tmpdata-mongo_data",fstype="xfs",mountpoint="/home/tim/mongo/data"} 6.386853888e+10
node_filesystem_avail{device="/dev/sda1",fstype="xfs",mountpoint="/boot"} 1.37248768e+08
node_filesystem_avail{device="rootfs",fstype="rootfs",mountpoint="/"} 2.977058816e+09
node_filesystem_avail{device="tmpfs",fstype="tmpfs",mountpoint="/run"} 1.978773504e+09
node_filesystem_avail{device="tmpfs",fstype="tmpfs",mountpoint="/run/user/0"} 3.975168e+08
node_filesystem_avail{device="tmpfs",fstype="tmpfs",mountpoint="/run/user/1000"} 3.975168e+08
# HELP node_filesystem_files Filesystem total file nodes.
# TYPE node_filesystem_files gauge
node_filesystem_files{device="/dev/mapper/centos-root",fstype="xfs",mountpoint="/"} 6.991872e+06
node_filesystem_files{device="/dev/mapper/data-home",fstype="xfs",mountpoint="/home"} 2.9360128e+07
node_filesystem_files{device="/dev/mapper/data-opt",fstype="xfs",mountpoint="/opt"} 8.388608e+06
node_filesystem_files{device="/dev/mapper/tmpdata-docker",fstype="xfs",mountpoint="/var/lib/docker"} 4.194304e+06
node_filesystem_files{device="/dev/mapper/tmpdata-mongo_data",fstype="xfs",mountpoint="/home/tim/mongo/data"} 3.145728e+07
node_filesystem_files{device="/dev/sda1",fstype="xfs",mountpoint="/boot"} 512000
node_filesystem_files{device="rootfs",fstype="rootfs",mountpoint="/"} 6.991872e+06
node_filesystem_files{device="tmpfs",fstype="tmpfs",mountpoint="/run"} 485249
node_filesystem_files{device="tmpfs",fstype="tmpfs",mountpoint="/run/user/0"} 485249
node_filesystem_files{device="tmpfs",fstype="tmpfs",mountpoint="/run/user/1000"} 485249
# HELP node_filesystem_files_free Filesystem total free file nodes.
# TYPE node_filesystem_files_free gauge
node_filesystem_files_free{device="/dev/mapper/centos-root",fstype="xfs",mountpoint="/"} 6.839777e+06
node_filesystem_files_free{device="/dev/mapper/data-home",fstype="xfs",mountpoint="/home"} 2.9273287e+07
node_filesystem_files_free{device="/dev/mapper/data-opt",fstype="xfs",mountpoint="/opt"} 8.376436e+06
node_filesystem_files_free{device="/dev/mapper/tmpdata-docker",fstype="xfs",mountpoint="/var/lib/docker"} 4.194188e+06
node_filesystem_files_free{device="/dev/mapper/tmpdata-mongo_data",fstype="xfs",mountpoint="/home/tim/mongo/data"} 3.1457181e+07
node_filesystem_files_free{device="/dev/sda1",fstype="xfs",mountpoint="/boot"} 511638
node_filesystem_files_free{device="rootfs",fstype="rootfs",mountpoint="/"} 6.839777e+06
node_filesystem_files_free{device="tmpfs",fstype="tmpfs",mountpoint="/run"} 484725
node_filesystem_files_free{device="tmpfs",fstype="tmpfs",mountpoint="/run/user/0"} 485248
node_filesystem_files_free{device="tmpfs",fstype="tmpfs",mountpoint="/run/user/1000"} 485248
# HELP node_filesystem_free Filesystem free space in bytes.
# TYPE node_filesystem_free gauge
node_filesystem_free{device="/dev/mapper/centos-root",fstype="xfs",mountpoint="/"} 2.977058816e+09
node_filesystem_free{device="/dev/mapper/data-home",fstype="xfs",mountpoint="/home"} 2.3885000704e+10
node_filesystem_free{device="/dev/mapper/data-opt",fstype="xfs",mountpoint="/opt"} 6.520070144e+09
node_filesystem_free{device="/dev/mapper/tmpdata-docker",fstype="xfs",mountpoint="/var/lib/docker"} 8.374427648e+09
node_filesystem_free{device="/dev/mapper/tmpdata-mongo_data",fstype="xfs",mountpoint="/home/tim/mongo/data"} 6.386853888e+10
node_filesystem_free{device="/dev/sda1",fstype="xfs",mountpoint="/boot"} 1.37248768e+08
node_filesystem_free{device="rootfs",fstype="rootfs",mountpoint="/"} 2.977058816e+09
node_filesystem_free{device="tmpfs",fstype="tmpfs",mountpoint="/run"} 1.978773504e+09
node_filesystem_free{device="tmpfs",fstype="tmpfs",mountpoint="/run/user/0"} 3.975168e+08
node_filesystem_free{device="tmpfs",fstype="tmpfs",mountpoint="/run/user/1000"} 3.975168e+08
# HELP node_filesystem_readonly Filesystem read-only status.
# TYPE node_filesystem_readonly gauge
node_filesystem_readonly{device="/dev/mapper/centos-root",fstype="xfs",mountpoint="/"} 0
node_filesystem_readonly{device="/dev/mapper/data-home",fstype="xfs",mountpoint="/home"} 0
node_filesystem_readonly{device="/dev/mapper/data-opt",fstype="xfs",mountpoint="/opt"} 0
node_filesystem_readonly{device="/dev/mapper/tmpdata-docker",fstype="xfs",mountpoint="/var/lib/docker"} 0
node_filesystem_readonly{device="/dev/mapper/tmpdata-mongo_data",fstype="xfs",mountpoint="/home/tim/mongo/data"} 0
node_filesystem_readonly{device="/dev/sda1",fstype="xfs",mountpoint="/boot"} 0
node_filesystem_readonly{device="rootfs",fstype="rootfs",mountpoint="/"} 0
node_filesystem_readonly{device="tmpfs",fstype="tmpfs",mountpoint="/run"} 0
node_filesystem_readonly{device="tmpfs",fstype="tmpfs",mountpoint="/run/user/0"} 0
node_filesystem_readonly{device="tmpfs",fstype="tmpfs",mountpoint="/run/user/1000"} 0
# HELP node_filesystem_size Filesystem size in bytes.
# TYPE node_filesystem_size gauge
node_filesystem_size{device="/dev/mapper/centos-root",fstype="xfs",mountpoint="/"} 7.149191168e+09
node_filesystem_size{device="/dev/mapper/data-home",fstype="xfs",mountpoint="/home"} 3.0054285312e+10
node_filesystem_size{device="/dev/mapper/data-opt",fstype="xfs",mountpoint="/opt"} 8.579448832e+09
node_filesystem_size{device="/dev/mapper/tmpdata-docker",fstype="xfs",mountpoint="/var/lib/docker"} 8.579448832e+09
node_filesystem_size{device="/dev/mapper/tmpdata-mongo_data",fstype="xfs",mountpoint="/home/tim/mongo/data"} 6.439305216e+10
node_filesystem_size{device="/dev/sda1",fstype="xfs",mountpoint="/boot"} 5.20794112e+08
node_filesystem_size{device="rootfs",fstype="rootfs",mountpoint="/"} 7.149191168e+09
node_filesystem_size{device="tmpfs",fstype="tmpfs",mountpoint="/run"} 1.987579904e+09
node_filesystem_size{device="tmpfs",fstype="tmpfs",mountpoint="/run/user/0"} 3.975168e+08
node_filesystem_size{device="tmpfs",fstype="tmpfs",mountpoint="/run/user/1000"} 3.975168e+08
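The `node_filesystem_size` and `node_filesystem_avail` gauges together give a used-space percentage per mountpoint (keep in mind that `avail` is the space available to non-root users, so this figure counts any root-reserved blocks as used). A quick sketch over two of the sample mountpoints above:

```python
# Sample values copied from the exporter output above (bytes).
fs_size = {"/": 7.149191168e9, "/home": 3.0054285312e10}
fs_avail = {"/": 2.977058816e9, "/home": 2.3885000704e10}

# Percentage of each filesystem in use.
used_pct = {
    mnt: 100.0 * (fs_size[mnt] - fs_avail[mnt]) / fs_size[mnt] for mnt in fs_size
}
for mnt, pct in sorted(used_pct.items()):
    print(f"{mnt}: {pct:.1f}% used")
```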
# HELP node_forks Total number of forks.
# TYPE node_forks counter
node_forks 3928
# HELP node_intr Total number of interrupts serviced.
# TYPE node_intr counter
node_intr 4.629898e+06
# HELP node_load1 1m load average.
# TYPE node_load1 gauge
node_load1 0.2
# HELP node_load15 15m load average.
# TYPE node_load15 gauge
node_load15 0.21
# HELP node_load5 5m load average.
# TYPE node_load5 gauge
node_load5 0.22
# HELP node_memory_Active Memory information field Active.
# TYPE node_memory_Active gauge
node_memory_Active 4.80772096e+08
# HELP node_memory_Active_anon Memory information field Active_anon.
# TYPE node_memory_Active_anon gauge
node_memory_Active_anon 3.62610688e+08
# HELP node_memory_Active_file Memory information field Active_file.
# TYPE node_memory_Active_file gauge
node_memory_Active_file 1.18161408e+08
# HELP node_memory_AnonHugePages Memory information field AnonHugePages.
# TYPE node_memory_AnonHugePages gauge
node_memory_AnonHugePages 1.2582912e+07
# HELP node_memory_AnonPages Memory information field AnonPages.
# TYPE node_memory_AnonPages gauge
node_memory_AnonPages 3.62438656e+08
# HELP node_memory_Bounce Memory information field Bounce.
# TYPE node_memory_Bounce gauge
node_memory_Bounce 0
# HELP node_memory_Buffers Memory information field Buffers.
# TYPE node_memory_Buffers gauge
node_memory_Buffers 1.90464e+06
# HELP node_memory_Cached Memory information field Cached.
# TYPE node_memory_Cached gauge
node_memory_Cached 3.12205312e+08
# HELP node_memory_CommitLimit Memory information field CommitLimit.
# TYPE node_memory_CommitLimit gauge
node_memory_CommitLimit 2.847408128e+09
# HELP node_memory_Committed_AS Memory information field Committed_AS.
# TYPE node_memory_Committed_AS gauge
node_memory_Committed_AS 5.08229632e+09
# HELP node_memory_DirectMap2M Memory information field DirectMap2M.
# TYPE node_memory_DirectMap2M gauge
node_memory_DirectMap2M 4.22576128e+09
# HELP node_memory_DirectMap4k Memory information field DirectMap4k.
# TYPE node_memory_DirectMap4k gauge
node_memory_DirectMap4k 6.914048e+07
# HELP node_memory_Dirty Memory information field Dirty.
# TYPE node_memory_Dirty gauge
node_memory_Dirty 69632
# HELP node_memory_HardwareCorrupted Memory information field HardwareCorrupted.
# TYPE node_memory_HardwareCorrupted gauge
node_memory_HardwareCorrupted 0
# HELP node_memory_HugePages_Free Memory information field HugePages_Free.
# TYPE node_memory_HugePages_Free gauge
node_memory_HugePages_Free 0
# HELP node_memory_HugePages_Rsvd Memory information field HugePages_Rsvd.
# TYPE node_memory_HugePages_Rsvd gauge
node_memory_HugePages_Rsvd 0
# HELP node_memory_HugePages_Surp Memory information field HugePages_Surp.
# TYPE node_memory_HugePages_Surp gauge
node_memory_HugePages_Surp 0
# HELP node_memory_HugePages_Total Memory information field HugePages_Total.
# TYPE node_memory_HugePages_Total gauge
node_memory_HugePages_Total 0
# HELP node_memory_Hugepagesize Memory information field Hugepagesize.
# TYPE node_memory_Hugepagesize gauge
node_memory_Hugepagesize 2.097152e+06
# HELP node_memory_Inactive Memory information field Inactive.
# TYPE node_memory_Inactive gauge
node_memory_Inactive 1.95702784e+08
# HELP node_memory_Inactive_anon Memory information field Inactive_anon.
# TYPE node_memory_Inactive_anon gauge
node_memory_Inactive_anon 8.564736e+06
# HELP node_memory_Inactive_file Memory information field Inactive_file.
# TYPE node_memory_Inactive_file gauge
node_memory_Inactive_file 1.87138048e+08
# HELP node_memory_KernelStack Memory information field KernelStack.
# TYPE node_memory_KernelStack gauge
node_memory_KernelStack 1.1583488e+07
# HELP node_memory_Mapped Memory information field Mapped.
# TYPE node_memory_Mapped gauge
node_memory_Mapped 6.7530752e+07
# HELP node_memory_MemAvailable Memory information field MemAvailable.
# TYPE node_memory_MemAvailable gauge
node_memory_MemAvailable 3.273515008e+09
# HELP node_memory_MemFree Memory information field MemFree.
# TYPE node_memory_MemFree gauge
node_memory_MemFree 3.15346944e+09
# HELP node_memory_MemTotal Memory information field MemTotal.
# TYPE node_memory_MemTotal gauge
node_memory_MemTotal 3.975163904e+09
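`MemAvailable` (not `MemFree`) is the kernel's estimate of memory obtainable without swapping, so the usual "memory in use" figure is derived from `MemTotal` and `MemAvailable`. With the sample gauges above:

```python
# Sample values copied from the exporter output above (bytes).
mem_total = 3.975163904e9      # node_memory_MemTotal
mem_available = 3.273515008e9  # node_memory_MemAvailable

# Fraction of memory effectively in use.
used_fraction = (mem_total - mem_available) / mem_total
print(f"memory in use: {used_fraction * 100:.1f}%")
```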
# HELP node_memory_Mlocked Memory information field Mlocked.
# TYPE node_memory_Mlocked gauge
node_memory_Mlocked 0
# HELP node_memory_NFS_Unstable Memory information field NFS_Unstable.
# TYPE node_memory_NFS_Unstable gauge
node_memory_NFS_Unstable 0
# HELP node_memory_PageTables Memory information field PageTables.
# TYPE node_memory_PageTables gauge
node_memory_PageTables 9.105408e+06
# HELP node_memory_SReclaimable Memory information field SReclaimable.
# TYPE node_memory_SReclaimable gauge
node_memory_SReclaimable 4.4638208e+07
# HELP node_memory_SUnreclaim Memory information field SUnreclaim.
# TYPE node_memory_SUnreclaim gauge
node_memory_SUnreclaim 3.2059392e+07
# HELP node_memory_Shmem Memory information field Shmem.
# TYPE node_memory_Shmem gauge
node_memory_Shmem 8.810496e+06
# HELP node_memory_Slab Memory information field Slab.
# TYPE node_memory_Slab gauge
node_memory_Slab 7.66976e+07
# HELP node_memory_SwapCached Memory information field SwapCached.
# TYPE node_memory_SwapCached gauge
node_memory_SwapCached 0
# HELP node_memory_SwapFree Memory information field SwapFree.
# TYPE node_memory_SwapFree gauge
node_memory_SwapFree 8.59828224e+08
# HELP node_memory_SwapTotal Memory information field SwapTotal.
# TYPE node_memory_SwapTotal gauge
node_memory_SwapTotal 8.59828224e+08
# HELP node_memory_Unevictable Memory information field Unevictable.
# TYPE node_memory_Unevictable gauge
node_memory_Unevictable 0
# HELP node_memory_VmallocChunk Memory information field VmallocChunk.
# TYPE node_memory_VmallocChunk gauge
node_memory_VmallocChunk 3.5184346918912e+13
# HELP node_memory_VmallocTotal Memory information field VmallocTotal.
# TYPE node_memory_VmallocTotal gauge
node_memory_VmallocTotal 3.5184372087808e+13
# HELP node_memory_VmallocUsed Memory information field VmallocUsed.
# TYPE node_memory_VmallocUsed gauge
node_memory_VmallocUsed 1.6748544e+07
# HELP node_memory_Writeback Memory information field Writeback.
# TYPE node_memory_Writeback gauge
node_memory_Writeback 0
# HELP node_memory_WritebackTmp Memory information field WritebackTmp.
# TYPE node_memory_WritebackTmp gauge
node_memory_WritebackTmp 0
# HELP node_netstat_IcmpMsg_InType3 Protocol IcmpMsg statistic InType3.
# TYPE node_netstat_IcmpMsg_InType3 untyped
node_netstat_IcmpMsg_InType3 32
# HELP node_netstat_IcmpMsg_OutType3 Protocol IcmpMsg statistic OutType3.
# TYPE node_netstat_IcmpMsg_OutType3 untyped
node_netstat_IcmpMsg_OutType3 32
# HELP node_netstat_Icmp_InAddrMaskReps Protocol Icmp statistic InAddrMaskReps.
# TYPE node_netstat_Icmp_InAddrMaskReps untyped
node_netstat_Icmp_InAddrMaskReps 0
# HELP node_netstat_Icmp_InAddrMasks Protocol Icmp statistic InAddrMasks.
# TYPE node_netstat_Icmp_InAddrMasks untyped
node_netstat_Icmp_InAddrMasks 0
# HELP node_netstat_Icmp_InCsumErrors Protocol Icmp statistic InCsumErrors.
# TYPE node_netstat_Icmp_InCsumErrors untyped
node_netstat_Icmp_InCsumErrors 0
# HELP node_netstat_Icmp_InDestUnreachs Protocol Icmp statistic InDestUnreachs.
# TYPE node_netstat_Icmp_InDestUnreachs untyped
node_netstat_Icmp_InDestUnreachs 32
# HELP node_netstat_Icmp_InEchoReps Protocol Icmp statistic InEchoReps.
# TYPE node_netstat_Icmp_InEchoReps untyped
node_netstat_Icmp_InEchoReps 0
# HELP node_netstat_Icmp_InEchos Protocol Icmp statistic InEchos.
# TYPE node_netstat_Icmp_InEchos untyped
node_netstat_Icmp_InEchos 0
# HELP node_netstat_Icmp_InErrors Protocol Icmp statistic InErrors.
# TYPE node_netstat_Icmp_InErrors untyped
node_netstat_Icmp_InErrors 0
# HELP node_netstat_Icmp_InMsgs Protocol Icmp statistic InMsgs.
# TYPE node_netstat_Icmp_InMsgs untyped
node_netstat_Icmp_InMsgs 32
# HELP node_netstat_Icmp_InParmProbs Protocol Icmp statistic InParmProbs.
# TYPE node_netstat_Icmp_InParmProbs untyped
node_netstat_Icmp_InParmProbs 0
# HELP node_netstat_Icmp_InRedirects Protocol Icmp statistic InRedirects.
# TYPE node_netstat_Icmp_InRedirects untyped
node_netstat_Icmp_InRedirects 0
# HELP node_netstat_Icmp_InSrcQuenchs Protocol Icmp statistic InSrcQuenchs.
# TYPE node_netstat_Icmp_InSrcQuenchs untyped
node_netstat_Icmp_InSrcQuenchs 0
# HELP node_netstat_Icmp_InTimeExcds Protocol Icmp statistic InTimeExcds.
# TYPE node_netstat_Icmp_InTimeExcds untyped
node_netstat_Icmp_InTimeExcds 0
# HELP node_netstat_Icmp_InTimestampReps Protocol Icmp statistic InTimestampReps.
# TYPE node_netstat_Icmp_InTimestampReps untyped
node_netstat_Icmp_InTimestampReps 0
# HELP node_netstat_Icmp_InTimestamps Protocol Icmp statistic InTimestamps.
# TYPE node_netstat_Icmp_InTimestamps untyped
node_netstat_Icmp_InTimestamps 0
# HELP node_netstat_Icmp_OutAddrMaskReps Protocol Icmp statistic OutAddrMaskReps.
# TYPE node_netstat_Icmp_OutAddrMaskReps untyped
node_netstat_Icmp_OutAddrMaskReps 0
# HELP node_netstat_Icmp_OutAddrMasks Protocol Icmp statistic OutAddrMasks.
# TYPE node_netstat_Icmp_OutAddrMasks untyped
node_netstat_Icmp_OutAddrMasks 0
# HELP node_netstat_Icmp_OutDestUnreachs Protocol Icmp statistic OutDestUnreachs.
# TYPE node_netstat_Icmp_OutDestUnreachs untyped
node_netstat_Icmp_OutDestUnreachs 32
# HELP node_netstat_Icmp_OutEchoReps Protocol Icmp statistic OutEchoReps.
# TYPE node_netstat_Icmp_OutEchoReps untyped
node_netstat_Icmp_OutEchoReps 0
# HELP node_netstat_Icmp_OutEchos Protocol Icmp statistic OutEchos.
# TYPE node_netstat_Icmp_OutEchos untyped
node_netstat_Icmp_OutEchos 0
# HELP node_netstat_Icmp_OutErrors Protocol Icmp statistic OutErrors.
# TYPE node_netstat_Icmp_OutErrors untyped
node_netstat_Icmp_OutErrors 0
# HELP node_netstat_Icmp_OutMsgs Protocol Icmp statistic OutMsgs.
# TYPE node_netstat_Icmp_OutMsgs untyped
node_netstat_Icmp_OutMsgs 32
# HELP node_netstat_Icmp_OutParmProbs Protocol Icmp statistic OutParmProbs.
# TYPE node_netstat_Icmp_OutParmProbs untyped
node_netstat_Icmp_OutParmProbs 0
# HELP node_netstat_Icmp_OutRedirects Protocol Icmp statistic OutRedirects.
# TYPE node_netstat_Icmp_OutRedirects untyped
node_netstat_Icmp_OutRedirects 0
# HELP node_netstat_Icmp_OutSrcQuenchs Protocol Icmp statistic OutSrcQuenchs.
# TYPE node_netstat_Icmp_OutSrcQuenchs untyped
node_netstat_Icmp_OutSrcQuenchs 0
# HELP node_netstat_Icmp_OutTimeExcds Protocol Icmp statistic OutTimeExcds.
# TYPE node_netstat_Icmp_OutTimeExcds untyped
node_netstat_Icmp_OutTimeExcds 0
# HELP node_netstat_Icmp_OutTimestampReps Protocol Icmp statistic OutTimestampReps.
# TYPE node_netstat_Icmp_OutTimestampReps untyped
node_netstat_Icmp_OutTimestampReps 0
# HELP node_netstat_Icmp_OutTimestamps Protocol Icmp statistic OutTimestamps.
# TYPE node_netstat_Icmp_OutTimestamps untyped
node_netstat_Icmp_OutTimestamps 0
# HELP node_netstat_IpExt_InBcastOctets Protocol IpExt statistic InBcastOctets.
# TYPE node_netstat_IpExt_InBcastOctets untyped
node_netstat_IpExt_InBcastOctets 16200
# HELP node_netstat_IpExt_InBcastPkts Protocol IpExt statistic InBcastPkts.
# TYPE node_netstat_IpExt_InBcastPkts untyped
node_netstat_IpExt_InBcastPkts 225
# HELP node_netstat_IpExt_InCEPkts Protocol IpExt statistic InCEPkts.
# TYPE node_netstat_IpExt_InCEPkts untyped
node_netstat_IpExt_InCEPkts 0
# HELP node_netstat_IpExt_InCsumErrors Protocol IpExt statistic InCsumErrors.
# TYPE node_netstat_IpExt_InCsumErrors untyped
node_netstat_IpExt_InCsumErrors 0
# HELP node_netstat_IpExt_InECT0Pkts Protocol IpExt statistic InECT0Pkts.
# TYPE node_netstat_IpExt_InECT0Pkts untyped
node_netstat_IpExt_InECT0Pkts 0
# HELP node_netstat_IpExt_InECT1Pkts Protocol IpExt statistic InECT1Pkts.
# TYPE node_netstat_IpExt_InECT1Pkts untyped
node_netstat_IpExt_InECT1Pkts 0
# HELP node_netstat_IpExt_InMcastOctets Protocol IpExt statistic InMcastOctets.
# TYPE node_netstat_IpExt_InMcastOctets untyped
node_netstat_IpExt_InMcastOctets 0
# HELP node_netstat_IpExt_InMcastPkts Protocol IpExt statistic InMcastPkts.
# TYPE node_netstat_IpExt_InMcastPkts untyped
node_netstat_IpExt_InMcastPkts 0
# HELP node_netstat_IpExt_InNoECTPkts Protocol IpExt statistic InNoECTPkts.
# TYPE node_netstat_IpExt_InNoECTPkts untyped
node_netstat_IpExt_InNoECTPkts 139536
# HELP node_netstat_IpExt_InNoRoutes Protocol IpExt statistic InNoRoutes.
# TYPE node_netstat_IpExt_InNoRoutes untyped
node_netstat_IpExt_InNoRoutes 1
# HELP node_netstat_IpExt_InOctets Protocol IpExt statistic InOctets.
# TYPE node_netstat_IpExt_InOctets untyped
node_netstat_IpExt_InOctets 3.4550209e+07
# HELP node_netstat_IpExt_InTruncatedPkts Protocol IpExt statistic InTruncatedPkts.
# TYPE node_netstat_IpExt_InTruncatedPkts untyped
node_netstat_IpExt_InTruncatedPkts 0
# HELP node_netstat_IpExt_OutBcastOctets Protocol IpExt statistic OutBcastOctets.
# TYPE node_netstat_IpExt_OutBcastOctets untyped
node_netstat_IpExt_OutBcastOctets 0
# HELP node_netstat_IpExt_OutBcastPkts Protocol IpExt statistic OutBcastPkts.
# TYPE node_netstat_IpExt_OutBcastPkts untyped
node_netstat_IpExt_OutBcastPkts 0
# HELP node_netstat_IpExt_OutMcastOctets Protocol IpExt statistic OutMcastOctets.
# TYPE node_netstat_IpExt_OutMcastOctets untyped
node_netstat_IpExt_OutMcastOctets 0
# HELP node_netstat_IpExt_OutMcastPkts Protocol IpExt statistic OutMcastPkts.
# TYPE node_netstat_IpExt_OutMcastPkts untyped
node_netstat_IpExt_OutMcastPkts 0
# HELP node_netstat_IpExt_OutOctets Protocol IpExt statistic OutOctets.
# TYPE node_netstat_IpExt_OutOctets untyped
node_netstat_IpExt_OutOctets 3.469735e+07
# HELP node_netstat_Ip_DefaultTTL Protocol Ip statistic DefaultTTL.
# TYPE node_netstat_Ip_DefaultTTL untyped
node_netstat_Ip_DefaultTTL 64
# HELP node_netstat_Ip_ForwDatagrams Protocol Ip statistic ForwDatagrams.
# TYPE node_netstat_Ip_ForwDatagrams untyped
node_netstat_Ip_ForwDatagrams 0
# HELP node_netstat_Ip_Forwarding Protocol Ip statistic Forwarding.
# TYPE node_netstat_Ip_Forwarding untyped
node_netstat_Ip_Forwarding 1
# HELP node_netstat_Ip_FragCreates Protocol Ip statistic FragCreates.
# TYPE node_netstat_Ip_FragCreates untyped
node_netstat_Ip_FragCreates 0
# HELP node_netstat_Ip_FragFails Protocol Ip statistic FragFails.
# TYPE node_netstat_Ip_FragFails untyped
node_netstat_Ip_FragFails 0
# HELP node_netstat_Ip_FragOKs Protocol Ip statistic FragOKs.
# TYPE node_netstat_Ip_FragOKs untyped
node_netstat_Ip_FragOKs 0
# HELP node_netstat_Ip_InAddrErrors Protocol Ip statistic InAddrErrors.
# TYPE node_netstat_Ip_InAddrErrors untyped
node_netstat_Ip_InAddrErrors 0
# HELP node_netstat_Ip_InDelivers Protocol Ip statistic InDelivers.
# TYPE node_netstat_Ip_InDelivers untyped
node_netstat_Ip_InDelivers 139535
# HELP node_netstat_Ip_InDiscards Protocol Ip statistic InDiscards.
# TYPE node_netstat_Ip_InDiscards untyped
node_netstat_Ip_InDiscards 0
# HELP node_netstat_Ip_InHdrErrors Protocol Ip statistic InHdrErrors.
# TYPE node_netstat_Ip_InHdrErrors untyped
node_netstat_Ip_InHdrErrors 0
# HELP node_netstat_Ip_InReceives Protocol Ip statistic InReceives.
# TYPE node_netstat_Ip_InReceives untyped
node_netstat_Ip_InReceives 139536
# HELP node_netstat_Ip_InUnknownProtos Protocol Ip statistic InUnknownProtos.
# TYPE node_netstat_Ip_InUnknownProtos untyped
node_netstat_Ip_InUnknownProtos 0
# HELP node_netstat_Ip_OutDiscards Protocol Ip statistic OutDiscards.
# TYPE node_netstat_Ip_OutDiscards untyped
node_netstat_Ip_OutDiscards 16
# HELP node_netstat_Ip_OutNoRoutes Protocol Ip statistic OutNoRoutes.
# TYPE node_netstat_Ip_OutNoRoutes untyped
node_netstat_Ip_OutNoRoutes 0
# HELP node_netstat_Ip_OutRequests Protocol Ip statistic OutRequests.
# TYPE node_netstat_Ip_OutRequests untyped
node_netstat_Ip_OutRequests 139089
# HELP node_netstat_Ip_ReasmFails Protocol Ip statistic ReasmFails.
# TYPE node_netstat_Ip_ReasmFails untyped
node_netstat_Ip_ReasmFails 0
# HELP node_netstat_Ip_ReasmOKs Protocol Ip statistic ReasmOKs.
# TYPE node_netstat_Ip_ReasmOKs untyped
node_netstat_Ip_ReasmOKs 0
# HELP node_netstat_Ip_ReasmReqds Protocol Ip statistic ReasmReqds.
# TYPE node_netstat_Ip_ReasmReqds untyped
node_netstat_Ip_ReasmReqds 0
# HELP node_netstat_Ip_ReasmTimeout Protocol Ip statistic ReasmTimeout.
# TYPE node_netstat_Ip_ReasmTimeout untyped
node_netstat_Ip_ReasmTimeout 0
# HELP node_netstat_TcpExt_ArpFilter Protocol TcpExt statistic ArpFilter.
# TYPE node_netstat_TcpExt_ArpFilter untyped
node_netstat_TcpExt_ArpFilter 0
# HELP node_netstat_TcpExt_BusyPollRxPackets Protocol TcpExt statistic BusyPollRxPackets.
# TYPE node_netstat_TcpExt_BusyPollRxPackets untyped
node_netstat_TcpExt_BusyPollRxPackets 0
# HELP node_netstat_TcpExt_DelayedACKLocked Protocol TcpExt statistic DelayedACKLocked.
# TYPE node_netstat_TcpExt_DelayedACKLocked untyped
node_netstat_TcpExt_DelayedACKLocked 0
# HELP node_netstat_TcpExt_DelayedACKLost Protocol TcpExt statistic DelayedACKLost.
# TYPE node_netstat_TcpExt_DelayedACKLost untyped
node_netstat_TcpExt_DelayedACKLost 0
# HELP node_netstat_TcpExt_DelayedACKs Protocol TcpExt statistic DelayedACKs.
# TYPE node_netstat_TcpExt_DelayedACKs untyped
node_netstat_TcpExt_DelayedACKs 3374
# HELP node_netstat_TcpExt_EmbryonicRsts Protocol TcpExt statistic EmbryonicRsts.
# TYPE node_netstat_TcpExt_EmbryonicRsts untyped
node_netstat_TcpExt_EmbryonicRsts 0
# HELP node_netstat_TcpExt_IPReversePathFilter Protocol TcpExt statistic IPReversePathFilter.
# TYPE node_netstat_TcpExt_IPReversePathFilter untyped
node_netstat_TcpExt_IPReversePathFilter 0
# HELP node_netstat_TcpExt_ListenDrops Protocol TcpExt statistic ListenDrops.
# TYPE node_netstat_TcpExt_ListenDrops untyped
node_netstat_TcpExt_ListenDrops 0
# HELP node_netstat_TcpExt_ListenOverflows Protocol TcpExt statistic ListenOverflows.
# TYPE node_netstat_TcpExt_ListenOverflows untyped
node_netstat_TcpExt_ListenOverflows 0
# HELP node_netstat_TcpExt_LockDroppedIcmps Protocol TcpExt statistic LockDroppedIcmps.
# TYPE node_netstat_TcpExt_LockDroppedIcmps untyped
node_netstat_TcpExt_LockDroppedIcmps 0
# HELP node_netstat_TcpExt_OfoPruned Protocol TcpExt statistic OfoPruned.
# TYPE node_netstat_TcpExt_OfoPruned untyped
node_netstat_TcpExt_OfoPruned 0
# HELP node_netstat_TcpExt_OutOfWindowIcmps Protocol TcpExt statistic OutOfWindowIcmps.
# TYPE node_netstat_TcpExt_OutOfWindowIcmps untyped
node_netstat_TcpExt_OutOfWindowIcmps 0
# HELP node_netstat_TcpExt_PAWSActive Protocol TcpExt statistic PAWSActive.
# TYPE node_netstat_TcpExt_PAWSActive untyped
node_netstat_TcpExt_PAWSActive 0
# HELP node_netstat_TcpExt_PAWSEstab Protocol TcpExt statistic PAWSEstab.
# TYPE node_netstat_TcpExt_PAWSEstab untyped
node_netstat_TcpExt_PAWSEstab 0
# HELP node_netstat_TcpExt_PAWSPassive Protocol TcpExt statistic PAWSPassive.
# TYPE node_netstat_TcpExt_PAWSPassive untyped
node_netstat_TcpExt_PAWSPassive 0
# HELP node_netstat_TcpExt_PruneCalled Protocol TcpExt statistic PruneCalled.
# TYPE node_netstat_TcpExt_PruneCalled untyped
node_netstat_TcpExt_PruneCalled 0
# HELP node_netstat_TcpExt_RcvPruned Protocol TcpExt statistic RcvPruned.
# TYPE node_netstat_TcpExt_RcvPruned untyped
node_netstat_TcpExt_RcvPruned 0
# HELP node_netstat_TcpExt_SyncookiesFailed Protocol TcpExt statistic SyncookiesFailed.
# TYPE node_netstat_TcpExt_SyncookiesFailed untyped
node_netstat_TcpExt_SyncookiesFailed 0
# HELP node_netstat_TcpExt_SyncookiesRecv Protocol TcpExt statistic SyncookiesRecv.
# TYPE node_netstat_TcpExt_SyncookiesRecv untyped
node_netstat_TcpExt_SyncookiesRecv 0
# HELP node_netstat_TcpExt_SyncookiesSent Protocol TcpExt statistic SyncookiesSent.
# TYPE node_netstat_TcpExt_SyncookiesSent untyped
node_netstat_TcpExt_SyncookiesSent 0
# HELP node_netstat_TcpExt_TCPACKSkippedChallenge Protocol TcpExt statistic TCPACKSkippedChallenge.
# TYPE node_netstat_TcpExt_TCPACKSkippedChallenge untyped
node_netstat_TcpExt_TCPACKSkippedChallenge 0
# HELP node_netstat_TcpExt_TCPACKSkippedFinWait2 Protocol TcpExt statistic TCPACKSkippedFinWait2.
# TYPE node_netstat_TcpExt_TCPACKSkippedFinWait2 untyped
node_netstat_TcpExt_TCPACKSkippedFinWait2 0
# HELP node_netstat_TcpExt_TCPACKSkippedPAWS Protocol TcpExt statistic TCPACKSkippedPAWS.
# TYPE node_netstat_TcpExt_TCPACKSkippedPAWS untyped
node_netstat_TcpExt_TCPACKSkippedPAWS 0
# HELP node_netstat_TcpExt_TCPACKSkippedSeq Protocol TcpExt statistic TCPACKSkippedSeq.
# TYPE node_netstat_TcpExt_TCPACKSkippedSeq untyped
node_netstat_TcpExt_TCPACKSkippedSeq 0
# HELP node_netstat_TcpExt_TCPACKSkippedSynRecv Protocol TcpExt statistic TCPACKSkippedSynRecv.
# TYPE node_netstat_TcpExt_TCPACKSkippedSynRecv untyped
node_netstat_TcpExt_TCPACKSkippedSynRecv 0
# HELP node_netstat_TcpExt_TCPACKSkippedTimeWait Protocol TcpExt statistic TCPACKSkippedTimeWait.
# TYPE node_netstat_TcpExt_TCPACKSkippedTimeWait untyped
node_netstat_TcpExt_TCPACKSkippedTimeWait 0
# HELP node_netstat_TcpExt_TCPAbortFailed Protocol TcpExt statistic TCPAbortFailed.
# TYPE node_netstat_TcpExt_TCPAbortFailed untyped
node_netstat_TcpExt_TCPAbortFailed 0
# HELP node_netstat_TcpExt_TCPAbortOnClose Protocol TcpExt statistic TCPAbortOnClose.
# TYPE node_netstat_TcpExt_TCPAbortOnClose untyped
node_netstat_TcpExt_TCPAbortOnClose 1
# HELP node_netstat_TcpExt_TCPAbortOnData Protocol TcpExt statistic TCPAbortOnData.
# TYPE node_netstat_TcpExt_TCPAbortOnData untyped
node_netstat_TcpExt_TCPAbortOnData 2
# HELP node_netstat_TcpExt_TCPAbortOnLinger Protocol TcpExt statistic TCPAbortOnLinger.
# TYPE node_netstat_TcpExt_TCPAbortOnLinger untyped
node_netstat_TcpExt_TCPAbortOnLinger 0
# HELP node_netstat_TcpExt_TCPAbortOnMemory Protocol TcpExt statistic TCPAbortOnMemory.
# TYPE node_netstat_TcpExt_TCPAbortOnMemory untyped
node_netstat_TcpExt_TCPAbortOnMemory 0
# HELP node_netstat_TcpExt_TCPAbortOnTimeout Protocol TcpExt statistic TCPAbortOnTimeout.
# TYPE node_netstat_TcpExt_TCPAbortOnTimeout untyped
node_netstat_TcpExt_TCPAbortOnTimeout 0
# HELP node_netstat_TcpExt_TCPAutoCorking Protocol TcpExt statistic TCPAutoCorking.
# TYPE node_netstat_TcpExt_TCPAutoCorking untyped
node_netstat_TcpExt_TCPAutoCorking 4
# HELP node_netstat_TcpExt_TCPBacklogDrop Protocol TcpExt statistic TCPBacklogDrop.
# TYPE node_netstat_TcpExt_TCPBacklogDrop untyped
node_netstat_TcpExt_TCPBacklogDrop 0
# HELP node_netstat_TcpExt_TCPChallengeACK Protocol TcpExt statistic TCPChallengeACK.
# TYPE node_netstat_TcpExt_TCPChallengeACK untyped
node_netstat_TcpExt_TCPChallengeACK 0
# HELP node_netstat_TcpExt_TCPDSACKIgnoredNoUndo Protocol TcpExt statistic TCPDSACKIgnoredNoUndo.
# TYPE node_netstat_TcpExt_TCPDSACKIgnoredNoUndo untyped
node_netstat_TcpExt_TCPDSACKIgnoredNoUndo 0
# HELP node_netstat_TcpExt_TCPDSACKIgnoredOld Protocol TcpExt statistic TCPDSACKIgnoredOld.
# TYPE node_netstat_TcpExt_TCPDSACKIgnoredOld untyped
node_netstat_TcpExt_TCPDSACKIgnoredOld 0
# HELP node_netstat_TcpExt_TCPDSACKOfoRecv Protocol TcpExt statistic TCPDSACKOfoRecv.
# TYPE node_netstat_TcpExt_TCPDSACKOfoRecv untyped
node_netstat_TcpExt_TCPDSACKOfoRecv 0
# HELP node_netstat_TcpExt_TCPDSACKOfoSent Protocol TcpExt statistic TCPDSACKOfoSent.
# TYPE node_netstat_TcpExt_TCPDSACKOfoSent untyped
node_netstat_TcpExt_TCPDSACKOfoSent 0
# HELP node_netstat_TcpExt_TCPDSACKOldSent Protocol TcpExt statistic TCPDSACKOldSent.
# TYPE node_netstat_TcpExt_TCPDSACKOldSent untyped
node_netstat_TcpExt_TCPDSACKOldSent 0
# HELP node_netstat_TcpExt_TCPDSACKRecv Protocol TcpExt statistic TCPDSACKRecv.
# TYPE node_netstat_TcpExt_TCPDSACKRecv untyped
node_netstat_TcpExt_TCPDSACKRecv 0
# HELP node_netstat_TcpExt_TCPDSACKUndo Protocol TcpExt statistic TCPDSACKUndo.
# TYPE node_netstat_TcpExt_TCPDSACKUndo untyped
node_netstat_TcpExt_TCPDSACKUndo 0
# HELP node_netstat_TcpExt_TCPDeferAcceptDrop Protocol TcpExt statistic TCPDeferAcceptDrop.
# TYPE node_netstat_TcpExt_TCPDeferAcceptDrop untyped
node_netstat_TcpExt_TCPDeferAcceptDrop 0
# HELP node_netstat_TcpExt_TCPDirectCopyFromBacklog Protocol TcpExt statistic TCPDirectCopyFromBacklog.
# TYPE node_netstat_TcpExt_TCPDirectCopyFromBacklog untyped
node_netstat_TcpExt_TCPDirectCopyFromBacklog 0
# HELP node_netstat_TcpExt_TCPDirectCopyFromPrequeue Protocol TcpExt statistic TCPDirectCopyFromPrequeue.
# TYPE node_netstat_TcpExt_TCPDirectCopyFromPrequeue untyped
node_netstat_TcpExt_TCPDirectCopyFromPrequeue 1.062e+06
# HELP node_netstat_TcpExt_TCPFACKReorder Protocol TcpExt statistic TCPFACKReorder.
# TYPE node_netstat_TcpExt_TCPFACKReorder untyped
node_netstat_TcpExt_TCPFACKReorder 0
# HELP node_netstat_TcpExt_TCPFastOpenActive Protocol TcpExt statistic TCPFastOpenActive.
# TYPE node_netstat_TcpExt_TCPFastOpenActive untyped
node_netstat_TcpExt_TCPFastOpenActive 0
# HELP node_netstat_TcpExt_TCPFastOpenActiveFail Protocol TcpExt statistic TCPFastOpenActiveFail.
# TYPE node_netstat_TcpExt_TCPFastOpenActiveFail untyped
node_netstat_TcpExt_TCPFastOpenActiveFail 0
# HELP node_netstat_TcpExt_TCPFastOpenCookieReqd Protocol TcpExt statistic TCPFastOpenCookieReqd.
# TYPE node_netstat_TcpExt_TCPFastOpenCookieReqd untyped
node_netstat_TcpExt_TCPFastOpenCookieReqd 0
# HELP node_netstat_TcpExt_TCPFastOpenListenOverflow Protocol TcpExt statistic TCPFastOpenListenOverflow.
# TYPE node_netstat_TcpExt_TCPFastOpenListenOverflow untyped
node_netstat_TcpExt_TCPFastOpenListenOverflow 0
# HELP node_netstat_TcpExt_TCPFastOpenPassive Protocol TcpExt statistic TCPFastOpenPassive.
# TYPE node_netstat_TcpExt_TCPFastOpenPassive untyped
node_netstat_TcpExt_TCPFastOpenPassive 0
# HELP node_netstat_TcpExt_TCPFastOpenPassiveFail Protocol TcpExt statistic TCPFastOpenPassiveFail.
# TYPE node_netstat_TcpExt_TCPFastOpenPassiveFail untyped
node_netstat_TcpExt_TCPFastOpenPassiveFail 0
# HELP node_netstat_TcpExt_TCPFastRetrans Protocol TcpExt statistic TCPFastRetrans.
# TYPE node_netstat_TcpExt_TCPFastRetrans untyped
node_netstat_TcpExt_TCPFastRetrans 0
# HELP node_netstat_TcpExt_TCPForwardRetrans Protocol TcpExt statistic TCPForwardRetrans.
# TYPE node_netstat_TcpExt_TCPForwardRetrans untyped
node_netstat_TcpExt_TCPForwardRetrans 0
# HELP node_netstat_TcpExt_TCPFromZeroWindowAdv Protocol TcpExt statistic TCPFromZeroWindowAdv.
# TYPE node_netstat_TcpExt_TCPFromZeroWindowAdv untyped
node_netstat_TcpExt_TCPFromZeroWindowAdv 0
# HELP node_netstat_TcpExt_TCPFullUndo Protocol TcpExt statistic TCPFullUndo.
# TYPE node_netstat_TcpExt_TCPFullUndo untyped
node_netstat_TcpExt_TCPFullUndo 0
# HELP node_netstat_TcpExt_TCPHPAcks Protocol TcpExt statistic TCPHPAcks.
# TYPE node_netstat_TcpExt_TCPHPAcks untyped
node_netstat_TcpExt_TCPHPAcks 21314
# HELP node_netstat_TcpExt_TCPHPHits Protocol TcpExt statistic TCPHPHits.
# TYPE node_netstat_TcpExt_TCPHPHits untyped
node_netstat_TcpExt_TCPHPHits 3607
# HELP node_netstat_TcpExt_TCPHPHitsToUser Protocol TcpExt statistic TCPHPHitsToUser.
# TYPE node_netstat_TcpExt_TCPHPHitsToUser untyped
node_netstat_TcpExt_TCPHPHitsToUser 0
# HELP node_netstat_TcpExt_TCPHystartDelayCwnd Protocol TcpExt statistic TCPHystartDelayCwnd.
# TYPE node_netstat_TcpExt_TCPHystartDelayCwnd untyped
node_netstat_TcpExt_TCPHystartDelayCwnd 0
# HELP node_netstat_TcpExt_TCPHystartDelayDetect Protocol TcpExt statistic TCPHystartDelayDetect.
# TYPE node_netstat_TcpExt_TCPHystartDelayDetect untyped
node_netstat_TcpExt_TCPHystartDelayDetect 0
# HELP node_netstat_TcpExt_TCPHystartTrainCwnd Protocol TcpExt statistic TCPHystartTrainCwnd.
# TYPE node_netstat_TcpExt_TCPHystartTrainCwnd untyped
node_netstat_TcpExt_TCPHystartTrainCwnd 0
# HELP node_netstat_TcpExt_TCPHystartTrainDetect Protocol TcpExt statistic TCPHystartTrainDetect.
# TYPE node_netstat_TcpExt_TCPHystartTrainDetect untyped
node_netstat_TcpExt_TCPHystartTrainDetect 0
# HELP node_netstat_TcpExt_TCPLossFailures Protocol TcpExt statistic TCPLossFailures.
# TYPE node_netstat_TcpExt_TCPLossFailures untyped
node_netstat_TcpExt_TCPLossFailures 0
# HELP node_netstat_TcpExt_TCPLossProbeRecovery Protocol TcpExt statistic TCPLossProbeRecovery.
# TYPE node_netstat_TcpExt_TCPLossProbeRecovery untyped
node_netstat_TcpExt_TCPLossProbeRecovery 0
# HELP node_netstat_TcpExt_TCPLossProbes Protocol TcpExt statistic TCPLossProbes.
# TYPE node_netstat_TcpExt_TCPLossProbes untyped
node_netstat_TcpExt_TCPLossProbes 0
# HELP node_netstat_TcpExt_TCPLossUndo Protocol TcpExt statistic TCPLossUndo.
# TYPE node_netstat_TcpExt_TCPLossUndo untyped
node_netstat_TcpExt_TCPLossUndo 0
# HELP node_netstat_TcpExt_TCPLostRetransmit Protocol TcpExt statistic TCPLostRetransmit.
# TYPE node_netstat_TcpExt_TCPLostRetransmit untyped
node_netstat_TcpExt_TCPLostRetransmit 0
# HELP node_netstat_TcpExt_TCPMD5NotFound Protocol TcpExt statistic TCPMD5NotFound.
# TYPE node_netstat_TcpExt_TCPMD5NotFound untyped
node_netstat_TcpExt_TCPMD5NotFound 0
# HELP node_netstat_TcpExt_TCPMD5Unexpected Protocol TcpExt statistic TCPMD5Unexpected.
# TYPE node_netstat_TcpExt_TCPMD5Unexpected untyped
node_netstat_TcpExt_TCPMD5Unexpected 0
# HELP node_netstat_TcpExt_TCPMemoryPressures Protocol TcpExt statistic TCPMemoryPressures.
# TYPE node_netstat_TcpExt_TCPMemoryPressures untyped
node_netstat_TcpExt_TCPMemoryPressures 0
# HELP node_netstat_TcpExt_TCPMinTTLDrop Protocol TcpExt statistic TCPMinTTLDrop.
# TYPE node_netstat_TcpExt_TCPMinTTLDrop untyped
node_netstat_TcpExt_TCPMinTTLDrop 0
# HELP node_netstat_TcpExt_TCPOFODrop Protocol TcpExt statistic TCPOFODrop.
# TYPE node_netstat_TcpExt_TCPOFODrop untyped
node_netstat_TcpExt_TCPOFODrop 0
# HELP node_netstat_TcpExt_TCPOFOMerge Protocol TcpExt statistic TCPOFOMerge.
# TYPE node_netstat_TcpExt_TCPOFOMerge untyped
node_netstat_TcpExt_TCPOFOMerge 0
# HELP node_netstat_TcpExt_TCPOFOQueue Protocol TcpExt statistic TCPOFOQueue.
# TYPE node_netstat_TcpExt_TCPOFOQueue untyped
node_netstat_TcpExt_TCPOFOQueue 0
# HELP node_netstat_TcpExt_TCPOrigDataSent Protocol TcpExt statistic TCPOrigDataSent.
# TYPE node_netstat_TcpExt_TCPOrigDataSent untyped
node_netstat_TcpExt_TCPOrigDataSent 99807
# HELP node_netstat_TcpExt_TCPPartialUndo Protocol TcpExt statistic TCPPartialUndo.
# TYPE node_netstat_TcpExt_TCPPartialUndo untyped
node_netstat_TcpExt_TCPPartialUndo 0
# HELP node_netstat_TcpExt_TCPPrequeueDropped Protocol TcpExt statistic TCPPrequeueDropped.
# TYPE node_netstat_TcpExt_TCPPrequeueDropped untyped
node_netstat_TcpExt_TCPPrequeueDropped 0
# HELP node_netstat_TcpExt_TCPPrequeued Protocol TcpExt statistic TCPPrequeued.
# TYPE node_netstat_TcpExt_TCPPrequeued untyped
node_netstat_TcpExt_TCPPrequeued 68753
# HELP node_netstat_TcpExt_TCPPureAcks Protocol TcpExt statistic TCPPureAcks.
# TYPE node_netstat_TcpExt_TCPPureAcks untyped
node_netstat_TcpExt_TCPPureAcks 17834
# HELP node_netstat_TcpExt_TCPRcvCoalesce Protocol TcpExt statistic TCPRcvCoalesce.
# TYPE node_netstat_TcpExt_TCPRcvCoalesce untyped
node_netstat_TcpExt_TCPRcvCoalesce 2
# HELP node_netstat_TcpExt_TCPRcvCollapsed Protocol TcpExt statistic TCPRcvCollapsed.
# TYPE node_netstat_TcpExt_TCPRcvCollapsed untyped
node_netstat_TcpExt_TCPRcvCollapsed 0
# HELP node_netstat_TcpExt_TCPRenoFailures Protocol TcpExt statistic TCPRenoFailures.
# TYPE node_netstat_TcpExt_TCPRenoFailures untyped
node_netstat_TcpExt_TCPRenoFailures 0
# HELP node_netstat_TcpExt_TCPRenoRecovery Protocol TcpExt statistic TCPRenoRecovery.
# TYPE node_netstat_TcpExt_TCPRenoRecovery untyped
node_netstat_TcpExt_TCPRenoRecovery 0
# HELP node_netstat_TcpExt_TCPRenoRecoveryFail Protocol TcpExt statistic TCPRenoRecoveryFail.
# TYPE node_netstat_TcpExt_TCPRenoRecoveryFail untyped
node_netstat_TcpExt_TCPRenoRecoveryFail 0
# HELP node_netstat_TcpExt_TCPRenoReorder Protocol TcpExt statistic TCPRenoReorder.
# TYPE node_netstat_TcpExt_TCPRenoReorder untyped
node_netstat_TcpExt_TCPRenoReorder 0
# HELP node_netstat_TcpExt_TCPReqQFullDoCookies Protocol TcpExt statistic TCPReqQFullDoCookies.
# TYPE node_netstat_TcpExt_TCPReqQFullDoCookies untyped
node_netstat_TcpExt_TCPReqQFullDoCookies 0
# HELP node_netstat_TcpExt_TCPReqQFullDrop Protocol TcpExt statistic TCPReqQFullDrop.
# TYPE node_netstat_TcpExt_TCPReqQFullDrop untyped
node_netstat_TcpExt_TCPReqQFullDrop 0
# HELP node_netstat_TcpExt_TCPRetransFail Protocol TcpExt statistic TCPRetransFail.
# TYPE node_netstat_TcpExt_TCPRetransFail untyped
node_netstat_TcpExt_TCPRetransFail 0
# HELP node_netstat_TcpExt_TCPSACKDiscard Protocol TcpExt statistic TCPSACKDiscard.
# TYPE node_netstat_TcpExt_TCPSACKDiscard untyped
node_netstat_TcpExt_TCPSACKDiscard 0
# HELP node_netstat_TcpExt_TCPSACKReneging Protocol TcpExt statistic TCPSACKReneging.
# TYPE node_netstat_TcpExt_TCPSACKReneging untyped
node_netstat_TcpExt_TCPSACKReneging 0
# HELP node_netstat_TcpExt_TCPSACKReorder Protocol TcpExt statistic TCPSACKReorder.
# TYPE node_netstat_TcpExt_TCPSACKReorder untyped
node_netstat_TcpExt_TCPSACKReorder 0
# HELP node_netstat_TcpExt_TCPSYNChallenge Protocol TcpExt statistic TCPSYNChallenge.
# TYPE node_netstat_TcpExt_TCPSYNChallenge untyped
node_netstat_TcpExt_TCPSYNChallenge 0
# HELP node_netstat_TcpExt_TCPSackFailures Protocol TcpExt statistic TCPSackFailures.
# TYPE node_netstat_TcpExt_TCPSackFailures untyped
node_netstat_TcpExt_TCPSackFailures 0
# HELP node_netstat_TcpExt_TCPSackMerged Protocol TcpExt statistic TCPSackMerged.
# TYPE node_netstat_TcpExt_TCPSackMerged untyped
node_netstat_TcpExt_TCPSackMerged 0
# HELP node_netstat_TcpExt_TCPSackRecovery Protocol TcpExt statistic TCPSackRecovery.
# TYPE node_netstat_TcpExt_TCPSackRecovery untyped
node_netstat_TcpExt_TCPSackRecovery 0
# HELP node_netstat_TcpExt_TCPSackRecoveryFail Protocol TcpExt statistic TCPSackRecoveryFail.
# TYPE node_netstat_TcpExt_TCPSackRecoveryFail untyped
node_netstat_TcpExt_TCPSackRecoveryFail 0
# HELP node_netstat_TcpExt_TCPSackShiftFallback Protocol TcpExt statistic TCPSackShiftFallback.
# TYPE node_netstat_TcpExt_TCPSackShiftFallback untyped
node_netstat_TcpExt_TCPSackShiftFallback 0
# HELP node_netstat_TcpExt_TCPSackShifted Protocol TcpExt statistic TCPSackShifted.
# TYPE node_netstat_TcpExt_TCPSackShifted untyped
node_netstat_TcpExt_TCPSackShifted 0
# HELP node_netstat_TcpExt_TCPSchedulerFailed Protocol TcpExt statistic TCPSchedulerFailed.
# TYPE node_netstat_TcpExt_TCPSchedulerFailed untyped
node_netstat_TcpExt_TCPSchedulerFailed 0
# HELP node_netstat_TcpExt_TCPSlowStartRetrans Protocol TcpExt statistic TCPSlowStartRetrans.
# TYPE node_netstat_TcpExt_TCPSlowStartRetrans untyped
node_netstat_TcpExt_TCPSlowStartRetrans 0
# HELP node_netstat_TcpExt_TCPSpuriousRTOs Protocol TcpExt statistic TCPSpuriousRTOs.
# TYPE node_netstat_TcpExt_TCPSpuriousRTOs untyped
node_netstat_TcpExt_TCPSpuriousRTOs 0
# HELP node_netstat_TcpExt_TCPSpuriousRtxHostQueues Protocol TcpExt statistic TCPSpuriousRtxHostQueues.
# TYPE node_netstat_TcpExt_TCPSpuriousRtxHostQueues untyped
node_netstat_TcpExt_TCPSpuriousRtxHostQueues 0
# HELP node_netstat_TcpExt_TCPSynRetrans Protocol TcpExt statistic TCPSynRetrans.
# TYPE node_netstat_TcpExt_TCPSynRetrans untyped
node_netstat_TcpExt_TCPSynRetrans 0
# HELP node_netstat_TcpExt_TCPTSReorder Protocol TcpExt statistic TCPTSReorder.
# TYPE node_netstat_TcpExt_TCPTSReorder untyped
node_netstat_TcpExt_TCPTSReorder 0
# HELP node_netstat_TcpExt_TCPTimeWaitOverflow Protocol TcpExt statistic TCPTimeWaitOverflow.
# TYPE node_netstat_TcpExt_TCPTimeWaitOverflow untyped
node_netstat_TcpExt_TCPTimeWaitOverflow 0
# HELP node_netstat_TcpExt_TCPTimeouts Protocol TcpExt statistic TCPTimeouts.
# TYPE node_netstat_TcpExt_TCPTimeouts untyped
node_netstat_TcpExt_TCPTimeouts 0
# HELP node_netstat_TcpExt_TCPToZeroWindowAdv Protocol TcpExt statistic TCPToZeroWindowAdv.
# TYPE node_netstat_TcpExt_TCPToZeroWindowAdv untyped
node_netstat_TcpExt_TCPToZeroWindowAdv 0
# HELP node_netstat_TcpExt_TCPWantZeroWindowAdv Protocol TcpExt statistic TCPWantZeroWindowAdv.
# TYPE node_netstat_TcpExt_TCPWantZeroWindowAdv untyped
node_netstat_TcpExt_TCPWantZeroWindowAdv 0
# HELP node_netstat_TcpExt_TW Protocol TcpExt statistic TW.
# TYPE node_netstat_TcpExt_TW untyped
node_netstat_TcpExt_TW 35
# HELP node_netstat_TcpExt_TWKilled Protocol TcpExt statistic TWKilled.
# TYPE node_netstat_TcpExt_TWKilled untyped
node_netstat_TcpExt_TWKilled 0
# HELP node_netstat_TcpExt_TWRecycled Protocol TcpExt statistic TWRecycled.
# TYPE node_netstat_TcpExt_TWRecycled untyped
node_netstat_TcpExt_TWRecycled 0
# HELP node_netstat_Tcp_ActiveOpens Protocol Tcp statistic ActiveOpens.
# TYPE node_netstat_Tcp_ActiveOpens untyped
node_netstat_Tcp_ActiveOpens 122
# HELP node_netstat_Tcp_AttemptFails Protocol Tcp statistic AttemptFails.
# TYPE node_netstat_Tcp_AttemptFails untyped
node_netstat_Tcp_AttemptFails 48
# HELP node_netstat_Tcp_CurrEstab Protocol Tcp statistic CurrEstab.
# TYPE node_netstat_Tcp_CurrEstab untyped
node_netstat_Tcp_CurrEstab 67
# HELP node_netstat_Tcp_EstabResets Protocol Tcp statistic EstabResets.
# TYPE node_netstat_Tcp_EstabResets untyped
node_netstat_Tcp_EstabResets 3
# HELP node_netstat_Tcp_InCsumErrors Protocol Tcp statistic InCsumErrors.
# TYPE node_netstat_Tcp_InCsumErrors untyped
node_netstat_Tcp_InCsumErrors 0
# HELP node_netstat_Tcp_InErrs Protocol Tcp statistic InErrs.
# TYPE node_netstat_Tcp_InErrs untyped
node_netstat_Tcp_InErrs 0
# HELP node_netstat_Tcp_InSegs Protocol Tcp statistic InSegs.
# TYPE node_netstat_Tcp_InSegs untyped
node_netstat_Tcp_InSegs 138959
# HELP node_netstat_Tcp_MaxConn Protocol Tcp statistic MaxConn.
# TYPE node_netstat_Tcp_MaxConn untyped
node_netstat_Tcp_MaxConn -1
# HELP node_netstat_Tcp_OutRsts Protocol Tcp statistic OutRsts.
# TYPE node_netstat_Tcp_OutRsts untyped
node_netstat_Tcp_OutRsts 51
# HELP node_netstat_Tcp_OutSegs Protocol Tcp statistic OutSegs.
# TYPE node_netstat_Tcp_OutSegs untyped
node_netstat_Tcp_OutSegs 138800
# HELP node_netstat_Tcp_PassiveOpens Protocol Tcp statistic PassiveOpens.
# TYPE node_netstat_Tcp_PassiveOpens untyped
node_netstat_Tcp_PassiveOpens 75
# HELP node_netstat_Tcp_RetransSegs Protocol Tcp statistic RetransSegs.
# TYPE node_netstat_Tcp_RetransSegs untyped
node_netstat_Tcp_RetransSegs 0
# HELP node_netstat_Tcp_RtoAlgorithm Protocol Tcp statistic RtoAlgorithm.
# TYPE node_netstat_Tcp_RtoAlgorithm untyped
node_netstat_Tcp_RtoAlgorithm 1
# HELP node_netstat_Tcp_RtoMax Protocol Tcp statistic RtoMax.
# TYPE node_netstat_Tcp_RtoMax untyped
node_netstat_Tcp_RtoMax 120000
# HELP node_netstat_Tcp_RtoMin Protocol Tcp statistic RtoMin.
# TYPE node_netstat_Tcp_RtoMin untyped
node_netstat_Tcp_RtoMin 200
# HELP node_netstat_UdpLite_InCsumErrors Protocol UdpLite statistic InCsumErrors.
# TYPE node_netstat_UdpLite_InCsumErrors untyped
node_netstat_UdpLite_InCsumErrors 0
# HELP node_netstat_UdpLite_InDatagrams Protocol UdpLite statistic InDatagrams.
# TYPE node_netstat_UdpLite_InDatagrams untyped
node_netstat_UdpLite_InDatagrams 0
# HELP node_netstat_UdpLite_InErrors Protocol UdpLite statistic InErrors.
# TYPE node_netstat_UdpLite_InErrors untyped
node_netstat_UdpLite_InErrors 0
# HELP node_netstat_UdpLite_NoPorts Protocol UdpLite statistic NoPorts.
# TYPE node_netstat_UdpLite_NoPorts untyped
node_netstat_UdpLite_NoPorts 0
# HELP node_netstat_UdpLite_OutDatagrams Protocol UdpLite statistic OutDatagrams.
# TYPE node_netstat_UdpLite_OutDatagrams untyped
node_netstat_UdpLite_OutDatagrams 0
# HELP node_netstat_UdpLite_RcvbufErrors Protocol UdpLite statistic RcvbufErrors.
# TYPE node_netstat_UdpLite_RcvbufErrors untyped
node_netstat_UdpLite_RcvbufErrors 0
# HELP node_netstat_UdpLite_SndbufErrors Protocol UdpLite statistic SndbufErrors.
# TYPE node_netstat_UdpLite_SndbufErrors untyped
node_netstat_UdpLite_SndbufErrors 0
# HELP node_netstat_Udp_InCsumErrors Protocol Udp statistic InCsumErrors.
# TYPE node_netstat_Udp_InCsumErrors untyped
node_netstat_Udp_InCsumErrors 0
# HELP node_netstat_Udp_InDatagrams Protocol Udp statistic InDatagrams.
# TYPE node_netstat_Udp_InDatagrams untyped
node_netstat_Udp_InDatagrams 347
# HELP node_netstat_Udp_InErrors Protocol Udp statistic InErrors.
# TYPE node_netstat_Udp_InErrors untyped
node_netstat_Udp_InErrors 0
# HELP node_netstat_Udp_NoPorts Protocol Udp statistic NoPorts.
# TYPE node_netstat_Udp_NoPorts untyped
node_netstat_Udp_NoPorts 32
# HELP node_netstat_Udp_OutDatagrams Protocol Udp statistic OutDatagrams.
# TYPE node_netstat_Udp_OutDatagrams untyped
node_netstat_Udp_OutDatagrams 388
# HELP node_netstat_Udp_RcvbufErrors Protocol Udp statistic RcvbufErrors.
# TYPE node_netstat_Udp_RcvbufErrors untyped
node_netstat_Udp_RcvbufErrors 0
# HELP node_netstat_Udp_SndbufErrors Protocol Udp statistic SndbufErrors.
# TYPE node_netstat_Udp_SndbufErrors untyped
node_netstat_Udp_SndbufErrors 0
# HELP node_network_receive_bytes Network device statistic receive_bytes.
# TYPE node_network_receive_bytes gauge
node_network_receive_bytes{device="docker0"} 0
node_network_receive_bytes{device="enp0s3"} 36421
node_network_receive_bytes{device="enp0s8"} 80451
node_network_receive_bytes{device="lo"} 3.4463599e+07
# HELP node_network_receive_compressed Network device statistic receive_compressed.
# TYPE node_network_receive_compressed gauge
node_network_receive_compressed{device="docker0"} 0
node_network_receive_compressed{device="enp0s3"} 0
node_network_receive_compressed{device="enp0s8"} 0
node_network_receive_compressed{device="lo"} 0
# HELP node_network_receive_drop Network device statistic receive_drop.
# TYPE node_network_receive_drop gauge
node_network_receive_drop{device="docker0"} 0
node_network_receive_drop{device="enp0s3"} 0
node_network_receive_drop{device="enp0s8"} 0
node_network_receive_drop{device="lo"} 0
# HELP node_network_receive_errs Network device statistic receive_errs.
# TYPE node_network_receive_errs gauge
node_network_receive_errs{device="docker0"} 0
node_network_receive_errs{device="enp0s3"} 0
node_network_receive_errs{device="enp0s8"} 0
node_network_receive_errs{device="lo"} 0
# HELP node_network_receive_fifo Network device statistic receive_fifo.
# TYPE node_network_receive_fifo gauge
node_network_receive_fifo{device="docker0"} 0
node_network_receive_fifo{device="enp0s3"} 0
node_network_receive_fifo{device="enp0s8"} 0
node_network_receive_fifo{device="lo"} 0
# HELP node_network_receive_frame Network device statistic receive_frame.
# TYPE node_network_receive_frame gauge
node_network_receive_frame{device="docker0"} 0
node_network_receive_frame{device="enp0s3"} 0
node_network_receive_frame{device="enp0s8"} 0
node_network_receive_frame{device="lo"} 0
# HELP node_network_receive_multicast Network device statistic receive_multicast.
# TYPE node_network_receive_multicast gauge
node_network_receive_multicast{device="docker0"} 0
node_network_receive_multicast{device="enp0s3"} 0
node_network_receive_multicast{device="enp0s8"} 0
node_network_receive_multicast{device="lo"} 0
# HELP node_network_receive_packets Network device statistic receive_packets.
# TYPE node_network_receive_packets gauge
node_network_receive_packets{device="docker0"} 0
node_network_receive_packets{device="enp0s3"} 421
node_network_receive_packets{device="enp0s8"} 997
node_network_receive_packets{device="lo"} 138313
# HELP node_network_transmit_bytes Network device statistic transmit_bytes.
# TYPE node_network_transmit_bytes gauge
node_network_transmit_bytes{device="docker0"} 0
node_network_transmit_bytes{device="enp0s3"} 37074
node_network_transmit_bytes{device="enp0s8"} 330717
node_network_transmit_bytes{device="lo"} 3.4463599e+07
# HELP node_network_transmit_compressed Network device statistic transmit_compressed.
# TYPE node_network_transmit_compressed gauge
node_network_transmit_compressed{device="docker0"} 0
node_network_transmit_compressed{device="enp0s3"} 0
node_network_transmit_compressed{device="enp0s8"} 0
node_network_transmit_compressed{device="lo"} 0
# HELP node_network_transmit_drop Network device statistic transmit_drop.
# TYPE node_network_transmit_drop gauge
node_network_transmit_drop{device="docker0"} 0
node_network_transmit_drop{device="enp0s3"} 0
node_network_transmit_drop{device="enp0s8"} 0
node_network_transmit_drop{device="lo"} 0
# HELP node_network_transmit_errs Network device statistic transmit_errs.
# TYPE node_network_transmit_errs gauge
node_network_transmit_errs{device="docker0"} 0
node_network_transmit_errs{device="enp0s3"} 0
node_network_transmit_errs{device="enp0s8"} 0
node_network_transmit_errs{device="lo"} 0
# HELP node_network_transmit_fifo Network device statistic transmit_fifo.
# TYPE node_network_transmit_fifo gauge
node_network_transmit_fifo{device="docker0"} 0
node_network_transmit_fifo{device="enp0s3"} 0
node_network_transmit_fifo{device="enp0s8"} 0
node_network_transmit_fifo{device="lo"} 0
# HELP node_network_transmit_frame Network device statistic transmit_frame.
# TYPE node_network_transmit_frame gauge
node_network_transmit_frame{device="docker0"} 0
node_network_transmit_frame{device="enp0s3"} 0
node_network_transmit_frame{device="enp0s8"} 0
node_network_transmit_frame{device="lo"} 0
# HELP node_network_transmit_multicast Network device statistic transmit_multicast.
# TYPE node_network_transmit_multicast gauge
node_network_transmit_multicast{device="docker0"} 0
node_network_transmit_multicast{device="enp0s3"} 0
node_network_transmit_multicast{device="enp0s8"} 0
node_network_transmit_multicast{device="lo"} 0
# HELP node_network_transmit_packets Network device statistic transmit_packets.
# TYPE node_network_transmit_packets gauge
node_network_transmit_packets{device="docker0"} 0
node_network_transmit_packets{device="enp0s3"} 438
node_network_transmit_packets{device="enp0s8"} 627
node_network_transmit_packets{device="lo"} 138313
# HELP node_procs_blocked Number of processes blocked waiting for I/O to complete.
# TYPE node_procs_blocked gauge
node_procs_blocked 0
# HELP node_procs_running Number of processes in runnable state.
# TYPE node_procs_running gauge
node_procs_running 2
# HELP node_time System time in seconds since epoch (1970).
# TYPE node_time counter
node_time 1.48891092e+09
# HELP node_uname_info Labeled system information as provided by the uname system call.
# TYPE node_uname_info gauge
node_uname_info{domainname="(none)",machine="x86_64",nodename="centos7",release="3.10.0-514.10.2.el7.x86_64",sysname="Linux",version="#1 SMP Fri Mar 3 00:04:05 UTC 2017"} 1
# HELP node_vmstat_allocstall /proc/vmstat information field allocstall.
# TYPE node_vmstat_allocstall untyped
node_vmstat_allocstall 0
# HELP node_vmstat_balloon_deflate /proc/vmstat information field balloon_deflate.
# TYPE node_vmstat_balloon_deflate untyped
node_vmstat_balloon_deflate 0
# HELP node_vmstat_balloon_inflate /proc/vmstat information field balloon_inflate.
# TYPE node_vmstat_balloon_inflate untyped
node_vmstat_balloon_inflate 0
# HELP node_vmstat_balloon_migrate /proc/vmstat information field balloon_migrate.
# TYPE node_vmstat_balloon_migrate untyped
node_vmstat_balloon_migrate 0
# HELP node_vmstat_compact_fail /proc/vmstat information field compact_fail.
# TYPE node_vmstat_compact_fail untyped
node_vmstat_compact_fail 0
# HELP node_vmstat_compact_free_scanned /proc/vmstat information field compact_free_scanned.
# TYPE node_vmstat_compact_free_scanned untyped
node_vmstat_compact_free_scanned 0
# HELP node_vmstat_compact_isolated /proc/vmstat information field compact_isolated.
# TYPE node_vmstat_compact_isolated untyped
node_vmstat_compact_isolated 0
# HELP node_vmstat_compact_migrate_scanned /proc/vmstat information field compact_migrate_scanned.
# TYPE node_vmstat_compact_migrate_scanned untyped
node_vmstat_compact_migrate_scanned 0
# HELP node_vmstat_compact_stall /proc/vmstat information field compact_stall.
# TYPE node_vmstat_compact_stall untyped
node_vmstat_compact_stall 0
# HELP node_vmstat_compact_success /proc/vmstat information field compact_success.
# TYPE node_vmstat_compact_success untyped
node_vmstat_compact_success 0
# HELP node_vmstat_drop_pagecache /proc/vmstat information field drop_pagecache.
# TYPE node_vmstat_drop_pagecache untyped
node_vmstat_drop_pagecache 0
# HELP node_vmstat_drop_slab /proc/vmstat information field drop_slab.
# TYPE node_vmstat_drop_slab untyped
node_vmstat_drop_slab 0
# HELP node_vmstat_htlb_buddy_alloc_fail /proc/vmstat information field htlb_buddy_alloc_fail.
# TYPE node_vmstat_htlb_buddy_alloc_fail untyped
node_vmstat_htlb_buddy_alloc_fail 0
# HELP node_vmstat_htlb_buddy_alloc_success /proc/vmstat information field htlb_buddy_alloc_success.
# TYPE node_vmstat_htlb_buddy_alloc_success untyped
node_vmstat_htlb_buddy_alloc_success 0
# HELP node_vmstat_kswapd_high_wmark_hit_quickly /proc/vmstat information field kswapd_high_wmark_hit_quickly.
# TYPE node_vmstat_kswapd_high_wmark_hit_quickly untyped
node_vmstat_kswapd_high_wmark_hit_quickly 0
# HELP node_vmstat_kswapd_inodesteal /proc/vmstat information field kswapd_inodesteal.
# TYPE node_vmstat_kswapd_inodesteal untyped
node_vmstat_kswapd_inodesteal 0
# HELP node_vmstat_kswapd_low_wmark_hit_quickly /proc/vmstat information field kswapd_low_wmark_hit_quickly.
# TYPE node_vmstat_kswapd_low_wmark_hit_quickly untyped
node_vmstat_kswapd_low_wmark_hit_quickly 0
# HELP node_vmstat_nr_active_anon /proc/vmstat information field nr_active_anon.
# TYPE node_vmstat_nr_active_anon untyped
node_vmstat_nr_active_anon 88528
# HELP node_vmstat_nr_active_file /proc/vmstat information field nr_active_file.
# TYPE node_vmstat_nr_active_file untyped
node_vmstat_nr_active_file 28848
# HELP node_vmstat_nr_alloc_batch /proc/vmstat information field nr_alloc_batch.
# TYPE node_vmstat_nr_alloc_batch untyped
node_vmstat_nr_alloc_batch 2642
# HELP node_vmstat_nr_anon_pages /proc/vmstat information field nr_anon_pages.
# TYPE node_vmstat_nr_anon_pages untyped
node_vmstat_nr_anon_pages 85414
# HELP node_vmstat_nr_anon_transparent_hugepages /proc/vmstat information field nr_anon_transparent_hugepages.
# TYPE node_vmstat_nr_anon_transparent_hugepages untyped
node_vmstat_nr_anon_transparent_hugepages 6
# HELP node_vmstat_nr_bounce /proc/vmstat information field nr_bounce.
# TYPE node_vmstat_nr_bounce untyped
node_vmstat_nr_bounce 0
# HELP node_vmstat_nr_dirtied /proc/vmstat information field nr_dirtied.
# TYPE node_vmstat_nr_dirtied untyped
node_vmstat_nr_dirtied 29852
# HELP node_vmstat_nr_dirty /proc/vmstat information field nr_dirty.
# TYPE node_vmstat_nr_dirty untyped
node_vmstat_nr_dirty 17
# HELP node_vmstat_nr_dirty_background_threshold /proc/vmstat information field nr_dirty_background_threshold.
# TYPE node_vmstat_nr_dirty_background_threshold untyped
node_vmstat_nr_dirty_background_threshold 39900
# HELP node_vmstat_nr_dirty_threshold /proc/vmstat information field nr_dirty_threshold.
# TYPE node_vmstat_nr_dirty_threshold untyped
node_vmstat_nr_dirty_threshold 119701
# HELP node_vmstat_nr_file_pages /proc/vmstat information field nr_file_pages.
# TYPE node_vmstat_nr_file_pages untyped
node_vmstat_nr_file_pages 76687
# HELP node_vmstat_nr_free_cma /proc/vmstat information field nr_free_cma.
# TYPE node_vmstat_nr_free_cma untyped
node_vmstat_nr_free_cma 0
# HELP node_vmstat_nr_free_pages /proc/vmstat information field nr_free_pages.
# TYPE node_vmstat_nr_free_pages untyped
node_vmstat_nr_free_pages 769929
# HELP node_vmstat_nr_inactive_anon /proc/vmstat information field nr_inactive_anon.
# TYPE node_vmstat_nr_inactive_anon untyped
node_vmstat_nr_inactive_anon 2091
# HELP node_vmstat_nr_inactive_file /proc/vmstat information field nr_inactive_file.
# TYPE node_vmstat_nr_inactive_file untyped
node_vmstat_nr_inactive_file 45688
# HELP node_vmstat_nr_isolated_anon /proc/vmstat information field nr_isolated_anon.
# TYPE node_vmstat_nr_isolated_anon untyped
node_vmstat_nr_isolated_anon 0
# HELP node_vmstat_nr_isolated_file /proc/vmstat information field nr_isolated_file.
# TYPE node_vmstat_nr_isolated_file untyped
node_vmstat_nr_isolated_file 0
# HELP node_vmstat_nr_kernel_stack /proc/vmstat information field nr_kernel_stack.
# TYPE node_vmstat_nr_kernel_stack untyped
node_vmstat_nr_kernel_stack 707
# HELP node_vmstat_nr_mapped /proc/vmstat information field nr_mapped.
# TYPE node_vmstat_nr_mapped untyped
node_vmstat_nr_mapped 16487
# HELP node_vmstat_nr_mlock /proc/vmstat information field nr_mlock.
# TYPE node_vmstat_nr_mlock untyped
node_vmstat_nr_mlock 0
# HELP node_vmstat_nr_page_table_pages /proc/vmstat information field nr_page_table_pages.
# TYPE node_vmstat_nr_page_table_pages untyped
node_vmstat_nr_page_table_pages 2223
# HELP node_vmstat_nr_shmem /proc/vmstat information field nr_shmem.
# TYPE node_vmstat_nr_shmem untyped
node_vmstat_nr_shmem 2151
# HELP node_vmstat_nr_slab_reclaimable /proc/vmstat information field nr_slab_reclaimable.
# TYPE node_vmstat_nr_slab_reclaimable untyped
node_vmstat_nr_slab_reclaimable 10898
# HELP node_vmstat_nr_slab_unreclaimable /proc/vmstat information field nr_slab_unreclaimable.
# TYPE node_vmstat_nr_slab_unreclaimable untyped
node_vmstat_nr_slab_unreclaimable 7827
# HELP node_vmstat_nr_unevictable /proc/vmstat information field nr_unevictable.
# TYPE node_vmstat_nr_unevictable untyped
node_vmstat_nr_unevictable 0
# HELP node_vmstat_nr_unstable /proc/vmstat information field nr_unstable.
# TYPE node_vmstat_nr_unstable untyped
node_vmstat_nr_unstable 0
# HELP node_vmstat_nr_vmscan_immediate_reclaim /proc/vmstat information field nr_vmscan_immediate_reclaim.
# TYPE node_vmstat_nr_vmscan_immediate_reclaim untyped
node_vmstat_nr_vmscan_immediate_reclaim 0
# HELP node_vmstat_nr_vmscan_write /proc/vmstat information field nr_vmscan_write.
# TYPE node_vmstat_nr_vmscan_write untyped
node_vmstat_nr_vmscan_write 0
# HELP node_vmstat_nr_writeback /proc/vmstat information field nr_writeback.
# TYPE node_vmstat_nr_writeback untyped
node_vmstat_nr_writeback 0
# HELP node_vmstat_nr_writeback_temp /proc/vmstat information field nr_writeback_temp.
# TYPE node_vmstat_nr_writeback_temp untyped
node_vmstat_nr_writeback_temp 0
# HELP node_vmstat_nr_written /proc/vmstat information field nr_written.
# TYPE node_vmstat_nr_written untyped
node_vmstat_nr_written 20344
# HELP node_vmstat_numa_foreign /proc/vmstat information field numa_foreign.
# TYPE node_vmstat_numa_foreign untyped
node_vmstat_numa_foreign 0
# HELP node_vmstat_numa_hint_faults /proc/vmstat information field numa_hint_faults.
# TYPE node_vmstat_numa_hint_faults untyped
node_vmstat_numa_hint_faults 0
# HELP node_vmstat_numa_hint_faults_local /proc/vmstat information field numa_hint_faults_local.
# TYPE node_vmstat_numa_hint_faults_local untyped
node_vmstat_numa_hint_faults_local 0
# HELP node_vmstat_numa_hit /proc/vmstat information field numa_hit.
# TYPE node_vmstat_numa_hit untyped
node_vmstat_numa_hit 824961
# HELP node_vmstat_numa_huge_pte_updates /proc/vmstat information field numa_huge_pte_updates.
# TYPE node_vmstat_numa_huge_pte_updates untyped
node_vmstat_numa_huge_pte_updates 0
# HELP node_vmstat_numa_interleave /proc/vmstat information field numa_interleave.
# TYPE node_vmstat_numa_interleave untyped
node_vmstat_numa_interleave 14563
# HELP node_vmstat_numa_local /proc/vmstat information field numa_local.
# TYPE node_vmstat_numa_local untyped
node_vmstat_numa_local 824961
# HELP node_vmstat_numa_miss /proc/vmstat information field numa_miss.
# TYPE node_vmstat_numa_miss untyped
node_vmstat_numa_miss 0
# HELP node_vmstat_numa_other /proc/vmstat information field numa_other.
# TYPE node_vmstat_numa_other untyped
node_vmstat_numa_other 0
# HELP node_vmstat_numa_pages_migrated /proc/vmstat information field numa_pages_migrated.
# TYPE node_vmstat_numa_pages_migrated untyped
node_vmstat_numa_pages_migrated 0
# HELP node_vmstat_numa_pte_updates /proc/vmstat information field numa_pte_updates.
# TYPE node_vmstat_numa_pte_updates untyped
node_vmstat_numa_pte_updates 0
# HELP node_vmstat_pageoutrun /proc/vmstat information field pageoutrun.
# TYPE node_vmstat_pageoutrun untyped
node_vmstat_pageoutrun 1
# HELP node_vmstat_pgactivate /proc/vmstat information field pgactivate.
# TYPE node_vmstat_pgactivate untyped
node_vmstat_pgactivate 30452
# HELP node_vmstat_pgalloc_dma /proc/vmstat information field pgalloc_dma.
# TYPE node_vmstat_pgalloc_dma untyped
node_vmstat_pgalloc_dma 487
# HELP node_vmstat_pgalloc_dma32 /proc/vmstat information field pgalloc_dma32.
# TYPE node_vmstat_pgalloc_dma32 untyped
node_vmstat_pgalloc_dma32 750526
# HELP node_vmstat_pgalloc_movable /proc/vmstat information field pgalloc_movable.
# TYPE node_vmstat_pgalloc_movable untyped
node_vmstat_pgalloc_movable 0
# HELP node_vmstat_pgalloc_normal /proc/vmstat information field pgalloc_normal.
# TYPE node_vmstat_pgalloc_normal untyped
node_vmstat_pgalloc_normal 124283
# HELP node_vmstat_pgdeactivate /proc/vmstat information field pgdeactivate.
# TYPE node_vmstat_pgdeactivate untyped
node_vmstat_pgdeactivate 0
# HELP node_vmstat_pgfault /proc/vmstat information field pgfault.
# TYPE node_vmstat_pgfault untyped
node_vmstat_pgfault 1.37842e+06
# HELP node_vmstat_pgfree /proc/vmstat information field pgfree.
# TYPE node_vmstat_pgfree untyped
node_vmstat_pgfree 1.645789e+06
# HELP node_vmstat_pginodesteal /proc/vmstat information field pginodesteal.
# TYPE node_vmstat_pginodesteal untyped
node_vmstat_pginodesteal 0
# HELP node_vmstat_pgmajfault /proc/vmstat information field pgmajfault.
# TYPE node_vmstat_pgmajfault untyped
node_vmstat_pgmajfault 4286
# HELP node_vmstat_pgmigrate_fail /proc/vmstat information field pgmigrate_fail.
# TYPE node_vmstat_pgmigrate_fail untyped
node_vmstat_pgmigrate_fail 0
# HELP node_vmstat_pgmigrate_success /proc/vmstat information field pgmigrate_success.
# TYPE node_vmstat_pgmigrate_success untyped
node_vmstat_pgmigrate_success 0
# HELP node_vmstat_pgpgin /proc/vmstat information field pgpgin.
# TYPE node_vmstat_pgpgin untyped
node_vmstat_pgpgin 289776
# HELP node_vmstat_pgpgout /proc/vmstat information field pgpgout.
# TYPE node_vmstat_pgpgout untyped
node_vmstat_pgpgout 139684
# HELP node_vmstat_pgrefill_dma /proc/vmstat information field pgrefill_dma.
# TYPE node_vmstat_pgrefill_dma untyped
node_vmstat_pgrefill_dma 0
# HELP node_vmstat_pgrefill_dma32 /proc/vmstat information field pgrefill_dma32.
# TYPE node_vmstat_pgrefill_dma32 untyped
node_vmstat_pgrefill_dma32 0
# HELP node_vmstat_pgrefill_movable /proc/vmstat information field pgrefill_movable.
# TYPE node_vmstat_pgrefill_movable untyped
node_vmstat_pgrefill_movable 0
# HELP node_vmstat_pgrefill_normal /proc/vmstat information field pgrefill_normal.
# TYPE node_vmstat_pgrefill_normal untyped
node_vmstat_pgrefill_normal 0
# HELP node_vmstat_pgrotated /proc/vmstat information field pgrotated.
# TYPE node_vmstat_pgrotated untyped
node_vmstat_pgrotated 0
# HELP node_vmstat_pgscan_direct_dma /proc/vmstat information field pgscan_direct_dma.
# TYPE node_vmstat_pgscan_direct_dma untyped
node_vmstat_pgscan_direct_dma 0
# HELP node_vmstat_pgscan_direct_dma32 /proc/vmstat information field pgscan_direct_dma32.
# TYPE node_vmstat_pgscan_direct_dma32 untyped
node_vmstat_pgscan_direct_dma32 0
# HELP node_vmstat_pgscan_direct_movable /proc/vmstat information field pgscan_direct_movable.
# TYPE node_vmstat_pgscan_direct_movable untyped
node_vmstat_pgscan_direct_movable 0
# HELP node_vmstat_pgscan_direct_normal /proc/vmstat information field pgscan_direct_normal.
# TYPE node_vmstat_pgscan_direct_normal untyped
node_vmstat_pgscan_direct_normal 0
# HELP node_vmstat_pgscan_direct_throttle /proc/vmstat information field pgscan_direct_throttle.
# TYPE node_vmstat_pgscan_direct_throttle untyped
node_vmstat_pgscan_direct_throttle 0
# HELP node_vmstat_pgscan_kswapd_dma /proc/vmstat information field pgscan_kswapd_dma.
# TYPE node_vmstat_pgscan_kswapd_dma untyped
node_vmstat_pgscan_kswapd_dma 0
# HELP node_vmstat_pgscan_kswapd_dma32 /proc/vmstat information field pgscan_kswapd_dma32.
# TYPE node_vmstat_pgscan_kswapd_dma32 untyped
node_vmstat_pgscan_kswapd_dma32 0
# HELP node_vmstat_pgscan_kswapd_movable /proc/vmstat information field pgscan_kswapd_movable.
# TYPE node_vmstat_pgscan_kswapd_movable untyped
node_vmstat_pgscan_kswapd_movable 0
# HELP node_vmstat_pgscan_kswapd_normal /proc/vmstat information field pgscan_kswapd_normal.
# TYPE node_vmstat_pgscan_kswapd_normal untyped
node_vmstat_pgscan_kswapd_normal 0
# HELP node_vmstat_pgsteal_direct_dma /proc/vmstat information field pgsteal_direct_dma.
# TYPE node_vmstat_pgsteal_direct_dma untyped
node_vmstat_pgsteal_direct_dma 0
# HELP node_vmstat_pgsteal_direct_dma32 /proc/vmstat information field pgsteal_direct_dma32.
# TYPE node_vmstat_pgsteal_direct_dma32 untyped
node_vmstat_pgsteal_direct_dma32 0
# HELP node_vmstat_pgsteal_direct_movable /proc/vmstat information field pgsteal_direct_movable.
# TYPE node_vmstat_pgsteal_direct_movable untyped
node_vmstat_pgsteal_direct_movable 0
# HELP node_vmstat_pgsteal_direct_normal /proc/vmstat information field pgsteal_direct_normal.
# TYPE node_vmstat_pgsteal_direct_normal untyped
node_vmstat_pgsteal_direct_normal 0
# HELP node_vmstat_pgsteal_kswapd_dma /proc/vmstat information field pgsteal_kswapd_dma.
# TYPE node_vmstat_pgsteal_kswapd_dma untyped
node_vmstat_pgsteal_kswapd_dma 0
# HELP node_vmstat_pgsteal_kswapd_dma32 /proc/vmstat information field pgsteal_kswapd_dma32.
# TYPE node_vmstat_pgsteal_kswapd_dma32 untyped
node_vmstat_pgsteal_kswapd_dma32 0
# HELP node_vmstat_pgsteal_kswapd_movable /proc/vmstat information field pgsteal_kswapd_movable.
# TYPE node_vmstat_pgsteal_kswapd_movable untyped
node_vmstat_pgsteal_kswapd_movable 0
# HELP node_vmstat_pgsteal_kswapd_normal /proc/vmstat information field pgsteal_kswapd_normal.
# TYPE node_vmstat_pgsteal_kswapd_normal untyped
node_vmstat_pgsteal_kswapd_normal 0
# HELP node_vmstat_pswpin /proc/vmstat information field pswpin.
# TYPE node_vmstat_pswpin untyped
node_vmstat_pswpin 0
# HELP node_vmstat_pswpout /proc/vmstat information field pswpout.
# TYPE node_vmstat_pswpout untyped
node_vmstat_pswpout 0
# HELP node_vmstat_slabs_scanned /proc/vmstat information field slabs_scanned.
# TYPE node_vmstat_slabs_scanned untyped
node_vmstat_slabs_scanned 0
# HELP node_vmstat_thp_collapse_alloc /proc/vmstat information field thp_collapse_alloc.
# TYPE node_vmstat_thp_collapse_alloc untyped
node_vmstat_thp_collapse_alloc 0
# HELP node_vmstat_thp_collapse_alloc_failed /proc/vmstat information field thp_collapse_alloc_failed.
# TYPE node_vmstat_thp_collapse_alloc_failed untyped
node_vmstat_thp_collapse_alloc_failed 0
# HELP node_vmstat_thp_fault_alloc /proc/vmstat information field thp_fault_alloc.
# TYPE node_vmstat_thp_fault_alloc untyped
node_vmstat_thp_fault_alloc 53
# HELP node_vmstat_thp_fault_fallback /proc/vmstat information field thp_fault_fallback.
# TYPE node_vmstat_thp_fault_fallback untyped
node_vmstat_thp_fault_fallback 0
# HELP node_vmstat_thp_split /proc/vmstat information field thp_split.
# TYPE node_vmstat_thp_split untyped
node_vmstat_thp_split 1
# HELP node_vmstat_thp_zero_page_alloc /proc/vmstat information field thp_zero_page_alloc.
# TYPE node_vmstat_thp_zero_page_alloc untyped
node_vmstat_thp_zero_page_alloc 0
# HELP node_vmstat_thp_zero_page_alloc_failed /proc/vmstat information field thp_zero_page_alloc_failed.
# TYPE node_vmstat_thp_zero_page_alloc_failed untyped
node_vmstat_thp_zero_page_alloc_failed 0
# HELP node_vmstat_unevictable_pgs_cleared /proc/vmstat information field unevictable_pgs_cleared.
# TYPE node_vmstat_unevictable_pgs_cleared untyped
node_vmstat_unevictable_pgs_cleared 0
# HELP node_vmstat_unevictable_pgs_culled /proc/vmstat information field unevictable_pgs_culled.
# TYPE node_vmstat_unevictable_pgs_culled untyped
node_vmstat_unevictable_pgs_culled 8348
# HELP node_vmstat_unevictable_pgs_mlocked /proc/vmstat information field unevictable_pgs_mlocked.
# TYPE node_vmstat_unevictable_pgs_mlocked untyped
node_vmstat_unevictable_pgs_mlocked 11226
# HELP node_vmstat_unevictable_pgs_munlocked /proc/vmstat information field unevictable_pgs_munlocked.
# TYPE node_vmstat_unevictable_pgs_munlocked untyped
node_vmstat_unevictable_pgs_munlocked 9966
# HELP node_vmstat_unevictable_pgs_rescued /proc/vmstat information field unevictable_pgs_rescued.
# TYPE node_vmstat_unevictable_pgs_rescued untyped
node_vmstat_unevictable_pgs_rescued 7446
# HELP node_vmstat_unevictable_pgs_scanned /proc/vmstat information field unevictable_pgs_scanned.
# TYPE node_vmstat_unevictable_pgs_scanned untyped
node_vmstat_unevictable_pgs_scanned 0
# HELP node_vmstat_unevictable_pgs_stranded /proc/vmstat information field unevictable_pgs_stranded.
# TYPE node_vmstat_unevictable_pgs_stranded untyped
node_vmstat_unevictable_pgs_stranded 0
# HELP node_vmstat_workingset_activate /proc/vmstat information field workingset_activate.
# TYPE node_vmstat_workingset_activate untyped
node_vmstat_workingset_activate 0
# HELP node_vmstat_workingset_nodereclaim /proc/vmstat information field workingset_nodereclaim.
# TYPE node_vmstat_workingset_nodereclaim untyped
node_vmstat_workingset_nodereclaim 0
# HELP node_vmstat_workingset_refault /proc/vmstat information field workingset_refault.
# TYPE node_vmstat_workingset_refault untyped
node_vmstat_workingset_refault 0
# HELP node_vmstat_zone_reclaim_failed /proc/vmstat information field zone_reclaim_failed.
# TYPE node_vmstat_zone_reclaim_failed untyped
node_vmstat_zone_reclaim_failed 0

percona/mongodb_exporter (mongodb:metrics)

Behind the scenes of the “mongodb:metrics” Percona Monitoring and Management service, percona/mongodb_exporter is the Prometheus exporter that provides detailed database metrics for graphing on the Percona Monitoring and Management server. percona/mongodb_exporter is a Percona fork of the dcu/mongodb_exporter project, adding valuable metrics for MongoDB sharding, storage engines, and more.

The mongodb_exporter process is designed to automatically detect the MongoDB node type, storage engine, etc., without any special configuration aside from the connection string.
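For example, pointing the exporter at a node takes nothing beyond the connection string. The sketch below illustrates this; the flag names and listen port are assumptions for illustration, not taken from this release's documentation:

```shell
# Hypothetical invocation -- the flag names and the listen port here
# are illustrative assumptions, not documented values.
mongodb_exporter -mongodb.uri="mongodb://127.0.0.1:27017" \
                 -web.listen-address=":42003"
# Node type (standalone, replica set member, mongos) and storage
# engine (WiredTiger, RocksDB, MMAPv1) are then detected from the
# server itself; no engine-specific configuration is needed.
```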

As of Percona Monitoring and Management 1.1.1, here are the numbers of metrics collected from a single replication-enabled MongoDB instance on a single ‘pull’ from Prometheus:

  • Percona Server for MongoDB 3.2 w/WiredTiger: 173 metrics per “pull”
  • Percona Server for MongoDB 3.2 w/RocksDB (with 3 x levels): 239 metrics per “pull”
  • Percona Server for MongoDB 3.2 w/MMAPv1: 172 metrics per “pull”

Note: each additional replica set member adds seven metrics to the list.

On the sharding side of things, a “mongos” process in a cluster with one shard reports 58 metrics, with one extra metric for each additional shard and 3–4 extra metrics for each additional “mongos” instance.
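These per-pull counts can be checked against any exporter payload: HELP/TYPE lines are comments, so the number of metrics per “pull” is simply the number of non-comment lines. A minimal sketch using a tiny inlined excerpt (against a live exporter you would pipe the output of curl instead):

```shell
# Count metric samples in an exporter payload.
# A two-sample excerpt is inlined for illustration; against a live
# exporter you would use: curl -s <exporter-address>/metrics
payload='# HELP mongodb_mongod_connections incoming connection status
# TYPE mongodb_mongod_connections gauge
mongodb_mongod_connections{state="available"} 811
mongodb_mongod_connections{state="current"} 8'

# HELP/TYPE lines start with "#"; only the remaining lines are samples.
printf '%s\n' "$payload" | grep -vc '^#'
# prints: 2
```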

Below is a full example of a single “pull” of metrics from one RocksDB instance. Prometheus exporters serve their metrics at the HTTP path “/metrics”; this is the same payload Prometheus polls from the exporter:

$ curl -sk http://<exporter-address>/metrics | grep mongodb
# HELP mongodb_mongod_asserts_total The asserts document reports the number of asserts on the database. While assert errors are typically uncommon, if there are non-zero values for the asserts, you should check the log file for the mongod process for more information. In many cases these errors are trivial, but are worth investigating.
# TYPE mongodb_mongod_asserts_total counter
mongodb_mongod_asserts_total{type="msg"} 0
mongodb_mongod_asserts_total{type="regular"} 0
mongodb_mongod_asserts_total{type="rollovers"} 0
mongodb_mongod_asserts_total{type="user"} 1
mongodb_mongod_asserts_total{type="warning"} 0
# HELP mongodb_mongod_connections The connections sub document data regarding the current status of incoming connections and availability of the database server. Use these values to assess the current load and capacity requirements of the server
# TYPE mongodb_mongod_connections gauge
mongodb_mongod_connections{state="available"} 811
mongodb_mongod_connections{state="current"} 8
# HELP mongodb_mongod_connections_metrics_created_total totalCreated provides a count of all incoming connections created to the server. This number includes connections that have since closed
# TYPE mongodb_mongod_connections_metrics_created_total counter
mongodb_mongod_connections_metrics_created_total 1977
# HELP mongodb_mongod_extra_info_heap_usage_bytes The heap_usage_bytes field is only available on Unix/Linux systems, and reports the total size in bytes of heap space used by the database process
# TYPE mongodb_mongod_extra_info_heap_usage_bytes gauge
mongodb_mongod_extra_info_heap_usage_bytes 7.77654e+07
# HELP mongodb_mongod_extra_info_page_faults_total The page_faults Reports the total number of page faults that require disk operations. Page faults refer to operations that require the database server to access data which isn’t available in active memory. The page_faults counter may increase dramatically during moments of poor performance and may correlate with limited memory environments and larger data sets. Limited and sporadic page faults do not necessarily indicate an issue
# TYPE mongodb_mongod_extra_info_page_faults_total gauge
mongodb_mongod_extra_info_page_faults_total 47
# HELP mongodb_mongod_global_lock_client The activeClients data structure provides more granular information about the number of connected clients and the operation types (e.g. read or write) performed by these clients
# TYPE mongodb_mongod_global_lock_client gauge
mongodb_mongod_global_lock_client{type="reader"} 0
mongodb_mongod_global_lock_client{type="writer"} 0
# HELP mongodb_mongod_global_lock_current_queue The currentQueue data structure value provides more granular information concerning the number of operations queued because of a lock
# TYPE mongodb_mongod_global_lock_current_queue gauge
mongodb_mongod_global_lock_current_queue{type="reader"} 0
mongodb_mongod_global_lock_current_queue{type="writer"} 0
# HELP mongodb_mongod_global_lock_ratio The value of ratio displays the relationship between lockTime and totalTime. Low values indicate that operations have held the globalLock frequently for shorter periods of time. High values indicate that operations have held globalLock infrequently for longer periods of time
# TYPE mongodb_mongod_global_lock_ratio gauge
mongodb_mongod_global_lock_ratio 0
# HELP mongodb_mongod_global_lock_total The value of totalTime represents the time, in microseconds, since the database last started and creation of the globalLock. This is roughly equivalent to total server uptime
# TYPE mongodb_mongod_global_lock_total counter
mongodb_mongod_global_lock_total 0
# HELP mongodb_mongod_instance_local_time The localTime value is the current time, according to the server, in UTC specified in an ISODate format.
# TYPE mongodb_mongod_instance_local_time counter
mongodb_mongod_instance_local_time 1.488834066e+09
# HELP mongodb_mongod_instance_uptime_estimate_seconds uptimeEstimate provides the uptime as calculated from MongoDB's internal course-grained time keeping system.
# TYPE mongodb_mongod_instance_uptime_estimate_seconds counter
mongodb_mongod_instance_uptime_estimate_seconds 15881
# HELP mongodb_mongod_instance_uptime_seconds The value of the uptime field corresponds to the number of seconds that the mongos or mongod process has been active.
# TYPE mongodb_mongod_instance_uptime_seconds counter
mongodb_mongod_instance_uptime_seconds 15881
# HELP mongodb_mongod_locks_time_acquiring_global_microseconds_total amount of time in microseconds that any database has spent waiting for the global lock
# TYPE mongodb_mongod_locks_time_acquiring_global_microseconds_total counter
mongodb_mongod_locks_time_acquiring_global_microseconds_total{database="Collection",type="read"} 0
mongodb_mongod_locks_time_acquiring_global_microseconds_total{database="Collection",type="write"} 0
mongodb_mongod_locks_time_acquiring_global_microseconds_total{database="Database",type="read"} 0
mongodb_mongod_locks_time_acquiring_global_microseconds_total{database="Database",type="write"} 0
mongodb_mongod_locks_time_acquiring_global_microseconds_total{database="Global",type="read"} 0
mongodb_mongod_locks_time_acquiring_global_microseconds_total{database="Global",type="write"} 5005
mongodb_mongod_locks_time_acquiring_global_microseconds_total{database="Metadata",type="read"} 0
mongodb_mongod_locks_time_acquiring_global_microseconds_total{database="Metadata",type="write"} 0
mongodb_mongod_locks_time_acquiring_global_microseconds_total{database="oplog",type="read"} 0
mongodb_mongod_locks_time_acquiring_global_microseconds_total{database="oplog",type="write"} 0
# HELP mongodb_mongod_locks_time_locked_global_microseconds_total amount of time in microseconds that any database has held the global lock
# TYPE mongodb_mongod_locks_time_locked_global_microseconds_total counter
mongodb_mongod_locks_time_locked_global_microseconds_total{database="Collection",type="read"} 0
mongodb_mongod_locks_time_locked_global_microseconds_total{database="Collection",type="write"} 0
mongodb_mongod_locks_time_locked_global_microseconds_total{database="Database",type="read"} 0
mongodb_mongod_locks_time_locked_global_microseconds_total{database="Database",type="write"} 0
mongodb_mongod_locks_time_locked_global_microseconds_total{database="Global",type="read"} 0
mongodb_mongod_locks_time_locked_global_microseconds_total{database="Global",type="write"} 0
mongodb_mongod_locks_time_locked_global_microseconds_total{database="Metadata",type="read"} 0
mongodb_mongod_locks_time_locked_global_microseconds_total{database="Metadata",type="write"} 0
mongodb_mongod_locks_time_locked_global_microseconds_total{database="oplog",type="read"} 0
mongodb_mongod_locks_time_locked_global_microseconds_total{database="oplog",type="write"} 0
# HELP mongodb_mongod_locks_time_locked_local_microseconds_total amount of time in microseconds that any database has held the local lock
# TYPE mongodb_mongod_locks_time_locked_local_microseconds_total counter
mongodb_mongod_locks_time_locked_local_microseconds_total{database="Collection",type="read"} 0
mongodb_mongod_locks_time_locked_local_microseconds_total{database="Collection",type="write"} 0
mongodb_mongod_locks_time_locked_local_microseconds_total{database="Database",type="read"} 0
mongodb_mongod_locks_time_locked_local_microseconds_total{database="Database",type="write"} 0
mongodb_mongod_locks_time_locked_local_microseconds_total{database="Global",type="read"} 0
mongodb_mongod_locks_time_locked_local_microseconds_total{database="Global",type="write"} 0
mongodb_mongod_locks_time_locked_local_microseconds_total{database="Metadata",type="read"} 0
mongodb_mongod_locks_time_locked_local_microseconds_total{database="Metadata",type="write"} 0
mongodb_mongod_locks_time_locked_local_microseconds_total{database="oplog",type="read"} 0
mongodb_mongod_locks_time_locked_local_microseconds_total{database="oplog",type="write"} 0
# HELP mongodb_mongod_memory The mem data structure holds information regarding the target system architecture of mongod and current memory use
# TYPE mongodb_mongod_memory gauge
mongodb_mongod_memory{type="mapped"} 0
mongodb_mongod_memory{type="mapped_with_journal"} 0
mongodb_mongod_memory{type="resident"} 205
mongodb_mongod_memory{type="virtual"} 1064
# HELP mongodb_mongod_metrics_cursor_open The open is an embedded document that contains data regarding open cursors
# TYPE mongodb_mongod_metrics_cursor_open gauge
mongodb_mongod_metrics_cursor_open{state="noTimeout"} 0
mongodb_mongod_metrics_cursor_open{state="pinned"} 1
mongodb_mongod_metrics_cursor_open{state="total"} 1
# HELP mongodb_mongod_metrics_cursor_timed_out_total timedOut provides the total number of cursors that have timed out since the server process started. If this number is large or growing at a regular rate, this may indicate an application error
# TYPE mongodb_mongod_metrics_cursor_timed_out_total counter
mongodb_mongod_metrics_cursor_timed_out_total 0
# HELP mongodb_mongod_metrics_document_total The document holds a document of that reflect document access and modification patterns and data use. Compare these values to the data in the opcounters document, which track total number of operations
# TYPE mongodb_mongod_metrics_document_total counter
mongodb_mongod_metrics_document_total{state="deleted"} 0
mongodb_mongod_metrics_document_total{state="inserted"} 140396
mongodb_mongod_metrics_document_total{state="returned"} 74105
mongodb_mongod_metrics_document_total{state="updated"} 0
# HELP mongodb_mongod_metrics_get_last_error_wtime_num_total num reports the total number of getLastError operations with a specified write concern (i.e. w) that wait for one or more members of a replica set to acknowledge the write operation (i.e. a w value greater than 1.)
# TYPE mongodb_mongod_metrics_get_last_error_wtime_num_total gauge
mongodb_mongod_metrics_get_last_error_wtime_num_total 0
# HELP mongodb_mongod_metrics_get_last_error_wtime_total_milliseconds total_millis reports the total amount of time in milliseconds that the mongod has spent performing getLastError operations with write concern (i.e. w) that wait for one or more members of a replica set to acknowledge the write operation (i.e. a w value greater than 1.)
# TYPE mongodb_mongod_metrics_get_last_error_wtime_total_milliseconds counter
mongodb_mongod_metrics_get_last_error_wtime_total_milliseconds 0
# HELP mongodb_mongod_metrics_get_last_error_wtimeouts_total wtimeouts reports the number of times that write concern operations have timed out as a result of the wtimeout threshold to getLastError.
# TYPE mongodb_mongod_metrics_get_last_error_wtimeouts_total counter
mongodb_mongod_metrics_get_last_error_wtimeouts_total 0
# HELP mongodb_mongod_metrics_operation_total operation is a sub-document that holds counters for several types of update and query operations that MongoDB handles using special operation types
# TYPE mongodb_mongod_metrics_operation_total counter
mongodb_mongod_metrics_operation_total{type="fastmod"} 0
mongodb_mongod_metrics_operation_total{type="idhack"} 0
mongodb_mongod_metrics_operation_total{type="scan_and_order"} 0
# HELP mongodb_mongod_metrics_query_executor_total queryExecutor is a document that reports data from the query execution system
# TYPE mongodb_mongod_metrics_query_executor_total counter
mongodb_mongod_metrics_query_executor_total{state="scanned"} 0
mongodb_mongod_metrics_query_executor_total{state="scanned_objects"} 74105
# HELP mongodb_mongod_metrics_record_moves_total moves reports the total number of times documents move within the on-disk representation of the MongoDB data set. Documents move as a result of operations that increase the size of the document beyond their allocated record size
# TYPE mongodb_mongod_metrics_record_moves_total counter
mongodb_mongod_metrics_record_moves_total 0
# HELP mongodb_mongod_metrics_repl_apply_batches_num_total num reports the total number of batches applied across all databases
# TYPE mongodb_mongod_metrics_repl_apply_batches_num_total counter
mongodb_mongod_metrics_repl_apply_batches_num_total 0
# HELP mongodb_mongod_metrics_repl_apply_batches_total_milliseconds total_millis reports the total amount of time the mongod has spent applying operations from the oplog
# TYPE mongodb_mongod_metrics_repl_apply_batches_total_milliseconds counter
mongodb_mongod_metrics_repl_apply_batches_total_milliseconds 0
# HELP mongodb_mongod_metrics_repl_apply_ops_total ops reports the total number of oplog operations applied
# TYPE mongodb_mongod_metrics_repl_apply_ops_total counter
mongodb_mongod_metrics_repl_apply_ops_total 0
# HELP mongodb_mongod_metrics_repl_buffer_count count reports the current number of operations in the oplog buffer
# TYPE mongodb_mongod_metrics_repl_buffer_count gauge
mongodb_mongod_metrics_repl_buffer_count 0
# HELP mongodb_mongod_metrics_repl_buffer_max_size_bytes maxSizeBytes reports the maximum size of the buffer. This value is a constant setting in the mongod, and is not configurable
# TYPE mongodb_mongod_metrics_repl_buffer_max_size_bytes counter
mongodb_mongod_metrics_repl_buffer_max_size_bytes 2.68435456e+08
# HELP mongodb_mongod_metrics_repl_buffer_size_bytes sizeBytes reports the current size of the contents of the oplog buffer
# TYPE mongodb_mongod_metrics_repl_buffer_size_bytes gauge
mongodb_mongod_metrics_repl_buffer_size_bytes 0
# HELP mongodb_mongod_metrics_repl_network_bytes_total bytes reports the total amount of data read from the replication sync source
# TYPE mongodb_mongod_metrics_repl_network_bytes_total counter
mongodb_mongod_metrics_repl_network_bytes_total 0
# HELP mongodb_mongod_metrics_repl_network_getmores_num_total num reports the total number of getmore operations, which are operations that request an additional set of operations from the replication sync source.
# TYPE mongodb_mongod_metrics_repl_network_getmores_num_total counter
mongodb_mongod_metrics_repl_network_getmores_num_total 0
# HELP mongodb_mongod_metrics_repl_network_getmores_total_milliseconds total_millis reports the total amount of time required to collect data from getmore operations
# TYPE mongodb_mongod_metrics_repl_network_getmores_total_milliseconds counter
mongodb_mongod_metrics_repl_network_getmores_total_milliseconds 0
# HELP mongodb_mongod_metrics_repl_network_ops_total ops reports the total number of operations read from the replication source.
# TYPE mongodb_mongod_metrics_repl_network_ops_total counter
mongodb_mongod_metrics_repl_network_ops_total 0
# HELP mongodb_mongod_metrics_repl_network_readers_created_total readersCreated reports the total number of oplog query processes created. MongoDB will create a new oplog query any time an error occurs in the connection, including a timeout, or a network operation. Furthermore, readersCreated will increment every time MongoDB selects a new source fore replication.
# TYPE mongodb_mongod_metrics_repl_network_readers_created_total counter
mongodb_mongod_metrics_repl_network_readers_created_total 1
# HELP mongodb_mongod_metrics_repl_oplog_insert_bytes_total insertBytes the total size of documents inserted into the oplog.
# TYPE mongodb_mongod_metrics_repl_oplog_insert_bytes_total counter
mongodb_mongod_metrics_repl_oplog_insert_bytes_total 0
# HELP mongodb_mongod_metrics_repl_oplog_insert_num_total num reports the total number of items inserted into the oplog.
# TYPE mongodb_mongod_metrics_repl_oplog_insert_num_total counter
mongodb_mongod_metrics_repl_oplog_insert_num_total 0
# HELP mongodb_mongod_metrics_repl_oplog_insert_total_milliseconds total_millis reports the total amount of time spent for the mongod to insert data into the oplog.
# TYPE mongodb_mongod_metrics_repl_oplog_insert_total_milliseconds counter
mongodb_mongod_metrics_repl_oplog_insert_total_milliseconds 0
# HELP mongodb_mongod_metrics_repl_preload_docs_num_total num reports the total number of documents loaded during the pre-fetch stage of replication
# TYPE mongodb_mongod_metrics_repl_preload_docs_num_total counter
mongodb_mongod_metrics_repl_preload_docs_num_total 0
# HELP mongodb_mongod_metrics_repl_preload_docs_total_milliseconds total_millis reports the total amount of time spent loading documents as part of the pre-fetch stage of replication
# TYPE mongodb_mongod_metrics_repl_preload_docs_total_milliseconds counter
mongodb_mongod_metrics_repl_preload_docs_total_milliseconds 0
# HELP mongodb_mongod_metrics_repl_preload_indexes_num_total num reports the total number of index entries loaded by members before updating documents as part of the pre-fetch stage of replication
# TYPE mongodb_mongod_metrics_repl_preload_indexes_num_total counter
mongodb_mongod_metrics_repl_preload_indexes_num_total 0
# HELP mongodb_mongod_metrics_repl_preload_indexes_total_milliseconds total_millis reports the total amount of time spent loading index entries as part of the pre-fetch stage of replication
# TYPE mongodb_mongod_metrics_repl_preload_indexes_total_milliseconds counter
mongodb_mongod_metrics_repl_preload_indexes_total_milliseconds 0
# HELP mongodb_mongod_metrics_storage_freelist_search_total metrics about searching records in the database.
# TYPE mongodb_mongod_metrics_storage_freelist_search_total counter
mongodb_mongod_metrics_storage_freelist_search_total{type="bucket_exhausted"} 0
mongodb_mongod_metrics_storage_freelist_search_total{type="requests"} 0
mongodb_mongod_metrics_storage_freelist_search_total{type="scanned"} 0
# HELP mongodb_mongod_metrics_ttl_deleted_documents_total deletedDocuments reports the total number of documents deleted from collections with a ttl index.
# TYPE mongodb_mongod_metrics_ttl_deleted_documents_total counter
mongodb_mongod_metrics_ttl_deleted_documents_total 0
# HELP mongodb_mongod_metrics_ttl_passes_total passes reports the number of times the background process removes documents from collections with a ttl index
# TYPE mongodb_mongod_metrics_ttl_passes_total counter
mongodb_mongod_metrics_ttl_passes_total 0
# HELP mongodb_mongod_network_bytes_total The network data structure contains data regarding MongoDB’s network use
# TYPE mongodb_mongod_network_bytes_total counter
mongodb_mongod_network_bytes_total{state="in_bytes"} 1.323188842e+09
mongodb_mongod_network_bytes_total{state="out_bytes"} 1.401963024e+09
# HELP mongodb_mongod_network_metrics_num_requests_total The numRequests field is a counter of the total number of distinct requests that the server has received. Use this value to provide context for the bytesIn and bytesOut values to ensure that MongoDB’s network utilization is consistent with expectations and application use
# TYPE mongodb_mongod_network_metrics_num_requests_total counter
mongodb_mongod_network_metrics_num_requests_total 313023
# HELP mongodb_mongod_op_counters_repl_total The opcountersRepl data structure, similar to the opcounters data structure, provides an overview of database replication operations by type and makes it possible to analyze the load on the replica in more granular manner. These values only appear when the current host has replication enabled
# TYPE mongodb_mongod_op_counters_repl_total counter
mongodb_mongod_op_counters_repl_total{type="command"} 0
mongodb_mongod_op_counters_repl_total{type="delete"} 0
mongodb_mongod_op_counters_repl_total{type="getmore"} 0
mongodb_mongod_op_counters_repl_total{type="insert"} 0
mongodb_mongod_op_counters_repl_total{type="query"} 0
mongodb_mongod_op_counters_repl_total{type="update"} 0
# HELP mongodb_mongod_op_counters_total The opcounters data structure provides an overview of database operations by type and makes it possible to analyze the load on the database in more granular manner. These numbers will grow over time and in response to database use. Analyze these values over time to track database utilization
# TYPE mongodb_mongod_op_counters_total counter
mongodb_mongod_op_counters_total{type="command"} 164008
mongodb_mongod_op_counters_total{type="delete"} 0
mongodb_mongod_op_counters_total{type="getmore"} 74912
mongodb_mongod_op_counters_total{type="insert"} 70198
mongodb_mongod_op_counters_total{type="query"} 3907
mongodb_mongod_op_counters_total{type="update"} 0
# HELP mongodb_mongod_replset_heatbeat_interval_millis The frequency in milliseconds of the heartbeats
# TYPE mongodb_mongod_replset_heatbeat_interval_millis gauge
mongodb_mongod_replset_heatbeat_interval_millis{set="test1"} 2000
# HELP mongodb_mongod_replset_member_config_version The configVersion value is the replica set configuration version.
# TYPE mongodb_mongod_replset_member_config_version gauge
mongodb_mongod_replset_member_config_version{name="localhost:27017",set="test1",state="PRIMARY"} 2
mongodb_mongod_replset_member_config_version{name="localhost:27027",set="test1",state="SECONDARY"} 2
# HELP mongodb_mongod_replset_member_election_date The timestamp the node was elected as replica leader
# TYPE mongodb_mongod_replset_member_election_date gauge
mongodb_mongod_replset_member_election_date{name="localhost:27017",set="test1",state="PRIMARY"} 1.488818198e+09
# HELP mongodb_mongod_replset_member_health This field conveys if the member is up (1) or down (0).
# TYPE mongodb_mongod_replset_member_health gauge
mongodb_mongod_replset_member_health{name="localhost:27017",set="test1",state="PRIMARY"} 1
mongodb_mongod_replset_member_health{name="localhost:27027",set="test1",state="SECONDARY"} 1
# HELP mongodb_mongod_replset_member_last_heartbeat The lastHeartbeat value provides an ISODate formatted date and time of the transmission time of last heartbeat received from this member
# TYPE mongodb_mongod_replset_member_last_heartbeat gauge
mongodb_mongod_replset_member_last_heartbeat{name="localhost:27027",set="test1",state="SECONDARY"} 1.488834065e+09
# HELP mongodb_mongod_replset_member_last_heartbeat_recv The lastHeartbeatRecv value provides an ISODate formatted date and time that the last heartbeat was received from this member
# TYPE mongodb_mongod_replset_member_last_heartbeat_recv gauge
mongodb_mongod_replset_member_last_heartbeat_recv{name="localhost:27027",set="test1",state="SECONDARY"} 1.488834066e+09
# HELP mongodb_mongod_replset_member_optime_date The timestamp of the last oplog entry that this member applied.
# TYPE mongodb_mongod_replset_member_optime_date gauge
mongodb_mongod_replset_member_optime_date{name="localhost:27017",set="test1",state="PRIMARY"} 1.488833953e+09
mongodb_mongod_replset_member_optime_date{name="localhost:27027",set="test1",state="SECONDARY"} 1.488833953e+09
# HELP mongodb_mongod_replset_member_ping_ms The pingMs represents the number of milliseconds (ms) that a round-trip packet takes to travel between the remote member and the local instance.
# TYPE mongodb_mongod_replset_member_ping_ms gauge
mongodb_mongod_replset_member_ping_ms{name="localhost:27027",set="test1",state="SECONDARY"} 0
# HELP mongodb_mongod_replset_member_state The value of state is an integer between 0 and 10 that represents the replica state of the member.
# TYPE mongodb_mongod_replset_member_state gauge
mongodb_mongod_replset_member_state{name="localhost:27017",set="test1",state="PRIMARY"} 1
mongodb_mongod_replset_member_state{name="localhost:27027",set="test1",state="SECONDARY"} 2
# HELP mongodb_mongod_replset_member_uptime The uptime field holds a value that reflects the number of seconds that this member has been online.
# TYPE mongodb_mongod_replset_member_uptime counter
mongodb_mongod_replset_member_uptime{name="localhost:27017",set="test1",state="PRIMARY"} 15881
mongodb_mongod_replset_member_uptime{name="localhost:27027",set="test1",state="SECONDARY"} 15855
# HELP mongodb_mongod_replset_my_name The replica state name of the current member
# TYPE mongodb_mongod_replset_my_name gauge
mongodb_mongod_replset_my_name{name="localhost:27017",set="test1"} 1
# HELP mongodb_mongod_replset_my_state An integer between 0 and 10 that represents the replica state of the current member
# TYPE mongodb_mongod_replset_my_state gauge
mongodb_mongod_replset_my_state{set="test1"} 1
# HELP mongodb_mongod_replset_number_of_members The number of replica set mebers
# TYPE mongodb_mongod_replset_number_of_members gauge
mongodb_mongod_replset_number_of_members{set="test1"} 2
# HELP mongodb_mongod_replset_oplog_head_timestamp The timestamp of the newest change in the oplog
# TYPE mongodb_mongod_replset_oplog_head_timestamp gauge
mongodb_mongod_replset_oplog_head_timestamp 1.488833953e+09
# HELP mongodb_mongod_replset_oplog_items_total The total number of changes in the oplog
# TYPE mongodb_mongod_replset_oplog_items_total gauge
mongodb_mongod_replset_oplog_items_total 70202
# HELP mongodb_mongod_replset_oplog_size_bytes Size of oplog in bytes
# TYPE mongodb_mongod_replset_oplog_size_bytes gauge
mongodb_mongod_replset_oplog_size_bytes{type="current"} 1.273140034e+09
mongodb_mongod_replset_oplog_size_bytes{type="storage"} 1.273139968e+09
# HELP mongodb_mongod_replset_oplog_tail_timestamp The timestamp of the oldest change in the oplog
# TYPE mongodb_mongod_replset_oplog_tail_timestamp gauge
mongodb_mongod_replset_oplog_tail_timestamp 1.488818198e+09
# HELP mongodb_mongod_replset_term The election count for the replica set, as known to this replica set member
# TYPE mongodb_mongod_replset_term gauge
mongodb_mongod_replset_term{set="test1"} 1
# HELP mongodb_mongod_rocksdb_background_errors The total number of background errors in RocksDB
# TYPE mongodb_mongod_rocksdb_background_errors gauge
mongodb_mongod_rocksdb_background_errors 0
# HELP mongodb_mongod_rocksdb_block_cache_bytes The current bytes used in the RocksDB Block Cache
# TYPE mongodb_mongod_rocksdb_block_cache_bytes gauge
mongodb_mongod_rocksdb_block_cache_bytes 2.5165824e+07
# HELP mongodb_mongod_rocksdb_block_cache_hits_total The total number of hits to the RocksDB Block Cache
# TYPE mongodb_mongod_rocksdb_block_cache_hits_total counter
mongodb_mongod_rocksdb_block_cache_hits_total 519483
# HELP mongodb_mongod_rocksdb_block_cache_misses_total The total number of misses to the RocksDB Block Cache
# TYPE mongodb_mongod_rocksdb_block_cache_misses_total counter
mongodb_mongod_rocksdb_block_cache_misses_total 219214
# HELP mongodb_mongod_rocksdb_bloom_filter_useful_total The total number of times the RocksDB Bloom Filter was useful
# TYPE mongodb_mongod_rocksdb_bloom_filter_useful_total counter
mongodb_mongod_rocksdb_bloom_filter_useful_total 130251
# HELP mongodb_mongod_rocksdb_bytes_read_total The total number of bytes read by RocksDB
# TYPE mongodb_mongod_rocksdb_bytes_read_total counter
mongodb_mongod_rocksdb_bytes_read_total{type="compation"} 7.917332498e+09
mongodb_mongod_rocksdb_bytes_read_total{type="iteration"} 3.94838428e+09
mongodb_mongod_rocksdb_bytes_read_total{type="point_lookup"} 2.655612381e+09
# HELP mongodb_mongod_rocksdb_bytes_written_total The total number of bytes written by RocksDB
# TYPE mongodb_mongod_rocksdb_bytes_written_total counter
mongodb_mongod_rocksdb_bytes_written_total{type="compaction"} 9.531638929e+09
mongodb_mongod_rocksdb_bytes_written_total{type="flush"} 2.525874187e+09
mongodb_mongod_rocksdb_bytes_written_total{type="total"} 2.548843861e+09
# HELP mongodb_mongod_rocksdb_compaction_average_seconds The average time per compaction between levels N and N+1 in RocksDB
# TYPE mongodb_mongod_rocksdb_compaction_average_seconds gauge
mongodb_mongod_rocksdb_compaction_average_seconds{level="L0"} 0.505
mongodb_mongod_rocksdb_compaction_average_seconds{level="L5"} 3.606
mongodb_mongod_rocksdb_compaction_average_seconds{level="L6"} 14.579
mongodb_mongod_rocksdb_compaction_average_seconds{level="total"} 4.583
# HELP mongodb_mongod_rocksdb_compaction_bytes_per_second The rate at which data is processed during compaction between levels N and N+1 in RocksDB
# TYPE mongodb_mongod_rocksdb_compaction_bytes_per_second gauge
mongodb_mongod_rocksdb_compaction_bytes_per_second{level="L0",type="write"} 1.248854016e+08
mongodb_mongod_rocksdb_compaction_bytes_per_second{level="L5",type="read"} 7.45537536e+07
mongodb_mongod_rocksdb_compaction_bytes_per_second{level="L5",type="write"} 7.45537536e+07
mongodb_mongod_rocksdb_compaction_bytes_per_second{level="L6",type="read"} 2.44318208e+07
mongodb_mongod_rocksdb_compaction_bytes_per_second{level="L6",type="write"} 2.06569472e+07
mongodb_mongod_rocksdb_compaction_bytes_per_second{level="total",type="read"} 2.70532608e+07
mongodb_mongod_rocksdb_compaction_bytes_per_second{level="total",type="write"} 3.2505856e+07
# HELP mongodb_mongod_rocksdb_compaction_file_threads The number of threads currently doing compaction for levels in RocksDB
# TYPE mongodb_mongod_rocksdb_compaction_file_threads gauge
mongodb_mongod_rocksdb_compaction_file_threads{level="L0"} 0
mongodb_mongod_rocksdb_compaction_file_threads{level="L5"} 0
mongodb_mongod_rocksdb_compaction_file_threads{level="L6"} 0
mongodb_mongod_rocksdb_compaction_file_threads{level="total"} 0
# HELP mongodb_mongod_rocksdb_compaction_score The compaction score of RocksDB levels
# TYPE mongodb_mongod_rocksdb_compaction_score gauge
mongodb_mongod_rocksdb_compaction_score{level="L0"} 0
mongodb_mongod_rocksdb_compaction_score{level="L5"} 0.9
mongodb_mongod_rocksdb_compaction_score{level="L6"} 0
mongodb_mongod_rocksdb_compaction_score{level="total"} 0
# HELP mongodb_mongod_rocksdb_compaction_seconds_total The time spent doing compactions between levels N and N+1 in RocksDB
# TYPE mongodb_mongod_rocksdb_compaction_seconds_total counter
mongodb_mongod_rocksdb_compaction_seconds_total{level="L0"} 20
mongodb_mongod_rocksdb_compaction_seconds_total{level="L5"} 25
mongodb_mongod_rocksdb_compaction_seconds_total{level="L6"} 248
mongodb_mongod_rocksdb_compaction_seconds_total{level="total"} 293
# HELP mongodb_mongod_rocksdb_compaction_write_amplification The write amplification factor from compaction between levels N and N+1 in RocksDB
# TYPE mongodb_mongod_rocksdb_compaction_write_amplification gauge
mongodb_mongod_rocksdb_compaction_write_amplification{level="L5"} 1
mongodb_mongod_rocksdb_compaction_write_amplification{level="L6"} 1.7
mongodb_mongod_rocksdb_compaction_write_amplification{level="total"} 3.8
# HELP mongodb_mongod_rocksdb_compactions_total The total number of compactions between levels N and N+1 in RocksDB
# TYPE mongodb_mongod_rocksdb_compactions_total counter
mongodb_mongod_rocksdb_compactions_total{level="L0"} 40
mongodb_mongod_rocksdb_compactions_total{level="L5"} 7
mongodb_mongod_rocksdb_compactions_total{level="L6"} 17
mongodb_mongod_rocksdb_compactions_total{level="total"} 64
# HELP mongodb_mongod_rocksdb_estimate_table_readers_memory_bytes The estimate RocksDB table-reader memory bytes
# TYPE mongodb_mongod_rocksdb_estimate_table_readers_memory_bytes gauge
mongodb_mongod_rocksdb_estimate_table_readers_memory_bytes 2.10944e+06
# HELP mongodb_mongod_rocksdb_files The number of files in a RocksDB level
# TYPE mongodb_mongod_rocksdb_files gauge
mongodb_mongod_rocksdb_files{level="L0"} 0
mongodb_mongod_rocksdb_files{level="L5"} 2
mongodb_mongod_rocksdb_files{level="L6"} 23
mongodb_mongod_rocksdb_files{level="total"} 25
# HELP mongodb_mongod_rocksdb_immutable_memtables The total number of immutable MemTables in RocksDB
# TYPE mongodb_mongod_rocksdb_immutable_memtables gauge
mongodb_mongod_rocksdb_immutable_memtables 0
# HELP mongodb_mongod_rocksdb_iterations_total The total number of iterations performed by RocksDB
# TYPE mongodb_mongod_rocksdb_iterations_total counter
mongodb_mongod_rocksdb_iterations_total{type="backward"} 1955
mongodb_mongod_rocksdb_iterations_total{type="forward"} 206563
# HELP mongodb_mongod_rocksdb_keys_total The total number of RocksDB key operations
# TYPE mongodb_mongod_rocksdb_keys_total counter
mongodb_mongod_rocksdb_keys_total{type="read"} 209546
mongodb_mongod_rocksdb_keys_total{type="written"} 281332
# HELP mongodb_mongod_rocksdb_live_versions The current number of live versions in RocksDB
# TYPE mongodb_mongod_rocksdb_live_versions gauge
mongodb_mongod_rocksdb_live_versions 1
# HELP mongodb_mongod_rocksdb_memtable_bytes The current number of MemTable bytes in RocksDB
# TYPE mongodb_mongod_rocksdb_memtable_bytes gauge
mongodb_mongod_rocksdb_memtable_bytes{type="active"} 2.5165824e+07
mongodb_mongod_rocksdb_memtable_bytes{type="total"} 2.5165824e+07
# HELP mongodb_mongod_rocksdb_memtable_entries The current number of Memtable entries in RocksDB
# TYPE mongodb_mongod_rocksdb_memtable_entries gauge
mongodb_mongod_rocksdb_memtable_entries{type="active"} 4553
mongodb_mongod_rocksdb_memtable_entries{type="immutable"} 0
# HELP mongodb_mongod_rocksdb_oldest_snapshot_timestamp The timestamp of the oldest snapshot in RocksDB
# TYPE mongodb_mongod_rocksdb_oldest_snapshot_timestamp gauge
mongodb_mongod_rocksdb_oldest_snapshot_timestamp 0
# HELP mongodb_mongod_rocksdb_pending_compactions The total number of compactions pending in RocksDB
# TYPE mongodb_mongod_rocksdb_pending_compactions gauge
mongodb_mongod_rocksdb_pending_compactions 0
# HELP mongodb_mongod_rocksdb_pending_memtable_flushes The total number of MemTable flushes pending in RocksDB
# TYPE mongodb_mongod_rocksdb_pending_memtable_flushes gauge
mongodb_mongod_rocksdb_pending_memtable_flushes 0
# HELP mongodb_mongod_rocksdb_read_latency_microseconds The read latency in RocksDB in microseconds by level
# TYPE mongodb_mongod_rocksdb_read_latency_microseconds gauge
mongodb_mongod_rocksdb_read_latency_microseconds{level="L0",type="P50"} 9.33
mongodb_mongod_rocksdb_read_latency_microseconds{level="L0",type="P75"} 17.03
mongodb_mongod_rocksdb_read_latency_microseconds{level="L0",type="P99"} 1972.31
mongodb_mongod_rocksdb_read_latency_microseconds{level="L0",type="P99.9"} 9061.44
mongodb_mongod_rocksdb_read_latency_microseconds{level="L0",type="P99.99"} 18098.3
mongodb_mongod_rocksdb_read_latency_microseconds{level="L0",type="avg"} 87.3861
mongodb_mongod_rocksdb_read_latency_microseconds{level="L0",type="max"} 26434
mongodb_mongod_rocksdb_read_latency_microseconds{level="L0",type="median"} 9.3328
mongodb_mongod_rocksdb_read_latency_microseconds{level="L0",type="min"} 0
mongodb_mongod_rocksdb_read_latency_microseconds{level="L0",type="stddev"} 607.16
mongodb_mongod_rocksdb_read_latency_microseconds{level="L5",type="P50"} 10.85
mongodb_mongod_rocksdb_read_latency_microseconds{level="L5",type="P75"} 28.46
mongodb_mongod_rocksdb_read_latency_microseconds{level="L5",type="P99"} 4208.33
mongodb_mongod_rocksdb_read_latency_microseconds{level="L5",type="P99.9"} 13079.55
mongodb_mongod_rocksdb_read_latency_microseconds{level="L5",type="P99.99"} 23156.25
mongodb_mongod_rocksdb_read_latency_microseconds{level="L5",type="avg"} 216.3736
mongodb_mongod_rocksdb_read_latency_microseconds{level="L5",type="max"} 34585
mongodb_mongod_rocksdb_read_latency_microseconds{level="L5",type="median"} 10.8461
mongodb_mongod_rocksdb_read_latency_microseconds{level="L5",type="min"} 0
mongodb_mongod_rocksdb_read_latency_microseconds{level="L5",type="stddev"} 978.48
mongodb_mongod_rocksdb_read_latency_microseconds{level="L6",type="P50"} 9.42
mongodb_mongod_rocksdb_read_latency_microseconds{level="L6",type="P75"} 30.69
mongodb_mongod_rocksdb_read_latency_microseconds{level="L6",type="P99"} 6037.87
mongodb_mongod_rocksdb_read_latency_microseconds{level="L6",type="P99.9"} 15482.63
mongodb_mongod_rocksdb_read_latency_microseconds{level="L6",type="P99.99"} 30125.5
mongodb_mongod_rocksdb_read_latency_microseconds{level="L6",type="avg"} 294.9574
mongodb_mongod_rocksdb_read_latency_microseconds{level="L6",type="max"} 61051
mongodb_mongod_rocksdb_read_latency_microseconds{level="L6",type="median"} 9.4245
mongodb_mongod_rocksdb_read_latency_microseconds{level="L6",type="min"} 0
mongodb_mongod_rocksdb_read_latency_microseconds{level="L6",type="stddev"} 1264.12
# HELP mongodb_mongod_rocksdb_reads_total The total number of read operations in RocksDB
# TYPE mongodb_mongod_rocksdb_reads_total counter
mongodb_mongod_rocksdb_reads_total{level="L0"} 59017
mongodb_mongod_rocksdb_reads_total{level="L5"} 42125
mongodb_mongod_rocksdb_reads_total{level="L6"} 118745
# HELP mongodb_mongod_rocksdb_seeks_total The total number of seeks performed by RocksDB
# TYPE mongodb_mongod_rocksdb_seeks_total counter
mongodb_mongod_rocksdb_seeks_total 156184
# HELP mongodb_mongod_rocksdb_size_bytes The total byte size of levels in RocksDB
# TYPE mongodb_mongod_rocksdb_size_bytes gauge
mongodb_mongod_rocksdb_size_bytes{level="L0"} 0
mongodb_mongod_rocksdb_size_bytes{level="L5"} 1.2969836544e+08
mongodb_mongod_rocksdb_size_bytes{level="L6"} 1.46175688704e+09
mongodb_mongod_rocksdb_size_bytes{level="total"} 1.59145525248e+09
# HELP mongodb_mongod_rocksdb_snapshots The current number of snapshots in RocksDB
# TYPE mongodb_mongod_rocksdb_snapshots gauge
mongodb_mongod_rocksdb_snapshots 0
# HELP mongodb_mongod_rocksdb_stall_percent The percentage of time RocksDB has been stalled
# TYPE mongodb_mongod_rocksdb_stall_percent gauge
mongodb_mongod_rocksdb_stall_percent 0
# HELP mongodb_mongod_rocksdb_stalled_seconds_total The total number of seconds RocksDB has spent stalled
# TYPE mongodb_mongod_rocksdb_stalled_seconds_total counter
mongodb_mongod_rocksdb_stalled_seconds_total 0
# HELP mongodb_mongod_rocksdb_stalls_total The total number of stalls in RocksDB
# TYPE mongodb_mongod_rocksdb_stalls_total counter
mongodb_mongod_rocksdb_stalls_total{type="level0_numfiles"} 0
mongodb_mongod_rocksdb_stalls_total{type="level0_numfiles_with_compaction"} 0
mongodb_mongod_rocksdb_stalls_total{type="level0_slowdown"} 0
mongodb_mongod_rocksdb_stalls_total{type="level0_slowdown_with_compaction"} 0
mongodb_mongod_rocksdb_stalls_total{type="memtable_compaction"} 0
mongodb_mongod_rocksdb_stalls_total{type="memtable_slowdown"} 0
# HELP mongodb_mongod_rocksdb_total_live_recovery_units The total number of live recovery units in RocksDB
# TYPE mongodb_mongod_rocksdb_total_live_recovery_units gauge
mongodb_mongod_rocksdb_total_live_recovery_units 5
# HELP mongodb_mongod_rocksdb_transaction_engine_keys The current number of transaction engine keys in RocksDB
# TYPE mongodb_mongod_rocksdb_transaction_engine_keys gauge
mongodb_mongod_rocksdb_transaction_engine_keys 0
# HELP mongodb_mongod_rocksdb_transaction_engine_snapshots The current number of transaction engine snapshots in RocksDB
# TYPE mongodb_mongod_rocksdb_transaction_engine_snapshots gauge
mongodb_mongod_rocksdb_transaction_engine_snapshots 0
# HELP mongodb_mongod_rocksdb_write_ahead_log_bytes_per_second The number of bytes written per second by the Write-Ahead-Log in RocksDB
# TYPE mongodb_mongod_rocksdb_write_ahead_log_bytes_per_second gauge
mongodb_mongod_rocksdb_write_ahead_log_bytes_per_second 157286.4
# HELP mongodb_mongod_rocksdb_write_ahead_log_writes_per_sync The number of writes per Write-Ahead-Log sync in RocksDB
# TYPE mongodb_mongod_rocksdb_write_ahead_log_writes_per_sync gauge
mongodb_mongod_rocksdb_write_ahead_log_writes_per_sync 70538
# HELP mongodb_mongod_rocksdb_writes_per_batch The number of writes per batch in RocksDB
# TYPE mongodb_mongod_rocksdb_writes_per_batch gauge
mongodb_mongod_rocksdb_writes_per_batch 2.54476812288e+09
# HELP mongodb_mongod_rocksdb_writes_per_second The number of writes per second in RocksDB
# TYPE mongodb_mongod_rocksdb_writes_per_second gauge
mongodb_mongod_rocksdb_writes_per_second 157286.4
# HELP mongodb_mongod_storage_engine The storage engine used by the MongoDB instance
# TYPE mongodb_mongod_storage_engine counter
mongodb_mongod_storage_engine{engine="rocksdb"} 1


PMM uses Grafana to visualize metrics stored in Prometheus. Grafana uses a concept of “dashboards” to store the definitions of what it should visualize. PMM’s dashboards are hosted under the GitHub project: percona/grafana-dashboards.

Grafana can support multiple “data sources” for metrics. As Percona Monitoring and Management uses Prometheus for metric storage, this is the “data source” used in Grafana.

Adding Custom Graphs to Grafana

In this section I will create an example graph to help indicate the “efficiency” of queries on a mongod instance over the last 5 minutes. This is a good example to use because we plan to add this exact metric in an upcoming version of PMM.

This graph will rely on two metrics that are already provided to PMM via percona/mongodb_exporter since at least version 1.0.0 (“$host” = the hostname of a given node, explained later):

  1. mongodb_mongod_metrics_query_executor_total{instance="$host", state="scanned_objects"} – a metric representing the total number of documents scanned by the server.
  2. mongodb_mongod_metrics_document_total{instance="$host", state="returned"} – a metric representing the total number of documents returned by the server.

The graph will compute the change in these two metrics over five minutes and create a percentage/ratio of scanned vs. returned documents. A host with ten scanned documents and one document returned would have 10% efficiency and a host that scanned 100 documents to return 100 documents would have 100% efficiency (a 1:1 ratio).
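The arithmetic of that ratio can be sketched in Go (a toy illustration only; queryEfficiency is a hypothetical helper for this post, not part of PMM):

```go
package main

import "fmt"

// queryEfficiency returns the ratio of documents returned to documents
// scanned over some window, as a fraction between 0.0 and 1.0.
func queryEfficiency(returned, scanned float64) float64 {
	if scanned == 0 {
		return 1.0 // nothing scanned: treat as fully efficient
	}
	return returned / scanned
}

func main() {
	// 10 documents scanned to return 1 document: 10% efficient.
	fmt.Printf("%.0f%%\n", queryEfficiency(1, 10)*100) // → 10%
	// 100 scanned to return 100 documents (a 1:1 ratio): 100% efficient.
	fmt.Printf("%.0f%%\n", queryEfficiency(100, 100)*100) // → 100%
}
```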

Often you will encounter Prometheus metrics that are “total” metric counters that increment from the time the server is (re)started. Both of the metrics our new graph requires are incremented counters and thus need to be “scaled” or “trimmed” to only show the last five minutes of metrics (in this example), not the total since the server was (re)started.

Prometheus offers a very useful query function for incremented counters called “increase()”: it returns the amount a metric counter has increased over a given time period, making this trivial to do! It is also unaffected by counter “resets” due to server restarts, because increase() only considers increases in the counter.

The increase() syntax requires a time range to be specified before the closing round-bracket. In our case we will ask increase() to consider the last five minutes, which is expressed as “[5m]” at the end of the increase() function, as seen in the following example.

The full Prometheus query I will use to create the query efficiency graph divides the increase in documents returned by the increase in objects scanned:

    sum(increase(mongodb_mongod_metrics_document_total{instance="$host", state="returned"}[5m]))
    /
    sum(increase(mongodb_mongod_metrics_query_executor_total{instance="$host", state="scanned_objects"}[5m]))

Note: sum() is required around the increase() functions when dividing two numbers in Prometheus queries.
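The reset-tolerant behavior of increase() mentioned above can be sketched with a short Go simulation (a simplified model under my own assumptions; the real increase() also extrapolates samples to the edges of the time range):

```go
package main

import "fmt"

// counterIncrease approximates how Prometheus's increase() treats counter
// samples: it accumulates the growth between successive samples, and when
// a sample drops (a counter reset after a restart) it assumes the counter
// restarted from zero, so no negative spike is produced.
func counterIncrease(samples []float64) float64 {
	var total float64
	for i := 1; i < len(samples); i++ {
		delta := samples[i] - samples[i-1]
		if delta >= 0 {
			total += delta
		} else {
			// Reset detected: the new sample is the growth since restart.
			total += samples[i]
		}
	}
	return total
}

func main() {
	// Counter climbs 100 -> 150 (+50), the server restarts (reset to 0),
	// then climbs 0 -> 30 (+30): the increase is 80, never negative.
	fmt.Println(counterIncrease([]float64{100, 150, 0, 30})) // → 80
}
```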

Now, let’s make this a new graph! To do this you can create a new dashboard in PMM’s Grafana or add to an existing dashboard. In this example I’ll create a new dashboard with a single graph.

To add a new dashboard, press the Dashboard selector in PMM’s Grafana and select “Create New”:

This will create a new dashboard named “New Dashboard”.

Most of PMM’s graphing uses a variable named “$host” in place of a hostname/IP. You’ll notice “$host” was used in the “query efficiency” Prometheus query earlier. The variable is set using a Grafana feature called Templating.

Let’s add a “$host” variable to our new dashboard so we can change what host we graph without modifying our queries. First, press the gear icon at the top of the dashboard and select “Templating”:

Then press “New” to create a new Templating variable.

Set “Name” to be host, set “Data Source” to Prometheus and set “Query” to label_values(instance). Leave all other settings default:

Press “Add” to add the template variable, then save and reload the dashboard.

This will add a drop-down of unique hosts in Prometheus like this:

On the first dashboard row let’s add a new graph by opening the row menu on the far left of the first row and then selecting “Add Panel”:

Select “Graph” as the type. Click the title of the blank graph to open a menu, then press “Edit”:

This opens Grafana’s graph editor with an empty graph and some input boxes seen below.

Next, let’s add our “query efficiency” Prometheus query (earlier in this article) to the “Query” input field and add a legend name to “Legend format”:

Now we have some graph data, but the Y-axis and title don’t explain very much about the metric. What does “1.0K” on the Y-Axis mean?

As our metric is a ratio, let’s display the Y-axis as a percentage by switching to the “Axes” tab, then selecting “percent (0.0-1.0)” as the “Unit” selection for the “Left Y” axis, like so:

Next let’s set a graph title. To do this, go to the “General” tab of the graph editor and set the “Title” field:

And now we have a “Query Efficiency” graph with an accurate Y-axis and Title(!):

“Back to Dashboard” on the top-right will take you back to the dashboard view. Always remember to save your dashboards after making changes!

Adding Custom Metrics to percona/mongodb_exporter

For those familiar with the Go programming language, adding custom metrics to percona/mongodb_exporter is fairly straightforward. The percona/mongodb_exporter uses the Prometheus Go client to export metrics gathered from queries to MongoDB.

Adding a completely new metric to the exporter is unfortunately too open-ended to explain in a blog. Instead, I will cover how an existing metric is exported by percona/mongodb_exporter. The process for a new metric will be similar.

To follow our previous example, here is a simplified example of how the metric “mongodb_mongod_metrics_query_executor_total” is exported via percona/mongodb_exporter. The source of this metric is “db.serverStatus().metrics.queryExecutor” from the MongoDB shell’s perspective.

First, a new Prometheus metric is defined as a Go var:

  1. var (
    	metricsQueryExecutorTotal = prometheus.NewCounterVec(prometheus.CounterOpts{
    		Namespace: Namespace,
    		Name:      "metrics_query_executor_total",
    		Help:      "queryExecutor is a document that reports data from the query execution system",
    	}, []string{"state"})
    )

  2. A struct for marshaling the BSON response from MongoDB is defined, along with an “.Export()” function for the struct (.Export() is called on the metric structs):
    // QueryExecutorStats are the stats associated with a query execution.
    type QueryExecutorStats struct {
    	Scanned        float64 `bson:"scanned"`
    	ScannedObjects float64 `bson:"scannedObjects"`
    }

    // Export exports the query executor stats.
    func (queryExecutorStats *QueryExecutorStats) Export(ch chan<- prometheus.Metric) {
    	metricsQueryExecutorTotal.WithLabelValues("scanned").Set(queryExecutorStats.Scanned)
    	metricsQueryExecutorTotal.WithLabelValues("scanned_objects").Set(queryExecutorStats.ScannedObjects)
    }

    Notice that the float64 values unmarshaled from the BSON are used in the .Set() for the Prometheus metric. All Prometheus values must be float64.

  3. In this case, “QueryExecutorStats” is a sub-struct of a larger “MetricsStats” struct above it, which has its own “.Export()” function that calls .Export() on each non-nil sub-struct:

    // MetricsStats are all stats associated with metrics of the system
    type MetricsStats struct {
    	Document      *DocumentStats      `bson:"document"`
    	GetLastError  *GetLastErrorStats  `bson:"getLastError"`
    	Operation     *OperationStats     `bson:"operation"`
    	QueryExecutor *QueryExecutorStats `bson:"queryExecutor"`
    	Record        *RecordStats        `bson:"record"`
    	Repl          *ReplStats          `bson:"repl"`
    	Storage       *StorageStats       `bson:"storage"`
    	Cursor        *CursorStats        `bson:"cursor"`
    }

    // Export exports the metrics stats.
    func (metricsStats *MetricsStats) Export(ch chan<- prometheus.Metric) {
    	if metricsStats.Document != nil {
    		metricsStats.Document.Export(ch)
    	}
    	if metricsStats.GetLastError != nil {
    		metricsStats.GetLastError.Export(ch)
    	}
    	if metricsStats.Operation != nil {
    		metricsStats.Operation.Export(ch)
    	}
    	if metricsStats.QueryExecutor != nil {
    		metricsStats.QueryExecutor.Export(ch)
    	}
    }

  4. Finally, .Collect() and .Describe() (also required functions) are called on the metric to collect and describe it.
  5. Later on in this code, “MetricsStats” is passed the result of the query “db.serverStatus().metrics”.

For those unfamiliar with Go and/or unable to contribute new metrics to the project, we suggest you open a JIRA ticket for any feature requests for new metrics.


With the flexibility of the monitoring components of Percona Monitoring and Management, the sky is the limit on what can be done with database monitoring! Hopefully this blog gives you a taste of what is possible if you need to add a new graph, a new metric or both to Percona Monitoring and Management. Also, it is worth repeating that a large number of metrics gathered in Percona Monitoring and Management are not graphed. Perhaps what you’re looking for is already collected. See “http://<pmm-server>/prometheus” for more details on what metrics are stored in Prometheus.
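If you want to explore those stored metrics programmatically rather than through the web page, the standard Prometheus HTTP API should be reachable under the same path. A sketch (the hostname is a placeholder, and the API path assumes PMM proxies Prometheus under /prometheus; the actual query is commented out since it needs network access to your PMM server):

```shell
# Placeholder: substitute your PMM server's address.
PMM_SERVER="pmm-server.example.com"
# Prometheus API endpoint listing every metric name it has collected:
URL="http://${PMM_SERVER}/prometheus/api/v1/label/__name__/values"
echo "$URL"
# curl -s "$URL"   # run against a real PMM server to see the metric list
```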

We are always open to improving our dashboards, and we would love to hear about any custom graphs you create and how they help solve problems!


Percona Monitoring and Management (PMM) Graphs Explained: MongoDB MMAPv1

This post is part of the series of Percona’s MongoDB 3.4 bundle release blogs. In this blog post, I hope to cover some areas to watch with Percona Monitoring and Management (PMM) when running MMAPv1. The graph examples from this article are from the MMAPv1 dashboard that will be released for the first time in PMM 1.1.2.

The MMAPv1 storage engine has existed since the very beginning of MongoDB. MongoDB 3.0 added a pluggable storage engine API; before that, MMAPv1 was the only option. While MMAPv1 often offers good read performance, it has become notorious for its poor write performance and fragmentation at scale. This means there are many areas to watch for regarding performance and monitoring.

Percona Monitoring and Management (PMM)

Percona Monitoring and Management (PMM) is an open-source platform for managing and monitoring MySQL and MongoDB. It was developed by Percona on top of open-source technology. Behind the scenes, the graphing features this article covers use Prometheus (a popular time-series data store), Grafana (a popular visualization tool), mongodb_exporter (our MongoDB database metric exporter) plus other technologies to provide database and operating system metric graphs for your database instances.

(Beware of) MMAPv1

mmap() is a system-level call that causes the operating system kernel to map on-disk files into memory while they are being read and written by a program.

As mmap() is a core feature of the Unix/Linux operating system kernel (and not the MongoDB code base), I’ve always felt that calling MMAPv1 a “storage engine” is quite misleading, although it does allow for a simpler explanation. The distinction and drawbacks of the storage logic being in the operating system kernel vs. the actual database code (like most database storage engines) becomes very important when monitoring MMAPv1.

As Unix/Linux are general-purpose operating systems that can have many processes, users and use cases, they offer limited OS-level metrics for the activity, latency and performance of mmap(). Those metrics cover the entire operating system, not just the MongoDB processes.

mmap() uses memory from available OS-level buffers/caches for mapping the MMAPv1 data to RAM — memory that can be “stolen” away by any other operating system process that asks for it. As many deployments “micro-shard” MMAPv1 to reduce write locks, this statement can become exponentially more important. If 3 x MongoDB instances run on a single host, the kernel fights to cache and evict memory pages created by 3 x different instances with no priority or queuing, essentially at random, while creating contention. This causes inefficiencies and less-meaningful monitoring values.

When monitoring MMAPv1, you should consider MongoDB AND the operating system as one “component” more than most engines. Due to this, it is critical that a database host runs a single MongoDB instance with no other processes except database monitoring tools such as PMM’s client. This allows MongoDB to be the only user of the operating system filesystem cache that MMAPv1 relies on. This also makes OS-level memory metrics more accurate because MongoDB is the only user of memory. If you need to “micro-shard” instances, I recommend using containers (Docker or plain cgroups) or virtualization to separate your memory for each MongoDB instance, with just one MongoDB instance per container.


Locking

MMAPv1 has locks for both reads and writes. In the early days the lock was global only. Locking became per-database in v2.2 and per-collection in v3.0.

Locking is the leading cause of the performance issues we see on MMAPv1 systems, particularly write locking. To measure how much locking an MMAPv1 instance is waiting on, first we look at the “MMAPv1 Lock Ratio”:

Another important metric to watch is “MongoDB Lock Wait Time”, which breaks down the amount of time operations spend waiting on locks:

Three factors in combination influence locking:

  1. Data hotspots — if every query hits the same collection or database, locking increases
  2. Query performance — a lock is held for the duration of an operation; if that operation is slow, lock time increases
  3. Volume of queries — self-explanatory
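The lock ratio named above boils down to simple arithmetic: time spent holding (or waiting on) locks divided by elapsed sample time. A sketch with hypothetical numbers:

```shell
# Hypothetical sample: microseconds the write lock was held during a
# 10-second sampling window. A ratio approaching 1.0 means the instance
# spends most of its time locked.
locked_us=2500000
sample_us=10000000
awk -v l="$locked_us" -v s="$sample_us" 'BEGIN { printf "%.2f\n", l / s }'
```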

Page Faults

Page faults happen when MMAPv1 data is not available in the cache and needs to be fetched from disk. On systems with data smaller than memory, page faults usually only occur on reboot, or if the file system cache is dumped. On systems where data exceeds memory, this happens more frequently: MongoDB is asked for data not in memory.

How often this happens depends on how your application accesses your data. If it accesses new or frequently-queried data, it is more likely to be in memory. If it accesses old or infrequent data, more page faults occur.

If page faults suddenly start occurring, check to see if your data set has grown beyond the size of memory. You may be able to reduce your data set by removing fragmentation (explained later).


Journaling

As MMAPv1 eventually flushes changes to disk in batches, journaling is essential for running MongoDB with any real data integrity guarantees. As well as being included in the lock statistic graphs mentioned above, there are some good metrics for journaling (which is a heavy consumer of disk writes).

Here we have “MMAPv1 Journal Write Activity”, showing the data rates of journaling (max 19MB/sec):

“MMAPv1 Journal Commit Activity” measures the commits to the journal ops/second:

A very useful metric for write query performance is “MMAPv1 Journaling Time” (there is another graph with 99th percentile times):

This is important to watch, as write operations need to wait for a journal commit. In the above example, “write_to_journal” and “write_to_data_files” are the main metrics I tend to look at. “write_to_journal” is the rate of changes being written to the journal, and “write_to_data_files” is the rate that changes are written to on-disk data.

If you see very high journal write times, you may need faster disks or, in sharding scenarios, more shards: adding shards spreads the disk write load across more hosts.

Background Flushing

“MMAPv1 Background Flushing Time” graphs the background operation that calls flushes to disk:

This process does not block the database, but does cause more disk activity.


Fragmentation

Due to the way MMAPv1 writes to disk, it creates a high rate of fragmentation (or holes) in its data files. Fragmentation slows down scan operations, wastes some filesystem cache memory and can use much more disk space than there is actual data. On many systems I’ve seen, the size of the MMAPv1 data files on disk is over twice the true data size.

Currently, our Percona Monitoring and Management MMAPv1 support does not track this, but we plan to add it in the future.

To track it manually, look at the output of the “.stats()” command for a given collection (replace “sbtest1” with your collection name):

> 1 - ( db.sbtest1.stats().size / db.sbtest1.stats().storageSize )

Here we can see this collection is about 14% fragmented on disk. To fix fragmentation, the most common fix is dropping and recreating the collection using a backup. Many just remove a replication member, clear the data and let it do a new replication initial sync.
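The same arithmetic as the shell one-liner above, with hypothetical stats() values, shows where a ~14% figure comes from:

```shell
# Hypothetical values mirroring db.sbtest1.stats(): "size" is the logical
# data size in bytes, "storageSize" the on-disk size. The result is the
# fraction of the on-disk file not occupied by live data (here 0.14, i.e. 14%).
size=860000000
storageSize=1000000000
awk -v s="$size" -v t="$storageSize" 'BEGIN { printf "%.2f\n", 1 - s / t }'
```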

Operating System Memory

In PMM we have graphed the operating system cached memory as it acts as the primary cache for MMAPv1:

For the most part, “Cached” shows the amount of cached MMAPv1 data (assuming the host is only running MongoDB).

We also graph the dirty memory pages:

It is important that dirty pages do not exceed the hard dirty page limit (which causes pauses). It is also important that dirty pages don’t accumulate (which wastes cache memory). The “soft” dirty page limit is the limit that starts dirty page cleanup without pausing.
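On Linux, the soft and hard dirty-page thresholds discussed above are exposed as kernel tunables (standard /proc paths; the values are percentages of reclaimable memory):

```shell
# "Soft" limit: background writeback of dirty pages starts at this percentage.
soft=$(cat /proc/sys/vm/dirty_background_ratio)
# "Hard" limit: writers are paused (stalled) once dirty pages reach this percentage.
hard=$(cat /proc/sys/vm/dirty_ratio)
echo "soft=${soft}% hard=${hard}%"
```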

On this host, you could probably lower the soft limit to clean up memory faster, assuming the increase in disk activity is acceptable. This topic is covered in this post:

What’s Missing?

As mentioned earlier, fragmentation rates are missing for MMAPv1 (this would be a useful addition). Due to the limited nature of the metrics offered for MMAPv1, PMM probably won’t provide the same level of graphs for MMAPv1 compared to what we provide for WiredTiger or RocksDB. There will likely be fewer additions to the graphing capabilities going forward.

If you are using a highly concurrent system, we highly recommend you upgrade to WiredTiger or RocksDB (both also covered in this monitoring series). These engines provide several solutions to MMAPv1 headaches: document-level locking, built-in compression, checkpointing that causes near-zero fragmentation on disk and much-improved visibility for monitoring. We just released Percona Server for MongoDB 3.4, and it provides many exciting features (including these engines).

Look out for more monitoring posts from this series!


Installing Percona Monitoring and Management (PMM) for the First Time

This post is another in the series on Percona’s MongoDB 3.4 bundle release. This post is meant to walk a prospective user through the benefits of Percona Monitoring and Management (PMM), how it’s architected and the simple install process. By the end of this post, you should have a good idea of what PMM is, where it can add value in your environment and how you can get PMM going quickly.

Percona Monitoring and Management (PMM) is Percona’s open-source tool for monitoring and alerting on database performance and the components that contribute to it. PMM monitors MySQL (Percona Server and MySQL CE), Amazon RDS/Aurora, MongoDB (Percona Server and MongoDB CE), Percona XtraDB/Galera Cluster, ProxySQL, and Linux.

What is it?

Percona Monitoring and Management is an amalgamation of exciting, best-in-class, open-source tools and Percona “engineering wizardry,” designed to make it easier to monitor and manage your environment. The real value to our users is the amount of time we’ve spent integrating the tools, plus the pre-built dashboards we’ve constructed that leverage our ten years of performance optimization experience. What you get is a tool that is ready to go out of the box, and installs in minutes. If you’re still not convinced, like ALL Percona software it’s completely FREE!

Sound good? I can hear you nodding your head. Let’s take a quick look at the architecture.

What’s it made of?

PMM, at a high-level, is made up of two basic components: the client and the server. The PMM Client is installed on the database servers themselves and is used to collect metrics. The client contains technology specific exporters (which collect and export data), and an “admin interface” (which makes the management of the PMM platform very simple). The PMM server is a “pre-integrated unit” (Docker, VM or AWS AMI) that contains four components that gather the metrics from the exporters on the PMM client(s). The PMM server contains Consul, Grafana, Prometheus and a Query Analytics Engine that Percona has developed. Here is a graphic from the architecture section of our documentation. In order to keep this post to a manageable length, please refer to that page if you’d like a more “in-depth” explanation.

How do I use it?

PMM is very easy to access once it has been installed (more on the install process below). You will simply open up the web browser of your choice and connect to http://<ip_address_of_PMM_server>. That takes you to the PMM landing page, where you can access all of PMM’s tools. If you’d like to get a look into the user experience, we’ve set up a great demo site so you can easily test it out.

Where should I use it?

There’s a good chance that you already have a monitoring/alerting platform for your production workloads. If not, you should set one up immediately and start analyzing trends in your environment. If you’re confident in your production monitoring solution, there is still a use for PMM in an often overlooked area: development and testing.

When speaking with users, we often hear that their development and test environments run their most demanding workloads. This is often due to stress testing and benchmarking. The goal of these workloads is usually to break something. This allows you to set expectations for normal, and thus abnormal, behavior in your production environment. Once you have a good idea of what’s “normal” and the critical factors involved, you can alert around those parameters to identify “abnormal” patterns before they cause user issues in production. The reason that monitoring is critical in your dev/test environment(s) is that you want to easily spot inflection points in your workload, which signal impending disaster. Dashboards are the easiest way for humans to consume and analyze this data.

Are you sold? Let’s get to the easiest part: installation.

How do you install it?

PMM is very easy to install and configure for two main reasons. The first is that the components (mentioned above) take some time to install, so we spent the time to integrate everything and ship it as a unit: one server install and a client install per host. The second is that we’re targeting customers looking to monitor MySQL and MongoDB installations for high-availability and performance. The fact that it’s a targeted solution makes pre-configuring it to monitor for best practices much easier. I believe we’ve all seen a particular solution that tries to do a little of everything, and thus actually does no particular thing well. This is the type of tool that we DO NOT want PMM to be. Now, onto the installation procedure.

There are four basic steps to get PMM monitoring your infrastructure. I do not want to recreate the Deployment Guide in order to maintain the future relevancy of this post. However, I’ll link to the relevant sections of the documentation so you can cut to the chase. Also, underneath each step, I’ll list some key takeaways that will save you time now and in the future.

  1. Install the integrated PMM server in the flavor of your choice (Docker, VM or AWS AMI)
    1. Percona recommends Docker to deploy PMM server as of v1.1
      1. As of right now, using Docker will make the PMM server upgrade experience seamless.
      2. Using the default version of Docker from your package manager may cause unexpected behavior. We recommend using the latest stable version from Docker’s repositories (instructions from Docker).
    2. PMM server AMI and VM are “experimental” in PMM v1.1
    3. When you open the “Metrics Monitor” for the first time, it will ask for credentials (user: admin pwd: admin).
  2. Install the PMM client on every database instance that you want to monitor.
    1. Install with your package manager for easier upgrades when a new version of PMM is released.
  3. Connect the PMM client to the PMM Server.
    1. Think of this step as sending configuration information from the client to the server. This means you are telling the client the address of the PMM server, not the other way around.
  4. Start data collection services on the PMM client.
    1. Collection services are enabled per database technology (MySQL, MongoDB, ProxySQL, etc.) on each database host.
    2. Make sure to set permissions for the PMM client to monitor the database; otherwise you will see an error like: Cannot connect to MySQL: Error 1045: Access denied for user ‘jon’@’localhost’ (using password: NO)
      1. Set proper credentials using this syntax: sudo pmm-admin add <service_type> --user xxxx --password xxxx
    3. There’s good information about PMM client options in the “Managing PMM Client” section of the documentation for advanced configurations/troubleshooting.

What’s next?

That’s really up to you, and what makes sense for your needs. However, here are a few suggestions to get the most out of PMM.

  1. Set up alerting in Grafana on the PMM server. This is still an experimental function in Grafana, but it works. I’d start with Barrett Chambers’ post on setting up email alerting, and refine it with Peter Zaitsev’s post.
  2. Set up more hosts to test the full functionality of PMM. We have completely free, high-performance versions of MySQL, MongoDB, Percona XtraDB Cluster (PXC) and ProxySQL (for MySQL proxy/load balancing).
  3. Start load testing the database with benchmarking tools to build your troubleshooting skills. Try to break something to learn what troubling trends look like. When you find them, set up alerts to give you enough time to fix them.

Percona Monitoring and Management (PMM) Graphs Explained: MongoDB with RocksDB

This post is part of the series of Percona’s MongoDB 3.4 bundle release blogs. In mid-2016, Percona Monitoring and Management (PMM) added support for RocksDB with MongoDB, also known as “MongoRocks.” In this blog, we will go over the Percona Monitoring and Management (PMM) 1.1.0 version of the MongoDB RocksDB dashboard, how PMM is useful in the day-to-day monitoring of MongoDB and what we plan to add and extend.

Percona Monitoring and Management (PMM)

Percona Monitoring and Management (PMM) is an open-source platform for managing and monitoring MySQL and MongoDB, developed by Percona on top of open-source technology. Behind the scenes, the graphing features this article covers use Prometheus (a popular time-series data store), Grafana (a popular visualization tool), mongodb_exporter (our MongoDB database metric exporter) plus other technologies to provide database and operating system metric graphs for your database instances.

The mongodb_exporter tool, which provides our monitoring platform with MongoDB metrics, uses RocksDB status output and optional counters to provide detailed insight into RocksDB performance. Percona’s MongoDB 3.4 release enables RocksDB’s optional counters by default. On 3.2, however, you must set the following in /etc/mongod.conf to enable this:

storage.rocksdb.counters: true


This article shows a live demo of our MongoDB RocksDB graphs:


RocksDB is a storage engine available since version 3.2 in Percona’s fork of MongoDB: Percona Server for MongoDB.

The first thing to know about monitoring RocksDB is compaction. RocksDB stores its data on disk in several tiered levels of immutable files. Changes are first written to the top RocksDB level (Level0). When Level0 fills, internal compactions merge the changes down to the next level. Each level above the last essentially holds deltas to the resting data set that are eventually merged down to the bottom level.

We can see the effect of the tiered levels in our “RocksDB Compaction Level Size” graph, which reflects the size of each level in RocksDB on-disk:


Note that most of the database data is in the final level “L6” (Level 6). Levels L0, L4 and L5 hold relatively smaller amounts of data changes. These get merged down to L6 via compaction.

More about this design is explained in detail by the developers of MongoRocks, here:

RocksDB Compaction

Most importantly, RocksDB compactions try to happen in the background. They generally do not “block” the database. However, the additional resource usage of compactions can potentially cause some spikes in latency, making compaction important to watch. When compactions occur, between levels L4 and L5 for example, L4 and L5 are read and merged with the result being written out as a new L5.

The memtable in MongoRocks is a 64MB in-memory table. Changes initially get written to the memtable. Reads check the memtable to see if there are unwritten changes to consider. When the memtable has filled to 100%, RocksDB performs a compaction of the memtable data to Level0, the first on-disk level in RocksDB.

In PMM we have added a single-stat panel for the percentage of the memtable usage. This is very useful in indicating when you can expect a memtable-to-level0 compaction to occur:

Above we can see the memtable is 125% used, which means RocksDB is late to finish (or start) a compaction due to high activity. Shortly after taking this screenshot above, however, our test system began a compaction of the memtable and this can be seen at the drop in active memtable entries below:
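The single-stat percentage is simple arithmetic; a sketch with hypothetical numbers shows how usage can read 125% when compaction falls behind:

```shell
# Hypothetical: megabytes buffered in the memtable vs. the 64MB memtable
# size. A value over 100 means writes arrived faster than the memtable
# could be compacted down to Level0.
used_mb=80
memtable_mb=64
awk -v u="$used_mb" -v m="$memtable_mb" 'BEGIN { printf "%.0f\n", u / m * 100 }'
```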


Following this compaction further through PMM’s graphs, we can see from the (very useful) “RocksDB Compaction Time” graph that this compaction took 5 seconds.

In the graph above, I have singled-out “L0” to show Level0’s compaction time. However, any level can be selected either per-graph (by clicking on the legend-item) or dashboard-wide (by using the RocksDB Level drop-down at the top of the page).

In terms of throughput, we can see from our “RocksDB Write Activity” graph (Read Activity is also graphed) that this compaction required about 33MBps of disk write activity:

On top of additional resource consumption such as the write activity above, compactions cause caches to get cleared. One example is the OS cache due to new level files being written. These factors can cause some increases to read latencies, demonstrated in this example below by the bump in L4 read latency (top graph) caused by the L4 compaction (bottom graph):

This pattern above is one area to check if you see latency spikes in RocksDB.

RocksDB Stalls

When RocksDB is unable to perform compaction promptly, it uses a feature called “stalls” to try and slow down the amount of data coming into the engine. In my experience, stalls almost always mean something below RocksDB is not up to the task (likely the storage system).

Here is the “RocksDB Stall Time” graph of a host experiencing frequent stalls:

PMM can graph the different types of RocksDB stalls in the “RocksDB Stalls” graph. In our case here, we have 0.3-0.5 stalls per second due to “level0_slowdown” and “level0_slowdown_with_compaction.” This happens when Level0 stalls the engine due to slow compaction performance below its level.

Another metric reflecting the poor compaction performance is the pending compactions in “RocksDB Pending Operations”:

As I mentioned earlier, this almost always means something below RocksDB itself cannot keep up. In the top-right of PMM, we have OS-level metrics in a drop-down; I recommend you look at “Disk Performance” in these scenarios:

On the “Disk Performance” dashboard you can see the “sda” disk has an average write time of 212ms, and a max of 1100ms (1.1 seconds). This is fairly slow.

Further, on the same dashboard I can see the CPU is waiting on disk I/O 98.70% of the time on average. This explains why RocksDB needs to stall to hold back some of the load!

The disks seem too busy to keep up! The “Mongod – Document Activity” graph explains the cause of the high disk usage: 10,000-60,000 inserts per second:

Here we can draw the conclusion that this volume of inserts on this system configuration causes some stalling in RocksDB.

RocksDB Block Cache

The RocksDB Block Cache is the in-heap cache RocksDB uses to cache uncompressed pages. Generally, deployments benefit from dedicating most of their memory to the Linux file system cache vs. the RocksDB Block Cache. We recommend using only 20-30% of the host RAM for block cache.
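As a quick sizing sketch (the host RAM figure below is hypothetical), the 20-30% guideline translates to, for example:

```shell
# Hypothetical 64GB host: dedicate ~25% of RAM to the RocksDB block cache,
# leaving the remainder for the Linux filesystem cache.
total_ram_mb=65536
awk -v r="$total_ram_mb" 'BEGIN { printf "%d\n", r * 0.25 }'
```

That is, roughly a 16GB block cache on a 64GB host.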

PMM can take away some of the guesswork with the “RocksDB Block Cache Hit Ratio” graph, showing the efficiency of the block cache:

It is difficult to define a “good” and “bad” number for this metric, as the number varies for every deployment. However, one important thing to look for is significant changes in this graph. In this example, the Block Cache has a page in cache 3000 times for every 1 time it does not.
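In counter terms, a hit ratio like the one above boils down to hits per miss (the counts below are hypothetical):

```shell
# Hypothetical block cache counters: hits divided by misses gives the
# "3000 pages in cache for every 1 miss" style ratio shown in the graph.
hits=3000000
misses=1000
awk -v h="$hits" -v m="$misses" 'BEGIN { printf "%.0f\n", h / m }'
```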

If you wanted to test increasing your block cache, this graph becomes very useful. If you increase your block cache and do not see an improvement in the hit ratio after a lengthy period of testing, this usually means more block cache memory is not necessary.

RocksDB Read Latency Graphs

PMM graphs Read Latency metrics for RocksDB in several different graphs, one dedicated to Level0:

And three other graphs display Average, 99th Percentile and Maximum latencies for each RocksDB level. Here is an example from the 99th Percentile latency metrics:

Coming Soon

Percona Monitoring and Management needs to add some more metrics that explain the performance of the engine. The rate of deletes/tombstones in the system affects RocksDB’s performance. Currently, this metric is not something our system can easily gather like other engine metrics. Percona Monitoring and Management can’t easily graph the efficiency of the Bloom filter yet, either. These are currently open feature requests to the MongoRocks (and likely RocksDB) team(s) to add in future versions.

Percona’s release of Percona Server for MongoDB 3.4 includes a new, improved version of MongoRocks and RocksDB. More is available in the release notes!


Percona Monitoring and Management (PMM) Upgrade Guide

This post is part of a series of Percona’s MongoDB 3.4 bundle release blogs. The purpose of this blog post is to demonstrate current best-practices for an in-place Percona Monitoring and Management (PMM) upgrade. Following this method allows you to retain data previously collected by PMM in your MySQL or MongoDB environment, while upgrading to the latest version.

Step 1: Housekeeping

Before beginning this process, I recommend that you use a package manager that installs directly from Percona’s official software repository. The install instructions vary by distro, but for Ubuntu users the commands are:

wget$(lsb_release -sc)_all.deb

sudo dpkg -i percona-release_0.1-4.$(lsb_release -sc)_all.deb

Step 2: PMM Server Upgrade

Now that we have ensured we’re using Percona’s official software repository, we can continue with the upgrade. To check which version of PMM server is running, execute the following command on your PMM server host:

docker ps

This command shows a list of all running Docker containers. The version of PMM server you are running is found in the image description.


Once you’ve verified you are on an older version, it’s time to upgrade!

The first step is to stop and remove your docker pmm-server container with the following command:

docker stop pmm-server && docker rm pmm-server

Please note that this command may take several seconds to complete.

The next step is to create and run the image with the new version tag. In this case, we are installing version 1.1.0. Please make sure to verify the correct image name in the install instructions.

Run the command below to create and run the new image.

docker run -d \
  -p 80:80 \
  --volumes-from pmm-data \
  --name pmm-server \
  --restart always \
  percona/pmm-server:1.1.0

We can confirm our new image is running with the following command:

docker ps


As you can see, the latest version of PMM server is installed. The final step in the process is to update the PMM client on each host to be monitored.

Step 3: PMM Client Upgrade

The GA version of Percona Monitoring and Management supports in-place upgrades. Instructions can be found in our documentation. On the client side, update the local apt cache and upgrade to the new version of pmm-client by running the following commands:

apt-get update


apt-get install pmm-client

Congrats! We’ve successfully upgraded to the latest PMM version. As you can tell from the graph below, there is a slight gap in our polling data due to the downtime necessary to upgrade the version. However, we have verified that the data that existed prior to the upgrade is still available and new data is being gathered.



I hope this blog post has given you the confidence to do an in-place Percona Monitoring and Management upgrade. As always, please submit your feedback on our forums with regards to any PMM-related suggestions or questions. Our goal is to make PMM the best-available open-source MySQL and MongoDB monitoring tool.
