Sep
16
2020
--

Grafana 7 Arrives with Percona Monitoring and Management 2.10.0

As you may know, Percona Monitoring and Management (PMM) is based on Grafana, and all our frontend applications are either Grafana plugins or dashboards.

We’ve just released PMM version 2.10, and it includes a lot of improvements and new functionality. In this article, I would like to highlight some of them. Although this is just a version update, several of these changes are quite important for users and developers.

The biggest change is the upgrade to a new major release of Grafana. Why is this important? Because we interact heavily with the underlying mechanisms of Grafana. Consequently, we’ve made a lot of changes related to styling, the platform core, and infrastructure. The downside is that such changes are sometimes incompatible with previously used functionality, and we need to track and mitigate these incompatibilities.

Consistent UI

Some time ago, Grafana switched to developing its own library of frontend components. Version 7 ships an updated UI kit, which brings a more complete look and feel; overall, the interface is much neater, better looking, and more universal. Along with the transition to version 7, we also started using the new UI kit’s interface elements in many places. The interface has become noticeably more consistent and cohesive, presenting a lighter and cleaner style.

Better React Support

Grafana has been developed with both Angular and, more recently, ReactJS, and for some time it has supported both. Version 7 marks a turning point: development on ReactJS has come to the fore. We switched to ReactJS long ago, so these changes were very useful for us.

What this means for PMM: With better support for ReactJS plugins, we can develop faster and remove some workarounds.

API Changes

Significant changes were made to how plugins interact with Grafana. Many of the methods that allowed us to interact with low-level functions are gone. For example, there is no longer the ability to redefine or add your own listeners for Grafana core events, such as time range changes.

As we used such functionality for different purposes, we were faced with some complications:

  • In QAN we have filters that need to be stored as Grafana variables and be synchronized with panel states.
  • We have complex interactions with the time range and refresh controls.
  • Many other things, such as open tabs in details, added columns, and so on. All of these are part of the URL. This data must be stored somewhere because it is occasionally needed in other dashboards; the sketch below shows the general idea.
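
To give a feel for the direction we took, here is a minimal sketch, in TypeScript, of how a React plugin can keep such state in the URL using Grafana’s location service from @grafana/runtime; the function and parameter names are illustrative, not PMM’s actual code:

import { getLocationSrv } from '@grafana/runtime';

// Push a piece of UI state (e.g. a QAN filter) into the URL query string,
// so it survives reloads and can be read from other dashboards.
export function syncStateToUrl(key: string, value: string) {
  getLocationSrv().update({
    query: { [key]: value },
    partial: true, // merge with the existing query string
    replace: true, // avoid flooding the browser history
  });
}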

Use Grafana Themes

In conjunction with this update, we converted almost every part of PMM to use the Grafana themes factory. Of course, some old CSS code remains, but the conversion work is ongoing and will be finished soon.

The Grafana themes factory approach gives us an easier way to manage styles and make interfaces more consistent, which corresponds more closely to how Grafana itself works.
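
As a rough illustration of the pattern (a sketch, not PMM’s actual code), a themed component in a Grafana 7 React plugin looks something like this:

import React from 'react';
import { css } from 'emotion';
import { GrafanaTheme } from '@grafana/data';
import { stylesFactory, useTheme } from '@grafana/ui';

// Styles are derived from the active theme, so the component
// automatically follows Grafana's light/dark mode.
const getStyles = stylesFactory((theme: GrafanaTheme) => ({
  wrapper: css`
    background: ${theme.colors.bg2};
    color: ${theme.colors.text};
    padding: ${theme.spacing.md};
  `,
}));

export const ThemedPanel = () => {
  const theme = useTheme();
  const styles = getStyles(theme);
  return <div className={styles.wrapper}>Consistently styled content</div>;
};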

Stability Improvements

Grafana runs a mix of Angular and ReactJS, and in recent years there has been a migration to pure ReactJS. Now, the majority of Grafana system components are implemented in ReactJS. A lot of legacy code has been cleaned up, reducing the number of errors and making the work easier.

Infrastructure Enhancements

We use the grafana-toolkit to build the front-end of PMM. As we use it within a CI process, it is crucially important that it works as quickly and smoothly as possible. Grafana version 7 also brought these advantages:

  1. Improved TypeScript support
  2. Faster build time
  3. Fewer bugs

Download PMM 2.10.0 Today!

Also, learn more about the new Percona Customer Portal rolling out starting with the 2.10.0 release of Percona Monitoring and Management.

Oct
24
2019
--

Grafana Labs nabs $24M Series A for open source-based data analytics stack

Grafana Labs, the commercial company built to support the open-source Grafana project, announced a healthy $24 million Series A investment today. Lightspeed Venture Partners led the round with participation from Lead Edge Capital.

Company CEO and co-founder Raj Dutt says the startup started life as a way to offer a commercial layer on top of the open-source Grafana tool, but it has expanded and now supports other projects, including Loki, an open-source log aggregation tool inspired by Prometheus, which the company developed last year.

All of this in the service of connecting to data sources and monitoring data. “Grafana has always been about connecting data together no matter where it lives, whether it’s in a proprietary database, on-prem database or cloud database. There are over 42 data sources that Grafana connects together,” Dutt explained.

But the company has expanded far beyond that. As it describes the product set, “Our products have begun to evolve to unify into a single offering: the world’s first composable open-source observability platform for metrics, logs and traces. Centered around Grafana.” This is exactly where other monitoring and logging tools like Elastic, New Relic and Splunk have been heading this year. “Observability” is the term often used to describe these combined capabilities of metrics, logging and tracing.

Grafana Labs is the commercial arm of the open-source projects, and offers a couple of products built on top of these tools. First of all it has Grafana Enterprise, a package that includes enterprise-focused data connectors, enhanced authentication and security and enterprise-class support over and above what the open-source Grafana tool offers.

The company also offers a SaaS version of the Grafana tool stack, which is fully managed and takes away a bunch of the headaches of trying to download raw open-source code, install it, manage it and deal with updates and patches. In the SaaS version, all of that is taken care of for the customer for a monthly fee.

Dutt says the startup took just $4 million in external investment over the first five years, and has been able to build a business with 100 employees and 500 customers. He is particularly proud of the fact that the company is cash flow break-even at this point.

Grafana Labs decided the time was right to take this hefty investment and accelerate the startup’s growth, something they couldn’t really do without a big cash infusion. “We’ve seen this really virtuous cycle going with value creation in the community through these open-source projects that builds mind share, and that can translate into building a sustainable business. So we really want to accelerate that, and that’s the main reason behind the raise.”

May
29
2019
--

Logz.io lands $52M to keep growing open source-based logging tools

Logz.io announced a $52 million Series D investment today. The round was led by General Catalyst.

Other investors participating in the round included OpenView Ventures, 83North, Giza Venture Capital, Vintage Investment Partners, Greenspring Associates and Next47. Today’s investment brings the total raised to nearly $100 million, according to Crunchbase data.

Logz.io is a company built on top of the open-source tools Elasticsearch, Logstash and Kibana (collectively known by the acronym ELK) and Grafana. It’s taking those tools in a typical open-source business approach, packaging them up and offering them as a service. This approach enables large organizations to take advantage of these tools without having to deal with the raw open-source projects.

The company’s solutions intelligently scan logs looking for anomalies. When it finds them, it surfaces the problem and informs IT or security, depending on the scenario, using a tool like PagerDuty. This area of the market has been dominated in recent years by vendors like Splunk and Sumo Logic, but company founder and CEO Tomer Levy saw a chance to disrupt that space by packaging a set of open-source logging tools that were rapidly increasing in popularity. They believed they could build on that growing popularity, while solving a pain point the founders had actually experienced in previous positions, which is always a good starting point for a startup idea.

Screenshot: Logz.io

“We saw that the majority of the market is actually using open source. So we said, we want to solve this problem, a problem we have faced in the past and didn’t have a solution. What we’re going to do is we’re going to provide you with an easy-to-use cloud service that is offering an open-source compatible solution,” Levy explained. In other words, they wanted to build on that open-source idea, but offer it in a form that was easier to consume.

Larry Bohn, who is leading the investment for General Catalyst, says that his firm liked the idea of a company building on top of open source because it provides a built-in community of developers to drive the startup’s growth — and it appears to be working. “The numbers here were staggering in terms of how quickly people were adopting this and how quickly it was growing. It was very clear to us that the company was enjoying great success without much of a commercial orientation,” Bohn explained.

In fact, Logz.io already has 700 customers, including large names like Schneider Electric, The Economist and British Airways. The company has 175 employees today, but Levy says they expect to grow to 250 by the end of this year, as they use this money to accelerate their overall growth.

Mar
12
2019
--

PMM’s Custom Queries in Action: Adding a Graph for InnoDB mutex waits

One of the great things about Percona Monitoring and Management (PMM) is its flexibility. For example, you can go beyond the built-in exporters to collect data. One approach is to use textfile collectors, as explained in Extended Metrics for Percona Monitoring and Management without modifying the Code. Another method, the subject of this post, is to use custom queries.

While working on a customer’s contention issue I wanted to check the behaviour of InnoDB Mutexes over time. Naturally, I went straight to PMM and didn’t find a graph suitable for my needs. No graph, no problem! Luckily anyone can enhance PMM. So here’s how I made the graph I needed.

The final result will look like this:

Custom Queries

What is it?

Starting with version 1.15.0, PMM gives users the ability to take a SQL SELECT statement and turn the result set into a metric series in PMM. That is what custom queries are.

How do I enable that feature?

This feature is ON by default; you only need to edit the configuration file, which uses YAML syntax.

Where is the configuration file located?

By default, the configuration file is located at /usr/local/percona/pmm-client/queries-mysqld.yml. You can change it when adding MySQL metrics via pmm-admin:

pmm-admin add mysql:metrics ... -- --queries-file-name=/usr/local/percona/pmm-client/query.yml

How often is data being collected?

The queries are executed at the LOW RESOLUTION level, which by default is every 60 seconds.

InnoDB Mutex monitoring

The method used to gather mutex status is querying the Performance Schema, as explained here: https://dev.mysql.com/doc/refman/5.7/en/monitor-innodb-mutex-waits-performance-schema.html, but I intentionally removed the SUM_TIMER_WAIT > 0 condition, so the query used looks like this:

SELECT
EVENT_NAME, COUNT_STAR, SUM_TIMER_WAIT
FROM performance_schema.events_waits_summary_global_by_event_name
WHERE EVENT_NAME LIKE 'wait/synch/mutex/innodb/%'

For this query to return data, some requirements need to be met:

  • The most important one: the Performance Schema needs to be enabled
  • Consumers for “events_waits” need to be enabled
  • Instruments for ‘wait/synch/mutex/innodb’ need to be enabled

If the Performance Schema is enabled, the other two requirements are met by running these two queries:

update performance_schema.setup_instruments set enabled='YES' where name like 'wait/synch/mutex/innodb%';
update performance_schema.setup_consumers set enabled='YES' where name like 'events_waits%';
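
To double-check that the changes took effect, you can inspect the setup tables; both should now report ENABLED = YES (shown here for the instruments, the consumers table works the same way):

SELECT NAME, ENABLED
FROM performance_schema.setup_instruments
WHERE NAME LIKE 'wait/synch/mutex/innodb%'
LIMIT 10;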

YAML Configuration File

This is where the magic happens. The YAML syntax is covered in depth in the documentation: https://www.percona.com/doc/percona-monitoring-and-management/conf-mysql.html#pmm-conf-mysql-executing-custom-queries

The one used for this issue is:

---
mysql_global_status_innodb_mutex:
    query: "SELECT EVENT_NAME, COUNT_STAR, SUM_TIMER_WAIT FROM performance_schema.events_waits_summary_global_by_event_name WHERE EVENT_NAME LIKE 'wait/synch/mutex/innodb/%'"
    metrics:
      - EVENT_NAME:
          usage: "LABEL"
          description: "Name of the mutex"
      - COUNT_STAR:
          usage: "COUNTER"
          description: "Number of calls"
      - SUM_TIMER_WAIT:
          usage: "GAUGE"
          description: "Duration"

The key info is:

  • The metric name is mysql_global_status_innodb_mutex
  • Since EVENT_NAME is used as a label, it will be possible to have values per event

Remember that this should go in the queries-mysqld.yml file, full path /usr/local/percona/pmm-client/queries-mysqld.yml, on the database node.

Once that is done, those metrics will start to be available in Prometheus. Now we have a graph to build!
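
A quick way to confirm the new series are being scraped is an instant query in the Prometheus UI, for example (db1 stands in for one of your monitored hosts):

mysql_global_status_innodb_mutex_COUNT_STAR{instance="db1"}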

Creating the graph in Grafana

Before jumping into Grafana to add the graph, we need a proper Prometheus query (a.k.a. PromQL). I came up with these two (one for COUNT_STAR, one for SUM_TIMER_WAIT):

topk(5, label_replace(rate(mysql_global_status_innodb_mutex_COUNT_STAR{instance="$host"}[$interval]), "mutex", "$2", "EVENT_NAME", "(.*)/(.*)" ) or label_replace(irate(mysql_global_status_innodb_mutex_COUNT_STAR{instance="$host"}[5m]), "mutex", "$2", "EVENT_NAME", "(.*)/(.*)" ))

and

topk(5, label_replace(rate(mysql_global_status_innodb_mutex_SUM_TIMER_WAIT{instance="$host"}[$interval]), "mutex", "$2", "EVENT_NAME", "(.*)/(.*)" ) or label_replace(irate(mysql_global_status_innodb_mutex_SUM_TIMER_WAIT{instance="$host"}[5m]), "mutex", "$2", "EVENT_NAME", "(.*)/(.*)" ))

These queries essentially say: return the rate of each mutex event for a specific host, and apply a regex that keeps only the name of the event, discarding everything before the last slash character.

Once we are good with our PromQL queries, we can go and add the graph.

Finally, I got the graph that I needed with very little effort.

The dashboard is also published on the Grafana Labs Community dashboards site.

Summary

PMM’s collection of graphs and dashboards is quite complete, but it is natural that some specific metrics are not covered. For those cases, you can count on the flexibility and ease of use of PMM to collect metrics and create custom graphs. So go ahead, embrace PMM, customize it, make it yours!

The JSON for this graph, so it can be imported easily, is:

{
  "aliasColors": {},
  "bars": false,
  "dashLength": 10,
  "dashes": false,
  "datasource": "Prometheus",
  "fill": 0,
  "gridPos": {
    "h": 18,
    "w": 24,
    "x": 0,
    "y": 72
  },
  "id": null,
  "legend": {
    "alignAsTable": true,
    "avg": true,
    "current": false,
    "max": true,
    "min": true,
    "rightSide": false,
    "show": true,
    "sideWidth": 0,
    "sort": "avg",
    "sortDesc": true,
    "total": false,
    "values": true
  },
  "lines": true,
  "linewidth": 2,
  "links": [],
  "nullPointMode": "null",
  "percentage": false,
  "pointradius": 0.5,
  "points": false,
  "renderer": "flot",
  "seriesOverrides": [
    {
      "alias": "/Timer Wait/i",
      "yaxis": 2
    }
  ],
  "spaceLength": 10,
  "stack": false,
  "steppedLine": false,
  "targets": [
    {
      "expr": "topk(5, label_replace(rate(mysql_global_status_innodb_mutex_COUNT_STAR{instance=\"$host\"}[$interval]), \"mutex\", \"$2\", \"EVENT_NAME\", \"(.*)/(.*)\" )) or topk(5,label_replace(irate(mysql_global_status_innodb_mutex_COUNT_STAR{instance=\"$host\"}[5m]), \"mutex\", \"$2\", \"EVENT_NAME\", \"(.*)/(.*)\" ))",
      "format": "time_series",
      "interval": "$interval",
      "intervalFactor": 1,
      "legendFormat": "{{ mutex }} calls",
      "refId": "A",
      "hide": false
    },
    {
      "expr": "topk(5, label_replace(rate(mysql_global_status_innodb_mutex_SUM_TIMER_WAIT{instance=\"$host\"}[$interval]), \"mutex\", \"$2\", \"EVENT_NAME\", \"(.*)/(.*)\" )) or topk(5, label_replace(irate(mysql_global_status_innodb_mutex_SUM_TIMER_WAIT{instance=\"$host\"}[5m]), \"mutex\", \"$2\", \"EVENT_NAME\", \"(.*)/(.*)\" ))",
      "format": "time_series",
      "interval": "$interval",
      "intervalFactor": 1,
      "legendFormat": "{{ mutex }} timer wait",
      "refId": "B",
      "hide": false
    }
  ],
  "thresholds": [],
  "timeFrom": null,
  "timeShift": null,
  "title": "InnoDB Mutex",
  "tooltip": {
    "shared": true,
    "sort": 2,
    "value_type": "individual"
  },
  "transparent": false,
  "type": "graph",
  "xaxis": {
    "buckets": null,
    "mode": "time",
    "name": null,
    "show": true,
    "values": []
  },
  "yaxes": [
    {
      "format": "short",
      "label": "",
      "logBase": 1,
      "max": null,
      "min": null,
      "show": true
    },
    {
      "decimals": null,
      "format": "ns",
      "label": "",
      "logBase": 1,
      "max": null,
      "min": "0",
      "show": true
    }
  ],
  "yaxis": {
    "align": false,
    "alignLevel": null
  }
}

Jun
14
2018
--

Percona Monitoring and Management: Look After Your pmm-data Container

If you have already deployed PMM Server using Docker, you might be aware that we begin by creating a special container for persistent PMM data. In this post, I aim to explain the importance of the pmm-data container when you deploy PMM Server with Docker. By the end of this post, you will have a fair idea of why this Docker container is needed.

Percona Monitoring and Management (PMM) is a free and open-source solution for database troubleshooting and performance optimization that you can run in your own environment. It provides time-based analysis for MySQL and MongoDB servers to ensure that your data works as efficiently as possible.

What is the purpose of pmm-data?

Well, as its name suggests, when PMM Server runs via Docker, its data is stored in the pmm-data container. It is a dedicated data-only container, created with bind mounts using -v, i.e. data volumes, for holding persistent PMM data. We use pmm-data to compartmentalize the persistent data so you can more easily back it up and move it consistently across instances or containers. It acts as a single access point from which other running containers (in this case pmm-server) can access data volumes.

The pmm-data container does not run, but its data is used by pmm-server to build graphs. PMM Server is the core of PMM: it aggregates collected data and presents it in the form of tables, dashboards, and graphs in a web interface.

Why do we use docker create?

The docker create command instructs the Docker daemon to create a writable container layer over the Docker image. When you execute docker create using the steps shown, it creates a Docker container named pmm-data and initializes data volumes using the -v flag in conjunction with the create command (e.g. /opt/prometheus/data).

Option -v is used multiple times in current versions of PMM to mount multiple data volumes. This allows you to create the data volume container and then use its volumes from another container, i.e. pmm-server. We do not want to run the pmm-data container, only to create it. NB: the number of data volumes bind-mounted may change between versions of PMM.

$ docker create \
   -v /opt/prometheus/data \
   -v /opt/consul-data \
   -v /var/lib/mysql \
   -v /var/lib/grafana \
   --name pmm-data \
   percona/pmm-server:latest /bin/true

Make sure that the data volumes you initialize with the -v option match those given in the example. PMM Server expects you to have bind-mounted those directories exactly as demonstrated in the deployment steps. For using different mount points for a PMM deployment, please refer to this blog post. Data volumes are very useful: once designated and created, you can share them and include them as part of other containers. If you use -v or --volume to bind-mount a file or directory that does not yet exist on the Docker host, -v creates the endpoint for you, and it is always created as a directory. Data in the pmm-data volumes is actually hosted on the host’s filesystem.

Why does pmm-data not run?

As we used docker create and not docker run for pmm-data, this container does not run. It simply exists to make sure you retain all PMM data when you upgrade to a newer PMM Server image. The data volumes bind-mounted on the pmm-data container are shared with the running pmm-server container through the --volumes-from option used when launching pmm-server. This way, we persist data with Docker without binding it to the pmm-server container, by storing the files on the host machine. As long as pmm-data exists, the data exists.
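
For reference, the matching pmm-server launch looks roughly like this (a sketch based on the standard deployment steps; the exact flags can vary between PMM versions):

$ docker run -d \
   -p 80:80 \
   --volumes-from pmm-data \
   --name pmm-server \
   --restart always \
   percona/pmm-server:latest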

You can stop, destroy, or replace a container. When a non-running container is using a volume, the volume is still available to Docker and is not removed automatically. This means you can easily replace the running pmm-server container with a newer version without any impact or loss of data. That is why we keep persistent data in a data volume. In our case, the pmm-data container itself never writes to those volumes; concurrent writers could cause corruption.

Why can’t I remove the pmm-data container? What happens if I delete it?

Removing the pmm-data container results in the loss of collected metrics data.

If you remove containers that mount volumes, including the initial pmm-server container or any subsequent containers such as pmm-server-2, you do not delete the volumes. This allows you to upgrade, or effectively migrate, data volumes between containers. Your data container might be based on an old version of the image, with known security problems; that is not a big problem, since it doesn’t actually run anything, but it doesn’t feel right.

As noted earlier, pmm-data stores metrics data according to the configured retention. You should not remove or re-create the pmm-data container unless you need to wipe out all PMM data and start again. To delete the volumes from disk, you must explicitly call docker rm -v against the container with a reference to the volumes.
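
For completeness, deliberately wiping PMM data would look like the following; both commands are destructive, so treat this purely as an illustration:

$ docker stop pmm-server && docker rm pmm-server
$ docker rm -v pmm-data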

Some do’s and don’ts

  • Allocate enough disk space on the host for pmm-data to retain data.
    By default, Prometheus stores time-series data for 30 days, and QAN stores query data for 8 days.
  • Manage data retention appropriately according to your available disk space.
    You can back up pmm-data by extracting the data from the container, using the steps mentioned here, to avoid data loss in any situation; one common approach is sketched below.
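
One common way to extract a copy of those volumes, a generic Docker pattern rather than PMM-specific tooling, is a throwaway container that archives them to a directory on the host:

$ docker run --rm \
   --volumes-from pmm-data \
   -v $(pwd):/backup \
   busybox tar czf /backup/pmm-data.tar.gz \
   /opt/prometheus/data /var/lib/grafana /var/lib/mysql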

In case of any issues with metrics, here’s a good blog post regarding troubleshooting.

Jan
31
2018
--

Percona Monitoring and Management 1.7.0 (PMM) Is Now Available

Percona announces the release of Percona Monitoring and Management 1.7.0. PMM is a free and open-source platform for managing and monitoring MySQL and MongoDB performance. You can run PMM in your own environment for maximum security and reliability. It provides thorough time-based analysis for MySQL and MongoDB servers to ensure that your data works as efficiently as possible.

This release features improved support for external services, which enables a PMM Server to store and display metrics for any available Prometheus exporter. For example, you could deploy the postgres_exporter and use PMM’s external services feature to store PostgreSQL metrics in PMM. Immediately, you’ll see these new metrics in the Advanced Data Exploration dashboard. Then you could leverage many of the pre-developed PostgreSQL dashboards available on Grafana.com, and with a minimal amount of edits have a working PostgreSQL dashboard in PMM! Watch for an upcoming blog post to demonstrate a walk-through of this unlocked functionality.
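
As a sketch of that workflow (the image name, credentials, and host below are illustrative; once the exporter is listening, you register it with PMM through the external services feature as described in the documentation), running the community postgres_exporter could look like this:

$ docker run -d -p 9187:9187 \
   -e DATA_SOURCE_NAME="postgresql://pmm:secret@db-host:5432/postgres?sslmode=disable" \
   wrouesnel/postgres_exporter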

New Percona Monitoring and Management 1.7.0 Features

  • PMM-1949: New dashboard: MySQL Amazon Aurora Metrics.

Improvements

  • PMM-1712: Improve external exporters to let you easily add data monitoring from an arbitrary Prometheus exporter you have running on your host.
  • PMM-1510: Rename swap in and swap out labels to be more specific and help clearly see the direction of data flow for Swap In and Swap Out. The new labels are Swap In (Reads) and Swap Out (Writes) accordingly.
  • PMM-1966: Remove Grafana from the list of exporters on the dashboard to eliminate confusion with the existing Grafana entry.
  • PMM-1974: Add the mongodb_up graph to the Exporter Status dashboard. The new graph is added to maintain consistency of information about exporters, based on new metrics implemented in PMM-1586.

Bug fixes

  • PMM-1967: Inconsistent formulas in Prometheus dashboards.
  • PMM-1986: Signing out with HTTP auth enabled leaves the browser signed in.

Jan
19
2018
--

Percona Monitoring and Management (PMM) 1.6.0 Is Now Available

Percona announces the release of Percona Monitoring and Management (PMM) 1.6.0. In this release, PMM Grafana metrics are available in the Advanced Data Exploration dashboard. We’ve improved the integration with MyRocks, and its data is now collected from SHOW GLOBAL STATUS.

The MongoDB Exporter now features two new metrics: mongodb_up, to report whether the MongoDB Server is running, and mongodb_scrape_errors_total, reporting the total number of errors when scraping MongoDB.
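
For example, once this metric is collected, a simple PromQL expression surfaces any MongoDB instance that is currently down:

mongodb_up == 0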

In this release, we’ve greatly improved the performance of the mongodb:metrics monitoring service.

Percona Monitoring and Management (PMM) 1.6.0 also includes version 4.6.3 of Grafana, which includes fixes to bugs in the alert list and the alerting rules. More information is available in Grafana’s change log.

New Features

  • PMM-1773: PMM Grafana-specific metrics have been added to the Advanced Data Exploration dashboard.

Improvements

  • PMM-1485: Updated MyRocks integration: MyRocks data is now collected entirely from SHOW GLOBAL STATUS, and we have eliminated SHOW ENGINE ROCKSDB STATUS as a data source in mysqld_exporter.
  • PMM-1895: Update Grafana to version 4.6.3:
    • Alert list: Now shows alert state changes even after adding manual annotations on dashboard #9951
    • Alerting: Fixes bug where rules evaluated as firing when all conditions were false and using OR operator. #9318
  • PMM-1586: The mongodb_exporter exposes two new metrics: mongodb_up, informing whether the MongoDB Server is running, and mongodb_scrape_errors_total, reporting the total number of times an error occurred when scraping MongoDB.
  • PMM-1764: Various small mongodb_exporter improvements.
  • PMM-1942: Improved the consistency of using labels in all Prometheus-related dashboards.
  • PMM-1936: Updated the Prometheus dashboard in Metrics Monitor.
  • PMM-1937: Added the CPU Utilization Details (Cores) dashboard to Metrics Monitor.

Bug fixes

  • PMM-1549: Broken default auth db for mongodb:queries
  • PMM-1631: In some cases, percentage values were displayed incorrectly for MongoDB hosts.
  • PMM-1640: RDS exporter: simplify configuration
  • PMM-1760: After the mongodb:metrics monitoring service was added, CPU usage increased considerably in QAN versions 1.4.1 through 1.5.3.

    1.5.0 – CPU usage 95%
    1.5.3 – CPU usage 85%
    1.6.0 – CPU usage 1%

  • PMM-1815: QAN could show data for a MySQL host when a MongoDB host was selected.
  • PMM-1888: In QAN, query metrics were not loaded when the QAN page was refreshed.
  • PMM-1898: In QAN, the Per Query Stats graph displayed incorrect values for MongoDB.
  • PMM-1796: In Metrics Monitor, the Top Process States Hourly graph from the MySQL Overview dashboard showed incorrect data.
  • PMM-1777: In QAN, the Load column could display incorrect data.
  • PMM-1744: The Please provide AWS access credentials error appeared even though the provided credentials could be processed successfully.
  • PMM-1676: In preparation for migration to Prometheus 2.0 we have updated the System Overview dashboard for compatibility.
  • PMM-1920: Some standard MySQL metrics were missing from the mysqld_exporter Prometheus exporter.
  • PMM-1932: The Response Length metric was not displayed for MongoDB hosts in QAN.

Dec
18
2017
--

Percona Monitoring and Management 1.5.3 Is Now Available

Percona announces the release of Percona Monitoring and Management 1.5.3. This release contains fixes for bugs found after the release of Percona Monitoring and Management 1.5.2, as well as some important fixes and improvements not related to the previous release.

Improvements

  • PMM-1874: The read timeout of the proxy server (/prometheus) has been increased from the default of 60 seconds to avoid nginx gateway timeout errors when loading data-rich dashboards.
  • PMM-1863: We improved our handling of temporary Grafana credentials.

Bug fixes

  • PMM-1828: On CentOS 6.9, pmm-admin list incorrectly reported that no monitoring services were running.
  • PMM-1842: It was not possible to restart the mysql:queries monitoring service after PMM Client was upgraded from version 1.0.4.
  • PMM-1797: It was not possible to update the CloudWatch data source credentials.
  • PMM-1829: When the user clicked a link in the Query Abstract column, an outdated version of QAN would open.
  • PMM-1836: PMM Server installed in a Docker container could not be started if the updating procedure had been temporarily interrupted.
  • PMM-1871: In some cases, RDS instances could not be discovered.
  • PMM-1845: Converted FLUSH SLOW LOGS to FLUSH NO_WRITE_TO_BINLOG SLOW LOGS so that a GTID event isn’t created.
  • PMM-1816: Fixed a rendering error in Firefox.

Dec
01
2017
--

Percona Monitoring and Management 1.5: QAN in Grafana Interface

In this post, we’ll examine how we’ve improved the GUI layout for Percona Monitoring and Management 1.5 by moving the Query Analytics (QAN) functions into the Grafana interface.

For Percona Monitoring and Management users, you might notice that QAN appears a little differently in our 1.5 release. We’ve taken steps to unify the PMM interface so that it feels more natural to move from reviewing historical trends in Metrics Monitor to examining slow queries in QAN.  Most significantly:

  1. QAN moves from a stand-alone application into Metrics Monitor as a dashboard application
  2. We updated the color scheme of QAN to match Metrics Monitor (but you can toggle a button if you prefer to still see QAN in white!)
  3. Date picker and host selector now use the same methods as Metrics Monitor

Percona Monitoring and Management 1.5 QAN 1

Starting from the PMM landing page, you still see two buttons – one for Metrics Monitor and another for Query Analytics (this hasn’t changed):

Percona Monitoring and Management 1.5 QAN 2

Once you select Query Analytics on the left, you see the new Metrics Monitor dashboard page for PMM Query Analytics. It is now hosted as a Metrics Monitor dashboard, and notice the URL is no longer /qan:

Percona Monitoring and Management 1.5 QAN 3

Another advantage of the Metrics Monitor dashboard integration is that the QAN inherits the host selector from Grafana, which supports partial string matching. This makes it simpler to find the host you’re searching for if you have more than a handful of instances:

Percona Monitoring and Management 1.5 QAN 4

The last feature enhancement worth mentioning is the native Grafana time selector, which lets you select time frames down to minute resolution. This was a frequent source of feature requests; previously, PMM limited you to our pre-defined default ranges. Keep in mind that QAN has an internal archiving job that caps QAN history at eight days.

Percona Monitoring and Management 1.5 QAN 5

Last but not least is the ability to toggle between the default dark interface and the optional white one. Look for the small lightbulb icon at the bottom left of any QAN screen and give it a try!

Percona Monitoring and Management 1.5 QAN 7

We hope you enjoy the new interface, and we look forward to your feedback on these improvements!
