Jun
17
2021
--

Transform launches with $24.5M in funding for a tool to query and build metrics out of data troves

The biggest tech companies have put a lot of time and money into building tools and platforms for their data science teams and those who work with them to glean insights and metrics out of the masses of data that their companies produce: how a company is performing, how a new feature is working, when something is broken, or when something might be selling well (and why) are all things you can figure out if you know how to read the data.

Now, three alums who worked with data in the world of big tech have founded a startup that aims to build a “metrics store” so that the rest of the enterprise world — much of which lacks the resources to build tools like this from scratch — can easily use metrics to figure out things like this, too.

Transform, as the startup is called, is coming out of stealth today, and it’s doing so with an impressive amount of early backing — a sign not just of investor confidence in these particular founders, but also the recognition that there is a gap in the market for, as the company describes it, a “single source of truth for business data” that could be usefully filled.

The company is announcing that it has closed, while in stealth, a Series A of $20 million, and an earlier seed round of $4.5 million — both led by Index Ventures and Redpoint Ventures. The seed, the company said, also had dozens of angel investors, with the list including Elad Gil of Color Genomics, Lenny Rachitsky of Airbnb and Cristina Cordova of Notion.

The big breakthrough that Transform has made is that it’s built a metrics engine that a company can apply to its structured data — a tool similar to what big tech companies have built for their own use, but one that hasn’t really been available (at least until now) to companies outside of big tech.

Transform can work with vast troves of data from the warehouse, or data that is being tracked in real time, to generate insights and analytics about different actions around a company’s products. Transform can be used and queried by non-technical people who still have to deal with data, said co-founder and CEO Nick Handel.

The impetus for building the product came to Nick Handel, James Mayfield and Paul Yang — respectively Transform’s CEO, COO and software engineer — when they all worked together at Airbnb (previously Mayfield and Yang were also at Facebook together) in a mix of roles that included product management and engineering.

There, they could see first-hand both the promise that data held for helping make decisions around a product, for measuring how something is used, or for planning future features, and also the demands of harnessing it and getting everyone on the same page to do so.

“There is a growing trend among tech companies to test every single feature, every single iteration of whatever. And so as a part of that, we built this tool [at Airbnb] that basically allowed you to define the various metrics that you wanted to track to understand your experiment,” Handel recalled in an interview. “But you also want to understand so many other things like, how many people are searching for listings in certain areas? How many people are instantly booking those listings? Are they contacting customer service, are they having trust and safety issues?” The tool Airbnb built was Minerva, optimised specifically for the kinds of questions Airbnb might typically have for its own data.

“By locking down all of the definitions for the metrics, you could basically have a data engineering team, a centralized data infrastructure team, do all the calculation for these metrics, and then serve those to the data scientists to then go in and do kind of deeper, more interesting work, because they weren’t bogged down in calculating those metrics over and over,” he continued. This platform evolved within Airbnb. “We were really inspired by some of the early work that we saw happen on this tool.”

The issue is that not every company is built to, well, build tools like these tailored to whatever their own business interests might be.

“There’s a handful of companies who do similar things in the metrics space,” Mayfield said, “really top flight companies like LinkedIn, Airbnb and Uber. They have really started to invest in metrics. But it’s only those companies that can devote teams of eight or 10 engineers and designers who can build those things in house. And I think that was probably, you know, a big part of the impetus for wanting to start this company was to say, not every organization is going to be able to devote eight or 10 engineers to building this metrics tool.”

And the other issue is that metrics have become an increasingly important — maybe the most important — lever for decision making in the world of product design and wider business strategy for a tech (and maybe by default, any) company.

We have moved away from “move fast and break things.” Instead, we now embrace — as Mayfield put it — “If you can’t measure it, you can’t move it.”

Transform is built around three basic priorities, Handel said.

The first has to do with collective ownership of metrics: by building a single framework for defining and identifying these metrics, the theory is that it’s easier for a company to get on the same page about using them. The second is to make the work of the data team easier and more efficient, by turning the most repetitive parts of extracting insights into automated scripts that can be used and reused, giving the data team more time to analyze the data rather than just build datasets. And the third is to provide customers with APIs that they can use to embed the metric-extracting tools into other applications, whether in business intelligence or elsewhere.

The three products it’s introducing today, called Metrics Framework, Metrics Catalog and Metrics API, follow from these principles.

Transform is only really launching publicly today, but Handel said that it’s already working with a small handful of unnamed customers in a beta, enough to be confident that what it’s built works as intended. The funding will be used to continue building out the product, as well as to bring on more talent and, hopefully, to onboard more businesses.

“Hopefully” may be a more tentative word than its investors would use; they are convinced that it’s filling a strong need in the market.

“Transform is filling a critical gap within the industry. Just as we invested in Looker early on for its innovative approach to business intelligence, Transform takes it one step further by providing a powerful yet streamlined single source of truth for metrics,” said Tomasz Tunguz, managing director at Redpoint Ventures, in a statement.

“We’ve seen companies across the globe struggle to make sense of endless data sources or turn them into actionable, trusted metrics. We invested in Transform because they’ve developed an elegant solution to this problem that will change how companies think about their data,” added Shardul Shah, a partner at Index Ventures.

Feb
24
2021
--

Hydrolix snares $10M seed to lower the cost of processing log data at scale

Many companies spend a significant amount of money and resources processing data from logs, traces and metrics, forcing them to make trade-offs about how much to collect and store. Hydrolix, an early stage startup, announced a $10 million seed round today to help tackle logging at scale, while using unique technology to lower the cost of storing and querying this data.

Wing Venture Capital led the round with help from AV8 Ventures, Oregon Venture Fund and Silicon Valley Data Capital.

Company CEO and co-founder Marty Kagan noted that in his previous roles, he saw organizations with tons of data in logs, metrics and traces that could be valuable to various parts of the company, but most organizations couldn’t afford the high cost to maintain these records for very long due to the incredible volume of data involved. He started Hydrolix because he wanted to change the economics to make it easier to store and query this valuable data.

“The classic problem with these cluster-based databases is that they’ve got locally attached storage. So as the data set gets larger, you have no choice but to either spend a ton of money to grow your cluster or separate your hot and cold data to keep your costs under control,” Kagan told me.

What’s more, he says that when it comes to querying, the solutions out there like BigQuery and Snowflake are not well suited for this kind of data. “They rely really heavily on caching and bulk column scans, so they’re not really useful for […] these infrastructure plays where you want to do live stream ingest, and you want to be able to do ad hoc data exploration,” he said.

Hydrolix wanted to create a more cost-effective way of storing and querying log data, while solving these issues with other tooling. “So we built a new storage layer which delivers […] SSD-like performance using nothing but cloud storage and diskless spot instances,” Kagan explained. He says that this means that there is no caching or bulk column scanning, enabling them to do index searches. “You’re getting the low cost, unlimited retention benefits of cloud storage, but with the interactive performance of fully indexed search,” he added.

Peter Wagner, founding partner at investor Wing Venture Capital, says that the beauty of this tool is that it eliminates trade-offs, while lowering customers’ overall data processing costs. “The Hydrolix team has built a real-time data platform optimized not only to deliver superior performance at a fraction of the cost of current analytics solutions, but one architected to offer those same advantages as data volumes grow by orders of magnitude,” Wagner said in a statement.

It’s worth pointing out that in the past couple of weeks SentinelOne bought high-speed logging platform Scalyr for $155 million, then CrowdStrike grabbed Humio, another high-speed logging tool, for $400 million, so this category is getting attention.

The product is currently compatible with AWS and offered through the Amazon Marketplace, but Kagan says they are working on versions for Azure and Google Cloud and expect to have those available later this year. The company was founded at the end of 2018 and currently has 20 employees spread out over six countries with headquarters in Portland, Oregon.

Sep
03
2018
--

Monitoring S.M.A.R.T. Metrics with Prometheus and PMM

In his excellent blog post, Pavel Trukhanov showed the value of S.M.A.R.T. metric collections, so I wondered how hard it would be to enable their collection in Percona Monitoring and Management (PMM).

A quick search led me to the text_collector plugin SmartMon, which can be easily integrated with any Prometheus installation.

For PMM, Vadim Yalovets recently showed how to do custom integrations based on text_collector.

Let’s put those together:

  1. Ensure you have the smartctl tool installed. It is available in the repositories of most Linux distributions.
  2. Get smartmon.sh and place it in /usr/local/bin or another location.
  3. Install the cron job
    echo "*/5 * * * * root bash /usr/local/bin/smartmon.sh > /tmp/smart_metrics.prom" > /etc/cron.d/smartmon
  4. Enable textfile_collector as described in this blog post (see the sketch just below for a plain node_exporter setup)
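
A minimal sketch of step 4 for a plain (non-PMM) node_exporter setup; the double-dash flag syntax assumes node_exporter 0.16+, and PMM itself wires this up differently, as described in the post linked from step 4:

# Point node_exporter's textfile collector at the directory the cron job writes to:
node_exporter --collector.textfile.directory=/tmp &
# After the cron job has run at least once, verify the metrics are exposed
# (9100 is node_exporter's default port):
curl -s http://localhost:9100/metrics | grep '^smartmon_'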

That’s it! You should get your data flowing. Now you can use Prometheus to query device information:

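Since the example output below includes metrics such as smartmon_device_info, a query like the following against the Prometheus HTTP API should return the device information (the Prometheus address is a placeholder for your own server):

# Ask Prometheus for the current smartmon_device_info series:
curl -sG 'http://localhost:9090/api/v1/query' --data-urlencode 'query=smartmon_device_info'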

Or if you want to get a specific S.M.A.R.T. value, such as the media_wearout indicator:

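Along the same lines, a hedged query for the wearout attribute; the exact metric name depends on what smartmon.sh emits for your device (assumed here to be smartmon_media_wearout_indicator_value, which my example SSD below does not expose):

curl -sG 'http://localhost:9090/api/v1/query' --data-urlencode 'query=smartmon_media_wearout_indicator_value'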

If you would like to see a nicer visualization in Grafana, you can install the appropriate dashboard from the Grafana web site.

The number and kind of metrics you’re going to get depend on the storage device vendor and model. Here is an example list from one of my test systems:

# HELP smartmon_smartctl_version SMART metric smartctl_version
# TYPE smartmon_smartctl_version gauge
smartmon_smartctl_version{version="6.5"} 1
# HELP smartmon_current_pending_sector_raw_value SMART metric current_pending_sector_raw_value
# TYPE smartmon_current_pending_sector_raw_value gauge
smartmon_current_pending_sector_raw_value{disk="/dev/sda",type="sat",smart_id="197"} 0.000000e+00
# HELP smartmon_current_pending_sector_threshold SMART metric current_pending_sector_threshold
# TYPE smartmon_current_pending_sector_threshold gauge
smartmon_current_pending_sector_threshold{disk="/dev/sda",type="sat",smart_id="197"} 0
# HELP smartmon_current_pending_sector_value SMART metric current_pending_sector_value
# TYPE smartmon_current_pending_sector_value gauge
smartmon_current_pending_sector_value{disk="/dev/sda",type="sat",smart_id="197"} 100
# HELP smartmon_current_pending_sector_worst SMART metric current_pending_sector_worst
# TYPE smartmon_current_pending_sector_worst gauge
smartmon_current_pending_sector_worst{disk="/dev/sda",type="sat",smart_id="197"} 100
# HELP smartmon_device_info SMART metric device_info
# TYPE smartmon_device_info gauge
smartmon_device_info{disk="/dev/sda",type="sat",vendor="",product="",revision="",lun_id="",model_family="",device_model="Crucial_CT275MX300SSD1",serial_number="16431465B53F",firmware_version="M0CR031"} 1
# HELP smartmon_device_smart_available SMART metric device_smart_available
# TYPE smartmon_device_smart_available gauge
smartmon_device_smart_available{disk="/dev/sda",type="sat"} 1
# HELP smartmon_device_smart_enabled SMART metric device_smart_enabled
# TYPE smartmon_device_smart_enabled gauge
smartmon_device_smart_enabled{disk="/dev/sda",type="sat"} 1
# HELP smartmon_device_smart_healthy SMART metric device_smart_healthy
# TYPE smartmon_device_smart_healthy gauge
smartmon_device_smart_healthy{disk="/dev/sda",type="sat"} 1
# HELP smartmon_end_to_end_error_raw_value SMART metric end_to_end_error_raw_value
# TYPE smartmon_end_to_end_error_raw_value gauge
smartmon_end_to_end_error_raw_value{disk="/dev/sda",type="sat",smart_id="184"} 0.000000e+00
# HELP smartmon_end_to_end_error_threshold SMART metric end_to_end_error_threshold
# TYPE smartmon_end_to_end_error_threshold gauge
smartmon_end_to_end_error_threshold{disk="/dev/sda",type="sat",smart_id="184"} 0
# HELP smartmon_end_to_end_error_value SMART metric end_to_end_error_value
# TYPE smartmon_end_to_end_error_value gauge
smartmon_end_to_end_error_value{disk="/dev/sda",type="sat",smart_id="184"} 100
# HELP smartmon_end_to_end_error_worst SMART metric end_to_end_error_worst
# TYPE smartmon_end_to_end_error_worst gauge
smartmon_end_to_end_error_worst{disk="/dev/sda",type="sat",smart_id="184"} 100
# HELP smartmon_offline_uncorrectable_raw_value SMART metric offline_uncorrectable_raw_value
# TYPE smartmon_offline_uncorrectable_raw_value gauge
smartmon_offline_uncorrectable_raw_value{disk="/dev/sda",type="sat",smart_id="198"} 0.000000e+00
# HELP smartmon_offline_uncorrectable_threshold SMART metric offline_uncorrectable_threshold
# TYPE smartmon_offline_uncorrectable_threshold gauge
smartmon_offline_uncorrectable_threshold{disk="/dev/sda",type="sat",smart_id="198"} 0
# HELP smartmon_offline_uncorrectable_value SMART metric offline_uncorrectable_value
# TYPE smartmon_offline_uncorrectable_value gauge
smartmon_offline_uncorrectable_value{disk="/dev/sda",type="sat",smart_id="198"} 100
# HELP smartmon_offline_uncorrectable_worst SMART metric offline_uncorrectable_worst
# TYPE smartmon_offline_uncorrectable_worst gauge
smartmon_offline_uncorrectable_worst{disk="/dev/sda",type="sat",smart_id="198"} 100
# HELP smartmon_power_cycle_count_raw_value SMART metric power_cycle_count_raw_value
# TYPE smartmon_power_cycle_count_raw_value gauge
smartmon_power_cycle_count_raw_value{disk="/dev/sda",type="sat",smart_id="12"} 2.000000e+01
# HELP smartmon_power_cycle_count_threshold SMART metric power_cycle_count_threshold
# TYPE smartmon_power_cycle_count_threshold gauge
smartmon_power_cycle_count_threshold{disk="/dev/sda",type="sat",smart_id="12"} 0
# HELP smartmon_power_cycle_count_value SMART metric power_cycle_count_value
# TYPE smartmon_power_cycle_count_value gauge
smartmon_power_cycle_count_value{disk="/dev/sda",type="sat",smart_id="12"} 100
# HELP smartmon_power_cycle_count_worst SMART metric power_cycle_count_worst
# TYPE smartmon_power_cycle_count_worst gauge
smartmon_power_cycle_count_worst{disk="/dev/sda",type="sat",smart_id="12"} 100
# HELP smartmon_power_on_hours_raw_value SMART metric power_on_hours_raw_value
# TYPE smartmon_power_on_hours_raw_value gauge
smartmon_power_on_hours_raw_value{disk="/dev/sda",type="sat",smart_id="9"} 1.313300e+04
# HELP smartmon_power_on_hours_threshold SMART metric power_on_hours_threshold
# TYPE smartmon_power_on_hours_threshold gauge
smartmon_power_on_hours_threshold{disk="/dev/sda",type="sat",smart_id="9"} 0
# HELP smartmon_power_on_hours_value SMART metric power_on_hours_value
# TYPE smartmon_power_on_hours_value gauge
smartmon_power_on_hours_value{disk="/dev/sda",type="sat",smart_id="9"} 100
# HELP smartmon_power_on_hours_worst SMART metric power_on_hours_worst
# TYPE smartmon_power_on_hours_worst gauge
smartmon_power_on_hours_worst{disk="/dev/sda",type="sat",smart_id="9"} 100
# HELP smartmon_raw_read_error_rate_raw_value SMART metric raw_read_error_rate_raw_value
# TYPE smartmon_raw_read_error_rate_raw_value gauge
smartmon_raw_read_error_rate_raw_value{disk="/dev/sda",type="sat",smart_id="1"} 0.000000e+00
# HELP smartmon_raw_read_error_rate_threshold SMART metric raw_read_error_rate_threshold
# TYPE smartmon_raw_read_error_rate_threshold gauge
smartmon_raw_read_error_rate_threshold{disk="/dev/sda",type="sat",smart_id="1"} 0
# HELP smartmon_raw_read_error_rate_value SMART metric raw_read_error_rate_value
# TYPE smartmon_raw_read_error_rate_value gauge
smartmon_raw_read_error_rate_value{disk="/dev/sda",type="sat",smart_id="1"} 100
# HELP smartmon_raw_read_error_rate_worst SMART metric raw_read_error_rate_worst
# TYPE smartmon_raw_read_error_rate_worst gauge
smartmon_raw_read_error_rate_worst{disk="/dev/sda",type="sat",smart_id="1"} 100
# HELP smartmon_reallocated_sector_ct_raw_value SMART metric reallocated_sector_ct_raw_value
# TYPE smartmon_reallocated_sector_ct_raw_value gauge
smartmon_reallocated_sector_ct_raw_value{disk="/dev/sda",type="sat",smart_id="5"} 0.000000e+00
# HELP smartmon_reallocated_sector_ct_threshold SMART metric reallocated_sector_ct_threshold
# TYPE smartmon_reallocated_sector_ct_threshold gauge
smartmon_reallocated_sector_ct_threshold{disk="/dev/sda",type="sat",smart_id="5"} 10
# HELP smartmon_reallocated_sector_ct_value SMART metric reallocated_sector_ct_value
# TYPE smartmon_reallocated_sector_ct_value gauge
smartmon_reallocated_sector_ct_value{disk="/dev/sda",type="sat",smart_id="5"} 100
# HELP smartmon_reallocated_sector_ct_worst SMART metric reallocated_sector_ct_worst
# TYPE smartmon_reallocated_sector_ct_worst gauge
smartmon_reallocated_sector_ct_worst{disk="/dev/sda",type="sat",smart_id="5"} 100
# HELP smartmon_reported_uncorrect_raw_value SMART metric reported_uncorrect_raw_value
# TYPE smartmon_reported_uncorrect_raw_value gauge
smartmon_reported_uncorrect_raw_value{disk="/dev/sda",type="sat",smart_id="187"} 0.000000e+00
# HELP smartmon_reported_uncorrect_threshold SMART metric reported_uncorrect_threshold
# TYPE smartmon_reported_uncorrect_threshold gauge
smartmon_reported_uncorrect_threshold{disk="/dev/sda",type="sat",smart_id="187"} 0
# HELP smartmon_reported_uncorrect_value SMART metric reported_uncorrect_value
# TYPE smartmon_reported_uncorrect_value gauge
smartmon_reported_uncorrect_value{disk="/dev/sda",type="sat",smart_id="187"} 100
# HELP smartmon_reported_uncorrect_worst SMART metric reported_uncorrect_worst
# TYPE smartmon_reported_uncorrect_worst gauge
smartmon_reported_uncorrect_worst{disk="/dev/sda",type="sat",smart_id="187"} 100
# HELP smartmon_smartctl_run SMART metric smartctl_run
# TYPE smartmon_smartctl_run gauge
smartmon_smartctl_run{disk="/dev/sda",type="sat"} 1535666337
# HELP smartmon_temperature_celsius_raw_value SMART metric temperature_celsius_raw_value
# TYPE smartmon_temperature_celsius_raw_value gauge
smartmon_temperature_celsius_raw_value{disk="/dev/sda",type="sat",smart_id="194"} 3.100000e+01
# HELP smartmon_temperature_celsius_threshold SMART metric temperature_celsius_threshold
# TYPE smartmon_temperature_celsius_threshold gauge
smartmon_temperature_celsius_threshold{disk="/dev/sda",type="sat",smart_id="194"} 0
# HELP smartmon_temperature_celsius_value SMART metric temperature_celsius_value
# TYPE smartmon_temperature_celsius_value gauge
smartmon_temperature_celsius_value{disk="/dev/sda",type="sat",smart_id="194"} 69
# HELP smartmon_temperature_celsius_worst SMART metric temperature_celsius_worst
# TYPE smartmon_temperature_celsius_worst gauge
smartmon_temperature_celsius_worst{disk="/dev/sda",type="sat",smart_id="194"} 59
# HELP smartmon_udma_crc_error_count_raw_value SMART metric udma_crc_error_count_raw_value
# TYPE smartmon_udma_crc_error_count_raw_value gauge
smartmon_udma_crc_error_count_raw_value{disk="/dev/sda",type="sat",smart_id="199"} 0.000000e+00
# HELP smartmon_udma_crc_error_count_threshold SMART metric udma_crc_error_count_threshold
# TYPE smartmon_udma_crc_error_count_threshold gauge
smartmon_udma_crc_error_count_threshold{disk="/dev/sda",type="sat",smart_id="199"} 0
# HELP smartmon_udma_crc_error_count_value SMART metric udma_crc_error_count_value
# TYPE smartmon_udma_crc_error_count_value gauge
smartmon_udma_crc_error_count_value{disk="/dev/sda",type="sat",smart_id="199"} 100
# HELP smartmon_udma_crc_error_count_worst SMART metric udma_crc_error_count_worst
# TYPE smartmon_udma_crc_error_count_worst gauge
smartmon_udma_crc_error_count_worst{disk="/dev/sda",type="sat",smart_id="199"} 100

The post Monitoring S.M.A.R.T. Metrics with Prometheus and PMM appeared first on Percona Database Performance Blog.

Feb
20
2018
--

Understand Your Prometheus Exporters with Percona Monitoring and Management (PMM)

In this blog post, I will look at the new dashboards in Percona Monitoring and Management (PMM) for Prometheus exporters.

Percona Monitoring and Management (PMM) uses Prometheus exporters to capture metrics data from the system it monitors. Those Prometheus exporters are an important part of your monitoring infrastructure, and understanding their performance and other operational details is critical for well-implemented monitoring.    

To help you with this we’ve added a number of new dashboards to Percona Monitoring and Management.

The Prometheus Exporters Overview dashboard provides a high-level overview of your installed Prometheus exporter infrastructure:

The summary shows you how many hosts are monitored and how many exporters you have running, as well as how much CPU and memory they are using.

Note that the CPU usage shown in this graph is only the CPU usage of the exporter itself. It does not include the additional resource usage that is required to produce metrics by the application or operating system.

Next, we have an overview of resource usage by the host:  

These graphs allow us to analyze resource usage across different hosts, making it easy to see if any of them have unusually high CPU or memory usage by exporters.

You may notice some of the CPU usage reported on these graphs is very high. This is due to the fact that we use very high-resolution sampling and very underpowered instances for this demonstration environment. CPU usage numbers like this are not typical.

The next graphs show resource usage by the type of exporter:

In this case, we measure CPU usage in “CPU Cores” rather than as a percent, which is more meaningful: the same amount of actual resource usage by an exporter would otherwise look very different on a system with one core versus a system with 64 cores. Measured in cores, usage numbers also have a pretty stable baseline.
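
For reference, a hedged approximation of such queries: exporters written in Go expose process_cpu_seconds_total and process_resident_memory_bytes about themselves, and in PMM the Prometheus API lives under the server's /prometheus path (the server address below is a placeholder):

# Exporter CPU usage in cores, aggregated per Prometheus job:
curl -sG 'http://pmm-server/prometheus/api/v1/query' \
  --data-urlencode 'query=sum by (job) (rate(process_cpu_seconds_total[5m]))'
# Exporter resident memory, aggregated per Prometheus job:
curl -sG 'http://pmm-server/prometheus/api/v1/query' \
  --data-urlencode 'query=sum by (job) (process_resident_memory_bytes)'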

Then there is a list of your monitored hosts and the exporters they are running:

This shows your CPU usage and memory usage per host, as well as the number of exporters running and system details.

You can click on a host to get to the System Overview, or jump to the Prometheus Exporter Status dashboard.

The Prometheus Exporter Status dashboard allows you to investigate how specific exporters are performing for the given host. Each of the well-known exporters has its own row in this dashboard.

Node Exporter Status shows us the resource usage, uptime and performance of Node Exporter (the exporter responsible for capturing OS-level metrics):   

The “Collector Scrape Successful” graph shows which node_exporter collector categories (modules that collect specific information) have returned data reliably. If you see anything but a flat line at “1” here, you need to check for problems.

“Collector Execution Time” shows how long on average it takes to execute your enabled collectors. This shows which collectors are generally more expensive to run (or if some of them are experiencing performance problems).
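
If you want to query the underlying data yourself, the node_exporter self-metrics behind these panels are, to the best of my knowledge, node_scrape_collector_success and node_scrape_collector_duration_seconds (the PMM server address is a placeholder):

# 1 = the collector returned data on its last scrape, 0 = it failed:
curl -sG 'http://pmm-server/prometheus/api/v1/query' \
  --data-urlencode 'query=node_scrape_collector_success'
# Average execution time per collector over the last five minutes:
curl -sG 'http://pmm-server/prometheus/api/v1/query' \
  --data-urlencode 'query=avg by (collector) (avg_over_time(node_scrape_collector_duration_seconds[5m]))'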

MySQL Exporter Status shows us how MySQL exporter is performing:

Additionally, in resource usage we see the rate of scrapes for High, Medium and Low resolution data.

Generally, you should see three flat lines here if everything is working well. This is not the case for this host, and we can see some scrapes are not successful – either failing to complete, or not triggered by Prometheus Server altogether (due to overload or connectivity issues).

These graphs provide information about MySQL Exporter errors – permission errors and other issues. They also show whether the MySQL Server was up during this time. Similar details are reported for the MongoDB and ProxySQL exporters if they are running on the host.

I hope these new dashboards help you to understand your Prometheus exporter performance better!

Feb
09
2018
--

Collect PostgreSQL Metrics with Percona Monitoring and Management (PMM)

In this article, we’ll describe how to collect PostgreSQL metrics with Percona Monitoring and Management (PMM).

We designed Percona Monitoring and Management (PMM) to be the best tool for MySQL and MongoDB performance investigation. At the same time, it’s built on mature open source components: the Prometheus time series database and Grafana. Starting from PMM 1.4.0, it’s possible to add monitoring for any service supported by Prometheus.

Demo

# install docker and docker-compose.
git clone https://github.com/ihanick/pmm-postgresql-demo.git
cd pmm-postgresql-demo
docker-compose build
docker-compose up

At this point, we are running the exporter, PostgreSQL and the PMM server, but pmm-client on the PostgreSQL server isn’t configured yet.

docker-compose exec pg sh /root/initpmm.sh

Now we have configured the PMM client and added the external exporter.

Let’s assume that you have executed the commands above on localhost. At this point we have several URLs to work with: the PMM server web interface on port 8080 and the postgres_exporter on port 9187.

We also need to create graphs for our new exporter. This could be done manually (import JSON), or you can import the existing dashboard Postgres_exporter published in the Grafana gallery by number in the catalog:

  1. Go to your PMM server web interface and press the Grafana icon at the top left corner, then Dashboards, then Import.
  2. Copy and paste the dashboard ID from the Grafana site to “Grafana.com Dashboard” field, and press load.
  3. In the next dialog, choose Prometheus as a data source and continue.

PostgreSQL performance graphs can be seen at: http://localhost:8080/graph/dashboard/db/postgres_exporter?orgId=1

PMM PostgreSQL postgres_exporter template

PMM-PostgreSQL Demo Under the Hood

To move this configuration to production, we need to understand how this demo works.

PMM Server

First of all, you need an existing PMM Server. You can find details on new server configuration at Deploying Percona Monitoring and Management.

In my demo I’m starting PMM without volumes, and all metrics dropped after using the docker-compose down command. Also, there is no need to use port 8080 for PMM, set it up with SSL support and password in production.

PostgreSQL Setup

I’m modifying the latest default PostgreSQL image to:

Of course, you can use a dedicated PostgreSQL server instead of one running inside a docker-compose sandbox. The only requirement is that the PMM server should be able to connect to this server.

User creation and permissions:

CREATE DATABASE postgres_exporter;
CREATE USER postgres_exporter PASSWORD 'password';
ALTER USER postgres_exporter SET SEARCH_PATH TO postgres_exporter,pg_catalog;
-- If deploying as non-superuser (for example in AWS RDS)
-- GRANT postgres_exporter TO :MASTER_USER;
CREATE SCHEMA postgres_exporter AUTHORIZATION postgres_exporter;
CREATE VIEW postgres_exporter.pg_stat_activity
AS
  SELECT * from pg_catalog.pg_stat_activity;
GRANT SELECT ON postgres_exporter.pg_stat_activity TO postgres_exporter;
CREATE VIEW postgres_exporter.pg_stat_replication AS
  SELECT * from pg_catalog.pg_stat_replication;
GRANT SELECT ON postgres_exporter.pg_stat_replication TO postgres_exporter;

To simplify setup, you can use a superuser account and access pg_catalog directly. To improve security, allow this user to connect only from the exporter host.

PMM Client Setup on PostgreSQL Host

You can obtain database-only statistics with just the external exporter, and you can use any host with pmm-client installed. Fortunately, you can also export Linux metrics from the database host.

After installing the pmm-client package, you still need to configure the system. We should point it to the PMM server and register the external exporter (and optionally add the linux:metrics exporter).

#!/bin/sh
pmm-admin config --client-name pg1 --server pmm-server
pmm-admin add external:metrics postgresql pgexporter:9187
# optional
pmm-admin add linux:metrics
# other postgresql instances
pmm-admin add external:instances postgresql 172.18.0.3:9187

It’s important to keep the external exporter job name as “postgresql”, since all existing templates check it. There is a bit of inconsistency here: the first postgresql server is added as external:metrics, but all further servers should be added as external:instances.

The reason is the first command creates the Prometheus job and first instance, and further servers can be added without job creation.

PMM 1.7.0 external:service

Starting from PMM 1.7.0, the setup is simplified if the exporter is located on the same host as pmm-client:

pmm-admin config --client-name pg1 --server pmm-server
pmm-admin add external:service --service-port=9187 postgresql

pmm-admin add external:metrics or pmm-admin add external:instances are not required if you are running the exporter on the same host as pmm-client.

Exporter Setup

The exporter is a simple HTTP/HTTPS server returning one page. The format is:

curl -si http://172.17.0.4:9187/metrics|grep pg_static
# HELP pg_static Version string as reported by postgres
# TYPE pg_static untyped
pg_static{short_version="10.1.0",version="PostgreSQL 10.1 on x86_64-pc-linux-gnu, compiled by gcc (Debian 6.3.0-18) 6.3.0 20170516, 64-bit"} 1

As you can see, it’s a self-describing set of counters and string values. The Prometheus time series database built into PMM connects to the web server and stores the results on disk. There are multiple exporters available for PostgreSQL; postgres_exporter is listed as a third-party exporter on the official Prometheus website.

You can compile the exporter yourself, or run it inside a Docker container. This and many other exporters are written in Go and compiled as a static binary, so you can copy the executable from a host with the same CPU architecture. For production setups, you will probably run the exporter directly on the database host and start the service with systemd.

In order to check network configuration issues, log in to the PMM server and use the curl command from above. Do not forget to replace 172.17.0.4:9187 with the appropriate host:port (use the same IP address or DNS name as in the pmm-admin add command).

You configure postgres_exporter with a single environment variable:

DATA_SOURCE_NAME=postgresql://postgres_exporter:password@pg:5432/postgres_exporter?sslmode=disable

Make sure that you provide the correct credentials, including the database name.
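
For a production database host, a minimal systemd sketch might look like the following. The unit name, binary path and service user are assumptions; only the DATA_SOURCE_NAME value comes from the setup above.

cat > /etc/systemd/system/postgres_exporter.service <<'EOF'
[Unit]
Description=Prometheus exporter for PostgreSQL
After=network.target

[Service]
# Credentials must match the postgres_exporter user created earlier.
Environment=DATA_SOURCE_NAME=postgresql://postgres_exporter:password@127.0.0.1:5432/postgres_exporter?sslmode=disable
ExecStart=/usr/local/bin/postgres_exporter
User=nobody
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable --now postgres_exporter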

Run external exporter directly on database server

In order to simplify a production setup, you can run the exporter directly on the same server that runs PostgreSQL.
This method allows you to use the pmm-admin add external:service command recently added to PMM.

# Copy exporter binary from docker container to the local directory to skip build from sources
docker cp pmmpostgres_pgexporter_1:/postgres_exporter ./
# copy exporter binary to database host, use scp instead for existing database server.
docker cp postgres_exporter pmmpostgres_pg_1:/root/
# login to database server shell
docker exec -it pmmpostgres_pg_1 bash
# start exporter
DATA_SOURCE_NAME='postgresql://postgres_exporter:password@127.0.0.1:5432/postgres_exporter?sslmode=disable' ./postgres_exporter

Grafana Setup

In the demo, I’ve used the Postgres_exporter dashboard. Use the same site and look for other PostgreSQL dashboards if you need more. The exporter provides many parameters, and not all of them are visualized in this dashboard.

For huge installations, you may find that filtering servers by “instance name” is not convenient. Write your own JSON for the dashboard, or try the one from the demo repository. It’s the same as dashboard 3742, but it uses the hostname for filtering and the Prometheus job name in the legends.

All entries of instance=~"$instance" get replaced with instance=~"$host:.*".

The modification allows you to switch between PostgreSQL servers with host instead of “instance”, and see CPU and disk details for the current database server instead of the previously selected host.
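
As a hedged example, that substitution can be applied to a downloaded copy of the dashboard JSON with sed. The file name is hypothetical, and the extra backslashes are needed because PromQL expressions are stored as escaped strings inside the JSON:

# Turns instance=~\"$instance\" into instance=~\"$host:.*\" throughout the file:
sed -i 's/instance=~\\"\$instance\\"/instance=~\\"$host:.*\\"/g' postgres_exporter_dashboard.json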

Notice

This blog post on how to collect PostgreSQL metrics with Percona Monitoring and Management does not describe an official integration of PostgreSQL and PMM; I’ve tried to describe a complex external exporter setup. Instead of PostgreSQL, you can use any other service and exporter with a similar setup, or even create your own exporter and instrument your application. It’s great to be able to see correlations between application activities and other system components like databases, web servers, etc.

Jan
31
2018
--

Percona Monitoring and Management 1.7.0 (PMM) Is Now Available

Percona announces the release of Percona Monitoring and Management 1.7.0. PMM is a free and open-source platform for managing and monitoring MySQL and MongoDB performance. You can run PMM in your own environment for maximum security and reliability. It provides thorough time-based analysis for MySQL and MongoDB servers to ensure that your data works as efficiently as possible.

This release features improved support for external services, which enables a PMM Server to store and display metrics for any available Prometheus exporter. For example, you could deploy the postgres_exporter and use PMM’s external services feature to store PostgreSQL metrics in PMM. Immediately, you’ll see these new metrics in the Advanced Data Exploration dashboard. Then you could leverage many of the pre-developed PostgreSQL dashboards available on Grafana.com, and with a minimal amount of edits have a working PostgreSQL dashboard in PMM! Watch for an upcoming blog post to demonstrate a walk-through of this unlocked functionality.

New Percona Monitoring and Management 1.7.0 Features

  • PMM-1949: New dashboard: MySQL Amazon Aurora Metrics.

Improvements

  • PMM-1712: Improve external exporters to let you easily add data monitoring from an arbitrary Prometheus exporter you have running on your host.
  • PMM-1510: Rename swap in and swap out labels to be more specific and help clearly see the direction of data flow for Swap In and Swap Out. The new labels are Swap In (Reads) and Swap Out (Writes) accordingly.
  • PMM-1966: Remove Grafana from the list of exporters on the dashboard to eliminate confusion.
  • PMM-1974: Add the mongodb_up metric to the Exporter Status dashboard. The new graph is added to maintain consistency of information about exporters, and is based on the new metrics implemented in PMM-1586.

Bug fixes

  • PMM-1967: Inconsistent formulas in Prometheus dashboards.
  • PMM-1986: Signing out with HTTP auth enabled leaves the browser signed in.
Jan
22
2018
--

Webinar Wednesday, January 24, 2018: Differences between MariaDB and MySQL

Join Percona’s Chief Evangelist, Colin Charles, as he presents Differences Between MariaDB and MySQL on Wednesday, January 24, 2018, at 7:00 am PST / 10:00 am EST (UTC-8).

Tags: MariaDB, MySQL, Percona Server for MySQL, DBA, SysAdmin, DevOps
Experience Level: Novice

MariaDB and MySQL. Are they syntactically similar? Where do these two query languages differ? Why would I use one over the other?

MariaDB is on the path of gradually diverging from MySQL. One obvious example is the internal data dictionary currently under development for MySQL 8.0. This is a major change to the way metadata is stored and used within the server. MariaDB doesn’t have an equivalent feature. Implementing this feature could mark the end of datafile-level compatibility between MySQL and MariaDB.

There are also non-technical differences between MySQL and MariaDB, including:

  • Licensing: MySQL offers their code as open-source under the GPL, and provides the option of non-GPL commercial distribution in the form of MySQL Enterprise. MariaDB can only use the GPL because they derive their work from the MySQL source code under the terms of that license.
  • Support services: Oracle provides technical support, training, certification and consulting for MySQL, while MariaDB has their own support services. Some people will prefer working with smaller companies, as traditionally it affords them more leverage as a customer.
  • Community contributions: MariaDB touts the fact that they accept more community contributions than Oracle. Part of the reason for this disparity is that developers like to contribute features, bug fixes and other code without a lot of paperwork overhead (and they complain about the Oracle Contributor Agreement). However, MariaDB has its own MariaDB Contributor Agreement — which more or less serves the same purpose.

Colin will take a look at some of the differences between MariaDB and MySQL and help answer some of the common questions our Database Performance Experts get about the two databases.

Register for the webinar now.

Colin Charles, Chief Evangelist

Colin Charles is the Chief Evangelist at Percona. He was previously on the founding team of MariaDB Server in 2009, has worked with MySQL since 2005, and has been a MySQL user since 2000. Before joining MySQL, Colin worked actively on the Fedora and OpenOffice.org projects. He’s well-known within many open source communities and speaks on the conference circuit.

Jan
19
2018
--

Percona Monitoring and Management (PMM) 1.6.0 Is Now Available

Percona announces the release of Percona Monitoring and Management (PMM) 1.6.0. In this release, Percona Monitoring and Management Grafana metrics are available in the Advanced Data Exploration dashboard. We’ve improved the integration with MyRocks, and its data is now collected from SHOW GLOBAL STATUS.

The MongoDB Exporter now features two new metrics: mongodb_up, to inform whether the MongoDB Server is running, and mongodb_scrape_errors_total, reporting the total number of errors when scraping MongoDB.

In this release, we’ve greatly improved the performance of the mongodb:metrics monitoring service.

Percona Monitoring and Management (PMM) 1.6.0 also includes version 4.6.3 of Grafana, which fixes bugs in the alert list and the alerting rules. More information is available in Grafana’s change log.

New Features

  • PMM-1773: PMM Grafana specific metrics have been added to the Advanced Data Exploration dashboard.

Improvements

  • PMM-1485: Updated MyRocks integration: MyRocks data is now collected entirely from SHOW GLOBAL STATUS, and we have eliminated SHOW ENGINE ROCKSDB STATUS as a data source in mysqld_exporter.
  • PMM-1895: Update Grafana to version 4.6.3:
    • Alert list: Now shows alert state changes even after adding manual annotations on dashboard #9951
    • Alerting: Fixes bug where rules evaluated as firing when all conditions were false and using OR operator. #9318
  • PMM-1586: The mongodb_exporter exposes two new metrics: mongodb_up, informing if the MongoDB Server is running, and mongodb_scrape_errors_total, informing the total number of times an error occurred when scraping MongoDB.
  • PMM-1764: Various small mongodb_exporter improvements.
  • PMM-1942: Improved the consistency of using labels in all Prometheus related dashboards.
  • PMM-1936: Updated the Prometheus dashboard in Metrics Monitor
  • PMM-1937: Added the CPU Utilization Details (Cores) dashboard to Metrics Monitor.

Bug fixes

  • PMM-1549: Broken default auth db for mongodb:queries
  • PMM-1631: In some cases, percentage values were displayed incorrectly for MongoDB hosts.
  • PMM-1640: RDS exporter: simplify configuration
  • PMM-1760: After the mongodb:metrics monitoring service was added, the usage of CPU considerably increased in QAN versions 1.4.1 through 1.5.3.

    1.5.0 – CPU usage 95%
    1.5.3 – CPU usage 85%
    1.6.0 – CPU usage 1%

  • PMM-1815: QAN could show data for a MySQL host when a MongoDB host was selected.
  • PMM-1888: In QAN, query metrics were not loaded when the QAN page was refreshed.
  • PMM-1898: In QAN, the Per Query Stats graph displayed incorrect values for MongoDB hosts.
  • PMM-1796: In Metrics Monitor, the Top Process States Hourly graph from the MySQL Overview dashboard showed incorrect data.
  • PMM-1777: In QAN, the Load column could display incorrect data.
  • PMM-1744: The error “Please provide AWS access credentials” appeared although the provided credentials could be processed successfully.
  • PMM-1676: In preparation for migration to Prometheus 2.0 we have updated the System Overview dashboard for compatibility.
  • PMM-1920: Some standard MySQL metrics were missing from the mysqld_exporter Prometheus exporter.
  • PMM-1932: The Response Length metric was not displayed for MongoDB hosts in QAN.
Jan
17
2018
--

Troubleshooting Percona Monitoring and Management (PMM) Metrics

In this blog post, I’ll look at some helpful tips on troubleshooting Percona Monitoring and Management metrics.

With any luck, Percona Monitoring and Management (PMM) works for you out of the box. Sometimes, however, things go awry and you see empty or broken graphs instead of dashboards full of insights.

Before we go through troubleshooting steps, let’s talk about how data makes it to the Grafana dashboards in the first place. The PMM Architecture documentation page helps explain it:

If we focus just on the “Metrics” path, we see the following requirements:

  • The appropriate “exporters” (Part of PMM Client) are running on the hosts you’re monitoring
  • The database is configured to expose all the metrics you’re looking for
  • The hosts are correctly configured in the repository on PMM Server side (stored in Consul)
  • Prometheus on the PMM Server side can scrape them successfully – meaning it can reach them, does not encounter any timeouts, and has enough resources to ingest all the provided data
  • The exporters can retrieve metrics that they requested (i.e., there are no permissions problems)
  • Grafana can retrieve the metrics stored in Prometheus Server and display them

Now that we understand the basic requirements let’s look at troubleshooting the tool.

PMM Client

First, you need to check if the services are actually configured properly and running:

root@rocky:/mnt/data# pmm-admin list
pmm-admin 1.5.2
PMM Server      | 10.11.13.140
Client Name     | rocky
Client Address  | 10.11.13.141
Service Manager | linux-systemd
-------------- ------ ----------- -------- ------------------------------------------- ------------------------------------------
SERVICE TYPE   NAME   LOCAL PORT  RUNNING  DATA SOURCE                                 OPTIONS
-------------- ------ ----------- -------- ------------------------------------------- ------------------------------------------
mysql:queries  rocky  -           YES      root:***@unix(/var/run/mysqld/mysqld.sock)  query_source=slowlog, query_examples=true
linux:metrics  rocky  42000       YES      -
mysql:metrics  rocky  42002       YES      root:***@unix(/var/run/mysqld/mysqld.sock)

Second, you can also instruct the PMM client to perform basic network checks. These can spot connectivity problems, time drift and other issues:

root@rocky:/mnt/data# pmm-admin check-network
PMM Network Status
Server Address | 10.11.13.140
Client Address | 10.11.13.141
* System Time
NTP Server (0.pool.ntp.org)         | 2018-01-06 09:10:33 -0500 EST
PMM Server                          | 2018-01-06 14:10:33 +0000 GMT
PMM Client                          | 2018-01-06 09:10:33 -0500 EST
PMM Server Time Drift               | OK
PMM Client Time Drift               | OK
PMM Client to PMM Server Time Drift | OK
* Connection: Client --> Server
-------------------- -------
SERVER SERVICE       STATUS
-------------------- -------
Consul API           OK
Prometheus API       OK
Query Analytics API  OK
Connection duration | 355.085µs
Request duration    | 938.121µs
Full round trip     | 1.293206ms
* Connection: Client <-- Server
-------------- ------ ------------------- ------- ---------- ---------
SERVICE TYPE   NAME   REMOTE ENDPOINT     STATUS  HTTPS/TLS  PASSWORD
-------------- ------ ------------------- ------- ---------- ---------
linux:metrics  rocky  10.11.13.141:42000  OK      YES        -
mysql:metrics  rocky  10.11.13.141:42002  OK      YES        -

If everything is working, next we can check if exporters are providing the expected data directly.

Checking Prometheus Exporters

Looking at the output from pmm-admin check-network, we can see the “REMOTE ENDPOINT”. This shows the exporter address, which you can use to access it directly in your browser:

You can see MySQL Exporter has different sets of metrics for high, medium and low resolution, and you can click on them to see the provided metrics:

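You can also fetch the same pages from the command line. The resolution-specific path and the pmm user name are assumptions based on PMM 1.x defaults; the password is whatever is configured in the exporter's -web.auth-file (/usr/local/percona/pmm-client/pmm.yml by default):

# -k accepts the exporter's self-signed certificate; replace <password> with the
# HTTP auth password from the pmm-client auth file.
curl -sk -u pmm:<password> https://10.11.13.141:42002/metrics-hr | head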

There are a few possible problems you may encounter at this stage:

  • You do not see the metrics you expect to see. This could be a configuration issue on the database side (docs for MySQL and MongoDB), permissions errors, or the exporter not being correctly configured to expose the needed metrics.
  • Page takes too long to load. This could mean the data capture is too expensive for your configuration. For example, if you have a million tables, you probably can’t afford to capture per-table data.

mysql_exporter_collector_duration_seconds is a great metric that allows you to see which collectors are enabled for different resolutions, and how much time it takes for a given collector to execute. This way you can find and potentially disable collectors that are too expensive for your environment.
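
For example, you can sort that metric on the PMM server's Prometheus (exposed under the /prometheus path, as shown later) to find the slowest collectors; the server address is a placeholder:

curl -sG 'http://pmm-server/prometheus/api/v1/query' \
  --data-urlencode 'query=topk(10, mysql_exporter_collector_duration_seconds)'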

Let’s look at some more advanced ways to troubleshoot exporters.  

Looking at ProcessList

root@rocky:/mnt/data# ps aux | grep mysqld_exporter
root      1697  0.0  0.0   4508   848 ?        Ss    2017   0:00 /bin/sh -c
/usr/local/percona/pmm-client/mysqld_exporter -collect.auto_increment.columns=true
-collect.binlog_size=true -collect.global_status=true -collect.global_variables=true
-collect.info_schema.innodb_metrics=true -collect.info_schema.processlist=true
-collect.info_schema.query_response_time=true -collect.info_schema.tables=true
-collect.info_schema.tablestats=true -collect.info_schema.userstats=true
-collect.perf_schema.eventswaits=true -collect.perf_schema.file_events=true
-collect.perf_schema.indexiowaits=true -collect.perf_schema.tableiowaits=true
-collect.perf_schema.tablelocks=true -collect.slave_status=true
-web.listen-address=10.11.13.141:42002 -web.auth-file=/usr/local/percona/pmm-client/pmm.yml
-web.ssl-cert-file=/usr/local/percona/pmm-client/server.crt
-web.ssl-key-file=/usr/local/percona/pmm-client/server.key >>
/var/log/pmm-mysql-metrics-42002.log 2>&1

This shows us that the exporter is running, as well as specific command line options that were used to start it (which collectors were enabled, for example).

Checking Out the Log File

root@rocky:/mnt/data# tail /var/log/pmm-mysql-metrics-42002.log
time="2018-01-05T18:19:10-05:00" level=error msg="Error pinging mysqld: dial unix
/var/run/mysqld/mysqld.sock: connect: no such file or directory" source="mysqld_exporter.go:442"
time="2018-01-05T18:19:11-05:00" level=error msg="Error pinging mysqld: dial unix
/var/run/mysqld/mysqld.sock: connect: no such file or directory" source="mysqld_exporter.go:442"
time="2018-01-05T18:19:12-05:00" level=error msg="Error pinging mysqld: dial unix
/var/run/mysqld/mysqld.sock: connect: no such file or directory" source="mysqld_exporter.go:492"
time="2018-01-05T18:19:12-05:00" level=error msg="Error pinging mysqld: dial unix
/var/run/mysqld/mysqld.sock: connect: no such file or directory" source="mysqld_exporter.go:442"
time="2018-01-05T18:19:12-05:00" level=error msg="Error pinging mysqld: dial unix
/var/run/mysqld/mysqld.sock: connect: no such file or directory" source="mysqld_exporter.go:616"
time="2018-01-05T18:19:13-05:00" level=error msg="Error pinging mysqld: dial unix
/var/run/mysqld/mysqld.sock: connect: no such file or directory" source="mysqld_exporter.go:442"
time="2018-01-05T18:19:14-05:00" level=error msg="Error pinging mysqld: dial unix
/var/run/mysqld/mysqld.sock: connect: no such file or directory" source="mysqld_exporter.go:442"
time="2018-01-05T18:19:15-05:00" level=error msg="Error pinging mysqld: dial unix
/var/run/mysqld/mysqld.sock: connect: no such file or directory" source="mysqld_exporter.go:442"
time="2018-01-05T18:19:16-05:00" level=error msg="Error pinging mysqld: dial unix
/var/run/mysqld/mysqld.sock: connect: no such file or directory" source="mysqld_exporter.go:442"
2018/01/06 09:10:33 http: TLS handshake error from 10.11.13.141:56154: tls: first record does not look like a TLS handshake

If you have problems such as authentication or permission errors, you will see them in the log file. In the example above, we can see the exporter reporting many connection errors (the MySQL Server was down).

Prometheus Server

Next, we can take a look at the Prometheus Server. It is exposed in PMM Server at the /prometheus path. We can go to Status->Targets to see which targets are configured and if they are working correctly:

In this example, some hosts are scraped successfully while others are not. As you can see I have some hosts that are down, so scraping fails with “no route to host”. You might also see problems caused by firewall configurations and other reasons.

The next area to check, especially if you have gaps in your graph, is whether your Prometheus server has enough resources to ingest all the data reported in your environment. Percona Monitoring and Management ships with the Prometheus dashboard to help answer this question (see demo).

There is a lot of information in this dashboard, but one of the most important areas you should check is if there is enough CPU available for Prometheus:

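A quick way to eyeball this directly is to query the CPU usage of the Prometheus process itself; the job label value is an assumption about how your Prometheus scrapes itself, so adjust it as needed:

curl -sG 'http://pmm-server/prometheus/api/v1/query' \
  --data-urlencode 'query=rate(process_cpu_seconds_total{job="prometheus"}[5m])'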

The most typical problem to have with Prometheus is getting into “Rushed Mode” and dropping some of the metrics data:

Not using enough memory to buffer metrics is another issue, which is shown as “Configured Target Storage Heap Size” on the graph:

Values of around 40% of total memory size often make sense. The PMM FAQ details how to tune this setting.

If the amount of memory is already configured correctly, you can explore upgrading to a more powerful instance size or reducing the number of metrics Prometheus ingests. This can be done either by adjusting the Metrics Resolution (as explained in the FAQ) or by disabling some of the collectors (see the manual).
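
As a hedged example of the first option, the metrics resolution can be passed to the PMM Server container when it is (re)created; the container and data-container names below are just the commonly documented defaults:

docker run -d -p 80:80 \
  --volumes-from pmm-data \
  --name pmm-server \
  --restart always \
  -e METRICS_RESOLUTION=5s \
  percona/pmm-server:1.5.2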

You might wonder which collectors generate the most data. This information is available on the same Prometheus dashboard:

While these aren’t exact values, they correlate very well with the load the collectors generate. In this case, for example, we can see that the Performance Schema is responsible for a large amount of time series data. As such, disabling its collectors can reduce the Prometheus load substantially.

Hopefully, these troubleshooting steps were helpful to you in diagnosing PMM’s metrics capture. In a later blog post, I will write about how to diagnose problems with Query Analytics (Demo).

Jan
16
2018
--

Webinar January 18, 2018: MySQL Troubleshooting and Performance Optimization with Percona Monitoring and Management (PMM) Part 2

Join Percona’s Product Manager Michael Coburn as he presents MySQL Troubleshooting and Performance Optimization with Percona Monitoring and Management (PMM) Part 2 on Thursday, January 18, 2018, at 11:00 am PST / 2:00 pm EST (UTC-8).

Tags: Percona Monitoring and Management, PMM, Monitoring, MySQL, Performance, Optimization, DBA, SysAdmin, DevOps
Experience Level: Expert

Optimizing MySQL performance and troubleshooting MySQL problems are two of the most critical and challenging tasks for MySQL DBAs. The databases powering your applications need to handle heavy traffic loads while remaining responsive and stable so that you can deliver an excellent user experience. Furthermore, DBAs are also expected to find cost-efficient means of solving these issues.

In this webinar — the second part of a two-part series — Michael discusses how you can optimize and troubleshoot MySQL performance, and demonstrates how Percona Monitoring and Management (PMM) enables you to solve these challenges using free and open source software. We will look at specific, common MySQL problems and review the essential components in PMM that allow you to diagnose and resolve them.

By the end of this webinar, you will have a better understanding of how you can troubleshoot MySQL problems in your database.

Register for the webinar now.

Michael Coburn, Product Manager

Michael joined Percona as a Consultant in 2012 and progressed through various roles including Managing Consultant, Principal Architect, Technical Account Manager, and Technical Support Engineer. He now leads product management for Percona Monitoring and Management.
