Aug 15, 2019

Alibaba cloud biz is on a run rate over $4B

Alibaba announced its earnings today, and the Chinese e-commerce giant got a nice lift from its cloud business, which grew 66% to more than $1.1 billion, or a run rate surpassing $4 billion.

It’s not exactly on par with Amazon, which reported cloud revenue of $8.381 billion last quarter, more than double Alibaba’s yearly run rate, but it’s been a steady rise for the company, which really began taking the cloud seriously as a side business in 2015.

At that time, Alibaba Cloud’s president Simon Hu boasted to Reuters that his company would overtake Amazon in four years. It is not even close to doing that, but it has done well to get to more than a billion a quarter in just four years.

In fact, in its most recent data for the Asia-Pacific region, Synergy Research, a firm that closely tracks the public cloud market, found that Amazon was still number one overall in the region. Alibaba was first in China, but fourth in the region outside of China, with the market’s Big 3 — Amazon, Microsoft and Google — coming in ahead of it. These numbers were based on Q1 data before today’s numbers were known, but they provide a sense of where the market is in the region.


Synergy’s John Dinsdale says the company’s growth has been impressive, outpacing the market growth rate overall. “Alibaba’s share of the worldwide cloud infrastructure services market was 5% in Q2 — up by almost a percentage point from Q2 of last year, which is a big deal in terms of absolute growth, especially in a market that is growing so rapidly,” Dinsdale told TechCrunch.

He added, “The great majority of its revenue does indeed come from China (and Hong Kong), but it is also making inroads in a range of other APAC country markets — Indonesia, Malaysia, Singapore, India, Australia, Japan and South Korea. While numbers are relatively small, it has also got a foothold in EMEA and some operations in the U.S.”

The company was busy last quarter adding more than 300 new products and features in the period ending June 30th (and reported today). That included changes and updates to core cloud offerings, security, data intelligence and AI applications, according to the company.

While the cloud business still isn’t a serious threat to the industry’s Big Three, especially outside its core Asia-Pacific market, it’s still growing steadily and accounted for almost 7% of Alibaba’s total of $16.74 billion in revenue for the quarter — and that’s not bad at all.

Aug 1, 2019

Why AWS gains big storage efficiencies with E8 acquisition

AWS is already the clear market leader in the cloud infrastructure market, but it has never been an organization that rests on its past successes, whether that means a flurry of new product announcements and enhancements every year or strategic acquisitions.

When it bought Israeli storage startup E8 yesterday, it might have felt like a minor move on its face, but AWS was looking, as it always does, to find an edge and reduce the costs of operations in its data centers. It was also very likely looking forward to the next phase of cloud computing. Reports have pegged the deal at between $50 and $60 million.

What E8 gives AWS for relatively cheap money is highly advanced storage capabilities, says Steve McDowell, senior storage analyst at Moor Insights & Strategy. “E8 built a system that delivers extremely high-performance/low-latency flash (and Optane) in a shared-storage environment,” McDowell told TechCrunch.

Jun 25, 2019

Adaptive Hash Index on AWS Aurora


Recently I had a case where queries against Aurora Reader were 2-3 times slower than on the Writer node. In this blog post, we are going to discuss why.

I am not going to go into the details of how Aurora works, as there are other blog posts discussing that. Here I am only going to focus on one part.

The Problem

My customer reported a huge performance difference between the Reader and the Writer node just from running SELECT queries. I was a bit surprised, as the SELECT queries should run locally on the Reader node, the dataset could easily fit in memory, there were no reads at the disk level, and everything looked fine.

I was trying to rule out every option when one of my colleagues suggested I have a look at the InnoDB adaptive hash index (innodb_adaptive_hash_index). He was right – it was disabled on the Reader nodes, and I could see that from the MySQL console.
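If you want to verify this yourself, a minimal check could look like the following (the endpoints and the user are placeholders – any user that can read global variables will do):

# Check the adaptive hash index setting on the Writer endpoint
mysql -h mycluster.cluster-xxxxxx.us-east-1.rds.amazonaws.com -u admin -p \
  -e "SHOW GLOBAL VARIABLES LIKE 'innodb_adaptive_hash_index';"

# Run the same check against the Reader endpoint and compare
mysql -h mycluster.cluster-ro-xxxxxx.us-east-1.rds.amazonaws.com -u admin -p \
  -e "SHOW GLOBAL VARIABLES LIKE 'innodb_adaptive_hash_index';"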

Let’s enable the adaptive hash index

I opened the AWS control panel and checked the parameter groups, but the adaptive hash index was already enabled there. OK, I might have made a mistake, but I double-checked myself many times and it was true: adaptive hash was disabled according to the MySQL console but enabled in the AWS parameter groups. That means the AWS control panel is lying!
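You can see the same discrepancy from the AWS CLI; the parameter group name below is a placeholder, and depending on where the parameter lives you may need describe-db-cluster-parameters instead:

# Ask the parameter group what it claims for the adaptive hash index
aws rds describe-db-parameters \
  --db-parameter-group-name my-aurora-params \
  --query "Parameters[?ParameterName=='innodb_adaptive_hash_index'].[ParameterName,ParameterValue]" \
  --output table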

I restarted the nodes multiple times, created new test clusters, and so on, but I was not able to enable the adaptive hash index on the Reader node. It was enabled on the Writer node, and there it was working.

Is this causing the performance difference?

Because I was able to enable or disable the adaptive hash index on the Writer node, I continued my tests there, and I could confirm that when I disabled it the queries got slower – the same speed as on the Reader node. When I enabled AHI, the queries got faster.

In general, with AHI enabled on the Writer node, the customer’s queries ran about twice as fast.

AHI can help for many workloads but not all of them, and you should test your queries/workload both with and without AHI.
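A rough sketch of such a test on the Writer node is shown below. It assumes a hypothetical table and a user allowed to change the global variable – on managed services like Aurora you would normally flip the setting through the parameter group instead:

# Enable AHI, then run a representative secondary-key query and note the timing
mysql -h <writer-endpoint> -u admin -p -e "SET GLOBAL innodb_adaptive_hash_index = ON;"
mysql -h <writer-endpoint> -u admin -p -vvv \
  -e "SELECT SQL_NO_CACHE COUNT(*) FROM mydb.mytable WHERE secondary_key_col = 'abc';"

# Disable AHI and run exactly the same query again, then compare the execution times
mysql -h <writer-endpoint> -u admin -p -e "SET GLOBAL innodb_adaptive_hash_index = OFF;"
mysql -h <writer-endpoint> -u admin -p -vvv \
  -e "SELECT SQL_NO_CACHE COUNT(*) FROM mydb.mytable WHERE secondary_key_col = 'abc';"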

Why is it disabled on the Reader?

To be honest, I am not an AWS engineer and I do not know Aurora’s code, so I am only guessing here and I might be wrong.

Why can I change it in the parameter group?

We can modify the adaptive hash index in the parameter groups, but it has no impact on the Reader nodes at all. Many customers could think they have AHI enabled when actually they don’t. I think this is bad practice: if we cannot enable it on the Reader node, we should not be able to change it in the control panel.

Is this causing any performance problems for me?

If you are using the Reader node for SELECT queries based on secondary keys, you are probably affected by this, but whether it actually hurts you depends on your workload. In my customer’s case, queries were about two times slower without AHI.

But I want fast queries!

If your queries benefit heavily from AHI, you should run them on the Writer node or even on an asynchronous replica, or have a look at AWS RDS (which does not have this limitation), or use EC2 instances. You could also check the Query Cache in Aurora.

Query Cache

In Aurora, the Query Cache was reworked so that it does not have the limitations of the Community Edition or Percona Server. In those versions, cacheable queries take out an “exclusive lock” on MySQL’s query cache, which in the real world means only one query can use the Query Cache at a time and all the other queries have to wait for the mutex. (In MySQL 8.0 the Query Cache was removed entirely.)

But in Aurora they redesigned it and removed this limitation – there is no longer a single global mutex on the Query Cache. I think one of the reasons for this could be that they knew the Adaptive Hash Index would not work.
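If you want to see whether the Query Cache is actually in play on your instance, a quick check (on MySQL 5.6/5.7-compatible versions – these variables are gone in 8.0) could be:

# Show the Query Cache configuration and how much it is being used
mysql -h <cluster-endpoint> -u admin -p \
  -e "SHOW GLOBAL VARIABLES LIKE 'query_cache%'; SHOW GLOBAL STATUS LIKE 'Qcache%';"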

Does AWS know about this?

I created a ticket with AWS engineers to get some feedback on this, and they verified my findings and confirmed that the Adaptive Hash Index cannot be enabled on the Reader nodes. They are looking into why it can still be modified in the control panel.

Conclusion

I would recommend checking the queries on your Reader nodes to make sure they perform well, and comparing their performance with the Writer node. At the moment we cannot enable AHI on Reader nodes, and I am not sure if that will change any time soon, but it can certainly impact performance in some cases.

Jun 4, 2019

How Kubernetes came to rule the world

Open source has become the de facto standard for building the software that underpins the complex infrastructure that runs everything from your favorite mobile apps to your company’s barely usable expense tool. Over the course of the last few years, a lot of new software has been deployed on top of Kubernetes, the tool for managing large server clusters running containers that Google open sourced five years ago.

Today, Kubernetes is the fastest growing open-source project and earlier this month, the bi-annual KubeCon+CloudNativeCon conference attracted almost 8,000 developers to sunny Barcelona, Spain, making the event the largest open-source conference in Europe yet.

To talk about how Kubernetes came to be, I sat down with Craig McLuckie, one of the co-founders of Kubernetes at Google (who then went on to his own startup, Heptio, which he sold to VMware); Tim Hockin, another Googler who was an early member on the project and was also on Google’s Borg team; and Gabe Monroy, who co-founded Deis, one of the first successful Kubernetes startups, and then sold it to Microsoft, where he is now the lead PM for Azure Container Compute (and often the public face of Microsoft’s efforts in this area).

Google’s cloud and the rise of containers

To set the stage a bit, it’s worth remembering where Google Cloud and container management were five years ago.

May 30, 2019

Percona Monitoring and Management (PMM) 2 Beta Is Now Available


We are pleased to announce the release of PMM 2 Beta!  PMM (Percona Monitoring and Management) is a free and open-source platform for managing and monitoring MySQL, MongoDB, and PostgreSQL performance.

  • Query Analytics:
    • MySQL and MongoDB – Slow log, PERFORMANCE_SCHEMA, and Profiler data sources
    • Support for large environments – default view all queries from all instances
    • Filtering – display only the results matching filters such as the schema name or the server instance
    • Sorting and more columns – now sort by any column.
    • Modify Columns – Add one or more columns for any field exposed by the data source
    • Sparklines –  restyled sparkline targeted at making data representation more accurate
  • Labels – Prometheus now supports auto-discovered and custom labels
  • Inventory Overview Dashboard – Displays the agents, services, and nodes which are registered with PMM Server
  • Environment Overview Dashboard – See issues at a glance across multiple servers
  • API – View versions and list hosts using the API
  • MySQL, MongoDB, and PostgreSQL Metrics – Visualize database metrics over time
  • pmm-agent – Provides secure remote management of the exporter processes and data collectors on the client

PMM 2 Beta is still a work in progress – you may encounter some bugs and missing features. We are aware of a number of issues, but please report any and all that you find to Percona’s JIRA.

This release is not recommended for Production environments.

PMM 2 is designed to be used as a new installation – please don’t try to upgrade your existing PMM 1 environment.

Query Analytics Dashboard

Query Analytics Dashboard now defaults to display all queries on each of the systems that are configured for MySQL PERFORMANCE_SCHEMA, Slow Log, and MongoDB Profiler, and includes comprehensive filtering capabilities.

Query Analytics Overview

You’ll recognize some of the common elements in PMM 2 Query Analytics such as the Load, Count, and Latency columns. However, there are new elements such as the filter box and more arrows on the columns:

Query Detail

Query Analytics continues to deliver detailed information regarding individual query performance

Filter and Search By

There is a filtering panel on the left, or you can use the search bar to set filters using key:value syntax. For example, if I’m interested in just the queries related to the mysql-sl2 server, I could type d_server:mysql-sl2:

Sort by any column

This is a much-requested feature from PMM Query Analytics and we’re glad to announce that you can now sort by any column! Just click the small arrow to the right of the column name and:

Sparklines

As you may have already noticed, we have changed the sparkline representation. New sparklines are not point-based lines but are interval-based, and look like a staircase line with a flat value for each of the displayed periods:

We also position a single sparkline for only the left-most column and render numeric values for all remaining columns.

Add extra columns

Now you can add a column for each additional field which is exposed by the data source. For example, you can add Rows Examined by clicking the + sign and typing or selecting from the available list of fields:

MySQL Query Analytics Slow Log source

We’ve increased our MySQL support to include both PERFORMANCE_SCHEMA and Slow log – and if you’re using Percona Server with the Extended Slow Log format, you’ll be able to gain deep insight into the performance of individual queries, for example, InnoDB behavior.  Note the difference between the detail available from PERFORMANCE_SCHEMA vs Slow Log:

PERFORMANCE_SCHEMA:

Slow Log:
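To use the Slow Log source, the slow log of course has to be enabled on the database server first. A minimal sketch for a self-managed Percona Server instance is below – the values are only examples, and on RDS/Aurora you would set the equivalent parameters through the parameter group instead:

# Enable the slow query log and, on Percona Server, the extended per-query metrics
mysql -u root -p -e "
  SET GLOBAL slow_query_log      = ON;
  SET GLOBAL long_query_time     = 0;       -- log every query; use with care in production
  SET GLOBAL log_slow_rate_limit = 100;     -- sample 1 in 100 sessions to limit overhead
  SET GLOBAL log_slow_verbosity  = 'full';  -- Percona Server only: include InnoDB stats per query
"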

MongoDB Metrics

Support for MongoDB Metrics included in this release means you can add a local or remote MongoDB instance to PMM 2 and take advantage of the following view of MongoDB performance:

PostgreSQL Metrics

In this release, we’re also including support for PostgreSQL Metrics. We’re launching PMM 2 Beta with just the PostgreSQL Overview dashboard, but we have others under development, so watch for new Dashboards to appear in subsequent releases!

Environment Overview Dashboard

This new dashboard provides a bird’s-eye view, showing a large number of hosts at once. It allows you to easily figure out the hosts which have issues, and move onto other dashboards for a deeper investigation.

The charts presented show the top five hosts by different parameters:

The eye-catching colored hexagons with statistical data show the current values of parameters and allow you to drill-down to a dashboard which has further details on a specific host.

Labels

An important concept we’re introducing in PMM 2 is that when a label is assigned it is persisted in both the Metrics (Prometheus) and Query Analytics (Clickhouse) databases. So, when you browse a target in Prometheus you’ll notice many more labels appear – particularly the auto-discovered (replication_set, environment, node_name, etc.) and (soon to be released) custom labels via custom_label.
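As an illustration, once those labels are attached you could filter a metric by them straight against the Prometheus API. The endpoint path, credentials, and metric name below are assumptions – adjust them to how your PMM Server is set up:

# Query a mysqld_exporter metric, filtered by the auto-discovered labels
curl -k -s -u admin:admin \
  "https://<pmm-server>/prometheus/api/v1/query" \
  --data-urlencode 'query=mysql_global_status_threads_connected{environment="dev", node_name="mysql1"}'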

Inventory Dashboard

We’ve introduced a new dashboard with several tabs so that users are better able to understand which nodes, agents, and services are registered against PMM Server. We have an established hierarchy with Node at the top, then Service and Agents assigned to a Node.

  • Nodes – Where the service and agents will run. Assigned a node_id, associated with a machine_id (from /etc/machine-id)

    • Examples: bare metal, virtualized, container
  • Services – Individual service names and where they run, against which agents will be assigned. Each instance of a service gets a service_id value that is related to a node_id
    • Examples: MySQL, Amazon Aurora MySQL
    • You can also use this feature to support multiple mysqld instances on a single node, for example: mysql1-3306, mysql1-3307
  • Agents – Each binary (exporter, agent) running on a client will get an agent_id value
    • pmm-agent is the top of the tree, assigned to a node_id
    • node_exporter is assigned to pmm-agent agent_id
    • mysqld_exporter and QAN MySQL Perfschema are assigned to a service_id
    • Examples: pmm-agent, node_exporter, mysqld_exporter, QAN MySQL Perfschema

You can now see which services, agents, and nodes are registered with PMM Server.
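If you prefer the command line, the same inventory can be listed from the client – assuming the inventory subcommands are available in your pmm-admin build:

# List the nodes, services, and agents registered with PMM Server
pmm-admin inventory list nodes
pmm-admin inventory list services
pmm-admin inventory list agents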

Nodes

In this example I have PMM Server (docker) running on the same virtualized compute instance as my Percona Server 5.7 instance, so PMM treats this as two different nodes.

Services

Agents

For a monitored Percona Server instance, you’ll see an agent for each of these:

  1. pmm-agent
  2. node_exporter
  3. mysqld_exporter
  4. QAN Perfschema

API

We are exposing an API for PMM Server! You can view versions, list hosts, and more…

The API is not guaranteed to work until GA release – so be prepared for some errors during Beta release.

Browse the API using Swagger at /swagger
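For example, assuming default credentials and the endpoint paths shown in the Swagger UI (these may change before GA), you could fetch the server version and list the inventory nodes with curl:

# Fetch the PMM Server version over the API
curl -k -s -u admin:admin https://<pmm-server>/v1/version

# List the nodes registered in the inventory
curl -k -s -u admin:admin -X POST -H 'Content-Type: application/json' \
  -d '{}' https://<pmm-server>/v1/inventory/Nodes/List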

Installation and configuration

The default PMM Server credentials are:

username: admin
password: admin

Install PMM Server with docker

The easiest way to install PMM Server is to deploy it with Docker. Running the PMM 2 Docker container with PMM Server can be done by the following commands (note the version tag of 2.0.0-beta1):

docker create -v /srv --name pmm-data-2-0-0-beta1 perconalab/pmm-server:2.0.0-beta1 /bin/true
docker run -d -p 80:80 -p 443:443 --volumes-from pmm-data-2-0-0-beta1 --name pmm-server-2.0.0-beta1 --restart always perconalab/pmm-server:2.0.0-beta1

Install PMM Client

Since PMM 2 is still not GA, you’ll need to leverage our experimental release of the Percona repository. You’ll need to download and install the official percona-release package from Percona, and use it to enable the Percona experimental component of the original repository. See percona-release official documentation for further details on this new tool.

Specific instructions for a Debian system are as follows:

wget https://repo.percona.com/apt/percona-release_latest.generic_all.deb
sudo dpkg -i percona-release_latest.generic_all.deb

Now enable the experimental repo:

sudo percona-release disable all
sudo percona-release enable original experimental

Install pmm2-client package:

apt-get update
apt-get install pmm2-client

Users who have previously installed pmm2-client alpha version should remove the package and install a new one in order to update to beta1.

Please note that leaving the experimental repository enabled may affect further package installations with bleeding-edge software that may not be suitable for production. You can revert this by disabling the experimental component via the following commands:

sudo percona-release disable original experimental
sudo apt-get update

Configure PMM

Once PMM Client is installed, run the pmm-admin config command with your PMM Server IP address to register your Node:

# pmm-admin config --server-insecure-tls --server-address=<IP Address>:443

You should see the following:

Checking local pmm-agent status...
pmm-agent is running.
Registering pmm-agent on PMM Server...
Registered.
Configuration file /usr/local/percona/pmm-agent.yaml updated.
Reloading pmm-agent configuration...
Configuration reloaded.

Adding MySQL Metrics and Query Analytics

A MySQL server can be added for monitoring in the usual way. Here is a command which adds it using the PERFORMANCE_SCHEMA source:

sudo pmm-admin add mysql --use-perfschema --username=pmm --password=pmm

where username and password are credentials for accessing MySQL.
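If you don’t yet have a dedicated monitoring user, a minimal sketch for creating one is below. The exact privilege list you need can vary by MySQL version and PMM feature set, and the host part should match how the agent connects (socket vs. TCP):

# Create a monitoring user with the password used in the pmm-admin command above
mysql -u root -p -e "
  CREATE USER 'pmm'@'localhost' IDENTIFIED BY 'pmm' WITH MAX_USER_CONNECTIONS 10;
  GRANT SELECT, PROCESS, REPLICATION CLIENT, RELOAD ON *.* TO 'pmm'@'localhost';
  FLUSH PRIVILEGES;
"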

The syntax to add MySQL services (Metrics and Query Analytics) using the Slow Log source is the following:

sudo pmm-admin add mysql --use-slowlog --username=pmm --password=pmm

When the server is added, you can check your MySQL dashboards and Query Analytics in order to view its performance information!

Adding MongoDB Metrics and Query Analytics

You can add MongoDB services (Metrics and Query Analytics) with a similar command:

pmm-admin add mongodb --use-profiler --use-exporter  --username=pmm  --password=pmm

Adding PostgreSQL monitoring service

You can add PostgreSQL service as follows:

pmm-admin add postgresql --username=pmm --password=pmm

You can then check your PostgreSQL Overview dashboard.

About PMM

Percona Monitoring and Management (PMM) is a free and open-source platform for managing and monitoring MySQL®, MongoDB®, and PostgreSQL® performance. You can run PMM in your own environment for maximum security and reliability. It provides thorough time-based analysis for MySQL®, MongoDB®, and PostgreSQL® servers to ensure that your data works as efficiently as possible.

Help us improve our software quality by reporting any Percona Monitoring and Management bugs you encounter using our bug tracking system.

May 3, 2019

Percona Monitoring and Management (PMM) 2.0.0-alpha2 Is Now Available


We are pleased to announce the launch of PMM 2.0.0-alpha2, Percona’s second Alpha release of our long-awaited PMM 2 project! In this release, you’ll find support for MongoDB Metrics and Query Analytics – watch for sharp edges as we expect to find a lot of bugs!  We’ve also expanded our existing support of MySQL from our first Alpha to now include MySQL Slow Log as a data source for Query Analytics, which enhances the Query Detail section to include richer query metadata.

  • MongoDB Metrics – You can now launch PMM 2 against MongoDB and gather metrics and query data!
  • MongoDB Query Analytics – Data source from MongoDB Profiler is here!
  • MySQL Query Analytics
    • Queries source – MySQL Slow Log is here!
    • Sorting and more columns – fixed a lot of bugs around UI

PMM 2 is still a work in progress – expect to see bugs and other missing features! We are aware of a number of issues, but please report any and all that you find to Percona’s JIRA.

This release is not recommended for Production environments. PMM 2 Alpha is designed to be used as a new installation – please don’t try to upgrade your existing PMM 1 environment.

MongoDB Query Analytics

We’re proud to announce support for MongoDB Query Analytics in PMM 2.0.0-alpha2!

Using filters you can drill down on specific servers (and other fields):

MongoDB Metrics

In this release we’re including support for MongoDB Metrics, which means you can add a local or remote MongoDB instance to PMM 2 and take advantage of the following view of MongoDB performance:

MySQL Query Analytics Slow Log source

We’ve rounded out our MySQL support to include Slow log – and if you’re using Percona Server with the Extended Slow Log format, you’ll be able to gain deep insight into the performance of individual queries, for example, InnoDB behavior.  Note the difference between the detail available from PERFORMANCE_SCHEMA vs Slow Log:

PERFORMANCE_SCHEMA:

Slow Log:

Installation and configuration

The default PMM Server credentials are:

username: admin
password: admin

Install PMM Server with docker

The easiest way to install PMM Server is to deploy it with Docker. You can run a PMM 2 Docker container with PMM Server by using the following commands (note the version tag of 2.0.0-alpha2):

docker create -v /srv --name pmm-data-2-0-0-alpha2 perconalab/pmm-server:2.0.0-alpha2 /bin/true
docker run -d -p 80:80 -p 443:443 --volumes-from pmm-data-2-0-0-alpha2 --name pmm-server-2.0.0-alpha2 --restart always perconalab/pmm-server:2.0.0-alpha2

Install PMM Client

Since PMM 2 is still not GA, you’ll need to leverage our experimental release of the Percona repository. You’ll need to download and install the official percona-release package from Percona, and use it to enable the Percona experimental component of the original repository.  See percona-release official documentation for further details on this new tool.

Specific instructions for a Debian system are as follows:

wget https://repo.percona.com/apt/percona-release_latest.generic_all.deb
sudo dpkg -i percona-release_latest.generic_all.deb

Now enable the correct repo:

sudo percona-release disable all
sudo percona-release enable original experimental

Now install the pmm2-client package:

apt-get update
apt-get install pmm2-client

Users who have previously installed pmm2-client alpha1 version should remove the package and install a new one in order to update to alpha2.

Please note that having the experimental component enabled may cause further package installations to pull in versions which are not ready for production. To avoid this, disable the component with the following commands:

sudo percona-release disable original experimental
sudo apt-get update

Configure PMM

Once PMM Client is installed, run the pmm-admin setup command with your PMM Server IP address to register your Node within the Server:

# pmm-agent setup --server-insecure-tls --server-address=<IP Address>:443

We will be moving this functionality back to pmm-admin config in a subsequent Alpha release.

You should see the following:

Checking local pmm-agent status...
pmm-agent is running.
Registering pmm-agent on PMM Server...
Registered.
Configuration file /usr/local/percona/pmm-agent.yaml updated.
Reloading pmm-agent configuration...
Configuration reloaded.

Adding MySQL Metrics and Query Analytics (Slow Log source)

The syntax to add MySQL services (Metrics and Query Analytics) using the new Slow Log source:

sudo pmm-admin add mysql --use-slowlog --username=pmm --password=pmm

where username and password are credentials for accessing MySQL.

Adding MongoDB Metrics and Query Analytics

You can add MongoDB services (Metrics and Query Analytics) with the following command:

pmm-admin add mongodb --use-profiler --use-exporter  --username=pmm  --password=pmm

You can then check your MySQL and MongoDB dashboards and Query Analytics in order to view your server’s performance information!

We hope you enjoy this release, and we welcome your comments on the blog!

About PMM

Percona Monitoring and Management (PMM) is a free and open-source platform for managing and monitoring MySQL®, MongoDB®, and PostgreSQL performance. You can run PMM in your own environment for maximum security and reliability. It provides thorough time-based analysis for MySQL®, MongoDB®, and PostgreSQL® servers to ensure that your data works as efficiently as possible.

Help us improve our software quality by reporting any Percona Monitoring and Management bugs you encounter using our bug tracking system.

Apr 25, 2019

AWS expands cloud infrastructure offerings with new AMD EPYC-powered T3a instances

Amazon is always looking for ways to increase the options it offers developers in AWS, and to that end, today it announced a bunch of new AMD EPYC-powered T3a instances. These were originally announced at the end of last year at re:Invent, AWS’s annual customer conference.

Today’s announcement is about making these chips generally available. They have been designed for a specific type of burstable workload, where you might not always need a sustained amount of compute power.

“These instances deliver burstable, cost-effective performance and are a great fit for workloads that do not need high sustained compute power but experience temporary spikes in usage. You get a generous and assured baseline amount of processing power and the ability to transparently scale up to full core performance when you need more processing power, for as long as necessary,” AWS’s Jeff Barr wrote in a blog post.
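As a rough illustration of working with these instances from the AWS CLI – the AMI, subnet and instance IDs below are placeholders – you can launch a T3a instance and then watch its CPU credit balance to see how much burst headroom it has:

# Launch a burstable t3a.medium instance
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type t3a.medium \
  --subnet-id subnet-0123456789abcdef0 \
  --count 1

# Check the CPU credit balance accumulated by the instance
aws cloudwatch get-metric-statistics \
  --namespace AWS/EC2 --metric-name CPUCreditBalance \
  --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
  --start-time 2019-04-25T00:00:00Z --end-time 2019-04-25T01:00:00Z \
  --period 300 --statistics Average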

These instances are built on the AWS Nitro System, Amazon’s custom networking interface hardware that the company has been working on for the last several years. The primary components of this system include the Nitro Card I/O Acceleration, Nitro Security Chip and the Nitro Hypervisor.

Today’s release comes on top of the announcement last year that the company would be releasing EC2 instances powered by Arm-based AWS Graviton Processors, another option for developers looking for a solution for scale-out workloads.

It also comes on the heels of last month’s announcement that it was releasing EC2 M5 and R5 instances, which use lower-cost AMD chips. These are also built on top of the Nitro System.

The EPYC-powered instances are available starting today in seven sizes, in your choice of spot instances, reserved instances or on-demand, as needed. They are available in US East in Northern Virginia, US West in Oregon, Europe in Ireland, US East in Ohio and Asia-Pacific in Singapore.

Apr 24, 2019

Percona Monitoring and Management (PMM) 2.0.0-alpha1 Is Now Available


We are pleased to announce the launch of PMM 2.0.0-alpha1, Percona’s first Alpha release of our long-awaited PMM 2 project! We focused exclusively on MySQL support in our first Alpha (because we wanted to show progress sooner rather than later), and you’ll find updated MySQL dashboards along with enhancements to Query Analytics. We’ve also added better visibility regarding which services are registered with PMM Server, the client-side addition of a new agent called pmm-agent, and finally PMM Server is now exposing an API!

  • Query Analytics
    • Support for large environments – default view all queries from all instances
    • Filtering – display only the results matching filters – MySQL schema name, MySQL server instance
    • Sorting and more columns – now sort by any visible column. Add a column for any field exposed by the data source, for example add rows_examined, lock_time to your Overview
    • Queries source – MySQL PERFORMANCE SCHEMA (slow log coming in our next alpha around May 1st, 2019)
  • Labels – Prometheus now supports auto-discovered and custom labels
  • Inventory Overview Dashboard – Displays the agents, services, and nodes which are registered with PMM Server
  • API – View versions and list hosts using the API
  • pmm-agent – Provides secure remote management of the exporter processes and data collectors on the client

PMM 2 is still a work in progress – expect to see bugs and other missing features! We are aware of a number of issues, but please report any and all that you find to Percona’s JIRA.

Query Analytics Dashboard

Query Analytics Dashboard now defaults to display all queries on each of the systems that are configured for MySQL PERFORMANCE_SCHEMA, Slow Log, and MongoDB Profiler (this release includes support for MySQL PERFORMANCE_SCHEMA only), and includes comprehensive filtering capabilities.

Query Analytics Overview

You’ll recognize some of the common elements in PMM 2 Query Analytics, such as the Load, Count, and Latency columns; however, there are new elements such as the filter box and more arrows on the columns, which are described further down:

PMM 2.0 has new elements available for reporting

Query Detail

Query Analytics continues to deliver detailed information regarding individual query performance:

PMM provides detailed query analytics for MySQL and MongoDB

Filter and Search By

There is a filtering panel on the left, or you can use the search bar to set filters using key:value syntax. For example, if I’m interested in just the queries that are executed in the MySQL schema db3, I could type d_schema:db3:

filtering panel on Percona Monitoring and Management

Sort by any column

This is a much-requested feature from PMM Query Analytics and we’re glad to announce that you can sort by any column! Just click the small arrow to the right of the column name and:

You can now sort PMM reports by any column

Add extra columns

Now you can add a column for each additional field which is exposed by the data source. For example you can add Rows Examined by clicking the + sign and typing or selecting from the available list of fields:

Add custom columns to your PMM presentations

Labels

An important concept we’re introducing in PMM 2 is that when a label is assigned it is persisted in both the Metrics (Prometheus) and Query Analytics (Clickhouse) databases.  So when you browse a target in Prometheus you’ll notice many more labels appear – particularly the auto-discovered (replication_set, environment, node_name, etc.) and (soon to be released) custom labels via custom_label.

Labels are reused in both QAN and Metrics

Inventory Dashboard

We’ve introduced a new dashboard with several tabs so that users are better able to understand which nodes, agents, and services are registered against PMM Server.  We have an established hierarchy with Node at the top, then Service and Agents assigned to a Node.

  • Nodes – Where the service and agents will run. Assigned a node_id, associated with a machine_id (from /etc/machine-id)
    • Examples: bare metal, virtualized, container
  • Services – Individual service names and where they run, against which agents will be assigned. Each instance of a service gets a service_id value that is related to a node_id
    • Examples: MySQL, Amazon Aurora MySQL
    • You can also use this feature to support multiple mysqld instances on a single node, for example: mysql1-3306, mysql1-3307
  • Agents – Each binary (exporter, agent) running on a client will get an agent_id value
    • pmm-agent is the top of the tree, assigned to a node_id
    • node_exporter is assigned to pmm-agent agent_id
    • mysqld_exporter & QAN MySQL Perfschema are assigned to a service_id
    • Examples: pmm-agent, node_exporter, mysqld_exporter, QAN MySQL Perfschema

You can now see which services, agents, and nodes are registered with PMM Server.

Nodes

In this example I have PMM Server (docker) running on the same virtualized compute instance as my Percona Server 5.7 instance, so PMM treats this as two different nodes.

Server treated as reporting node

Services

This example has two MySQL services configured:

Multiple database services supported by PMM

Agents

For a monitored Percona Server instance, you’ll see an agent for each of:

  1. pmm-agent
  2. node_exporter
  3. mysqld_exporter
  4. QAN Perfschema

Showing monitoring of Percona Server with an agent for each in PMM

 

QAN agent for Percona Server

Query Analytics Filters

Query Analytics now provides you with the opportunity to filter based on labels. We are beginning with labels that are sourced from MySQL Performance Schema, but eventually we will include all fields from the MySQL Slow Log, MongoDB Profiler, and PostgreSQL views. We’ll also be offering the ability to set custom key:value pairs, which you’ll use when setting up a new service or instance with pmm-admin during the add ... routine.

Available Filters

We’re exposing four new filters in this release, and we show where we source them from and what they mean:

Filter name     Source                              Notes
d_client_host   MySQL Slow Log                      MySQL PERFORMANCE_SCHEMA doesn’t include client host, so this field will be empty
d_username      MySQL Slow Log                      MySQL PERFORMANCE_SCHEMA doesn’t include username, so this field will be empty
d_schema        MySQL Slow Log, MySQL Perfschema    MySQL schema name
d_server        MySQL Slow Log, MySQL Perfschema    MySQL server instance

API

We are exposing an API for PMM Server! You can view versions, list hosts, and more!

The API is not guaranteed to work until we get to our GA release – so be prepared for breaking changes during our Alpha and Beta releases.

Browse the API using Swagger at /swagger

 

Installation and configuration

Install PMM Server with docker

The easiest way to install PMM Server is to deploy it with Docker. You can run a PMM 2 Docker container with PMM Server by using the following commands (note the version tag of 2.0.0-alpha1):

docker create -v /srv --name pmm-data-2-0-0-alpha1 perconalab/pmm-server:2.0.0-alpha1 /bin/true
docker run -d -p 80:80 -p 443:443 --volumes-from pmm-data-2-0-0-alpha1 --name pmm-server-2.0.0-alpha1 --restart always perconalab/pmm-server:2.0.0-alpha1

Install PMM Client

Since PMM 2 is still not GA, you’ll need to leverage our experimental release of the Percona repository. You’ll need to download and install the official percona-release package from Percona, and use it to enable the Percona experimental component of the original repository. Specific instructions for a Debian system are as follows:

wget https://repo.percona.com/apt/percona-release_latest.generic_all.deb
sudo dpkg -i percona-release_latest.generic_all.deb

Now enable the correct repo:

sudo percona-release disable all
sudo percona-release enable original experimental

Now install the pmm2-client package:

apt-get update
apt-get install pmm2-client

See percona-release official documentation for details.

Here are the default login credentials:

username: admin
password: admin

Please note that having the experimental component enabled may cause further package installations to pull in versions which are not ready for production. To avoid this, disable the component with the following commands:

sudo percona-release disable original experimental
sudo apt-get update

Configure PMM

Once PMM Client is installed, run the pmm-admin setup command with your PMM Server IP address to register your Node within the Server:

# pmm-agent setup --server-insecure-tls --server-address=<IP Address>:443

We will be moving this functionality back to pmm-admin config in a subsequent Alpha release.

You should see the following:

Checking local pmm-agent status...
pmm-agent is running.
Registering pmm-agent on PMM Server...
Registered.
Configuration file /usr/local/percona/pmm-agent.yaml updated.
Reloading pmm-agent configuration...
Configuration reloaded.

You then add MySQL services (Metrics and Query Analytics) with the following command:

# pmm-admin add mysql --use-perfschema --username=pmm --password=pmm

where username and password are the credentials used to access the monitored MySQL server locally on the database host.

After this you can view MySQL metrics or examine the added node on the new PMM Inventory Dashboard:

You can then check your MySQL dashboards and Query Analytics in order to view your server’s performance information!

We hope you enjoy this release, and we welcome your comments on the blog!

About PMM

Percona Monitoring and Management (PMM) is a free and open-source platform for managing and monitoring MySQL®, MongoDB®, and PostgreSQL performance. You can run PMM in your own environment for maximum security and reliability. It provides thorough time-based analysis for MySQL®, MongoDB®, and PostgreSQL® servers to ensure that your data works as efficiently as possible.

Help us improve our software quality by reporting any Percona Monitoring and Management bugs you encounter using our bug tracking system.

Apr 11, 2019

Much to Oracle’s chagrin, Pentagon names Microsoft and Amazon as $10B JEDI cloud contract finalists

Yesterday, the Pentagon announced two finalists in the $10 billion, decade-long JEDI cloud contract process — and Oracle was not one of them. In spite of lawsuits, official protests and even back-channel complaining to the president, the two finalists are Microsoft and Amazon.

“After evaluating all of the proposals received, the Department of Defense has made a competitive range determination for the Joint Enterprise Defense Infrastructure Cloud request for proposals, in accordance with all applicable laws and regulations. The two companies within the competitive range will participate further in the procurement process,” Elissa Smith, DoD spokesperson for Public Affairs Operations told TechCrunch. She added that those two finalists were in fact Microsoft and Amazon Web Services (AWS, the cloud computing arm of Amazon).

This contract procurement process has caught the attention of the cloud computing market for a number of reasons. For starters, it’s a large amount of money, but perhaps the biggest reason it had cloud companies going nuts was that it is a winner-take-all proposition.

It is important to keep in mind that whether it’s Microsoft or Amazon that is ultimately chosen for this contract, the winner may never see $10 billion, and it may not last 10 years, because there are a number of points where the DoD could back out —  but the idea of a single winner has been irksome for participants in the process from the start.

Over the course of the last year, Google dropped out of the running, while IBM and Oracle have been complaining to anyone who will listen that the contract unfairly favored Amazon. Others have questioned the wisdom of even going with a single-vendor approach. Even at $10 billion, an astronomical sum to be sure, we have pointed out that in the scheme of the cloud business, it’s not all that much money — but there is more at stake here than money.

There is a belief here that the winner could have an upper hand in other government contracts, that this is an entrée into a much bigger pot of money. After all, if you are building the cloud for the Department of Defense and preparing it for a modern approach to computing in a highly secure way, you would be in a pretty good position to argue for other contracts with similar requirements.

In the end, in spite of the protests of the other companies involved, the Pentagon probably got this right. The two finalists are the most qualified to carry out the contract’s requirements. They are the top two cloud infrastructure vendors on the market, although Microsoft is far behind with around 13 or 14 percent market share. Amazon is far ahead, with around 33 percent, according to several companies that track such things.

Microsoft in particular has tools and resources that would be very appealing, especially Azure Stack — a mini private version of Azure, that you can stand up anywhere, an approach that would have great appeal to the military — but both companies have experience with government contracts, and both bring strengths and weaknesses to the table. It will undoubtedly be a tough decision.

In February, the contract drama took yet another turn when the department reported it was investigating new evidence of conflict of interest by a former Amazon employee who was involved in the RFP process for a time before returning to the company. Smith reports that the department found no such conflict, but there could be some ethical violations they are looking into.

“The department’s investigation has determined that there is no adverse impact on the integrity of the acquisition process. However, the investigation also uncovered potential ethical violations, which have been further referred to DOD IG,” Smith explained.

The DoD is supposed to announce the winner this month, but the drama has continued non-stop.

Apr 9, 2019

Google Cloud challenges AWS with new open-source integrations

Google today announced that it has partnered with a number of top open-source data management and analytics companies to integrate their products into its Google Cloud Platform and offer them as managed services operated by its partners. The partners here are Confluent, DataStax, Elastic, InfluxData, MongoDB, Neo4j and Redis Labs.

The idea here, Google says, is to provide users with a seamless user experience and the ability to easily leverage these open-source technologies in Google’s cloud. But there is a lot more at play here, even though Google never quite says so. That’s because Google’s move here is clearly meant to contrast its approach to open-source ecosystems with Amazon’s. It’s no secret that Amazon’s AWS cloud computing platform has a reputation for taking some of the best open-source projects and then forking those and packaging them up under its own brand, often without giving back to the original project. There are some signs that this is changing, but a number of companies have recently taken action and changed their open-source licenses to explicitly prevent this from happening.

That’s where things get interesting, because those companies include Confluent, Elastic, MongoDB, Neo4j and Redis Labs — and those are all partnering with Google on this new project, though it’s worth noting that InfluxData is not taking this new licensing approach and that while DataStax uses lots of open-source technologies, its focus is very much on its enterprise edition.

“As you are aware, there has been a lot of debate in the industry about the best way of delivering these open-source technologies as services in the cloud,” Manvinder Singh, the head of infrastructure partnerships at Google Cloud, said in a press briefing. “Given Google’s DNA and the belief that we have in the open-source model, which is demonstrated by projects like Kubernetes, TensorFlow, Go and so forth, we believe the right way to solve this it to work closely together with companies that have invested their resources in developing these open-source technologies.”

So while AWS takes these projects and then makes them its own, Google has decided to partner with these companies. While Google and its partners declined to comment on the financial arrangements behind these deals, chances are we’re talking about some degree of profit-sharing here.

“Each of the major cloud players is trying to differentiate what it brings to the table for customers, and while we have a strong partnership with Microsoft and Amazon, it’s nice to see that Google has chosen to deepen its partnership with Atlas instead of launching an imitation service,” Sahir Azam, the senior VP of Cloud Products at MongoDB told me. “MongoDB and GCP have been working closely together for years, dating back to the development of Atlas on GCP in early 2017. Over the past two years running Atlas on GCP, our joint teams have developed a strong working relationship and support model for supporting our customers’ mission critical applications.”

As for the actual functionality, the core principle here is that Google will deeply integrate these services into its Cloud Console; for example, similar to what Microsoft did with Databricks on Azure. These will be managed services and Google Cloud will handle the invoicing and the billings will count toward a user’s Google Cloud spending commitments. Support will also run through Google, so users can use a single service to manage and log tickets across all of these services.

Redis Labs CEO and co-founder Ofer Bengal echoed this. “Through this partnership, Redis Labs and Google Cloud are bringing these innovations to enterprise customers, while giving them the choice of where to run their workloads in the cloud,” he said. “Customers now have the flexibility to develop applications with Redis Enterprise using the fully integrated managed services on GCP. This will include the ability to manage Redis Enterprise from the GCP console, provisioning, billing, support, and other deep integrations with GCP.”
