Jul 10, 2018

When Database Warm Up is Not Really UP


The common wisdom in database performance management is that a “cold” database server has poor performance. Then, as it “warms up”, performance improves until you finally reach a completely warmed-up state with peak database performance. In other words, to get peak performance from MySQL you need to wait for the database to warm up.

This thinking comes from the point of view of database cache warmup. Indeed, from the cache standpoint, you start with an empty cache and over time the cache is filled with data. Moreover, the longer the database runs, the more statistics it has about data access patterns, and the better it can manage the database cache contents.

Over recent years, with the rise of SSDs, cache warmup has become less of an issue. High-performance NVMe storage can do more than 1GB/sec of reads, meaning you can warm up a 100GB database cache in less than two minutes. SSD I/O latency also tends to be quite good, so you’re not paying as high a penalty for a higher miss rate during the warm-up stage.

It is not all so rosy with database performance over time. Databases tend to delay work when possible, but there is only so much delaying you can do. When the database can’t delay work any longer, performance tends to be negatively impacted. Here are some examples of delayed work:

  • Checkpointing: depending on the database technology and configuration, checkpointing may be delayed for 30 minutes or more after database start
  • Change Buffer (InnoDB) can delay index maintenance work
  • Pushing messages from buffers to leaves (TokuDB) can be delayed until space in the buffers is exhausted
  • Compaction for RocksDB and other LSM-tree based systems can take quite a while to reach steady state

In all these cases database performance can be a lot better almost immediately after start compared to when it is completely “warmed up”.
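If you want to observe this deferred work on your own server, InnoDB exposes counters for it in information_schema. Here is a minimal sketch (MySQL 5.6 and later; counter names vary between versions and many counters are disabled by default, so treat the LIKE patterns as illustrative):

-- Many change buffer counters are off by default; wildcards are accepted:
SET GLOBAL innodb_monitor_enable = 'ibuf_%';
-- Watch checkpoint age alongside change buffer activity:
SELECT NAME, COUNT, STATUS
FROM information_schema.INNODB_METRICS
WHERE NAME LIKE 'log_lsn_checkpoint%'
   OR NAME LIKE 'ibuf_%';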

An experiment with database warm up

Let’s illustrate this with a little experiment, running sysbench against MySQL with the InnoDB storage engine for 15 minutes:

sysbench --db-driver=mysql --threads=200 --rand-type=uniform --report-interval=10 --percentile=99 --time=900 --mysql-user=root --mysql-password= /usr/share/sysbench/oltp_update_index.lua --table_size=100000000 run

Let’s look in detail at what happens during the run, using graphs from Percona Monitoring and Management (PMM).

PMM graph of first three minutes db warm up

As you can see, the number of updates/sec we’re doing actually gets worse (and more uneven) after the first three minutes, while the jump to peak performance is almost immediate.

InnoDB Checkpoint Age graph from PMM

The log space usage explains some of this: in the first few minutes, we did not need to flush as aggressively as we had to later.

first few minutes MySQL Warm Up graph from PMM

On the InnoDB I/O graph we can see a couple of interesting things. First, you can see how quickly warm up happens—in 2 minutes the IO is already at half of its peak. You can also see the explanation for the little performance dip after its initial high performance (around 19:13)—this is where we got close to using all log space, so active flushing was required while, at the same time, a lot of IO was still needed for cache warmup.

“Reaching steady state” is another term commonly used to describe the stage after warm up completes. Note, though, that such a steady state is not guaranteed to be steady at all. In fact, the most typical steady state is unsteady. For example, you can see in this blog post that both InnoDB and MyRocks show quite a lot of variance.

Summary

While the term “database warm up” may imply that performance after warm up will be better, that is often not the case. “Reaching steady state” is a better term, as long as you understand that “steady” does not mean uniform performance.


Jul 05, 2018

Configuring PMM Monitoring for MongoDB Cluster

In this blog, we will see how to configure Percona Monitoring and Management (PMM) monitoring for a MongoDB cluster. It’s very simple, much like adding a replica set or standalone instances to PMM.

For this example, I have used Docker to create the PMM Server and MongoDB sharded cluster containers. If you want the steps I used to create the MongoDB® cluster environment using Docker, I have shared them at the end of this blog, so you can refer to them if you would like to create the same setup.

Configuring PMM Clients

You can check these links for PMM installation and pmm-client setup. The following are the members of the MongoDB cluster:

mongos:
 mongos1
config db: (replSet name - mongors1conf)
 mongocfg1
 mongocfg2
 mongocfg3
Shard1: (replSet name - mongors1)
 mongors1n1
 mongors1n2
 mongors1n3
Shard2: (replSet name - mongors2)
 mongors2n1
 mongors2n2
 mongors2n3

In this setup, I installed the pmm-client on the mongos1 server. Then I added an agent to monitor MongoDB metrics with the cluster option, as shown in the next code block, naming the cluster “mongoClusterPMM”. (Note: you have to be the root user, or have sudo access, to execute the pmm-admin command):

root@90262f1360a0:/# pmm-admin add  mongodb --cluster mongoClusterPMM
[linux:metrics]   OK, now monitoring this system.
[mongodb:metrics] OK, now monitoring MongoDB metrics using URI localhost:27017
[mongodb:queries] OK, now monitoring MongoDB queries using URI localhost:27017
[mongodb:queries] It is required for correct operation that profiling of monitored MongoDB databases be enabled.
[mongodb:queries] Note that profiling is not enabled by default because it may reduce the performance of your MongoDB server.
[mongodb:queries] For more information read PMM documentation (https://www.percona.com/doc/percona-monitoring-and-management/conf-mongodb.html).
root@90262f1360a0:/# pmm-admin list
pmm-admin 1.11.0
PMM Server      | 172.17.0.2 (password-protected)
Client Name     | 90262f1360a0
Client Address  | 172.17.0.4 
Service Manager | unix-systemv
---------------- ------------- ----------- -------- ---------------- ------------------------
SERVICE TYPE     NAME          LOCAL PORT  RUNNING  DATA SOURCE      OPTIONS                 
---------------- ------------- ----------- -------- ---------------- ------------------------
mongodb:queries  90262f1360a0  -           YES      localhost:27017  query_examples=true     
linux:metrics    90262f1360a0  42000       YES      -                                        
mongodb:metrics  90262f1360a0  42003       YES      localhost:27017  cluster=mongoClusterPMM 
root@90262f1360a0:/#

As you can see, I used the pmm-admin add mongodb [options] command, which enables monitoring for the system, MongoDB metrics, and queries. You need to enable the profiler to monitor MongoDB queries. Use the next command to enable it at the database level:

use db_name
db.setProfilingLevel(1)
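You can then verify the profiler state from the same shell; db.getProfilingStatus() is the standard helper and returns the current level and slow threshold, roughly like this:

> db.getProfilingStatus()
{ "was" : 1, "slowms" : 100 }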

Check this blog to know more about QAN setup and details. If you want only MongoDB metrics, rather than queries, to be monitored, you can use the command pmm-admin add mongodb:metrics [options]. After this, go to the PMM homepage (in my case localhost:8080) in your browser and select MongoDB Cluster Summary from the drop-down list under the MongoDB option. The screenshot below shows the MongoDB cluster “mongoClusterPMM” statistics collected by the agent that we added on the mongos1 server.

Percona Monitoring and Management for a MongoDB cluster

Did we miss something here? Do you see any metrics in the dashboard above other than “Balancer Enabled” and “Chunks Balanced”?

No. This is because PMM doesn’t have enough data to show in the dashboard yet. The shards have not been added to the cluster, and as you can see the dashboard displays 0 under shards. Let’s add the two shard replica sets, mongors1 and mongors2, via the mongos1 instance, and enable sharding on a database to complete the cluster setup as follows:

mongos> sh.addShard("mongors1/mongors1n1:27017,mongors1n2:27017,mongors1n3:27017")
{ "shardAdded" : "mongors1", "ok" : 1 }
mongos> sh.addShard("mongors2/mongors2n1:27017,mongors2n2:27017,mongors2n3:27017")
{ "shardAdded" : "mongors2", "ok" : 1 }

Now I’ll add some data, a collection, and a shard key, and enable sharding so that we can see some statistics in the dashboard:

use vinodh
db.setProfilingLevel(1)
db.testColl.insertMany([{id1:1,name:"test insert"},{id1:2,name:"test insert"},{id1:3,name:"insert"},{id1:4,name:"insert"}])
db.testColl.ensureIndex({id1:1})
sh.enableSharding("vinodh")
sh.shardCollection("vinodh.testColl", {id1:1})
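If you want to double-check the cluster state from the mongos shell before returning to the dashboard, sh.status() summarizes the registered shards and sharded collections (output trimmed; the exact layout varies by version):

mongos> sh.status()
--- Sharding Status ---
  shards:
        { "_id" : "mongors1", "host" : "mongors1/mongors1n1:27017,mongors1n2:27017,mongors1n3:27017" }
        { "_id" : "mongors2", "host" : "mongors2/mongors2n1:27017,mongors2n2:27017,mongors2n3:27017" }
  ...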

At last! Now you can see statistics in the graph for the MongoDB cluster:

PMM Statistics for a MongoDB Cluster 

Full Cluster Monitoring

We are not done yet. We have just added an agent to monitor the mongos1 instance, so only cluster-related statistics and QAN are collected through this node. To monitor all nodes in the MongoDB cluster, we need to configure all nodes in PMM under the “mongoClusterPMM” cluster. This tells PMM that the configured nodes are part of the same cluster. We can also monitor replica set related metrics for the members of the config DB and the shards. Let’s add the monitoring agents on the mongos1 server to monitor all MongoDB instances remotely. We’ll use these commands:

pmm-admin add mongodb:metrics --uri "mongodb://mongocfg1:27017/admin?replicaSet=mongors1conf" mongocfg1replSet --cluster  mongoClusterPMM
pmm-admin add mongodb:metrics --uri "mongodb://mongocfg2:27017/admin?replicaSet=mongors1conf" mongocfg2replSet --cluster  mongoClusterPMM
pmm-admin add mongodb:metrics --uri "mongodb://mongocfg3:27017/admin?replicaSet=mongors1conf" mongocfg3replSet --cluster  mongoClusterPMM
pmm-admin add mongodb:metrics --uri "mongodb://mongors1n1:27017/admin?replicaSet=mongors1" mongors1replSetn1 --cluster  mongoClusterPMM
pmm-admin add mongodb:metrics --uri "mongodb://mongors1n2:27017/admin?replicaSet=mongors1" mongors1replSetn2 --cluster  mongoClusterPMM
pmm-admin add mongodb:metrics --uri "mongodb://mongors1n3:27017/admin?replicaSet=mongors1" mongors1replSetn3 --cluster  mongoClusterPMM
pmm-admin add mongodb:metrics --uri "mongodb://mongors2n1:27017/admin?replicaSet=mongors2" mongors2replSetn1 --cluster  mongoClusterPMM
pmm-admin add mongodb:metrics --uri "mongodb://mongors2n2:27017/admin?replicaSet=mongors2" mongors2replSetn2 --cluster  mongoClusterPMM
pmm-admin add mongodb:metrics --uri "mongodb://mongors2n3:27017/admin?replicaSet=mongors2" mongors2replSetn3 --cluster  mongoClusterPMM

Once you have added them, you can check the status of the agents (mongodb-exporter) as follows:

root@90262f1360a0:/# pmm-admin list
pmm-admin 1.11.0
PMM Server      | 172.17.0.2 (password-protected)
Client Name     | 90262f1360a0
Client Address  | 172.17.0.4 
Service Manager | unix-systemv
---------------- ------------------ ----------- -------- ----------------------- ------------------------
SERVICE TYPE     NAME               LOCAL PORT  RUNNING  DATA SOURCE             OPTIONS                 
---------------- ------------------ ----------- -------- ----------------------- ------------------------
mongodb:queries  90262f1360a0       -           YES      localhost:27017         query_examples=true     
linux:metrics    90262f1360a0       42000       YES      -                                               
mongodb:metrics  90262f1360a0       42003       YES      localhost:27017         cluster=mongoClusterPMM 
mongodb:metrics  mongocfg1replSet   42004       YES      mongocfg1:27017/admin   cluster=mongoClusterPMM 
mongodb:metrics  mongocfg2replSet   42005       YES      mongocfg2:27017/admin   cluster=mongoClusterPMM 
mongodb:metrics  mongocfg3replSet   42006       YES      mongocfg3:27017/admin   cluster=mongoClusterPMM 
mongodb:metrics  mongors1replSetn1  42007       YES      mongors1n1:27017/admin  cluster=mongoClusterPMM 
mongodb:metrics  mongors1replSetn3  42008       YES      mongors1n3:27017/admin  cluster=mongoClusterPMM 
mongodb:metrics  mongors2replSetn1  42009       YES      mongors2n1:27017/admin  cluster=mongoClusterPMM 
mongodb:metrics  mongors2replSetn2  42010       YES      mongors2n2:27017/admin  cluster=mongoClusterPMM 
mongodb:metrics  mongors2replSetn3  42011       YES      mongors2n3:27017/admin  cluster=mongoClusterPMM 
mongodb:metrics  mongors1replSetn2  42012       YES      mongors1n2:27017/admin  cluster=mongoClusterPMM

So now you can monitor every member of the MongoDB cluster, including replica set and shard statistics. The next screenshot shows one of the members from the replica set in the MongoDB replica set dashboard. You can select it from the dashboard like this: Cluster: mongoClusterPMM → Replica Set: mongors1 → Instance: mongors1replSetn2:

 PMM MongoDB Replica Set Dashboard

MongoDB cluster docker setup

As I said at the beginning of this blog, I’ve provided the steps for the MongoDB cluster setup. Since this is for testing, I have used a very simple configuration to set up the cluster environment using docker-compose. Before creating the MongoDB cluster, create the network in Docker so that the cluster nodes and PMM can connect to each other, like this:

Vinodhs-MBP:docker-mcluster vinodhkrish$ docker network create mongo-cluster-nw
7e3203aed630fed9b5f5f2b30e346301a58a068dc5f5bc0bfe38e2eef1d48787

I used this docker-compose.yaml file to create the Docker environment:

version: '2'
services:
  mongors1n1:
    container_name: mongors1n1
    image: mongo:3.4
    command: mongod --shardsvr --replSet mongors1 --dbpath /data/db --port 27017
    networks:
      - mongo-cluster
    ports:
      - 27047:27017
    expose:
      - "27017"
    environment:
      TERM: xterm
  mongors1n2:
    container_name: mongors1n2
    image: mongo:3.4
    command: mongod --shardsvr --replSet mongors1 --dbpath /data/db --port 27017
    networks:
      - mongo-cluster
    ports:
      - 27048:27017
    expose:
      - "27017"
    environment:
      TERM: xterm
  mongors1n3:
    container_name: mongors1n3
    image: mongo:3.4
    command: mongod --shardsvr --replSet mongors1 --dbpath /data/db --port 27017
    networks:
      - mongo-cluster
    ports:
      - 27049:27017
    expose:
      - "27017"
    environment:
      TERM: xterm
  mongors2n1:
    container_name: mongors2n1
    image: mongo:3.4
    command: mongod --shardsvr --replSet mongors2 --dbpath /data/db --port 27017
    networks:
      - mongo-cluster
    ports:
      - 27057:27017
    expose:
      - "27017"
    environment:
      TERM: xterm
  mongors2n2:
    container_name: mongors2n2
    image: mongo:3.4
    command: mongod --shardsvr --replSet mongors2 --dbpath /data/db --port 27017
    networks:
      - mongo-cluster
    ports:
      - 27058:27017
    expose:
      - "27017"
    environment:
      TERM: xterm
  mongors2n3:
    container_name: mongors2n3
    image: mongo:3.4
    command: mongod --shardsvr --replSet mongors2 --dbpath /data/db --port 27017
    networks:
      - mongo-cluster
    ports:
      - 27059:27017
    expose:
      - "27017"
    environment:
      TERM: xterm
  mongocfg1:
    container_name: mongocfg1
    image: mongo:3.4
    command: mongod --configsvr --replSet mongors1conf --dbpath /data/db --port 27017
    networks:
      - mongo-cluster
    ports:
      - 27025:27017
    environment:
      TERM: xterm
    expose:
      - "27017"
  mongocfg2:
    container_name: mongocfg2
    image: mongo:3.4
    command: mongod --configsvr --replSet mongors1conf --dbpath /data/db --port 27017
    networks:
      - mongo-cluster
    ports:
      - 27024:27017
    environment:
      TERM: xterm
    expose:
      - "27017"
  mongocfg3:
    container_name: mongocfg3
    image: mongo:3.4
    command: mongod --configsvr --replSet mongors1conf --dbpath /data/db --port 27017
    networks:
      - mongo-cluster
    ports:
      - 27023:27017
    environment:
      TERM: xterm
    expose:
      - "27017"
  mongos1:
    container_name: mongos1
    image: mongo:3.4
    depends_on:
      - mongocfg1
      - mongocfg2
    command: mongos --configdb mongors1conf/mongocfg1:27017,mongocfg2:27017,mongocfg3:27017 --port 27017
    networks:
      - mongo-cluster
    ports:
      - 27019:27017
    expose:
      - "27017"
networks:
  mongo-cluster:
    external:
      name: mongo-cluster-nw

Hint:
In the docker-compose file above, if you are using docker-compose version >= 3.x, you can define the “networks” option in the yaml file like this:

networks:
  mongo-cluster:
    name: mongo-cluster-nw

Now start the containers and configure the cluster quickly as follows:

Vinodhs-MBP:docker-mcluster vinodhkrish$ docker-compose up -d
Creating mongocfg1 ... 
Creating mongos1 ... 
Creating mongors2n3 ... 
Creating mongocfg2 ... 
Creating mongors1n1 ... 
Creating mongocfg3 ... 
Creating mongors2n2 ... 
Creating mongors1n3 ... 
Creating mongors2n1 ... 
Creating mongors1n2 ...

ConfigDB replica set setup

Vinodhs-MBP:docker-mcluster vinodhkrish$ docker exec -it mongocfg1 mongo --quiet
> rs.initiate({  _id: "mongors1conf",configsvr: true,
...   members: [{ _id:0, host: "mongocfg1" },{_id:1, host: "mongocfg2" },{ _id:2, host: "mongocfg3"}]
...   })
{ "ok" : 1 }

Shard1 replica set setup

Vinodhs-MBP:docker-mcluster vinodhkrish$ docker exec -it mongors1n1 mongo --quiet
> rs.initiate( { _id: "mongors1", members:[ {_id: 0, host: "mongors1n1"}, {_id:1, host: "mongors1n2"}, {_id:2, host: "mongors1n3"}] })
{ "ok" : 1 }

Shard2 replica set setup:

Vinodhs-MBP:docker-mcluster vinodhkrish$ docker exec -it mongors2n1 mongo --quiet
> rs.initiate({ _id: "mongors2",  members: [ { _id: 0, host: "mongors2n1"}, { _id:1, host: "mongors2n2"}, { _id:2, host: "mongors2n3"}] })
{ "ok" : 1 }

I hope that this blog helps you to set up a MongoDB cluster, and to configure PMM to monitor it! Have a great day!


Jun 27, 2018

Percona Monitoring and Management 1.12.0 Is Now Available

Percona Monitoring and Management

PMM (Percona Monitoring and Management) is a free and open-source platform for managing and monitoring MySQL and MongoDB performance. You can run PMM in your own environment for maximum security and reliability. It provides thorough time-based analysis for MySQL and MongoDB servers to ensure that your data works as efficiently as possible.

In release 1.12, we invested our efforts in the following areas:

  • Visual Explain in Query Analytics – Gain insight into MySQL’s query optimizer for your queries
  • New Dashboard – InnoDB Compression Metrics – Evaluate effectiveness of InnoDB Compression
  • New Dashboard – MySQL Command/Handler Compare – Contrast MySQL instances side by side
  • Updated Grafana to 5.1 – Fixed scrolling issues

We addressed 10 new features and improvements, and fixed 13 bugs.

Visual Explain in Query Analytics

We’re working on substantial changes to Query Analytics, and the first part to roll out is something that users of Percona Toolkit may recognize: we’ve introduced a new element called Visual Explain, based on pt-visual-explain. This functionality transforms MySQL EXPLAIN output into a left-deep tree representation of a query plan, in order to mimic how the plan is represented inside MySQL. This is of primary benefit when investigating tables that are joined in some logical way, so that you can understand in what order the loops are executed by the MySQL query optimizer. In this example we demonstrate the output of a single table lookup vs. a two-table join:

Single Table Lookup:

SELECT DISTINCT c
FROM sbtest13
WHERE id BETWEEN 49808 AND 49907
ORDER BY c;

Two Tables via INNER JOIN:

SELECT sbtest3.c
FROM sbtest1
INNER JOIN sbtest3 ON sbtest1.id = sbtest3.id
WHERE sbtest3.c = 'long-string';
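If you would like to reproduce this kind of tree outside of PMM, pt-visual-explain from Percona Toolkit works as a filter over EXPLAIN output. A sketch, where the sbtest database name is an assumption based on sysbench defaults:

mysql -e "EXPLAIN SELECT sbtest3.c FROM sbtest1 INNER JOIN sbtest3 ON sbtest1.id = sbtest3.id WHERE sbtest3.c = 'long-string'" sbtest | pt-visual-explain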

InnoDB Compression Metrics Dashboard

A great feature of MySQL’s InnoDB storage engine is compression of data that is transparently handled by the database, saving you space on disk while reducing the amount of I/O to disk, as fewer disk blocks are required to store the same amount of data; this allows you to reduce your storage costs. We’ve deployed a new dashboard that helps you understand the most important characteristics of InnoDB’s compression. Here’s a sample visualizing compression and decompression attempts, alongside the overall Compression Success Ratio graph:


MySQL Command/Handler Compare Dashboard

We have introduced a new dashboard that lets you do side-by-side comparison of Command (Com_*) and Handler statistics.  A common use case would be to compare servers that share a similar workload, for example across MySQL instances in a pool of replicated slaves.  In this example I am comparing two servers under identical sysbench load, but exhibiting slightly different performance characteristics:

The number of servers you can select for comparison is unbounded, but depending on your screen resolution you might want to limit it to three at a time on a 1080p display.

New Features & Improvements

  • PMM-2519: Display Visual Explain in Query Analytics
  • PMM-2019: Add new Dashboard InnoDB Compression metrics
  • PMM-2154: Add new Dashboard Compare Commands and Handler statistics
  • PMM-2530: Add timeout flags to mongodb_exporter (thank you unguiculus for your contribution!)
  • PMM-2569: Update the MySQL Golang driver for MySQL 8 compatibility
  • PMM-2561: Update to Grafana 5.1.3
  • PMM-2465: Improve pmm-admin debug output
  • PMM-2520: Explain Missing Charts from MySQL Dashboards
  • PMM-2119: Improve Query Analytics messaging when Host = All is passed
  • PMM-1956: Implement connection checking in mongodb_exporter

Bug Fixes

  • PMM-1704: Unable to connect to AtlasDB MongoDB
  • PMM-1950: pmm-admin (mongodb:metrics) doesn’t work well with SSL secured mongodb server
  • PMM-2134: rds_exporter exports memory in Kb with node_exporter labels which are in bytes
  • PMM-2157: Cannot connect to MongoDB using URI style
  • PMM-2175: Grafana singlestat doesn’t use consistent colour when unit is of type Time
  • PMM-2474: Data resolution on Dashboards became 15sec interval instead of 1sec
  • PMM-2581: Improve Travis CI tests by addressing pmm-admin check-network Time Drift
  • PMM-2582: Unable to scroll on “PMM Add Instance” page when many RDS instances exist in an AWS account
  • PMM-2596: Set fixed height for panel content in PMM Add Instances
  • PMM-2600: InnoDB Checkpoint Age does not show data for MySQL
  • PMM-2620: Fix balancerIsEnabled & balancerChunksBalanced values
  • PMM-2634: pmm-admin cannot create user for MySQL 8
  • PMM-2635: Improve error message while adding metrics beyond “exit status 1”

Known Issues

  • PMM-2639: mysql:metrics does not work on Ubuntu 18.04 – We will address this in a subsequent release

How to get PMM Server

PMM is available for installation using three methods:


Jun 26, 2018

MongoDB Explain – Using PMM-QAN for MongoDB Query Analytics

MongoDB explain plan with PMM-QAN

In this blog post, we will walk through PMM Query Analytics (PMM-QAN) for MongoDB. We will see how to analyze MongoDB query performance, review the initial parameters that we need to check, and find out how to compare MongoDB query performance with and without indexes, with the help of the EXPLAIN plan.

The Percona Monitoring and Management QAN (PMM-QAN) dashboard helps DBAs and developers to analyze database queries and identify performance issues more easily. It can be difficult to find such issues by just enabling the profiler and tracking all the slow queries through the mongo shell.

Test Case Environment

We configured the test environment with PMM Server before we ran the test:

PMM Version: 1.11.0
MongoDB Version: 3.4.10
MongoDB Configuration:
3 Member Replica Set (1 Primary, 2 Secondaries)
ns: "test.pro"
Test Query: find test case (with and without an index)
Indexed Field: "product"
Query: db.pro.find({product:"aa"})
Query count: 1000
Total count: 20000

Please note: we have to enable the profiler in the MongoDB environment for the metrics to be reflected on the QAN page. The QAN dashboard lists multiple queries, based on the threshold we set in the profiler. For this demonstration, we have set the profiling level to 2 to get the query reflected in the dashboard.

QAN Analysis

Let’s compare the plans that were collected before and after the index was added to the collection.

Query used:

db.pro.find({product:"aa"})
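To see from the shell the same execution details that QAN surfaces, you can append explain() to the query; the "executionStats" verbosity mode includes the docsExamined and nReturned counters discussed below:

> db.pro.find({product:"aa"}).explain("executionStats")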

The QAN dashboard lists the FIND query for the “pro” collection:

Query Time

After selecting this specific query, we will see its details just below the list.

Review parameter: “Query Time”

Without Index

Here Query Time = 18ms. We will check the query count, docs returned, and docs scanned in detail in the explain plan.

With Index

Now Query Time = 15ms: we have improved the query by creating an index, and MongoDB is no longer scanning the whole collection.
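For reference, the index on the “product” field used for this second run can be created with a standard createIndex call from the shell:

> db.pro.createIndex({product: 1})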

EXPLAIN PLAN

Analysis of the query from the QAN EXPLAIN plan: you can use the “Expand All” toggle button to check the complete execution stats and plans, as in this case for the “test” database.

COMPARISON OF PERFORMANCE

Here we make a comparison of query performance with and without the index, and take a look at the basics that need to be checked before and after index creation:

Without Index:

Stage = “COLLSCAN” means that MongoDB scans every document in order to fulfil the find query, so you need to create an index to improve query performance.

With Index:

Stage = “IXSCAN” indicates that the optimizer is not scanning the whole collection, but examines only the documents within the index bounds.

Without Index:

It scanned all documents, i.e. docsExamined: 20000, and returned nReturned: 1000.

With Index:

MongoDB uses the index and scans only the relevant documents, i.e. docsExamined: 1000 and nReturned: 1000. We can see the performance improvement from adding an index.

This is how the query behaves after index creation. With QAN, we can identify exactly which index is being used, discover whether that index is useful or not, and review other performance-related events, all within an easy UI.

As this blog post is specific to query analysis for MongoDB, QAN graphs and their attributes can be accessed from here, in an excellent blog post written by my colleague Vinodh.


Jun 25, 2018

Running Percona XtraDB Cluster in Kubernetes/OpenShift

Diagram of Percona XtraDB Cluster / MySQL running in Kubernetes Open Shift

Kubernetes, and its most popular distribution OpenShift, receive a lot of interest as container orchestration platforms. However, databases remain a foreign entity, primarily because of their stateful nature, since container orchestration systems prefer stateless applications. That said, there has been good progress in support for StatefulSet applications and persistent storage, to the extent that it might already be comfortable to have a production database instance running in Kubernetes. With this in mind, we’ve been looking at running Percona XtraDB Cluster in Kubernetes/OpenShift.

While there are already many examples on the Internet of how to start a single MySQL instance in Kubernetes, for serious usage we need to provide:

  • High Availability: how can we guarantee availability when an instance (or Pod in Kubernetes terminology) crashes or becomes unresponsive?
  • Persistent storage: we do not want to lose our data in case of instance failure
  • Backup and recovery
  • Traffic routing: in the case of multiple instances, how do we direct an application to the correct one?
  • Monitoring

Percona XtraDB Cluster in Kubernetes/OpenShift

Schematically it looks like this:


Percona XtraDB Cluster in Kubernetes/OpenShift a possible configuration for a resilient solution

The picture highlights the components we are going to use.

Running this in Kubernetes assumes a high degree of automation and minimal manual intervention.

We provide our proof of concept in this project: https://github.com/Percona-Lab/percona-openshift. Please treat it as a source of ideas and as an alpha-quality project; it is in no way production-ready.

Details

In our implementation we rely on Helm, the package manager for Kubernetes. Unfortunately, OpenShift does not officially support Helm out of the box, but there is a guide from Red Hat on how to make it work.

In the clustering setup, it is quite typical to use a service discovery software like Zookeeper, etcd or Consul. It may become necessary for our Percona XtraDB Cluster deployment, but for now, to simplify deployment, we are going to use the DNS service discovery mechanism provided by Kubernetes. It should be enough for our needs.
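To illustrate what DNS service discovery gives us: a headless Service (clusterIP: None) assigns each pod of a StatefulSet a stable DNS name, which Percona XtraDB Cluster nodes can then use as wsrep cluster addresses. This is a generic sketch with hypothetical names, not an excerpt from our project:

# Hypothetical headless Service: pods of a "pxc" StatefulSet become
# resolvable as pxc-0.pxc.<namespace>.svc.cluster.local, pxc-1.pxc..., etc.
apiVersion: v1
kind: Service
metadata:
  name: pxc
spec:
  clusterIP: None        # headless: DNS returns the pod IPs directly
  selector:
    app: pxc
  ports:
    - name: mysql
      port: 3306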

We also expect the Kubernetes deployment to provide Dynamic Storage Provisioning. The major cloud providers (like Google Cloud, Microsoft Azure, or Amazon Web Services) should have it. However, it might not be easy to have Dynamic Storage Provisioning for on-premises deployments; you may need to set up GlusterFS or Ceph to provide it.

The challenge with a distributed file system is how many copies of the data you end up with. Percona XtraDB Cluster by itself keeps three copies, and GlusterFS will also require at least two copies of the data, so in the end we would have six copies. This can’t be good for write-intensive applications, and it is also not good from a capacity standpoint.

One possible approach is to use local data copies for Percona XtraDB Cluster deployments. This provides better performance and less impact on the network, but in the case of a big dataset (100GB+) a node failure will require SST (State Snapshot Transfer), with a big impact on the cluster and network. So the solution should be tailored to your workload and requirements.

Now that we have a basic setup working, it would be good to understand the performance impact of running Percona XtraDB Cluster in Kubernetes. Is the network and storage overhead acceptable, or is it too big? We plan to look into this in the future.

Once again, our project is located at https://github.com/Percona-Lab/percona-openshift; we are looking for your feedback and your experience of running databases in Kubernetes/OpenShift.

Before you leave …

Percona XtraDB Cluster

If this article has interested you and you would like to know more about Percona XtraDB Cluster, you might enjoy our recent series of webinar tutorials that introduce this software and how to use it.


Jun 19, 2018

Webinar Weds 20/6: Percona XtraDB Cluster 5.7 Tutorial Part 2

webinar Percona XtraDB Cluster

Including setting up Percona XtraDB Cluster with ProxySQL and PMM

Please join Percona’s Architect, Tibi Köröcz, as he presents Percona XtraDB Cluster 5.7 Tutorial Part 2 on Wednesday, June 20th, 2018, at 7:00 am PDT (UTC-7) / 10:00 am EDT (UTC-4).

Never used Percona XtraDB Cluster before? This is the webinar for you! In this 45-minute webinar, we will introduce you to a fully functional Percona XtraDB Cluster.

This webinar will show you how to install Percona XtraDB Cluster with ProxySQL, and monitor it with Percona Monitoring and Management (PMM).

We will also cover topics like bootstrap, IST, SST, certification, common-failure situations and online schema changes.

After this webinar, you will have enough knowledge to set up a working Percona XtraDB Cluster with ProxySQL, in order to meet your high availability requirements.

You can see part one of this series here: Percona XtraDB Cluster 5.7 Tutorial Part 1

Register Now!

Tibor Köröcz

Architect


Tibi joined Percona in 2015 as a Consultant. Before joining Percona, among many other things, he worked at the world’s largest car hire booking service as a Senior Database Engineer. He enjoys trying and working with the latest technologies and applications that can help or work together with MySQL. In his spare time he likes to spend time with his friends, travel around the world, and play ultimate frisbee.


Jun 14, 2018

Percona Monitoring and Management: Look After Your pmm-data Container

Looking after pmm-data containers

If you have already deployed a PMM server using Docker, you might be aware that we begin by creating a special container for persistent PMM data. In this post, I aim to explain the importance of the pmm-data container when you deploy PMM server with Docker. By the end of this post, you will have a fair idea of why this Docker container is needed.

Percona Monitoring and Management (PMM) is a free and open-source solution for database troubleshooting and performance optimization that you can run in your own environment. It provides time-based analysis for MySQL and MongoDB servers to ensure that your data works as efficiently as possible.

What is the purpose of pmm-data?

Well, as its name suggests, when PMM Server runs via Docker its data is stored in the pmm-data container. It’s a dedicated data-only container, which you create with bind mounts using -v, i.e. data volumes, for holding persistent PMM data. We use pmm-data to compartmentalize the persistent data so you can more easily back up and move data consistently across instances or containers. It acts as a single access point from which other running containers (in this case pmm-server) can access data volumes.

The pmm-data container does not run, but its data is used by pmm-server to build graphs. PMM Server is the core of PMM: it aggregates collected data and presents it in the form of tables, dashboards, and graphs in a web interface.

Why do we use docker create?

The docker create command instructs the Docker daemon to create a writable container layer over the Docker image. When you execute docker create using the steps shown, it creates a Docker container named pmm-data and initializes data volumes using the -v flag in conjunction with the create command (e.g. /opt/prometheus/data).

Option -v is used multiple times in current versions of PMM to mount multiple data volumes. This allows you to create the data volume container, and then use its volumes from another container, i.e. pmm-server. We do not want to run the pmm-data container, only to create it. NB: the number of data volumes bind-mounted may change between versions of PMM.

$ docker create \
   -v /opt/prometheus/data \
   -v /opt/consul-data \
   -v /var/lib/mysql \
   -v /var/lib/grafana \
   --name pmm-data \
   percona/pmm-server:latest /bin/true

Make sure that the data volumes you initialize with the -v option match those given in the example. PMM Server expects you to have bind-mounted those directories exactly as demonstrated in the deployment steps. For using different mount points for PMM deployment, please refer to this blog post. Data volumes are very useful: once designated and created, you can share them and include them as part of other containers. If you use -v or --volume to bind-mount a file or directory that does not yet exist on the Docker host, -v creates the endpoint for you, always as a directory. Data in the pmm-data volumes is actually hosted on the host’s filesystem.

Why does pmm-data not run?

As we used docker create and not docker run for pmm-data, this container does not run. It simply exists to make sure you retain all PMM data when you upgrade to a newer PMM Server image. The data volumes bind-mounted on the pmm-data container are shared with the running pmm-server container, since the --volumes-from option is used when launching pmm-server. Here we persisted data using Docker without binding it to pmm-server, by storing the files on the host machine. As long as pmm-data exists, the data exists.

You can stop, destroy, or replace a container. When a non-running container is using a volume, the volume is still available to Docker and is not removed automatically. You can easily replace the running pmm-server container with a newer version without any impact or loss of data. For that reason, because we need to store persistent data, we do it in a data volume. In our case, the pmm-data container does not write to the same volumes, as concurrent writes could cause corruption.

Why can’t I remove the pmm-data container? What happens if I delete it?

Removing pmm-data container results in the loss of collected metrics data.

If you remove containers that mount volumes, including the initial pmm-server container, or any subsequent containers mounted, such as pmm-server-2, you do not delete the volumes. This allows you to upgrade, or effectively migrate, data volumes between containers. Your data container might be based on an old version of the container image, with known security problems. That is not a big problem, since it doesn’t actually run anything, but it doesn’t feel right.

As noted earlier, pmm-data stores metrics data according to the retention settings. You should not remove or recreate the pmm-data container unless you need to wipe out all PMM data and start again. To delete the volumes from disk, you must explicitly call docker rm -v against the container with a reference to the volumes.

Some do’s and don’ts

  • Allocate enough disk space on the host for pmm-data to retain data.
    By default, Prometheus stores time-series data for 30 days, and QAN stores query data for 8 days.
  • Manage data retention appropriately for the disk space you have available.
    You can back up pmm-data by extracting data from the container, to avoid data loss in any situation, using the steps mentioned here (see the sketch after this list).
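One generic way to take such a backup is the usual Docker volume-backup pattern (a sketch, not necessarily the exact steps from the linked post): run a throwaway container with --volumes-from and tar up the volume directories:

docker run --rm --volumes-from pmm-data -v $(pwd):/backup alpine \
  tar czf /backup/pmm-data-backup.tar.gz \
  /opt/prometheus/data /opt/consul-data /var/lib/mysql /var/lib/grafana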

In case of any issues with metrics, here’s a good blog post regarding troubleshooting.


May 29, 2018

Deploying PMM on DigitalOcean


It’s very easy to install Percona Monitoring and Management (PMM) on DigitalOcean. If you’ve never used DigitalOcean before, you will find that it is user-friendly and not very expensive. For $5/month you can easily host your PMM on it, letting you monitor your simple infrastructure or try out PMM before using it to monitor your production environments.

Let’s prepare the DigitalOcean instance

Log in to DigitalOcean (DO) control panel and click “Create Droplet.”

Log in to DigitalOcean panel and click "Create Droplet."

Thanks to DO, you can skip the boring OS setup and save time by using the Docker “One-click app” in DO and the Docker image from PMM.

Create Droplet on DigitalOcean

Note: After clicking on “Docker…” choose an instance size that accommodates your budget – PMM can run on as little as the 1GB 1vCPU instance!

Choose Droplet Size

Note: Scroll again!

Next step – select a nearby region

Since the next Percona Live Europe 2018 will be in Frankfurt (https://www.percona.com/blog/2018/04/05/percona-live-europe-2018-save-the-date/), for me the location choice is obvious.

Choose DigitalOcean datacenter region

The final step in this section is ‘Set Hostname’

I recommend you add ‘pmm-server-‘ at the beginning so that you can easily find it in your control panel. The name in my case is ‘pmm-server-docker-s-1vcpu-1gb-fra1-01’ and I’ll use it later in this tutorial.

Finalize and create Droplet hostname

Click “Create” and wait a while. You can follow the process on the dashboard:

Creating the instance of DigitalOcean Droplet

When the Droplet is created, you’ll get an email with your login details.

The next step is to set up PMM on the Droplet.

SSH to the server, change the password, and let’s prepare to install the PMM server.

==================
random@random-vb:~$ ssh root@X.X.X.X
...
"ufw" has been enabled. All ports except 22 (SSH), 80 (http) and 443 (https)
have been blocked by default.
...
Changing password for root.
(current) UNIX password:
Enter new UNIX password:
Retype new UNIX password:
root@pmm-server-docker-s-1vcpu-1gb-fra1-01:~#
====================

Note the output for the first login. You are getting Ubuntu 16.04 with pre-installed Docker.

The instructions for installing PMM are very simple. You can read them at https://www.percona.com/doc/percona-monitoring-and-management/deploy/server/docker.html

1) Pull the latest version from Docker Hub:

docker pull percona/pmm-server:latest

Wait for some time (this depends on your internet connection)

2) Create a container for persistent PMM data

docker create \
-v /opt/prometheus/data \
-v /opt/consul-data \
-v /var/lib/mysql \
-v /var/lib/grafana \
--name pmm-data \
percona/pmm-server:latest /bin/true

3) Create and launch PMM Server in one command

docker run -d \
-p 80:80 \
--volumes-from pmm-data \
--name pmm-server \
--restart always \
percona/pmm-server:latest

Just to confirm that your containers are available, go ahead and run “docker ps.” You’ll see something like this:

root@pmm-server-docker-s-1vcpu-1gb-fra1-01:~# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5513858041f7 percona/pmm-server:latest "/opt/entrypoint.sh" 2 minutes ago Up 2 minutes 0.0.0.0:80->80/tcp, 443/tcp pmm-server

That’s all! Congratulations! Your PMM server is running.

If you open the IP of your server in the browser, you’ll see something like this:

PMM running in DigitalOcean Droplet instance

There you can see that PMM has already started monitoring itself.

Now you need to install the PMM client on your database server and configure it; instructions for this are at https://www.percona.com/doc/percona-monitoring-and-management/deploy/client/index.html
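In outline, the client setup comes down to pointing pmm-admin at your Droplet’s IP and adding services; substitute your own server address and database credentials (see the linked instructions for the full steps):

pmm-admin config --server X.X.X.X
pmm-admin add mysql --user root --password <your_password>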

Please note: if you also use DO for the database server and connect by external IP, you’ll probably face “the firewall problem”. In this case, you need to open ports using the “ufw” tool (see the welcome message from DigitalOcean). For testing purposes, you can use:

ufw allow 42000:42999/tcp

To open only pmm-client related ports, follow https://www.percona.com/doc/percona-monitoring-and-management/glossary.terminology.html#term-ports. To run ufw you need to use the terminal, and you can find more information about ufw at https://www.digitalocean.com/community/tutorials/ufw-essentials-common-firewall-rules-and-commands. Once you have opened up the ports, PMM should work correctly for this setup.

Final recommendation: Depending on your load you may need to monitor your System Overview dashboard which you’ll find at http://X.X.X.X/graph/somesymbols/system-overview

If you are out of space, upgrade your DO Droplet.


May 24, 2018

Setting up PMM on Google Compute Engine in 15 minutes or less

Percona Monitoring and Management on Google Compute Engine

In this blog post, I will show you how easy it is to set up a Percona Monitoring and Management server on Google Compute Engine from the command line.

First off you will need to have a Google account and install the Cloud SDK tool. You need to create a GCP (Google Cloud Platform) project and enable billing to proceed. This blog assumes you are able to authenticate and SSH into instances from the command line.

Here are the steps to install PMM server in Google Cloud Platform.

1) Create the Compute Engine instance with the following command. The example creates an Ubuntu Xenial 16.04 LTS compute instance in the us-west1-b zone with a 100GB persistent disk. For production systems it would be best to use a 500GB disk instead (size=500GB). This should be enough for default data retention settings, although your needs may vary.

jerichorivera@percona-support:~/GCE$ gcloud compute instances create pmm-server --tags pmmserver --image-family ubuntu-1604-lts --image-project ubuntu-os-cloud --machine-type n1-standard-4 --zone us-west1-b --create-disk=size=100GB,type=pd-ssd,device-name=sdb --description "PMM Server on GCP" --metadata-from-file startup-script=deploy-pmm-xenial64.sh
Created [https://www.googleapis.com/compute/v1/projects/thematic-acumen-204008/zones/us-west1-b/instances/pmm-server].
NAME        ZONE        MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP   STATUS
pmm-server  us-west1-b  n1-standard-4               10.138.0.2   35.233.216.225  RUNNING

Notice that we’ve used --metadata-from-file startup-script=deploy-pmm-xenial64.sh. The file has the following contents:

jerichorivera@percona-support:~$ cat GCE/deploy-pmm-xenial64.sh
#!/bin/bash
set -v
sudo apt-get update
sudo apt-get upgrade -y
sudo apt-get install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
# Format the persistent disk, mount it then add to /etc/fstab
sudo mkfs.ext4 -m 0 -F -E lazy_itable_init=0,lazy_journal_init=0,discard /dev/sdb
sudo mkdir -p /mnt/disks/pdssd
sudo mount -o discard,defaults /dev/sdb /mnt/disks/pdssd/
sudo chmod a+w /mnt/disks/pdssd/
sudo cp /etc/fstab /etc/fstab.backup
echo UUID=`sudo blkid -s UUID -o value /dev/sdb` /mnt/disks/pdssd ext4 discard,defaults,nofail 0 2 | sudo tee -a /etc/fstab
# Change Docker's root directory before installing Docker
sudo mkdir /etc/systemd/system/docker.service.d/
cat << EOF > /etc/systemd/system/docker.service.d/docker.root.conf
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// -g /mnt/disks/pdssd/docker/
EOF
sudo apt-get install -y docker-ce
# Creates the deploy.sh script
cat << EOF > /tmp/deploy.sh
#!/bin/bash
set -v
docker pull percona/pmm-server:latest
docker create -v /opt/prometheus/data -v /opt/consul-data -v /var/lib/mysql -v /var/lib/grafana --name pmm-data percona/pmm-server:latest /bin/true
docker run -d -p 80:80 --volumes-from pmm-data --name pmm-server --restart always percona/pmm-server:latest
EOF

This startup script is executed right after the compute instance is created. The script formats the persistent disk and mounts the filesystem; creates a custom Docker unit file to change Docker’s root directory from /var/lib/docker to /mnt/disks/pdssd/docker; installs the Docker package; and creates the deploy.sh script.

2) Once the Compute Engine instance is created, SSH into the instance and check that Docker is running and that its root directory is pointing to the desired folder.

jerichorivera@pmm-server:~$ sudo systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/docker.service.d
           └─docker.root.conf
   Active: active (running) since Wed 2018-05-16 12:53:30 UTC; 45s ago
     Docs: https://docs.docker.com
 Main PID: 4744 (dockerd)
   CGroup: /system.slice/docker.service
           ├─4744 /usr/bin/dockerd -H fd:// -g /mnt/disks/pdssd/docker/
           └─4764 docker-containerd --config /var/run/docker/containerd/containerd.toml
May 16 12:53:30 pmm-server dockerd[4744]: time="2018-05-16T12:53:30.391566708Z" level=warning msg="Your kernel does not support swap memory limit"
May 16 12:53:30 pmm-server dockerd[4744]: time="2018-05-16T12:53:30.391638253Z" level=warning msg="Your kernel does not support cgroup rt period"
May 16 12:53:30 pmm-server dockerd[4744]: time="2018-05-16T12:53:30.391680203Z" level=warning msg="Your kernel does not support cgroup rt runtime"
May 16 12:53:30 pmm-server dockerd[4744]: time="2018-05-16T12:53:30.392913043Z" level=info msg="Loading containers: start."
May 16 12:53:30 pmm-server dockerd[4744]: time="2018-05-16T12:53:30.767048674Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
May 16 12:53:30 pmm-server dockerd[4744]: time="2018-05-16T12:53:30.847907241Z" level=info msg="Loading containers: done."
May 16 12:53:30 pmm-server dockerd[4744]: time="2018-05-16T12:53:30.875129963Z" level=info msg="Docker daemon" commit=9ee9f40 graphdriver(s)=overlay2 version=18.03.1-ce
May 16 12:53:30 pmm-server dockerd[4744]: time="2018-05-16T12:53:30.875285809Z" level=info msg="Daemon has completed initialization"
May 16 12:53:30 pmm-server dockerd[4744]: time="2018-05-16T12:53:30.884566419Z" level=info msg="API listen on /var/run/docker.sock"
May 16 12:53:30 pmm-server systemd[1]: Started Docker Application Container Engine.

3) Add your user to the docker group as shown below, and make the deploy.sh script executable.

jerichorivera@pmm-server:~$ sudo usermod -aG docker $USER
jerichorivera@pmm-server:~$ sudo chmod +x /tmp/deploy.sh

4) Log off from the instance, log back in, and then execute the deploy.sh script.

jerichorivera@pmm-server:~$ cd /tmp/
jerichorivera@pmm-server:/tmp$ ./deploy.sh
docker pull percona/pmm-server:latest
latest: Pulling from percona/pmm-server
697841bfe295: Pull complete
fa45d21b9629: Pull complete
Digest: sha256:98d2717b4f0ae83fbca63330c39590d69a7fca7ae6788f52906253ac75db6838
Status: Downloaded newer image for percona/pmm-server:latest
docker create -v /opt/prometheus/data -v /opt/consul-data -v /var/lib/mysql -v /var/lib/grafana --name pmm-data percona/pmm-server:latest /bin/true
8977102d419cf8955fd8bbd0ed2c663c75a39f9fbc635238d56b480ecca8e749
docker run -d -p 80:80 --volumes-from pmm-data --name pmm-server --restart always percona/pmm-server:latest
83c2e6db2efc752a6beeff0559b472f012062d3f163c042e5e0d41cda6481d33

5) Finally, create a firewall rule to allow HTTP traffic on port 80 to the PMM Server. For security reasons, we recommend that you secure your PMM server by adding a password, or limit access to it with a stricter firewall rule that specifies which IP addresses can access port 80.

jerichorivera@percona-support:~$ gcloud compute firewall-rules create allow-http-pmm-server --allow tcp:80 --target-tags pmmserver --description "Allow HTTP traffic to PMM Server"
Creating firewall...Created [https://www.googleapis.com/compute/v1/projects/thematic-acumen-204008/global/firewalls/allow-http-pmm-server].
Creating firewall...done.
NAME                   NETWORK  DIRECTION  PRIORITY  ALLOW   DENY
allow-http-pmm-server  default  INGRESS    1000      tcp:80
jerichorivera@percona-support:~/GCE$ gcloud compute firewall-rules list
NAME                    NETWORK  DIRECTION  PRIORITY  ALLOW                         DENY
allow-http-pmm-server   default  INGRESS    1000      tcp:80
default-allow-icmp      default  INGRESS    65534     icmp
default-allow-internal  default  INGRESS    65534     tcp:0-65535,udp:0-65535,icmp
default-allow-rdp       default  INGRESS    65534     tcp:3389
default-allow-ssh       default  INGRESS    65534     tcp:22
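If you go the password route mentioned in step 5, the PMM Server Docker image (in PMM 1.x) reads the SERVER_USER and SERVER_PASSWORD environment variables at startup; a sketch of the docker run line from deploy.sh, amended accordingly:

docker run -d -p 80:80 --volumes-from pmm-data --name pmm-server \
  -e SERVER_USER=admin -e SERVER_PASSWORD=<strong_password> \
  --restart always percona/pmm-server:latest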

At this point you should have a PMM Server in GCP running on a Compute Engine instance.

The next step is to install pmm-client on the database hosts and add services for monitoring.

Here I’ve launched a single standalone Percona Server 5.6 on another Compute Engine instance in the same project (thematic-acumen-204008).

jerichorivera@percona-support:~/GCE$ gcloud compute instances create mysql1 --tags mysql1 --image-family centos-7 --image-project centos-cloud --machine-type n1-standard-2 --zone us-west1-b --create-disk=size=50GB,type=pd-standard,device-name=sdb --description "MySQL1 on GCP" --metadata-from-file startup-script=compute-instance-deploy.sh
Created [https://www.googleapis.com/compute/v1/projects/thematic-acumen-204008/zones/us-west1-b/instances/mysql1].
NAME    ZONE        MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP     STATUS
mysql1  us-west1-b  n1-standard-2               10.138.0.3   35.233.187.253  RUNNING

I installed Percona Server 5.6 and pmm-client, and then added services. Take note that since the PMM Server and the MySQL server are in the same project and the same VPC network, we can connect directly through the INTERNAL_IP 10.138.0.2; otherwise, use the EXTERNAL_IP 35.233.216.225.

[root@mysql1 jerichorivera]# pmm-admin config --server 10.138.0.2
OK, PMM server is alive.
PMM Server      | 10.138.0.2
Client Name     | mysql1
Client Address  | 10.138.0.3
[root@mysql1 jerichorivera]#
[root@mysql1 jerichorivera]# pmm-admin check-network
PMM Network Status
Server Address | 10.138.0.2
Client Address | 10.138.0.3
* System Time
NTP Server (0.pool.ntp.org)         | 2018-05-22 06:45:47 +0000 UTC
PMM Server                          | 2018-05-22 06:45:47 +0000 GMT
PMM Client                          | 2018-05-22 06:45:47 +0000 UTC
PMM Server Time Drift               | OK
PMM Client Time Drift               | OK
PMM Client to PMM Server Time Drift | OK
* Connection: Client --> Server
-------------------- -------
SERVER SERVICE       STATUS
-------------------- -------
Consul API           OK
Prometheus API       OK
Query Analytics API  OK
Connection duration | 408.185µs
Request duration    | 6.810709ms
Full round trip     | 7.218894ms
No monitoring registered for this node identified as 'mysql1'.
[root@mysql1 jerichorivera]# pmm-admin add mysql --create-user
[linux:metrics] OK, now monitoring this system.
[mysql:metrics] OK, now monitoring MySQL metrics using DSN pmm:***@unix(/mnt/disks/disk1/data/mysql.sock)
[mysql:queries] OK, now monitoring MySQL queries from slowlog using DSN pmm:***@unix(/mnt/disks/disk1/data/mysql.sock)
[root@mysql1 jerichorivera]# pmm-admin list
pmm-admin 1.10.0
PMM Server      | 10.138.0.2
Client Name     | mysql1
Client Address  | 10.138.0.3
Service Manager | linux-systemd
-------------- ------- ----------- -------- ----------------------------------------------- ------------------------------------------
SERVICE TYPE   NAME    LOCAL PORT  RUNNING  DATA SOURCE                                     OPTIONS
-------------- ------- ----------- -------- ----------------------------------------------- ------------------------------------------
mysql:queries  mysql1  -           YES      pmm:***@unix(/mnt/disks/disk1/data/mysql.sock)  query_source=slowlog, query_examples=true
linux:metrics  mysql1  42000       YES      -
mysql:metrics  mysql1  42002       YES      pmm:***@unix(/mnt/disks/disk1/data/mysql.sock)

Lastly, in case you need to delete the PMM Server instance: just execute the delete command below to completely remove the instance and the attached disk. Be aware that you can remove the boot disk yet retain the attached persistent disk if you prefer.

jerichorivera@percona-support:~/GCE$ gcloud compute instances delete pmm-server
The following instances will be deleted. Any attached disks configured
 to be auto-deleted will be deleted unless they are attached to any
other instances or the `--keep-disks` flag is given and specifies them
 for keeping. Deleting a disk is irreversible and any data on the disk
 will be lost.
 - [pmm-server] in [us-west1-b]
Do you want to continue (Y/n)?  y
Deleted [https://www.googleapis.com/compute/v1/projects/thematic-acumen-204008/zones/us-west1-b/instances/pmm-server].

The other option is to install PMM on Google Container Engine, which was explained by Manjot Singh in his blog post.


May 23, 2018

Percona Monitoring and Management 1.11.0 Is Now Available

Percona Monitoring and Management

Percona Monitoring and Management (PMM) is a free and open-source platform for managing and monitoring MySQL® and MongoDB® performance. You can run PMM in your own environment for maximum security and reliability. It provides thorough time-based analysis for MySQL® and MongoDB® servers to ensure that your data works as efficiently as possible.

In PMM Release 1.11.0, we deliver the following changes:

  • Configurable MySQL Slow Log Rotation – enable or disable rotation, and specify how many files to keep on disk
  • Predictable Graphs – we’ve updated our formulas to use aggregation functions over time for more reliable graphs
  • MySQL Exporter Parsing of my.cnf – we’ve improved how we read my.cnf
  • Annotation improvements – passing multiple strings results in single annotation being written

The release includes 1 new feature and improvement, and the bug fixes listed below.

MySQL Slow Log Rotation Improvements

We spent some time this release going over how we handle MySQL’s Slow Log rotation logic. Query Analytics requires that slow logging be enabled (either to file or to PERFORMANCE_SCHEMA), and we found that users of Percona Server for MySQL overwhelmingly choose logging to a file in order to take advantage of log_slow_verbosity, which provides enhanced InnoDB usage information. However, the challenge with MySQL’s Slow Log is that it is very verbose, and thus the number one concern is disk space. PMM strives to do no harm, and so MySQL Slow Log rotation was a natural fit, but until this release we were very strict and didn’t allow any configuration of these parameters.

Percona Server for MySQL users have long known about Slow Query Log Rotation and Expiration, but until now had no way of using this built-in Percona Server for MySQL feature while ensuring that PMM wasn’t missing any queries from the Slow Log during file rotation. Or perhaps your use case is that you want to do Slow Log rotation using logrotate or some other facility. With Release 1.11, this is now possible!

We’ve made two significant changes:

  1. You can now specify the number of Slow Log files to remain on disk, and let PMM handle deleting the oldest files first. Default remains unchanged – 1 Slow Log to remain on disk.
  2. Slow Log rotation can now be disabled, for example if you want to manage rotation using logrotate or Percona Server for MySQL Slow Query Log Rotation and Expiration. Default remains unchanged – Slow Log Rotation is ON.

Number of Slow Logs Retained on Disk

Slow Logs Rotation – On or Off

You specify each of these two new controls when setting up the MySQL service. The following example specifies that 5 Slow Log files should remain on disk:

pmm-admin add mysql ... --retain-slow-logs=5

While the following example specifies that Slow Log rotation is to be disabled (flag value of false), with the assumption that you will perform your own Slow Log Rotation:

pmm-admin add mysql ... --slow-log-rotation=false

We don’t currently support modifying option parameters for an existing service definition. This means you must remove, then re-add the service and include the new options.

We’re including a logrotate script in this post to get you started; it is designed to keep 30 copies of Slow Logs at 1GB each. Note that you’ll need to update the Slow Log location, and ensure that a MySQL user account with the SUPER and RELOAD privileges is used for this script to execute successfully.

Example logrotate configuration:
/var/mysql/mysql-slow.log {
    nocompress
    create 660 mysql mysql
    size 1G
    dateext
    missingok
    notifempty
    sharedscripts
    postrotate
       /bin/mysql -e 'SELECT @@global.long_query_time INTO @LQT_SAVE; SET GLOBAL long_query_time=2000; SELECT SLEEP(2); FLUSH SLOW LOGS; SELECT SLEEP(2); SET GLOBAL long_query_time=@LQT_SAVE;'
    endscript
    rotate 30
}

Predictable Graphs

We’ve updated the logic on four dashboards to better handle predictability and also to allow zooming in to look at shorter time ranges. For example, refreshing PXC/Galera graphs prior to 1.11 led to graphs spiking at different points during the metric series. We’ve reviewed each of these graphs and their corresponding queries and added <aggregation>_over_time() functions so that graphs display a consistent view of the metric series. This improves your ability to drill in on the dashboards, so that no matter how short your time range, you will still observe the same spikes and troughs in your metric series. The four dashboards affected by this improvement are listed below, with a sketch of the query pattern after the list:

  • Home Dashboard
  • PXC/Galera Graphs Dashboard
  • MySQL Overview Dashboard
  • MySQL InnoDB Metrics Dashboard
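As a sketch of the query pattern (illustrative PromQL only; the actual dashboard expressions differ per graph), a raw gauge such as mysqld_exporter’s mysql_global_status_threads_running becomes:

# Before: raw gauge, sensitive to which sample lands in each rendered point
mysql_global_status_threads_running
# After: aggregated over the interval, stable across refreshes and zoom levels
max_over_time(mysql_global_status_threads_running[$interval])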

MySQL Exporter parsing of my.cnf

In earlier releases, the MySQL Exporter expected only key=value type flags. It would ignore options without values (i.e. disable-auto-rehash), and could sometimes read the wrong section of the my.cnf file.  We’ve updated the parsing engine to be more MySQL compatible.

Annotation improvements

Annotations permit the display of an event on all dashboards in PMM. Users reported that passing more than one string to pmm-admin annotate would generate an error, so we updated the parsing logic so that all strings passed during annotation creation generate a single annotation event. Previously you needed to enclose your strings in quotes for them to be parsed as a single string.
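For example, with this change both of the following now create a single annotation event, where previously the unquoted form produced an error:

pmm-admin annotate Deployed release 1.2.3
pmm-admin annotate "Deployed release 1.2.3"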

Issues in this release

New Features & Improvements

  • PMM-2432 – Configurable MySQL Slow Log File Rotation

Bug fixes

  • PMM-1187 – Graphs breaks at tight resolution 
  • PMM-2362 – Explain is a part of query 
  • PMM-2399 – RPM for pmm-server is missing some files 
  • PMM-2407 – Menu items are not visible on PMM QAN dashboard 
  • PMM-2469 – Parsing of a valid my.cnf can break the mysqld_exporter 
  • PMM-2479 – PXC/Galera Cluster Overview dashboard: typo in metric names 
  • PMM-2484 – PXC/Galera Graphs display unpredictable results each time they are refreshed 
  • PMM-2503 – Wrong InnoDB Adaptive Hash Index Statistics 
  • PMM-2513 – QAN-agent always changes max_slowlog_size to 0 
  • PMM-2514 – pmm-admin annotate help – fix typos
  • PMM-2515 – pmm-admin annotate – more than 1 annotation 

How to get PMM

PMM is available for installation using three methods:

Help us improve our software quality by reporting any bugs you encounter using our bug tracking system.

