Aug
09
2018
--

Webinar Thursday Aug 9: Migrating to AWS Aurora, Monitoring AWS Aurora with PMM

Please join Autodesk’s Senior Database Engineer, Vineet Khanna, and Percona’s Sr. MySQL DBA, Tate McDaniel, as they present Migrating to Aurora and Monitoring with PMM on Thursday, August 9th, 2018, at 10:00 AM PDT (UTC-7) / 1:00 PM EDT (UTC-4).

Amazon Web Services (AWS) Aurora is one of the most popular cloud-based RDBMS solutions. A key reason for Aurora’s success is that it is based on the InnoDB storage engine.

In this session, we will talk about how you can efficiently plan a migration to Aurora using Terraform and Percona products and solutions. We will share our Terraform code for launching AWS Aurora clusters, look at tricks for checking data consistency, verify migration paths, and effectively monitor the environment using PMM.

The topics in this session include:

  • Why AWS Aurora? What is the future of AWS Aurora?
  • Build Aurora infrastructure:
      • Using Terraform (without data)
      • Restore using Terraform & Percona XtraBackup (using an AWS S3 bucket)
  • Verify data consistency
  • Aurora migration:
      • 1:1 migration
      • Many:1 migration using Percona Server multi-source replication
  • Show benchmarks and PMM dashboard
  • Demo

Register Now

 

Vineet Khanna, Senior Database Engineer, Autodesk

Vineet Khanna, Senior Database Engineer at Autodesk, has 10+ years of experience as a MySQL DBA. His main professional interests are managing complex database environments, improving database performance, and architecting high availability solutions for MySQL. He has handled database environments for organizations such as Chegg, Zendesk, and Adobe.

 

Tate McDaniel, Sr. MySQL DBA

Tate joined Percona in June 2017 as a Remote MySQL DBA. He holds a Bachelor’s degree in Information Systems and Decision Strategies from LSU. He has 10+ years of experience working with MySQL and operations management. His great love is application query tuning. In his off time, he races sailboats, travels the Caribbean by sailboat, and drives all over in an RV.


Aug
09
2018
--

How to Change Settings for PMM Deployed via Docker


When deployed through Docker, Percona Monitoring and Management (PMM) uses environment variables for its configuration.

For example, if you want to adjust the metrics resolution, you can pass -e METRICS_RESOLUTION=Ns as an option to the docker run command:

docker run -d \
  -p 80:80 \
  --volumes-from pmm-data \
  --name pmm-server \
  --restart always \
  -e METRICS_RESOLUTION=2s \
  percona/pmm-server:latest

You might think that to change a setting on an existing installation you could just stop the container with docker stop and, when starting it again with docker start, pass a new environment variable. Unfortunately, this is not going to work: docker start does not support changing environment variables, at least not at the time of writing. I assume the idea is to keep containers immutable, and if you want a container with different properties—such as different environment variables—you should run a new container instead. Here’s how.

Stop and rename the old container, just in case you want to go back:

docker stop pmm-server
docker rename pmm-server pmm-server-old

Refresh the container with the latest version:

docker pull percona/pmm-server:latest

Do not miss this step! When you destroy and recreate the container, all the updates you have made through the PMM web interface will be lost. What’s more, the software version will be reset to the one in the Docker image. Running an old PMM version with a data volume modified by a new PMM version may cause unpredictable results, including data loss.
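Since the metrics themselves live in the pmm-data container, you may also want a safety copy of that data before recreating pmm-server. A minimal sketch, assuming the standard pmm-data volume layout from the PMM documentation (adjust the paths if you created pmm-data differently):

docker run --rm --volumes-from pmm-data -v $(pwd):/backup busybox \
  tar czvf /backup/pmm-data-backup.tar.gz \
  /opt/prometheus/data /opt/consul-data /var/lib/mysql /var/lib/grafana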

Run the container with the new settings, for example changing METRICS_RESOLUTION:

docker run -d \
  -p 80:80 \
  --volumes-from pmm-data \
  --name pmm-server \
  --restart always \
  -e METRICS_RESOLUTION=5s \
  percona/pmm-server:latest
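You can verify that the new value took effect by inspecting the environment of the running container—METRICS_RESOLUTION=5s should show up in the Env list:

docker inspect pmm-server | grep METRICS_RESOLUTION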

After you’re happy with your new container deployment, you can remove the old container:

docker rm pmm-server-old

That’s it! You should now be running the latest PMM version with updated configuration settings.


Aug
07
2018
--

Resource Usage Improvements in Percona Monitoring and Management 1.13


In Percona Monitoring and Management (PMM) 1.13 we have adopted Prometheus 2, and with this comes a dramatic improvement in resource usage, along with performance improvements!

What does this mean for you? It means you can have a significantly larger number of servers and database instances monitored by the same PMM installation. Or you can reduce the instance size you use to monitor your environment and save some money.

Let’s look at some stats!

CPU Usage

PMM 1.13 reduction in CPU usage by 5x

Percona Monitoring and Management 1.13 reduction in CPU usage by 8x after adopting Prometheus 2

We can see an approximate 5x and 8x reduction of CPU usage on these two PMM Servers. Depending on the workload, we see CPU usage reductions ranging between 3x and 10x.

Disk Writes

There is also less disk write bandwidth required:

PMM 1.13 reduction in disk write bandwidth

On this instance, the bandwidth reduction is “just” 1.5x. Note this is disk IO for the entire PMM system, which includes more than only the Prometheus component. Prometheus 2 itself promises a much more significant IO bandwidth reduction, according to official benchmarks.

According to the same benchmark, you should expect a disk space usage reduction of 33–50% for Prometheus 2 vs. Prometheus 1.8. The reduction will be smaller for Percona Monitoring and Management, as it also stores Query Statistics outside of Prometheus.

Resource usage on the monitored hosts

Also, resource usage on the monitored hosts is significantly reduced:

Percona Monitoring and Management 1.13 reduction of resource usage by Prometheus 2

Why does CPU usage go down on a monitored host with the Prometheus 2 upgrade? This is because PMM uses TLS for communication between Prometheus and the monitored host. Before Prometheus 2, a full handshake was performed for every scrape, taking a lot of CPU time. This was optimized in Prometheus 2, resulting in a dramatic CPU usage decrease.

Query performance is also a lot better with Prometheus 2, meaning dashboards load visibly faster, though we did not run any specific benchmarks here to share hard numbers. Note, though, that this improvement only applies when you’re querying data that is stored in Prometheus 2.

If you’re querying data that was originally stored in Prometheus 1.8, it will be served through the much slower and less efficient “Remote Read” interface, using quite a bit more CPU and memory resources.

If you love better efficiency and performance, consider upgrading to PMM 1.13!


Aug
06
2018
--

Webinar Tues 8/14: Utilizing ProxySQL for Connection Pooling in PHP

Please join Percona’s Architect, Tibi Köröcz, as he presents Utilizing ProxySQL for Connection Pooling in PHP on Tuesday, August 14, 2018, at 8:00 am PDT (UTC-7) / 11:00 am EDT (UTC-4).

 

ProxySQL is a very powerful tool, with extended capabilities. This presentation will demonstrate how to use ProxySQL to gain functionality (seamless database backend switch) and correct problems (applications missing connection pooling).

The presentation will be a real-life study of how we use ProxySQL for connection pooling, database failover, and load balancing the communication between our (third-party) PHP application and our master-master MySQL cluster. We will also show monitoring and statistics using Percona Monitoring and Management (PMM).

Register Now!

Tibor Köröcz

Architect


Tibi joined Percona in 2015 as a Consultant. Before joining Percona, among many other things, he worked at the world’s largest car hire booking service as a Senior Database Engineer. He enjoys trying out the latest technologies and applications that can help or work together with MySQL. In his spare time he likes to spend time with his friends, travel around the world, and play ultimate frisbee.

 


Aug
01
2018
--

Percona Monitoring and Management 1.13.0 Is Now Available

PMM (Percona Monitoring and Management) is a free and open-source platform for managing and monitoring MySQL and MongoDB performance. You can run PMM in your own environment for maximum security and reliability. It provides thorough time-based analysis for MySQL and MongoDB servers to ensure that your data works as efficiently as possible.

The most significant feature in this release is Prometheus 2; however, we also packed a lot of visual changes into release 1.13:

  • Prometheus 2 – Consumes less resources, and Dashboards load faster!
  • New Dashboard: Network Overview – New dashboard for all things IPv4!
  • New Dashboard: NUMA Overview – Understand memory allocation across DIMMs
  • Snapshots and Updates Improvements – Clearer instructions for snapshot sharing, add ability to disable update reporting
  • System Overview Dashboard improvements – See high level summary, plus drill in on CPU, Memory, Disk, and Network
  • Improved SingleStat for percentages – Trend line now reflects percentage value

We addressed 13 new features and improvements, and fixed 13 bugs.

Prometheus 2

The long-awaited Prometheus 2 release is here!  By upgrading to PMM release 1.13, Percona’s internal testing has shown you will achieve a 3x–10x reduction in CPU usage, which translates into PMM Server being able to handle more instances than you could in 1.12.  You won’t see any gaps in graphs, since internally PMM Server runs two instances of Prometheus and leverages remote_read in order to provide consistent graphs!

Our Engineering teams have worked very hard to make this upgrade as transparent as possible – hats off to them for their efforts!

Lastly on Prometheus 2, we also included a new set of graphs to the Prometheus Dashboard to help you better understand when your PMM Server may run out of space. We hope you find this useful!

Network Overview Dashboard

We’re introducing a new dashboard that focuses on all things networking – we placed a Last Hour panel highlighting high-level network metrics, then drill into Network Traffic + Details, and then focus on TCP, UDP, and ICMP behavior.

Snapshots and Updates Improvements

Of most interest to current Percona customers, we’ve clarified the instructions on how to take a snapshot of a dashboard, in order to highlight that you are sharing securely with Percona. We’ve also raised the sharing timeout to 30 seconds (up from 4 seconds) so that we more reliably share useful data with Percona Support Engineers, as the shorter timeout led to incomplete graphs being shared.

Packed into this feature is also a change to how we report installed version, latest version, and what’s new information:

Lastly, we modified the behavior of the docker environment option DISABLE_UPDATES to remove the Update button.  As a reminder, you can choose to disable update reporting—for environments where you want tighter control over (i.e. to lock down) who can initiate an update—by launching the PMM docker container with the environment variable as follows:

docker run ... -e DISABLE_UPDATES=TRUE
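Putting it together with the usual PMM Server launch command, a full invocation might look like this sketch—adjust ports, volume containers, and names to your deployment:

docker run -d \
  -p 80:80 \
  --volumes-from pmm-data \
  --name pmm-server \
  --restart always \
  -e DISABLE_UPDATES=TRUE \
  percona/pmm-server:latest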

System Overview Dashboard Improvements

We’ve updated our System Overview Dashboard to focus on the four criteria of CPU, Memory, Disk, and Network, while also presenting a single panel row of high-level information (uptime, count of CPUs, load average, etc.).

The last feature we’re introducing in 1.13 is a fix to SingleStat panels, where the percentage value is reflected in the level of the trend line in the background.  For example, if you have stat panels at 20% and 86%, the line in the background should fill the respective amount of the box:

Improved SingleStat for percentages

New Features & Improvements

  • PMM-2225 – Add new Dashboard: Network Overview
  • PMM-2485 – Improve Singlestat for percentage values to accurately display trend line
  • PMM-2550 – Update to Prometheus 2
  • PMM-1667 – New Dashboard: NUMA Overview
  • PMM-1930 – Reduce Durability for MySQL
  • PMM-2291 – Add Prometheus Disk Space Utilization Information
  • PMM-2444 – Increase space for legends
  • PMM-2594 – Upgrade to Percona Toolkit 3.0.10
  • PMM-2610 – Configure Snapshot Timeout Default Higher and Update Instructions
  • PMM-2637 – Check for Updates and Disable Updates Improvements
  • PMM-2652 – Fix “Unexpected error” on Home dashboard after upgrade
  • PMM-2661 – Data resolution on Dashboards became 15sec min instead of 1sec
  • PMM-2663 – System Overview Dashboard Improvements

Bug Fixes

  • PMM-1977 – after upgrade pmm-client (1.6.1-1) can’t start mysql:metrics – can’t find .my.cnf
  • PMM-2379 – Invert colours for Memory Available graph
  • PMM-2413 – Charts on MySQL InnoDB metrics are not fully displayed
  • PMM-2427 – Information loss in CPU Graph with Grafana 5 upgrade
  • PMM-2476 – AWS PMM is broken on C5/M5 instances
  • PMM-2576 – Error in logs for MySQL 8 instance on CentOS
  • PMM-2612 – Wrong information in PMM Scrapes Task
  • PMM-2639 – mysql:metrics does not work on Ubuntu 18.04
  • PMM-2643 – Socket detection and MySQL 8
  • PMM-2698 – Misleading Graphs for Rare Events
  • PMM-2701 – MySQL 8 – Innodb Checkpoint Age
  • PMM-2722 – Memory auto-configuration for Prometheus evaluates to minimum of 128MB in entrypoint.sh

How to get PMM Server

PMM is available for installation using three methods:

  • A Docker image on Docker Hub
  • A virtual appliance (OVA) image
  • An Amazon Machine Image (AMI) on the AWS Marketplace


Jul
10
2018
--

When Database Warm Up is Not Really UP


The common wisdom with database performance management is that a “cold” database server has poor performance. Then, as it “warms up”, performance improves until you finally reach a completely warmed-up state with peak database performance. In other words, to get peak performance from MySQL you need to wait for the database to warm up.

This thinking comes from the point of view of database cache warmup. Indeed, from the cache standpoint, you start with an empty cache and over time the cache is filled with data. Moreover, the longer the database runs, the more statistics it has about data access patterns, and the better it can manage the database cache contents.

Over recent years, with the rise of SSDs, cache warmup has become less of an issue. High-performance NVMe storage can do more than 1GB/sec of reads, meaning you can warm up a 100GB database cache in less than two minutes. Also, SSD IO latency tends to be quite good, so you’re not paying as high a penalty for a higher miss rate during the warm-up stage.

It is not all so rosy with database performance over time, though. Databases tend to delay work when possible, but there is only so much delaying you can do, and when the database can’t delay work any longer, performance tends to be negatively impacted. Here are some examples of delayed work:

  • Checkpointing: depending on the database technology and configuration, checkpointing may be delayed for 30 minutes or more after database start
  • Change Buffer (Innodb) can delay index maintenance work
  • Pushing Messages from Buffers to Leaves (TokuDB) can be delayed until space in the buffers is exhausted
  • Compaction for RocksDB and other LSM-tree based systems can take quite a while to reach steady state

In all these cases database performance can be a lot better almost immediately after start compared to when it is completely “warmed up”.

An experiment with database warm up

Let’s illustrate this with a little experiment running Sysbench with MySQL and Innodb storage engine for 15 minutes:

sysbench --db-driver=mysql --threads=200 --rand-type=uniform --report-interval=10 --percentile=99 --time=900 --mysql-user=root --mysql-password= /usr/share/sysbench/oltp_update_index.lua --table_size=100000000 run
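If you want to reproduce the experiment, the test table has to be created and populated first. A sketch of the matching prepare step, assuming the same connection options and table size (populating 100 million rows takes a while):

sysbench --db-driver=mysql --mysql-user=root --mysql-password= /usr/share/sysbench/oltp_update_index.lua --table_size=100000000 prepare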

Let’s look in detail at what happens during the run, using graphs from Percona Monitoring and Management:

PMM graph of first three minutes db warm up

As you can see, the number of updates/sec we’re doing actually gets worse (and more uneven) after the first 3 minutes, while the jump to peak performance is almost immediate.

InnoDB Checkpoint Age graph from PMM

The log space usage explains some of this—in the first few minutes, we did not need to flush as aggressively as we had to later.

first few minutes MySQL Warm Up graph from PMM

On the InnoDB I/O graph we can see a couple of interesting things. First, you can see how quickly warm up happens—within 2 minutes the IO is already at half of its peak. You can also see the explanation for the little performance dip after the initial high performance (around 19:13)—this is where we got close to using all the log space, so active flushing was required while, at the same time, a lot of IO was still needed for cache warmup.

“Reaching steady state” is another term commonly used to describe the stage after warm up completes. Note, though, that such a steady state is not guaranteed to be steady at all. In fact, the most typical steady state is unsteady. For example, you can see in this blog post that both InnoDB and MyRocks show quite a lot of variance.

Summary

While the term database warm up may imply that performance after warm up will be better, it is often not the case. “Reaching steady state” is a better term, as long as you understand that “steady” does not mean uniform performance.

 


Jul
05
2018
--

Configuring PMM Monitoring for MongoDB Cluster

In this blog, we will see how to configure Percona Monitoring and Management (PMM) monitoring for a MongoDB cluster. It’s very simple, like adding a replica set or standalone instances to PMM Monitoring. 

For this example, I have used docker to create the PMM Server and MongoDB sharded cluster containers. If you want the steps I used to create the MongoDB® cluster environment using docker, I have shared them at the end of this blog. You can refer to these if you would like to create the same setup.

Configuring PMM Clients

For PMM installations, you can check these links for PMM installation and pmm-client setup. The following are the members of the MongoDB cluster:

mongos:
 mongos1
config db: (replSet name - mongors1conf)
 mongocfg1
 mongocfg2
 mongocfg3
Shard1: (replSet name - mongors1)
 mongors1n1
 mongors1n2
 mongors1n3
Shard2: (replSet name - mongors2)
 mongors2n1
 mongors2n2
 mongors2n3

In this setup, I installed the pmm-client on the mongos1 server. Then I added an agent to monitor MongoDB metrics with the cluster option, as shown in the next code block. I named the cluster “mongoClusterPMM”. (Note: you have to be the root user, or have sudo access, to execute the pmm-admin command.)

root@90262f1360a0:/# pmm-admin add  mongodb --cluster mongoClusterPMM
[linux:metrics]   OK, now monitoring this system.
[mongodb:metrics] OK, now monitoring MongoDB metrics using URI localhost:27017
[mongodb:queries] OK, now monitoring MongoDB queries using URI localhost:27017
[mongodb:queries] It is required for correct operation that profiling of monitored MongoDB databases be enabled.
[mongodb:queries] Note that profiling is not enabled by default because it may reduce the performance of your MongoDB server.
[mongodb:queries] For more information read PMM documentation (https://www.percona.com/doc/percona-monitoring-and-management/conf-mongodb.html).
root@90262f1360a0:/# pmm-admin list
pmm-admin 1.11.0
PMM Server      | 172.17.0.2 (password-protected)
Client Name     | 90262f1360a0
Client Address  | 172.17.0.4 
Service Manager | unix-systemv
---------------- ------------- ----------- -------- ---------------- ------------------------
SERVICE TYPE     NAME          LOCAL PORT  RUNNING  DATA SOURCE      OPTIONS                 
---------------- ------------- ----------- -------- ---------------- ------------------------
mongodb:queries  90262f1360a0  -           YES      localhost:27017  query_examples=true     
linux:metrics    90262f1360a0  42000       YES      -                                        
mongodb:metrics  90262f1360a0  42003       YES      localhost:27017  cluster=mongoClusterPMM 
root@90262f1360a0:/#

As you can see, I used the pmm-admin add mongodb [options] command, which enables monitoring of the system, MongoDB metrics, and MongoDB queries. You need to enable the profiler to monitor MongoDB queries. Use the next command to enable it at the database level:

use db_name
db.setProfilingLevel(1)
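To verify the change, db.getProfilingStatus() in the same shell returns the current profiling level and the slowms threshold:

db.getProfilingStatus()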

Check this blog to learn more about QAN setup and details. If you want only MongoDB metrics to be monitored, rather than queries, you can use the command pmm-admin add mongodb:metrics [options] instead. After this, go to the PMM home page (in my case localhost:8080) in your browser and select MongoDB Cluster Summary from the drop-down list under the MongoDB option. The screenshot below shows the MongoDB cluster “mongoClusterPMM” statistics collected by the agent that we added on the mongos1 server.

Percona Monitoring and Management for a MongoDB cluster

Did we miss something here? Do you see any metrics in the dashboard above except “Balancer Enabled” and “Chunks Balanced”?

No. This is because PMM doesn’t yet have enough data to show in the dashboard. The shards have not been added to the cluster, which is why it displays 0 under shards. Let’s add the two shard replica sets, mongors1 and mongors2, through the mongos1 instance, and enable sharding on a database to complete the cluster setup, as follows:

mongos> sh.addShard("mongors1/mongors1n1:27017,mongors1n2:27017,mongors1n3:27017")
{ "shardAdded" : "mongors1", "ok" : 1 }
mongos> sh.addShard("mongors2/mongors2n1:27017,mongors2n2:27017,mongors2n3:27017")
{ "shardAdded" : "mongors2", "ok" : 1 }

Now, I’ll add some data, collection and shard keys, and enable sharding so that we can see some statistics in the dashboard:

use vinodh
db.setProfilingLevel(1)
db.testColl.insertMany([{id1:1,name:"test insert"},{id1:2,name:"test insert"},{id1:3,name:"insert"},{id1:4,name:"insert"}])
db.testColl.ensureIndex({id1:1})
sh.enableSharding("vinodh")
sh.shardCollection("vinodh.testColl", {id1:1})
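Before heading back to the dashboard, you can confirm the sharding setup from the mongos shell—sh.status() prints the shard list, the databases with sharding enabled, and the chunk distribution:

sh.status()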

At last! Now you can see statistics in the graph for the MongoDB cluster:

PMM Statistics for a MongoDB Cluster 

Full Cluster Monitoring

We are not done yet. So far we have only added an agent to monitor the mongos1 instance, so only the cluster-related statistics and QAN data are collected through this node. To monitor all nodes in the MongoDB cluster, we need to configure all nodes in PMM under the “mongoClusterPMM” cluster. This tells PMM that the configured nodes are part of the same cluster. We can also monitor the replica set related metrics for the members of the config DB and the shards. Let’s add the monitoring agents on the mongos1 server to monitor all MongoDB instances remotely. We’ll use these commands:

pmm-admin add mongodb:metrics --uri "mongodb://mongocfg1:27017/admin?replicaSet=mongors1conf" mongocfg1replSet --cluster  mongoClusterPMM
pmm-admin add mongodb:metrics --uri "mongodb://mongocfg2:27017/admin?replicaSet=mongors1conf" mongocfg2replSet --cluster  mongoClusterPMM
pmm-admin add mongodb:metrics --uri "mongodb://mongocfg3:27017/admin?replicaSet=mongors1conf" mongocfg3replSet --cluster  mongoClusterPMM
pmm-admin add mongodb:metrics --uri "mongodb://mongors1n1:27017/admin?replicaSet=mongors1" mongors1replSetn1 --cluster  mongoClusterPMM
pmm-admin add mongodb:metrics --uri "mongodb://mongors1n2:27017/admin?replicaSet=mongors1" mongors1replSetn2 --cluster  mongoClusterPMM
pmm-admin add mongodb:metrics --uri "mongodb://mongors1n3:27017/admin?replicaSet=mongors1" mongors1replSetn3 --cluster  mongoClusterPMM
pmm-admin add mongodb:metrics --uri "mongodb://mongors2n1:27017/admin?replicaSet=mongors2" mongors2replSetn1 --cluster  mongoClusterPMM
pmm-admin add mongodb:metrics --uri "mongodb://mongors2n2:27017/admin?replicaSet=mongors2" mongors2replSetn2 --cluster  mongoClusterPMM
pmm-admin add mongodb:metrics --uri "mongodb://mongors2n3:27017/admin?replicaSet=mongors2" mongors2replSetn3 --cluster  mongoClusterPMM

Once you have added them, you can check the status of the agents (mongodb-exporter) as follows:

root@90262f1360a0:/# pmm-admin list
pmm-admin 1.11.0
PMM Server      | 172.17.0.2 (password-protected)
Client Name     | 90262f1360a0
Client Address  | 172.17.0.4 
Service Manager | unix-systemv
---------------- ------------------ ----------- -------- ----------------------- ------------------------
SERVICE TYPE     NAME               LOCAL PORT  RUNNING  DATA SOURCE             OPTIONS                 
---------------- ------------------ ----------- -------- ----------------------- ------------------------
mongodb:queries  90262f1360a0       -           YES      localhost:27017         query_examples=true     
linux:metrics    90262f1360a0       42000       YES      -                                               
mongodb:metrics  90262f1360a0       42003       YES      localhost:27017         cluster=mongoClusterPMM 
mongodb:metrics  mongocfg1replSet   42004       YES      mongocfg1:27017/admin   cluster=mongoClusterPMM 
mongodb:metrics  mongocfg2replSet   42005       YES      mongocfg2:27017/admin   cluster=mongoClusterPMM 
mongodb:metrics  mongocfg3replSet   42006       YES      mongocfg3:27017/admin   cluster=mongoClusterPMM 
mongodb:metrics  mongors1replSetn1  42007       YES      mongors1n1:27017/admin  cluster=mongoClusterPMM 
mongodb:metrics  mongors1replSetn3  42008       YES      mongors1n3:27017/admin  cluster=mongoClusterPMM 
mongodb:metrics  mongors2replSetn1  42009       YES      mongors2n1:27017/admin  cluster=mongoClusterPMM 
mongodb:metrics  mongors2replSetn2  42010       YES      mongors2n2:27017/admin  cluster=mongoClusterPMM 
mongodb:metrics  mongors2replSetn3  42011       YES      mongors2n3:27017/admin  cluster=mongoClusterPMM 
mongodb:metrics  mongors1replSetn2  42012       YES      mongors1n2:27017/admin  cluster=mongoClusterPMM

So now you can monitor every member of the MongoDB cluster, including the replica set and shard statistics. The next screenshot shows one of the members from a replica set under the MongoDB replica set dashboard. You can select it from the dashboard in this way: Cluster: mongoClusterPMM → Replica Set: mongors1 → Instance: mongors1replSetn2:

 PMM MongoDB Replica Set Dashboard

MongoDB cluster docker setup

As I said at the beginning of this blog, here are the steps I used for the MongoDB cluster setup. Since this is for testing, I used a very simple configuration to set up the cluster environment with docker-compose. Before creating the MongoDB cluster, create a network in docker so that the cluster nodes and PMM can connect to each other, like this:

Vinodhs-MBP:docker-mcluster vinodhkrish$ docker network create mongo-cluster-nw
7e3203aed630fed9b5f5f2b30e346301a58a068dc5f5bc0bfe38e2eef1d48787

I used this docker-compose.yaml file to create the docker environment:

version: '2'
services:
  mongors1n1:
    container_name: mongors1n1
    image: mongo:3.4
    command: mongod --shardsvr --replSet mongors1 --dbpath /data/db --port 27017
    networks:
      - mongo-cluster
    ports:
      - 27047:27017
    expose:
      - "27017"
    environment:
      TERM: xterm
  mongors1n2:
    container_name: mongors1n2
    image: mongo:3.4
    command: mongod --shardsvr --replSet mongors1 --dbpath /data/db --port 27017
    networks:
      - mongo-cluster
    ports:
      - 27048:27017
    expose:
      - "27017"
    environment:
      TERM: xterm
  mongors1n3:
    container_name: mongors1n3
    image: mongo:3.4
    command: mongod --shardsvr --replSet mongors1 --dbpath /data/db --port 27017
    networks:
      - mongo-cluster
    ports:
      - 27049:27017
    expose:
      - "27017"
    environment:
      TERM: xterm
  mongors2n1:
    container_name: mongors2n1
    image: mongo:3.4
    command: mongod --shardsvr --replSet mongors2 --dbpath /data/db --port 27017
    networks:
      - mongo-cluster
    ports:
      - 27057:27017
    expose:
      - "27017"
    environment:
      TERM: xterm
  mongors2n2:
    container_name: mongors2n2
    image: mongo:3.4
    command: mongod --shardsvr --replSet mongors2 --dbpath /data/db --port 27017
    networks:
      - mongo-cluster
    ports:
      - 27058:27017
    expose:
      - "27017"
    environment:
      TERM: xterm
  mongors2n3:
    container_name: mongors2n3
    image: mongo:3.4
    command: mongod --shardsvr --replSet mongors2 --dbpath /data/db --port 27017
    networks:
      - mongo-cluster
    ports:
      - 27059:27017
    expose:
      - "27017"
    environment:
      TERM: xterm
  mongocfg1:
    container_name: mongocfg1
    image: mongo:3.4
    command: mongod --configsvr --replSet mongors1conf --dbpath /data/db --port 27017
    networks:
      - mongo-cluster
    ports:
      - 27025:27017
    environment:
      TERM: xterm
    expose:
      - "27017"
  mongocfg2:
    container_name: mongocfg2
    image: mongo:3.4
    command: mongod --configsvr --replSet mongors1conf --dbpath /data/db --port 27017
    networks:
      - mongo-cluster
    ports:
      - 27024:27017
    environment:
      TERM: xterm
    expose:
      - "27017"
  mongocfg3:
    container_name: mongocfg3
    image: mongo:3.4
    command: mongod --configsvr --replSet mongors1conf --dbpath /data/db --port 27017
    networks:
      - mongo-cluster
    ports:
      - 27023:27017
    environment:
      TERM: xterm
    expose:
      - "27017"
  mongos1:
    container_name: mongos1
    image: mongo:3.4
    depends_on:
      - mongocfg1
      - mongocfg2
    command: mongos --configdb mongors1conf/mongocfg1:27017,mongocfg2:27017,mongocfg3:27017 --port 27017
    networks:
      - mongo-cluster
    ports:
      - 27019:27017
    expose:
      - "27017"
networks:
  mongo-cluster:
    external:
      name: mongo-cluster-nw

Hint:
In the docker-compose file above, if you are using compose file format version >= 3.x, you can name the network directly in the “networks” section of the YAML file, like this:

networks:
  mongo-cluster:
    name: mongo-cluster-nw

Now start the containers and configure the cluster quickly as follows:

Vinodhs-MBP:docker-mcluster vinodhkrish$ docker-compose up -d
Creating mongocfg1 ... 
Creating mongos1 ... 
Creating mongors2n3 ... 
Creating mongocfg2 ... 
Creating mongors1n1 ... 
Creating mongocfg3 ... 
Creating mongors2n2 ... 
Creating mongors1n3 ... 
Creating mongors2n1 ... 
Creating mongors1n2 ...

ConfigDB replica set setup:

Vinodhs-MBP:docker-mcluster vinodhkrish$ docker exec -it mongocfg1 mongo --quiet
> rs.initiate({  _id: "mongors1conf",configsvr: true,
...   members: [{ _id:0, host: "mongocfg1" },{_id:1, host: "mongocfg2" },{ _id:2, host: "mongocfg3"}]
...   })
{ "ok" : 1 }

Shard1 replica set setup:

Vinodhs-MBP:docker-mcluster vinodhkrish$ docker exec -it mongors1n1 mongo --quiet
> rs.initiate( { _id: "mongors1", members:[ {_id: 0, host: "mongors1n1"}, {_id:1, host: "mongors1n2"}, {_id:2, host: "mongors1n3"}] })
{ "ok" : 1 }

Shard2 replica set setup:

Vinodhs-MBP:docker-mcluster vinodhkrish$ docker exec -it mongors2n1 mongo --quiet
> rs.initiate({ _id: "mongors2",  members: [ { _id: 0, host: "mongors2n1"}, { _id:1, host: "mongors2n2"}, { _id:2, host: "mongors2n3"}] })
{ "ok" : 1 }

I hope that this blog helps you to set up a MongoDB cluster and to configure PMM to monitor it! Have a great day!


Jun
27
2018
--

Percona Monitoring and Management 1.12.0 Is Now Available

PMM (Percona Monitoring and Management) is a free and open-source platform for managing and monitoring MySQL and MongoDB performance. You can run PMM in your own environment for maximum security and reliability. It provides thorough time-based analysis for MySQL and MongoDB servers to ensure that your data works as efficiently as possible.

In release 1.12, we invested our efforts in the following areas:

  • Visual Explain in Query Analytics – Gain insight into MySQL’s query optimizer for your queries
  • New Dashboard – InnoDB Compression Metrics – Evaluate effectiveness of InnoDB Compression
  • New Dashboard – MySQL Command/Handler Compare – Contrast MySQL instances side by side
  • Updated Grafana to 5.1 – Fixed scrolling issues

We addressed 10 new features and improvements, and fixed 13 bugs.

Visual Explain in Query Analytics

We’re working on substantial changes to Query Analytics, and the first part to roll out is something that users of Percona Toolkit may recognize – we’ve introduced a new element called Visual Explain, based on pt-visual-explain.  This functionality transforms MySQL EXPLAIN output into a left-deep tree representation of a query plan, in order to mimic how the plan is represented inside MySQL.  This is of primary benefit when investigating tables that are joined in some logical way, so that you can understand in what order the loops are executed by the MySQL query optimizer. In this example we demonstrate the output of a single table lookup vs. a two table join:

Single table lookup:

SELECT DISTINCT c
FROM sbtest13
WHERE id BETWEEN 49808 AND 49907
ORDER BY c;

Two tables via INNER JOIN:

SELECT sbtest3.c
FROM sbtest1
INNER JOIN sbtest3 ON sbtest1.id = sbtest3.id
WHERE sbtest3.c = 'long-string';
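You can also generate the same left-deep tree outside of PMM by piping EXPLAIN output into pt-visual-explain from Percona Toolkit. A sketch, assuming the sysbench tables live in a database named sbtest:

mysql sbtest -e "EXPLAIN SELECT DISTINCT c FROM sbtest13 WHERE id BETWEEN 49808 AND 49907 ORDER BY c" | pt-visual-explain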

InnoDB Compression Metrics Dashboard

A great feature of MySQL’s InnoDB storage engine is compression of data that is transparently handled by the database, saving you space on disk while reducing the amount of I/O to disk, as fewer disk blocks are required to store the same amount of data, thus allowing you to reduce your storage costs.  We’ve deployed a new dashboard that helps you understand the most important characteristics of InnoDB compression.  Here’s a sample visualizing compression and decompression attempts, alongside the overall Compression Success Ratio graph:

 

MySQL Command/Handler Compare Dashboard

We have introduced a new dashboard that lets you do a side-by-side comparison of Command (Com_*) and Handler statistics.  A common use case would be to compare servers that share a similar workload, for example across MySQL instances in a pool of replicated slaves.  In this example I am comparing two servers under identical sysbench load, but exhibiting slightly different performance characteristics:

The number of servers you can select for comparison is unbounded, but depending on the screen resolution you might want to limit yourself to three at a time on a 1080p screen.

New Features & Improvements

  • PMM-2519: Display Visual Explain in Query Analytics
  • PMM-2019: Add new Dashboard InnoDB Compression metrics
  • PMM-2154: Add new Dashboard Compare Commands and Handler statistics
  • PMM-2530: Add timeout flags to mongodb_exporter (thank you unguiculus for your contribution!)
  • PMM-2569: Update the MySQL Golang driver for MySQL 8 compatibility
  • PMM-2561: Update to Grafana 5.1.3
  • PMM-2465: Improve pmm-admin debug output
  • PMM-2520: Explain Missing Charts from MySQL Dashboards
  • PMM-2119: Improve Query Analytics messaging when Host = All is passed
  • PMM-1956: Implement connection checking in mongodb_exporter

Bug Fixes

  • PMM-1704: Unable to connect to AtlasDB MongoDB
  • PMM-1950: pmm-admin (mongodb:metrics) doesn’t work well with SSL secured mongodb server
  • PMM-2134: rds_exporter exports memory in Kb with node_exporter labels which are in bytes
  • PMM-2157: Cannot connect to MongoDB using URI style
  • PMM-2175: Grafana singlestat doesn’t use consistent colour when unit is of type Time
  • PMM-2474: Data resolution on Dashboards became 15sec interval instead of 1sec
  • PMM-2581: Improve Travis CI tests by addressing pmm-admin check-network Time Drift
  • PMM-2582: Unable to scroll on “_PMM Add Instance” page when many RDS instances exist in an AWS account
  • PMM-2596: Set fixed height for panel content in PMM Add Instances
  • PMM-2600: InnoDB Checkpoint Age does not show data for MySQL
  • PMM-2620: Fix balancerIsEnabled & balancerChunksBalanced values
  • PMM-2634: pmm-admin cannot create user for MySQL 8
  • PMM-2635: Improve error message while adding metrics beyond “exit status 1”

Known Issues

  • PMM-2639: mysql:metrics does not work on Ubuntu 18.04 – We will address this in a subsequent release

How to get PMM Server

PMM is available for installation using three methods:

  • A Docker image on Docker Hub
  • A virtual appliance (OVA) image
  • An Amazon Machine Image (AMI) on the AWS Marketplace


Jun
26
2018
--

MongoDB Explain – Using PMM-QAN for MongoDB Query Analytics


In this blog post, we will walk through PMM Query Analytics for MongoDB. We will see how to analyze MongoDB query performance, review the initial parameters that we need to check, and find out how to compare MongoDB query performance with and without indexes, with the help of the EXPLAIN plan.

The Percona Monitoring and Management QAN (PMM-QAN) dashboard helps DBAs and developers to analyze database queries and identify performance issues more easily. Sometimes it is difficult to find issues just by enabling the profiler and tracking all the slow queries through the mongo shell.

Test Case Environment

We configured the test environment with PMM Server before we ran the test:

PMM Version: 1.11.0
MongoDB Version: 3.4.10
MongoDB Configurations:
3 Member Replicaset (1 Primary, 2 Secondaries)
ns: "test.pro"
Test Query: Find (test case: with and without index)
Indexed Field: "product"
Query: db.pro.find({product:"aa"})
Query count: 1000
Total count: 20000

Please note: we have to enable the profiler in the MongoDB environment for the metrics to be reflected in the QAN page. The QAN dashboard lists multiple queries, based on the threshold we have set in the profiler. For this demonstration, we have set the profiling level to 2 to get the query reflected in the dashboard.
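For example, profiling can be enabled per database from the mongo shell—level 2 profiles all operations, which is convenient for a demo but usually too heavy for production (the test database here matches the ns "test.pro" above):

use test
db.setProfilingLevel(2)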

QAN Analysis

Let’s analyze and compare the plans that were collected before and after the index was added to the collection.

Query used:

db.pro.find({product:"aa"})

The QAN dashboard lists the FIND query for the “pro” collection:

Query Time

After selecting this specific query, we will see its details just below the list.

Review parameter: “Query Time”

Without Index

Here Query Time = 18ms. We will check the query count, docs returned, and docs scanned in detail in the explain plan.

With Index

Now the Query Time = 15ms: we have improved the query by creating an index, and MongoDB is no longer scanning the whole collection.

EXPLAIN PLAN

Analysis of the query from the QAN EXPLAIN plan: you can use the “Expand All” toggle button to check the complete execution stats and plans, as in this case for the “test” database.

COMPARISON OF PERFORMANCE

Here we make a comparison of query performance with and without the index, and take a look at the basics that need to be checked before and after index creation:

Without Index:

Stage = “COLLSCAN” means that MongoDB scans every document in the collection in order to fulfil the find query, so you need to create an index to improve query performance.
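In this test case the index goes on the “product” field; a sketch of the mongo shell command that creates it:

use test
db.pro.createIndex({product: 1})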

With Index:

Stage = “IXSCAN” indicates that the optimizer is no longer scanning the whole collection, but only the documents within the index bounds.

Without Index:

It scanned all the documents, i.e. docsExamined: 20000, to return nReturned: 1000.

With Index:

MongoDB uses the index and scans only the relevant documents, i.e. docsExamined: 1000 to return nReturned: 1000. We can see the performance improvement from adding an index.
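If you want to cross-check what QAN shows, the same execution statistics (nReturned, docsExamined, and the winning plan stages) are available directly from the mongo shell:

db.pro.find({product:"aa"}).explain("executionStats")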

This is how the query behaves after index creation. With QAN we can identify exactly which index is being used, discover whether that index is useful or not, and follow other performance related events, all within an easy UI.

As this blog post is specific to Query Analysis for MongoDB, QAN graphs and their attributes can be accessed from here, an excellent blog post written by my colleague Vinodh.


Jun
25
2018
--

Running Percona XtraDB Cluster in Kubernetes/OpenShift

Diagram of Percona XtraDB Cluster / MySQL running in Kubernetes/OpenShift

Kubernetes, and its most popular distribution OpenShift, receive a lot of interest as container orchestration platforms. However, databases remain a foreign entity there, primarily because of their stateful nature, since container orchestration systems prefer stateless applications. That said, there has been good progress in support for StatefulSet applications and persistent storage, to the extent that it might already be comfortable to have a production database instance running in Kubernetes. With this in mind, we’ve been looking at running Percona XtraDB Cluster in Kubernetes/OpenShift.

While there are already many examples on the Internet of how to start a single MySQL instance in Kubernetes, for serious usage we need to provide:

  • High Availability: how can we guarantee availability when an instance (or Pod in Kubernetes terminology) crashes or becomes unresponsive?
  • Persistent storage: we do not want to lose our data in case of instance failure
  • Backup and recovery
  • Traffic routing: in the case of multiple instances, how do we direct an application to the correct one?
  • Monitoring

Percona XtraDB Cluster in Kubernetes/OpenShift

Schematically it looks like this:


Percona XtraDB Cluster in Kubernetes/OpenShift a possible configuration for a resilient solution

The picture highlights the components we are going to use.

Running this in Kubernetes assumes a high degree of automation and minimal manual intervention.

We provide our proof of concept in this project: https://github.com/Percona-Lab/percona-openshift. Please treat it as a source of ideas and as an alpha-quality project; it is in no way production ready.

Details

In our implementation we rely on Helm, the package manager for Kubernetes.  Unfortunately OpenShift does not officially support Helm out of the box, but there is a guide from RedHat on how to make it work.

In the clustering setup, it is quite typical to use a service discovery software like Zookeeper, etcd or Consul. It may become necessary for our Percona XtraDB Cluster deployment, but for now, to simplify deployment, we are going to use the DNS service discovery mechanism provided by Kubernetes. It should be enough for our needs.
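For reference, DNS-based discovery in Kubernetes is typically implemented with a headless Service: setting clusterIP: None makes the cluster DNS resolve the service name to the individual Pod addresses, so cluster nodes can find each other by name. A minimal illustration—the names here are ours, not necessarily what the project uses:

apiVersion: v1
kind: Service
metadata:
  name: pxc
spec:
  clusterIP: None        # headless: DNS returns individual Pod IPs
  selector:
    app: pxc
  ports:
    - name: mysql
      port: 3306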

We also expect the Kubernetes deployment to provide Dynamic Storage Provisioning. The major cloud providers (like Google Cloud, Microsoft Azure or Amazon Cloud) should have it. However, it might not be easy to have Dynamic Storage Provisioning for on-premise deployments; you may need to set up GlusterFS or Ceph to provide it.

The challenge with a distributed file system is how many copies of the data you end up having. Percona XtraDB Cluster by itself keeps three copies, and GlusterFS will also require at least two copies of the data, so in the end we would have six copies of the data. This is bad for write-intensive applications, and it’s also not good from the capacity standpoint.

One possible approach is to use local data copies for Percona XtraDB Cluster deployments. This provides better performance and less impact on the network, but in the case of a big dataset (100GB+) a node failure will require an SST, with a big impact on the cluster and network. So the individual solution should be tailored to your workload and your requirements.

Now that we have a basic setup working, it would be good to understand the performance impact of running Percona XtraDB Cluster in Kubernetes. Is the network and storage overhead acceptable, or is it too big? We plan to look into this in the future.

Once again, our project is located at https://github.com/Percona-Lab/percona-openshift, we are looking for your feedback and for your experience of running databases in Kubernetes/OpenShift.

Before you leave …

Percona XtraDB Cluster

If this article has interested you and you would like to know more about Percona XtraDB Cluster, you might enjoy our recent series of webinar tutorials that introduce this software and how to use it.

