Apr
20
2018
--

Percona Monitoring and Management (PMM) 1.10.0 Is Now Available

Percona Monitoring and Management (PMM) is a free and open-source platform for managing and monitoring MySQL® and MongoDB® performance. You can run PMM in your own environment for maximum security and reliability. It provides thorough time-based analysis for MySQL® and MongoDB® servers to ensure that your data works as efficiently as possible.

We focused mainly on two features in 1.10.0, but there are also several notable improvements worth highlighting:

  • Annotations – Record and display Application Events as Annotations using pmm-admin annotate
  • Grafana 5.0 – Improved visualization effects
  • Switching between Dashboards – Restored functionality to preserve host when switching dashboards
  • New Percona XtraDB Cluster Overview graphs – Added Galera Replication Latency graphs on Percona XtraDB Cluster Overview dashboard with consistent colors

This release includes four new features and improvements, and eight bug fixes.

Annotations

Application events are one of the contributors to changes in database performance characteristics, and in this release PMM now supports receiving events and displaying them as Annotations using the new command pmm-admin annotate. A recent Percona survey reveals that Database and DevOps Engineers highly value visibility into the Application layer.  By displaying Application Events on top of your PMM graphs, Engineers can now correlate Application Events (common cases: Application Deploys, Outages, and Upgrades) against Database and System level metric changes.

Usage

For example, you have just completed an Application deployment to version 1.2, which is relevant to UI only, so you want to set tags for the version and interface impacted:

pmm-admin annotate "Application deploy v1.2" --tags "UI, v1.2"

Using the optional --tags option allows you to filter which Annotations are displayed on the dashboard via a toggle. Read more about using Annotations in the documentation.
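
As a concrete illustration, here is a minimal sketch of how a deploy script might call pmm-admin at the end of a release; the VERSION and COMPONENT variables are hypothetical helpers, not pmm-admin options:

#!/bin/sh
# Hypothetical deploy hook: mark the release on all PMM dashboards.
VERSION="v1.2"
COMPONENT="UI"
pmm-admin annotate "Application deploy ${VERSION}" --tags "${COMPONENT}, ${VERSION}"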

Grafana 5.0

We’re extremely pleased to see Grafana ship 5.0, and we were fortunate enough to be at GrafanaCon, where Percona’s very own Dimitri Vanoverbeke (Dim0) presented What we Learned Integrating Grafana and Prometheus!

Grafana 5.0 includes a number of dramatic improvements, and in future Percona Monitoring and Management releases we plan to extend our usage of each feature. The one we like best is the virtually unlimited way you can size and shape graphs: no longer are you bound by panel constraints that keep all objects at the same fixed height! This improvement indirectly addresses the visualization error in PMM Server where some graphs would appear to be on two lines and ended up wasting screen space.

Switching between Dashboards

PMM now allows you to navigate between dashboards while maintaining the same host under observation. For example, you can start on the MySQL Overview dashboard looking at host serverA, switch to the MySQL InnoDB Advanced dashboard, and continue looking at serverA, saving you a few clicks in the interface.

New Percona XtraDB Cluster Galera Replication Latency Graphs

We have added new Percona XtraDB Cluster Replication Latency graphs on our Percona XtraDB Cluster Galera Cluster Overview dashboard so that you can compare latency across all members in a cluster in one view.

Issues in this release

New Features & Improvements

  • PMM-2330: Application Annotations DOC Update
  • PMM-2332: Grafana 5 DOC Update
  • PMM-2293: Add Galera Replication Latency Graph to Dashboard PXC/Galera Cluster Overview
  • PMM-2295: Improve color selection on Dashboard PXC/Galera Cluster Overview

Bugs fixed

  • PMM-2311: Fix misalignment in Query Analytics Metrics table
  • PMM-2341: Typo in text on password page of OVF
  • PMM-2359: Trim leading and trailing whitespaces for all fields on AWS/OVF Installation wizard
  • PMM-2360: Include a “What’s new?” link for Update widget
  • PMM-2346: Arithmetic on InnoDB AHI Graphs are invalid
  • PMM-2364: QPS are wrong in QAN
  • PMM-2388: Query Analytics does not render fingerprint section in some cases
  • PMM-2371: Pass host when switching between Dashboards

How to get PMM

PMM is available for installation using three methods:

  • Docker image (percona/pmm-server on Docker Hub)
  • Open Virtualization Format (OVF) virtual appliance
  • Amazon Machine Image (AMI) on the AWS Marketplace

Help us improve our software quality by reporting any Percona Monitoring and Management bugs you encounter using our bug tracking system.

Apr
12
2018
--

Percona Server for MongoDB 3.4.14-2.12 Is Now Available

Percona announces the release of Percona Server for MongoDB 3.4.14-2.12 on April 12, 2018. Download the latest version from the Percona web site or the Percona Software Repositories.

Percona Server for MongoDB is an enhanced, open source, and highly-scalable database that is a fully-compatible, drop-in replacement for MongoDB 3.4 Community Edition. It supports MongoDB 3.4 protocols and drivers.

Percona Server for MongoDB extends MongoDB Community Edition functionality by including the Percona Memory Engine and MongoRocks storage engine, as well as several enterprise-grade features. It requires no changes to MongoDB applications or code.

This release is based on MongoDB 3.4.14 and does not include any additional changes.

The Percona Server for MongoDB 3.4.14-2.12 release notes are available in the official documentation.

Apr
06
2018
--

Free, Fast MongoDB Hot Backup with Percona Server for MongoDB

In this blog post, we will discuss the MongoDB Hot Backup feature in Percona Server for MongoDB and how it can help you get a safe backup of your data with minimal impact.

Percona Server for MongoDB

Percona Server for MongoDB is Percona’s open-source fork of MongoDB, aimed at having 100% feature compatibility (much like our MySQL fork). We have added a few extra features to our fork, free of charge, that are otherwise only available in MongoDB Enterprise binaries for an additional fee.

The feature pertinent to this article is our free, open-source Hot Backup feature for WiredTiger and RocksDB, only available in Percona Server for MongoDB.

Essentially, this Hot Backup feature adds a MongoDB server command that creates a full binary backup of your data set to a new directory with no locking or impact to the database, aside from some increased resource usage due to copying the data.

It’s important to note these backups are binary-level backups, not logical backups (such as mongodump would produce).

Logical vs. Binary Backups

Before the concept of a MongoDB Hot Backup, the only ways to back up a MongoDB instance, cluster or replica set were using the logical backup tool ‘mongodump’ or using block-device (binary) snapshots.

A “binary-level” backup means backup data contains the data files (WiredTiger or RocksDB) that MongoDB stores on disk. This is different from the BSON representation of the data that ‘mongodump’ (a “logical” backup tool) outputs.

Binary-level backups are generally faster than logical because a logical backup (mongodump) requires the server to read all data and return it to the MongoDB Client API. Once received, ‘mongodump’ serializes the payload into .bson files on disk. Important areas like indices are not backed up in full; merely the metadata describing each index is backed up. On restore, the entire process is reversed: ‘mongorestore’ must deserialize the data created by ‘mongodump’, send it over the MongoDB Client API to the server, and then the server’s storage engine must translate this into data files on disk. Because only index metadata is backed up in this approach, all indices must be recreated at restore time. This is a serial operation for each collection! I have personally restored several databases where the majority of the restore time was the index rebuilds and NOT the actual restore of the raw data!
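
To make the logical round trip concrete, here is a minimal sketch of the dump and restore commands described above; the host, port, and output path are illustrative:

# Logical backup: the server reads every document, and mongodump
# serializes the payload into .bson files (indexes as metadata only).
mongodump --host 127.0.0.1 --port 27017 --out /data/dump27017

# Logical restore: every document travels back through the client API,
# then each collection's indexes are rebuilt from metadata, serially.
mongorestore --host 127.0.0.1 --port 27017 /data/dump27017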

In contrast, binary-level backups are much simpler: the storage-engine representation of the data is what is backed up. This style of backup includes the real index data, meaning a restore is as simple as copying the backup directory to be in the location of the MongoDB dbPath and restarting MongoDB. No recreation of indices is necessary! For very large datasets, this can be a massive win. Hours or even days of restore time can be saved due to this efficiency.

Of course, there are always some tradeoffs. Binary-level backups can take a bit more space on disk and care must be taken to ensure the files are restored to the right version of MongoDB on matching CPU architecture. Generally, backing up the MongoDB Configuration File and version number with your backups addresses this concern.
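
For example, a small wrapper along these lines keeps the version and configuration alongside the backup directory; the paths are illustrative:

# Record the server version and config file next to the backup so the
# restore target can be matched to the same MongoDB version later.
mongod --version > /data/backup27017/MONGOD_VERSION.txt
cp /etc/mongod.conf /data/backup27017/mongod.conf.backup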

‘createBackup’ Command

The Hot Backup feature is triggered by a simple admin command via the ‘mongo’ shell named ‘createBackup’. This command requires only one input: the path to output the backup to, named ‘backupDir’. This backup directory must not exist or an error is returned.

If you have MongoDB Authorization enabled (I hope you do!), this command requires the built-in role: ‘backup’ or a role that inherits the “backup” role.
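
If you need to create such a user, a minimal sketch in the ‘mongo’ shell looks like this; the user name and password are placeholders:

use admin
db.createUser({
  user: "backupuser",
  pwd: "replace-this-password",   // placeholder credential
  roles: [ { role: "backup", db: "admin" } ]
})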

An example in the  ‘mongo’ shell:

> db.adminCommand({
    createBackup: 1,
    backupDir: "/data/backup27017"
  })
{ "ok" : 1 }

When this command returns an “ok”, a full backup is available to be read at the location specified in ‘backupDir’. This end result is similar to using block-device snapshots such as LVM snapshots, but with less overhead than the 10-30% write degradation many users report at LVM snapshot time.

This backup directory can be deleted with a regular UNIX/Linux “rm -rf ...” command once it is no longer required. A typical deployment archives this directory and/or uploads the backup to a remote location (Rsync, NFS, AWS S3, Google Cloud Storage, etc.) before removing the backup directory.
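
A minimal sketch of that archive-and-ship step might look like the following; the bucket name and paths are placeholders:

# Archive the backup, copy it off-host, then reclaim local disk space.
tar -czf /data/backup27017.tar.gz -C /data backup27017
aws s3 cp /data/backup27017.tar.gz s3://example-backups/mongodb/
rm -rf /data/backup27017 /data/backup27017.tar.gz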

WiredTiger vs. RocksDB Hot Backups

Although the ‘createBackup’ command is syntactically the same for WiredTiger and RocksDB, there are some big differences in the implementation of the backup behind-the-scenes.

RocksDB is a storage engine available in Percona Server for MongoDB that uses a level-tiered compaction strategy and is highly write-optimized. As RocksDB uses immutable (write-once) files on disk, it can provide much more efficient backups of the database by using filesystem “hardlinks” to a single inode on disk. This is important to know for large data sets, as it requires dramatically less overhead to create a backup.

If your RocksDB-based server ‘createBackup’ command uses an output path that is on the same disk volume as the MongoDB dbPath (very important requirement), hardlinks are used instead of copying most of the database data! If only 5% of the data changes during backup, only 5% of data is duplicated/copied. This makes backups potentially much faster than WiredTiger, which needs to make a full copy of the data and use two-times as much disk space as a result.

Here is an example of a ‘createBackup’ command on a RocksDB-based mongod instance that uses ‘/data/mongod27017’ as its dbPath:

$ mongo --port=27017
test1:PRIMARY> db.adminCommand({
    createBackup: 1,
    backupDir: "/data/backup27017.rocksdb"
})
{ "ok" : 1 }
test1:PRIMARY> quit()

Seeing we received { “ok”: 1 }, the backup is ready at our output path. Let’s see:

$ cd /data/backup27017.rocksdb
$ ls -alh
total 4.0K
drwxrwxr-x. 3 tim tim  36 Mar  6 15:25 .
drwxr-xr-x. 9 tim tim 147 Mar  6 15:25 ..
drwxr-xr-x. 2 tim tim 138 Mar  6 15:25 db
-rw-rw-r--. 1 tim tim  77 Mar  6 15:25 storage.bson
$ cd db
$ ls -alh
total 92K
drwxr-xr-x. 2 tim tim  138 Mar  6 15:25 .
drwxrwxr-x. 3 tim tim   36 Mar  6 15:25 ..
-rw-r--r--. 2 tim tim 6.4K Mar  6 15:21 000013.sst
-rw-r--r--. 2 tim tim  18K Mar  6 15:22 000015.sst
-rw-r--r--. 2 tim tim  36K Mar  6 15:25 000017.sst
-rw-r--r--. 2 tim tim  12K Mar  6 15:25 000019.sst
-rw-r--r--. 1 tim tim   16 Mar  6 15:25 CURRENT
-rw-r--r--. 1 tim tim  742 Mar  6 15:25 MANIFEST-000008
-rw-r--r--. 1 tim tim 4.1K Mar  6 15:25 OPTIONS-000005

Inside the RocksDB ‘db’ subdirectory we can see .sst files containing the data are there! As this MongoDB instance stores data on the same disk at ‘/data/mongod27017’, let’s prove that RocksDB created a “hardlink” instead of a full copy of the data.

First, we get the Inode number of an example .sst file using the ‘stat’ command. I chose the RocksDB data file: ‘000013.sst’:

$ stat 000013.sst
  File: ‘000013.sst’
  Size: 6501      	Blocks: 16         IO Block: 4096   regular file
Device: fd03h/64771d	Inode: 33556899    Links: 2
Access: (0644/-rw-r--r--)  Uid: ( 1000/     tim)   Gid: ( 1000/     tim)
Context: unconfined_u:object_r:unlabeled_t:s0
Access: 2018-03-06 15:21:10.735310581 +0200
Modify: 2018-03-06 15:21:10.738310479 +0200
Change: 2018-03-06 15:25:56.778556981 +0200
 Birth: -

Notice the inode number for this file is 33556899. Then the ‘find’ command can be used to find all files pointing to inode 33556899 under /data:

$ find /data -inum 33556899
/data/mongod27017/db/000013.sst
/data/backup27017.rocksdb/db/000013.sst

Using the ‘-inum’ (inode number) flag of find, here we can see the .sst files in both the live MongoDB instance (/data/mongod27017) and the backup (/data/backup27017.rocksdb) are pointing to the same inode on disk for their ‘000013.sst’ file, meaning this file was NOT duplicated or copied during the hot backup process. Only metadata was written to point to the same inode! Now, imagine this file was 1TB+ and this becomes very impressive!

Restore Time

It bears repeating that restoring a logical, mongodump-based backup is very slow; indices are rebuilt serially for each collection, and both mongorestore and the server need to spend time translating data from logical to binary representations.

Sadly, it is extremely rare that backup restore times are tested and I’ve seen large users of MongoDB disappointed to find that logical-backup restores will take several hours, an entire day or longer while their Production is on fire.

Thankfully, binary-level backups are very easy to restore: the backup directory needs to be copied to the location of the (stopped) MongoDB instance dbPath and then the instance just needs to be started with the same configuration file and version of MongoDB. No indices are rebuilt and there is no time spent rebuilding the data files from logical representations!
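
A restore of a binary-level backup can therefore be sketched in a few commands; the dbPath, service name, and file ownership are illustrative and assume a systemd-managed mongod:

# Put the backup in place as the new dbPath and start the SAME
# version of mongod with the SAME configuration file.
systemctl stop mongod
mv /var/lib/mongo /var/lib/mongo.old        # keep the previous dbPath, just in case
cp -a /data/backup27017 /var/lib/mongo
chown -R mongod:mongod /var/lib/mongo
systemctl start mongod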

Percona-Labs/mongodb_consistent_backup Support

We have plans for our Percona-Labs/mongodb_consistent_backup tool to support the ‘createBackup’/binary-level method of backups in the future. See more about this tool in this Percona Blog post: https://www.percona.com/blog/2017/05/10/percona-lab-mongodb_consistent_backup-1-0-release-explained.

Currently, this project supports cluster-consistent backups using ‘mongodump’ (logical backups only), which are very time consuming for large systems.

Support for ‘createBackup’ in this tool would greatly reduce the overhead and time required for cluster backups, but it adds some complexity in supporting the remote filesystems the tool would use.

Conclusion

As this article outlines, there are a lot of exciting developments that have reduced the impact of taking backups of your systems. More important than the backup efficiency is the time it takes to recover your important data from backup, an area where “hot” binary backups are the clear winner by leaps and bounds.

If MongoDB hot backup and topics like this interest you, please see the document below; we are hiring!

{
  hiring: true,
  role: "Consultant",
  tech: "MongoDB",
  location: "USA",
  moreInfo: "https://www.percona.com/about-percona/careers/mongodb-consultant-usa-based"
}

Apr
04
2018
--

Percona Monitoring and Management 1.9.0 Is Now Available

PMM (Percona Monitoring and Management) is a free and open-source platform for managing and monitoring MySQL® and MongoDB® performance. You can run PMM in your own environment for maximum security and reliability. It provides thorough time-based analysis for MySQL® and MongoDB® servers to ensure that your data works as efficiently as possible.

There are a number of significant updates in 1.9.0 that we hope you will like. Some of the key highlights include:

  • Faster loading of the index page: We have enabled performance optimizations using gzip and HTTP2.
  • AWS improvements: We have added metrics from CloudWatch RDS to 6 dashboards, as well as changed our AWS add instance workflow, and made some changes to credentials handling.
  • Percona Snapshot Server: If you are a Percona customer you can now securely share your dashboards with Percona Engineers.
  • Exporting PMM Server logs: Retrieve logs from PMM Server for troubleshooting using single button-click, avoiding the need to log in manually to the docker container.
  • Low RAM support: We have reduced the memory requirement so PMM Server will run on systems with 512 MB of RAM.
  • Dashboard improvements: We have changed MongoDB instance identification for MongoDB graphs, and set a maximum graph Y-axis on the Prometheus Exporter Status dashboard.

AWS Improvements

CloudWatch RDS metrics

Since we are already consuming Amazon Cloudwatch metrics and persisting them in Prometheus, we have improved six node-specific dashboards to now display Amazon RDS node-level metrics:

  • Cross_Server (Network Traffic)
  • Disk Performance (Disk Latency)
  • Home Dashboard (Network IO)
  • MySQL Overview (Disk Latency, Network traffic)
  • Summary Dashboard (Network Traffic)
  • System Overview (Network Traffic)

AWS Add Instance changes

We have changed our AWS add instance interface and workflow to be clearer about the information needed to add an Amazon Aurora MySQL or Amazon RDS MySQL instance. We have also provided some clarity on how to locate your AWS credentials.

AWS Settings

We have improved our documentation to highlight connectivity best practices, and authentication options – IAM Role or IAM User Access Key.

Low RAM Support

You can now run PMM Server on instances with memory as low as 512MB RAM, which means you can deploy to the free tier of many cloud providers if you want to experiment with PMM. Our memory calculation is now:

# Give the metrics services 40% of the memory above a 256 MB base,
# with a floor of 128 MB.
METRICS_MEMORY_MULTIPLIED=$(( (${MEMORY_AVAILABLE} - 256*1024*1024) / 100 * 40 ))
if [[ ${METRICS_MEMORY_MULTIPLIED} -lt $((128*1024*1024)) ]]; then
    METRICS_MEMORY_MULTIPLIED=$((128*1024*1024))
fi
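
If you want to try the low-memory mode, a minimal Docker invocation might look like the following. This sketch omits the persistent pmm-data volume used in the full installation instructions, and the tag and port mapping are illustrative:

# Cap the container at 512 MB to exercise the new memory calculation.
docker run -d \
  --name pmm-server \
  --memory=512m \
  -p 80:80 \
  percona/pmm-server:1.9.0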

Percona Snapshot Server

Snapshots are a way of sharing PMM dashboards, via a link, with individuals who do not normally have access to your PMM Server. If you are a Percona customer, you can now securely share your dashboards with Percona Engineers. We have replaced the button that shared dashboards to Grafana’s publicly hosted platform with one that shares to a platform administered by Percona. Your dashboard will be written to Percona Snapshots, and only Percona Engineers will be able to retrieve the data. We will expire old snapshots automatically after 90 days, but when sharing you will have the option to configure a shorter retention period.

Export of PMM Server Logs

In this release, the logs from PMM Server can be exported using a single button click, avoiding the need to log in manually to the Docker container. This simplifies the troubleshooting process of a PMM Server, and especially for Percona customers, this feature provides a more consistent data-gathering task that you can perform on behalf of requests from Percona Engineers.

Faster Loading of the Index Page

In version 1.8.0, the index page was redesigned to reveal more useful information about the performance of your hosts, as well as immediate access to essential components of PMM. However, the index page had to load much of its data dynamically, resulting in a noticeably longer load time. In this release, we enabled gzip and HTTP2 to improve the load time of the index page. The following screenshots demonstrate the results of our tests on webpagetest.org, where we reduced page load time by half. We will continue to look for opportunities to improve the performance of the index page, and expect that when we upgrade to Prometheus 2 we will see another improvement.

The load time of the index page of PMM version 1.8.0

The load time of the index page of PMM version 1.9.0

Issues in this release

New Features

  • PMM-781: Plot new PXC 5.7.17, 5.7.18 status variables on new graphs for PXC Galera, PXC Overview dashboards
  • PMM-1274: Export PMM Server logs as zip file to the browser
  • PMM-2058: Percona Snapshot Server

Improvements

  • PMM-1587: Use mongodb_up variable for the MongoDB Overview dashboard to identify if a host is MongoDB.
  • PMM-1788: AWS Credentials form changes
  • PMM-1823: AWS Install wizard improvements
  • PMM-2010: System dashboards update to be compatible with RDS nodes
  • PMM-2118: Update grafana config for metric series that will not go above 1.0
  • PMM-2215: PMM Web speed improvements
  • PMM-2216: PMM can now be started on systems without memory limit capabilities in the kernel
  • PMM-2217: PMM Server can now run in Docker with 512 Mb memory
  • PMM-2252: Better handling of variables in the navigation menu

Bug fixes

  • PMM-605: pt-mysql-summary requires additional configuration
  • PMM-941: ParseSocketFromNetstat finds an incorrect socket
  • PMM-948: Wrong load reported by QAN due to mis-alignment of time intervals
  • PMM-1486: MySQL passwords containing the dollar sign ($) were not processed properly.
  • PMM-1905: In QAN, the Explain command could fail in some cases.
  • PMM-2090: Minor formatting issues in QAN
  • PMM-2214: Setting Send real query examples for Query Analytic OFF still shows the real query in example.
  • PMM-2221: no Rate of Scrapes for MySQL & MySQL Errors
  • PMM-2224: Exporter CPU Usage glitches
  • PMM-2227: Auto Refresh for dashboards
  • PMM-2243: Long host names in Grafana dashboards are not displayed correctly
  • PMM-2257: PXC/galera cluster overview Flow control paused time has a percentage glitch
  • PMM-2282: No data is displayed on dashboards for OVA images
  • PMM-2296: The mysql:metrics service will not start on Ubuntu LTS 16.04

Help us improve our software quality by reporting any bugs you encounter using our bug tracking system.

Mar
20
2018
--

Percona Blog Poll: What Percona Software Are You Using?

This blog post contains a poll that helps us find out what Percona software the open source database community is using.

Nearly 20 years ago, Netscape released the source code for its Netscape Communicator web browser. This marked one of the biggest moments in “open source” history. The formation of The Open Source Initiative happened shortly after that. Bruce Perens, one of the working group’s founders, adapted his Free Software Guidelines as the official Open Source Definition.

Since then, open source software has gone from being the exception in large projects and enterprises, to being a normal part of huge deployments and daily business activities. Open source software is used by some of the biggest online companies: Facebook, YouTube, Twitter, etc. Many of these companies depend on open source software as part of their business model.

Percona’s mission is to champion unbiased open source database solutions. As part of this mission, we provide open source software, completely free of charge and for reuse. We developed our Percona Server for MySQL and Percona Server for MongoDB solutions to not only be drop-in replacements for existing open source software, but often incorporate “enterprise” features from upstream.

We’ve also recognized a need for database clustering and backup solutions, and created Percona XtraDB Cluster and Percona XtraBackup to address those concerns.

Beyond database software, Percona has created management and monitoring tools like Percona Monitoring and Management that not only help DBAs with day-to-day tasks, but also use metrics to find out how best to configure, optimize and architect a database environment to best meet the needs of applications and websites.

What we’d like to know is which of our software products are you currently using in your database environment? Are you using just database software, just management and monitoring tools, or a combination of both? As Percona makes plans for the year, we’d like to know what the community is using, what they find helpful, and how we can best allocate our resources to address those needs. We are always looking for the best ways to invest in and grow the Percona software and tools people use.

Complete the survey below by selecting all the options that apply.

Note: There is a poll embedded within this post; please visit the site to participate in it.

Thanks in advance for your responses – this helps us see which of our software is being deployed in the community.

Mar
16
2018
--

Percona Server for MongoDB 3.2.19-3.10 Is Now Available

Percona announces the release of Percona Server for MongoDB 3.2.19-3.10 on March 16, 2018. Download the latest version from the Percona web site or the Percona Software Repositories.

Percona Server for MongoDB is an enhanced, open source, and highly-scalable database that is a fully-compatible, drop-in replacement for MongoDB 3.2 Community Edition. It supports MongoDB 3.2 protocols and drivers.

Percona Server for MongoDB extends MongoDB Community Edition functionality by including the Percona Memory Engine and MongoRocks storage engine, as well as several enterprise-grade features. It requires no changes to MongoDB applications or code.

This release is based on MongoDB 3.2.19 and includes the following additional change:

  • #PSMDB-191: Fixed a bug in the MongoRocks engine initialization code which caused wrong initialization of the _maxPrefix value. This could lead to reuse of a dropped prefix and to accidental removal of data from a collection using the reused prefix.

    In some specific conditions, data records could disappear at an arbitrary moment of time from collections or indexes created after a server restart.

    This could happen as the result of the following sequence of events:
    1. The user deletes one or more indexes or collections. These should be the ones using the maximum existing prefix values.
    2. The user shuts down the server before the MongoRocks compaction thread executes compactions of the deleted ranges.
    3. The user restarts the server and creates new collections. Due to the bug, those new collections and their indexes may get the same prefix values which were deleted and not yet compacted. The user inserts some data into the new collections.
    4. After the server restart, the MongoRocks compaction thread continues executing compactions of the deleted ranges, and this process may eventually delete data from the collections sharing prefixes with the deleted ranges.

The Percona Server for MongoDB 3.2.19-3.10 release notes are available in the official documentation.

Mar
12
2018
--

Mass Upgrade MongoDB Versions: from 2.6 to 3.6

In this blog post, we’re going to look at how to upgrade MongoDB when leaping versions.

Here at Percona, we see every type of upgrade you could imagine. Lately, however, I have seen an increase in very old version upgrades (OVEs). This is when you upgrade MongoDB from a version more than one step behind the version you are upgrading to. Some examples are:

  • 2.6 -> 3.2
  • 2.6 -> 3.4
  • 2.6 -> 3.6
  • 3.0 -> 3.4
  • 3.0 -> 3.6
  • 3.2 -> 3.6

Luckily, they all have the same basic path to upgrade. Unluckily, it’s not an online upgrade. You need to dump/import again on the new version. For a relatively small system with few indexes and less than 100GB of data, this isn’t a big deal.

Many people might ask: “Why can’t I just upgrade 2.6->3.0->3.2->3.4->3.6?” You can do this, and it could be fine, but it is hazardous. How many times have you upgraded five major versions in a row with no issues? How much work is involved in testing one driver change, let alone five? What about the changes in the optimizer, storage engine, and driver versions, bulk feature differences, and the move from stand-alone config servers to the replica-set configuration used now? Or upgrading to the new election protocol, which implies more overhead?

Even if you navigate all of that, you still have to worry about what you didn’t think about. In a perfect world, you would have an ability to build a fresh duplicate cluster on 3.6 and then run production traffic on it to make sure things still work.

My advice is to only plan an in-place upgrade for a single version, and even then you should talk to an in-house expert or consulting firm to make sure you are making the right changes for future support.

As such, I am going to break things down into two areas:

  • Upgrade from the previous version (3.4.12 -> 3.6.3, for example)
  • Upgrading using dump/import

Upgrading from previous versions when using a Replica Set and in place

Generally speaking, if you are taking this path the manual is a great help (https://docs.mongodb.com/manual/release-notes/3.6-upgrade-replica-set/#upgrade-process). However, this is specific to 3.6, and my goal is to make this a bit more generic. As such, let’s break it down into steps acceptable in all systems.

Read the upgrade page for your version. At the end of the process below, you might have extra work to do.

  1. Set the setFeatureCompatibilityVersion to the previous version: ‘db.adminCommand( { setFeatureCompatibilityVersion: "3.4" } )’
  2. Make your current primary prefer to be primary using something like below, where I assume the primary you want is the first node in the list:
    >x=rs.config()
    >x.members[0].priority=1000
    >rs.reconfig(x)
  3. Now, in reverse order from rs.config().members, take the highest member ID and stop one node at a time (see the sketch after this list):
    1. Stop the mongod node
    2. Run yum/apt upgrade, or replace the binary files with new ones
    3. Try to start the process manually; this might fail if you failed to note and fix configuration file changes
      1. A good example of this is the requirement to set the engine to MMAPv1 when moving from 3.0 -> 3.2, or how “smallfiles” was removed as an option and could cause the host not to start.
    4. Once started on the new version, make sure replication can keep up with ‘rs.printSlaveReplicationInfo()’
    5. Repeat this process one node at a time until only node “0” (your primary) remains.
  4. Reverse your work from step two, and remove the priority on the primary node. This might cause an election, but it rarely changes the primary.
  5. If the primary has not changed, run ‘rs.stepDown(300, 30)’. This tells it to let someone else be primary, gives the secondaries 30 seconds to catch up, and doesn’t allow itself to be primary for 270 more seconds.
  6. Inside those 270 seconds, you must shut down the node and repeat step three (but only for this one node).
  7. You are done with the replica set; however, check the nodes for anything you need to do on the Mongos layer.
    1. In MongoDB 3.6, config servers are required to be a replica set. This is easily checked on the configdb configuration line of a mongos: “xxx/host1:port,host2:port2,host3:port” (Replica Set) versus “host1:port,host2:port,host3:port” (Non-Replica Set). If you do not convert them BEFORE upgrading mongos, it will fail to start. Treat the config servers as a replica-set upgrade if they are already in one.
  8. You can do one shard/replica set at a time, but if you do, the balancer MUST be off during this time to prevent odd confusion and rollbacks.
  9. You’re done!
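
For step three, the per-node work can be sketched as follows; the package and service names vary by distribution and are illustrative:

# Upgrade one secondary at a time (run on the node being upgraded).
systemctl stop mongod
yum upgrade -y percona-server-mongodb      # or the apt equivalent
# Review /etc/mongod.conf for removed or renamed options first!
systemctl start mongod
mongo --eval 'rs.printSlaveReplicationInfo()'   # confirm replication catches up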

As you can see, this is pretty generic. But it is a good set of rules to follow since each version might have a deprecated feature, removed configuration options or other changes. By reading the documentation, you should be OK. However, having someone who has done tens to hundreds of these already is very helpful.

Back to the challenge at hand: how would you like to follow this process five times in a row per replica set/shard if you were moving from 2.6->3.6? What would be the risk of human error in all of that? Hopefully you’re starting to see, just from the operational side, why we advise against OVEs. But that’s only one side. During each of these iterations, you also need to redeploy the application and test to ensure it still works by running some type of load or UAT system, including upgrading the driver for each version and applying builds for that driver (as some functions may change). I don’t know about you, but as a DBA, architect, product owner and support manager, this is just too much risk.

What are our options for doing this in a much more straightforward fashion, without causing engineer fatigue, splitting risk trees and other such concerns?

Upgrading using the dump/import method

Before we get into this one, we should talk about a couple of points. You do have a choice between online and offline modes for this; I will only cover the offline mode. For the online mode, you need to collect and apply operations occurring during this process, and I do not recommend it for our support customers. It is something I have helped do for our consulting customers, because there we can make sure the process works for your environment, and at the end of the day my job is to make sure data is available and safe above anything else.

If you’re sharded, this must be done in parallel. You should use MCB (https://github.com/Percona-Lab/mongodb_consistent_backup). This is a good idea even if you’re not sharded, as it works with sharded and plain replica sets to ensure all the backups and config servers (if applicable) are “dumped” to the same point in time.

Next, if you are not using virtualization or the cloud, you’ll need to order in 2x the hardware and have a place for the old equipment. While not optimal, you might consider the in-place approach for only the last version, even with its risk, if you don’t have the budget for anything else. With virtualization or cloud, people can typically use more hardware for a short time, and the cost is only the use of the equipment for that time. This is easily budgeted as part of the upgrade cost against the risks of not upgrading.

  1. Use MCB to take a backup and save it. An example config is:
    production:
         host: localhost
         port: 27017
         log_dir: /var/log/mongodb-consistent-backup
         backup:
             method: mongodump
             name: upgrade
             location: /mongo_backups/upgrade_XX_to_3.6
         replication:
             max_lag_secs: 10
         sharding:
             balancer:
                 wait_secs: [1+] (default: 300)
                 ping_secs: [1+] (default: 3)
         archive:
             method: tar
         tar:
             compression: none
  2. It figures out if it’s sharded or not. Additionally, it reaches out and maybe even backs up from another secondary as needed. When done, you will have a structure like:
    >find /mongo_backups/upgrade_XX_to_3.6
    >/mongo_backups/upgrade_XX_to_3.6/upgrade
    >/mongo_backups/upgrade_XX_to_3.6/upgrade_<datetime>
    >/mongo_backups/upgrade_XX_to_3.6/upgrade_<datetime>/rs1
    >/mongo_backups/upgrade_XX_to_3.6/upgrade_<datetime>/rs1/rs1.tar
    >/mongo_backups/upgrade_XX_to_3.6/upgrade_<datetime>/rs2
    >/mongo_backups/upgrade_XX_to_3.6/upgrade_<datetime>/rs2/rs2.tar
    >/mongo_backups/upgrade_XX_to_3.6/upgrade_<datetime>/config
    >/mongo_backups/upgrade_XX_to_3.6/upgrade_<datetime>/config/config.tar
    and so on...
  3. Now that we have a backup, we can build new hardware for rs1. As I said, we will focus only on this one replica set in this example (a backup would just have that folder in a single replica set back up):
    • Setup all nodes with a 3.6 compatible configuration file. You do not need to keep the same engine, use WiredTiger (default) if you’re not sure.
    • Disable authentication for now.
    • Start the nodes and ensure the replica sets are working and healthy (rs.initiate, rs.add, rs.status).
    • Run the import using mongorestore with --oplogReplay on the extracted tar file (see the sketch after this list).
    • Drop admin.users. If the salt has changed, you’ll need to recreate all users (2.6 -> 3.0+).
    • Once the restore is complete, use rs.printReplicationInfo or PMM to verify when the replication is caught up from the import.
    • Start up your application pointing to the new location using the new driver you’ve already tested on this version, grab a beer and you’re done!
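
The restore step for a single replica set can be sketched as follows; the extracted directory layout is an assumption based on the MCB output above, and the target host is a placeholder:

# Extract the rs1 backup and replay it, with its oplog, into the new replica set.
mkdir -p /tmp/restore_rs1
tar -xf /mongo_backups/upgrade_XX_to_3.6/upgrade_<datetime>/rs1/rs1.tar -C /tmp/restore_rs1
mongorestore --host newRs1Host:27017 --oplogReplay /tmp/restore_rs1/rs1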

Hopefully, you can see how much more comfortable this route is. You know all the new features are working, and you do not need to do anything else (as you would in the in-place path), such as making sure you have switched the config servers to a replica set.

In this process to upgrade MongoDB, if you used MCB you can do the same for sharding; you will keep all of your existing sharding metadata, which the default sharded dump/restore handles for you. It should be noted that in a future version they could change the layout of the config servers, and this process might need adapting. If you think this is the case, drop a question in the Percona Forums, Twitter, or even the contact-us page, and we will be glad to help.

I want to thank you for reading this blog on how to upgrade MongoDB, and hope it helps. This is just a base guideline and there are many specific things per-version to consider that are outside of the scope of this blog. Percona has support and experts to help guide you if you have any questions.

Mar
09
2018
--

Using the New MongoDB 3.6 Expression Query Operator $expr

In this blog, we will discuss the new expression query operator $expr, added to MongoDB in version 3.6. To show the power of this functionality, I will demonstrate the use of this feature with a simple example.

The $expr Query Operator

With the exception of a few basic operators ($and, $or, $lt, $gt, etc.), before MongoDB 3.6 you could only use the more powerful expression operators on query results via the aggregation pipeline. In practice, this meant that MongoDB .find() queries could not take advantage of a lot of powerful server features.

In 3.6 and above, support for a new query operator named $expr was added to the MongoDB .find() operation. This allows queries to take advantage of operators previously available only in aggregations.

Users that are familiar with the aggregation framework will remember that expressions/conditions in an aggregation are evaluated on a per-document basis. Aggregations allow document fields to be used as variables in conditionals (by prefixing the field name with a dollar sign). The new .find() $expr operator adds that same flexibility and power to the .find(), and perhaps more importantly in this article: .findAndModify() commands!

I hope to show how this new functionality creates some very powerful and efficient application workflows in MongoDB.

Our Example Application

In this example, let’s pretend we are designing a store inventory application based on MongoDB. Among other things, one of the major functions of the store inventory application is to update items when they’re sold. In this article, we will focus on this action only.

Each item in the inventory system is stored in a single MongoDB document containing:

  1. A numeric “itemId”
  2. A “name” string
  3. The number of times the item has been sold (“sold”)
  4. The total inventory available (“total”). Importantly, each item may have a different total inventory available.

An example “items” collection document:

> db.items.findOne()
{
	"_id" : ObjectId("5a85d32f8a734d82e8bcb5b5"),
	"itemId" : 123456,
	"name" : "a really cool item",
	"total" : 10,
	"sold" : 0
}
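
If you would like to follow along, the example document can be seeded in the ‘mongo’ shell like this (the _id is generated automatically):

db.items.insert({
  itemId: 123456,
  name: "a really cool item",
  total: 10,
  sold: 0
})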

Some additional expectations are:

  1. When the application calls the “sold item” workflow, if all items are sold an empty document (or “null”) should be returned to the application.
  2. If the item is still available, the number of sold items should be incremented and the updated document is returned to the application.

Pre-3.6: Example #1

Before 3.6, a common way to tackle our example application’s “sold item” workflow was by using a .findAndModify() operation. A .findAndModify() does exactly what it suggests: finds documents and modifies the matching documents however you specified.

In this example, our .findAndModify() operation should contain a query for the exact “itemId” we are selling and an update document to tell MongoDB to increment the number of sold items during the query.

By default, .findAndModify() returns a document BEFORE the modification/update took place. In our example, we want the result from AFTER the update, so we will also add the boolean option “new” (set to true) to cause the updated document to be returned.

In the MongoDB shell this .findAndModify() operation for “itemId” 123456 would look like this:

db.items.findAndModify({
  query: { itemId: 123456 },
  update: { $inc: { sold: 1 } },
  new: true
})

But there’s a problem: “itemId” 123456 only has ten items available:

> db.items.find(
    { itemId: 123456 },
    { _id: 0, total: 1 }
  )
{ "total" : 10 }

What if we run this .findAndModify() more than 10 times? Our query does not check if we exceeded the “total” items, this won’t work!

Here we can see after running the query 11 times, our “sold” count of 11 is incorrect:

> db.items.find(
    { itemId: 123456 },
    { _id: 0, sold: 1 }
  )
{ "sold" : 11 }

If ALL items in the inventory system had a total of 10 items, this would be quite simple; the .findAndModify() operation could just be modified to consider that “sold” should be less than ($lt) 10:

db.items.findAndModify({
  query: {
    itemId: 123456,
    sold: { $lt: 10 }
  },
  update: {
    $inc: { sold: 1 }
  },
  new: true
})

But this isn’t good enough either.

In our example, each document has its own “total” and “sold” counts. We can’t rely on every item having ten items available; every item may have a different “total”.

Pre-3.6: Example #2

In the pre-3.6 world, there weren’t too many ways to address the problem we found in our first approach, other than breaking the database logic into two different calls and having the application run some logic on the result in the middle. Let’s try that now.

Here’s what this new “sold item” workflow would look like:

  1. A .find() query to fetch the document needed, based on “itemId”:
    > db.items.find(
        { itemId: 123456 }
      )
    { ... }
  2. The application analyses the result document and checks if “sold” field is greater than the “total”.
  3. If there are items available an .update() operation is run to increment the “sold” number for the item. The updated document is returned by the application:
    > db.items.update(
        { itemId: 123456 },
        { $inc: { sold: 1 } }
      )
    { ... }

Aside from increased code complexity, there are several problems with this approach that might not be obvious:

  1. Atomicity: A single MongoDB operation is atomic at the single-document level. In other words, if two operations update a document, one of the operations must wait. In the situation in “Example #1”, where our query AND document increment occur in the same operation, we can feel safe knowing that our “sold” and “total” counts were updated atomically. Unfortunately, in our new approach, we’ve broken our query and update into two separate database calls. This means that this operation is not entirely atomic and is prone to race conditions. If many sessions run this logic at the same time, it is possible for another session to increment the counter between your first and second database operation!
  2. Several Operations: In storage engines like WiredTiger and RocksDB, operations wait in a queue when the system is busy. In our new approach, we must wait to enter the storage engine twice. Under serious load, this could create a cascading bottleneck in your architecture. It could cause application servers to stall and backlog operations in lockstep with the overwhelmed database. The most efficient approach is to perform the query and increment in a single operation.
  3. Network Inefficiency: Performing two database commands requires double the serialization overhead and network round trip time. There is also a minor increase in bandwidth usage required.

Post-3.6: Example #3

In this third example, let’s utilize the new Expression Query Operator ($expr) that was added in 3.6 to make this workflow as efficient as possible. Note that the approach in this example only works on MongoDB 3.6 or the upcoming Percona Server for MongoDB 3.6 (or greater)!

The Expression Query Operator allows powerful operators to be used on the result document(s) of regular .find() queries. How? Expression queries with $expr actually run an aggregation after translating your find filter/condition into a $match aggregation pipeline stage, as the sketch below illustrates.
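
In other words, the $expr-based .find() we build below behaves roughly like this equivalent aggregation; this is a sketch for illustration, not the server's literal internal plan:

db.items.aggregate([
  { $match: {
      itemId: 123456,
      $expr: { $lt: [ "$sold", "$total" ] }
  } }
])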

In our new approach, we only need to use one basic operator in our expression: $lt (i.e., less-than). The $lt operator is used in our expression to check that our “sold” field is less than the “total” field. Using $lt under the new $expr operator, we can make the MongoDB server compare the “sold” vs. “total” count of the item we are querying, and only return and increment the document if the expression is true, all in a single, atomic, server-side operation!

Here is what our improved query looks like:

db.items.findAndModify({
  query: {
    itemId: 123456,
    $expr: {
      $lt: [
        "$sold",
        "$total"
      ]
    }
  },
  update: {
    $inc: { sold: 1 }
  },
  new: true
})

Notice the $lt fields “sold” and “total” are prefixed with a dollar sign ($). This tells the aggregation to use the real value of each matched document in the less-than comparison dynamically. This resolves a problem we encountered earlier, and now this query only succeeds if there are items available (“sold” is less-than “total)! This efficiently pushes logic down to the database.

The itemId: 123456 has a “total” value of 10. If we run the .findAndModify() 10 times, we get this result:

> db.items.findAndModify({
   query: {
     itemId: 123456,
     $expr: {
       $lt: [
         "$sold",
         "$total"
       ]
     }
   },
   update: {
     $inc: { sold: 1 }
   },
   new: true
 })
{
	"_id" : ObjectId("5a85d32f8a734d82e8bcb5b5"),
	"itemId" : 123456,
	"name" : "a really cool item",
	"total" : 10,
	"sold" : 10
}

Ten sold, ten total. Great!

If we run the same query one more time we receive a “null”:

db.items.findAndModify({
   query: {
     itemId: 123456,
     $expr: {
       $lt: [
         "$sold",
         "$total"
       ]
     }
   },
   update: {
     $inc: { sold: 1 }
   },
   new: true
 })
null

Perfect! A null is returned because our $lt condition in our $expr operator did not succeed, just like we wanted.

Let’s make sure there was NOT an 11th increment, an issue we had in our first example:

> db.items.find(
    { itemId: 123456 },
    { _id: 0, sold: 1 }
  )
{ "sold" : 10 }

Here we can see the 11th increment did not run because the $expr failed to pass. 10 of 10 items are sold. This is very cool!

Conclusion

Here we can see the combination of two existing server features (.findAndModify() and document fields as variables in aggregations) and a new 3.6 feature ($expr) have solved many problems with a previously inefficient and potentially dangerous data workflow.

More on $expr can be found here: https://docs.mongodb.com/manual/reference/operator/query/expr/#op._S_expr.

This combination of functionality was able to provide the atomicity of our query in “Example #1” with the safe logic-checking that was done in “Example #2”. The end result is both efficient AND safe!

This article goes to show that while each component of MongoDB’s functionality is powerful on its own, the features can be even more powerful when they work together. In this case, with the help of a brand new 3.6 feature! What powerful feature combinations are you missing out on?

Jan
31
2018
--

Percona Monitoring and Management 1.7.0 (PMM) Is Now Available

Percona announces the release of Percona Monitoring and Management 1.7.0. PMM is a free and open-source platform for managing and monitoring MySQL and MongoDB performance. You can run PMM in your own environment for maximum security and reliability. It provides thorough time-based analysis for MySQL and MongoDB servers to ensure that your data works as efficiently as possible.

This release features improved support for external services, which enables a PMM Server to store and display metrics for any available Prometheus exporter. For example, you could deploy the postgres_exporter and use PMM’s external services feature to store PostgreSQL metrics in PMM. Immediately, you’ll see these new metrics in the Advanced Data Exploration dashboard. Then you could leverage many of the pre-developed PostgreSQL dashboards available on Grafana.com, and with a minimal amount of edits have a working PostgreSQL dashboard in PMM! Watch for an upcoming blog post to demonstrate a walk-through of this unlocked functionality.

New Percona Monitoring and Management 1.7.0 Features

  • PMM-1949: New dashboard: MySQL Amazon Aurora Metrics.

Improvements

  • PMM-1712: Improve external exporters to let you easily add data monitoring from an arbitrary Prometheus exporter you have running on your host.
  • PMM-1510: Rename swap in and swap out labels to be more specific and help clearly see the direction of data flow for Swap In and Swap Out. The new labels are Swap In (Reads) and Swap Out (Writes) accordingly.
  • PMM-1966: Remove Grafana from a list of exporters on the dashboard to eliminate confusion with existing Grafana in the list of exporters on the current version of the dashboard.
  • PMM-1974: Add the mongodb_up metric to the Exporter Status dashboard. The new graph was added to maintain consistency of information about exporters. It is based on the new metrics implemented in PMM-1586.

Bug fixes

  • PMM-1967: Inconsistent formulas in Prometheus dashboards.
  • PMM-1986: Signing out with HTTP auth enabled leaves the browser signed in.

Jan
19
2018
--

Percona Monitoring and Management (PMM) 1.6.0 Is Now Available

Percona announces the release of Percona Monitoring and Management (PMM) 1.6.0. In this release, Percona Monitoring and Management Grafana metrics are available in the Advanced Data Exploration dashboard. We’ve improved the integration with MyRocks, and its data is now collected from SHOW GLOBAL STATUS.

The MongoDB Exporter now features two new metrics: mongodb_up, to inform you whether the MongoDB Server is running, and mongodb_scrape_errors_total, reporting the total number of errors when scraping MongoDB.

In this release, we’ve greatly improved the performance of the mongodb:metrics monitoring service.

Percona Monitoring and Management (PMM) 1.6.0 also ships version 4.6.3 of Grafana, which includes fixes to bugs in the alert list and the alerting rules. More information is available in Grafana’s change log.

New Features

  • PMM-1773: PMM Grafana specific metrics have been added to the Advanced Data Exploration dashboard.

Improvements

  • PMM-1485: Updated MyRocks integration: MyRocks data is now collected entirely from SHOW GLOBAL STATUS, and we have eliminated SHOW ENGINE ROCKSDB STATUS as a data source in mysqld_exporter.
  • PMM-1895: Update Grafana to version 4.6.3:
    • Alert list: Now shows alert state changes even after adding manual annotations on dashboard #9951
    • Alerting: Fixes bug where rules evaluated as firing when all conditions were false and using OR operator. #9318
  • PMM-1586: The mongodb_exporter exporter exposes two new metrics: mongodb_up, informing if the MongoDB Server is running, and mongodb_scrape_errors_total, informing the total number of times an error occurred when scraping MongoDB.
  • PMM-1764: Various small mongodb_exporter improvements
  • PMM-1942: Improved the consistency of using labels in all Prometheus related dashboards.
  • PMM-1936: Updated the Prometheus dashboard in Metrics Monitor
  • PMM-1937: Added the CPU Utilization Details (Cores) dashboard to Metrics Monitor.

Bug fixes

  • PMM-1549: Broken default auth db for mongodb:queries
  • PMM-1631: In some cases, percentage values were displayed incorrectly for MongoDB hosts.
  • PMM-1640: RDS exporter: simplify configuration
  • PMM-1760: After the mongodb:metrics monitoring service was added, the usage of CPU considerably increased in QAN versions 1.4.1 through 1.5.3.

    1.5.0 – CPU usage 95%
    1.5.3 – CPU usage 85%
    1.6.0 – CPU usage 1%

  • PMM-1815: QAN could show data for a MySQL host when a MongoDB host was selected.
  • PMM-1888: In QAN, query metrics were not loaded when the QAN page was refreshed.
  • PMM-1898: In QAN, the Per Query Stats graph displayed incorrect values for MongoDB.
  • PMM-1796: In Metrics Monitor, the Top Process States Hourly graph from the MySQL Overview dashboard showed incorrect data.
  • PMM-1777: In QAN, the Load column could display incorrect data.
  • PMM-1744: The error Please provide AWS access credentials error appeared although the provided credentials could be processed successfully.
  • PMM-1676: In preparation for migration to Prometheus 2.0 we have updated the System Overview dashboard for compatibility.
  • PMM-1920: Some standard MySQL metrics were missing from the mysqld_exporter Prometheus exporter.
  • PMM-1932: The Response Length metric was not displayed for MongoDB hosts in QAN.
