Jan
10
2019
--

Percona Backup for MongoDB 0.2.0-Alpha Is Now Available

Percona announces the first public release of Percona Backup for MongoDB 0.2.0-Alpha on January 10, 2019.

Percona Backup for MongoDB is a distributed, low-impact solution for consistent backups of MongoDB sharded clusters and replica sets. This is a tool for creating consistent backups across a MongoDB sharded cluster (or a single replica set), and for restoring those backups to a specific point in time. Percona Backup for MongoDB uses a distributed client/server architecture to perform backup/restore actions. The project was inspired by (and intends to replace) the Percona-Lab/mongodb_consistent_backup tool.

This release features:

  • Consistent backup of sharded clusters
  • Compression of oplogs and logical backups
  • Backup and restore from local files
  • Backup to S3
  • Running the backup on a single replica set using the safest node (preferably non-Primary or hidden nodes with the lowest replication priority and smallest replication lag)

Future releases will include:

Percona Backup for MongoDB supports Percona Server for MongoDB or MongoDB Community Server version 3.6 or higher with MongoDB replication enabled. Binaries for the supported platforms as well as the tarball with source code are available from the GitHub repository (https://github.com/percona/percona-backup-mongodb/releases/tag/v0.2.0). For more information about Percona Backup for MongoDB and the installation steps, see this README file.

Note: Percona doesn’t recommend this release for production use, and its API and configuration fields are likely to change in the future. It does not feature any API-level security. You are welcome to report any bugs you encounter in our bug tracking system.


Percona Backup for MongoDB process and interactions between key components.

 

Jan
09
2019
--

AWS gives open source the middle finger

AWS launched DocumentDB today, a new database offering that is compatible with the MongoDB API. The company describes DocumentDB as a “fast, scalable, and highly available document database that is designed to be compatible with your existing MongoDB applications and tools.” In effect, it’s a hosted drop-in replacement for MongoDB that doesn’t use any MongoDB code.

AWS argues that while MongoDB is great at what it does, its customers have found it hard to build fast and highly available applications on the open-source platform that can scale to multiple terabytes and hundreds of thousands of reads and writes per second. So what the company did was build its own document database, but made it compatible with the Apache 2.0 open source MongoDB 3.6 API.

If you’ve been following the politics of open source over the last few months, you’ll understand that the optics of this aren’t great. It’s also no secret that AWS has long been accused of taking the best open-source projects and re-using and re-branding them without always giving back to those communities.

The wrinkle here is that MongoDB was one of the first companies that aimed to put a stop to this by re-licensing its open-source tools under a new license that explicitly stated that companies that wanted to do this had to buy a commercial license. Since then, others have followed.

“Imitation is the sincerest form of flattery, so it’s not surprising that Amazon would try to capitalize on the popularity and momentum of MongoDB’s document model,” MongoDB CEO and president Dev Ittycheria told us. “However, developers are technically savvy enough to distinguish between the real thing and a poor imitation. MongoDB will continue to outperform any impersonations in the market.”

That’s a pretty feisty comment. Last November, Ittycheria told my colleague Ron Miller that he believed that AWS loved MongoDB because it drives a lot of consumption. In that interview, he also noted that “customers have spent the last five years trying to extricate themselves from another large vendor. The last thing they want to do is replay the same movie.”

MongoDB co-founder and CTO Eliot Horowitz echoed this. “In order to give developers what they want, AWS has been pushed to offer an imitation MongoDB service that is based on the MongoDB code from two years ago,” he said. “Our entire company is focused on one thing — giving developers the best way to work with data with the freedom to run anywhere. Our commitment to that single mission will continue to differentiate the real MongoDB from any imitation products that come along.”

A company spokesperson for MongoDB also highlighted that the 3.6 API that DocumentDB is compatible with is now two years old and misses most of the newest features, including ACID transactions, global clusters and mobile sync.

To be fair, AWS has become more active in open source lately and, in a way, it’s giving developers what they want (and not all developers are happy with MongoDB’s own hosted service). Bypassing MongoDB’s licensing by going for API compatibility, given that AWS knows exactly why MongoDB did that, was always going to be a controversial move and won’t endear the company to the open-source community.

Jan
09
2019
--

Percona Toolkit 3.0.13 Is Now Available

Percona announces the release of Percona Toolkit 3.0.13 for January 9, 2019.

Percona Toolkit is a collection of advanced open source command-line tools, developed and used by the Percona technical staff, that are engineered to perform a variety of MySQL®, MongoDB® and system tasks that are too difficult or complex to perform manually. With over 1,000,000 downloads, Percona Toolkit supports Percona Server for MySQL, MySQL®, MariaDB®, Percona Server for MongoDB and MongoDB.

Percona Toolkit, like all Percona software, is free and open source. You can download packages from the website or install from official repositories.
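For instance, if the Percona repositories are already configured on your system, installation is a one-liner; this is an assumed example using the standard package name:

# RedHat/CentOS
sudo yum install percona-toolkit
# Debian/Ubuntu
sudo apt-get install percona-toolkit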

This release includes the following changes:

Bug fixes:

  • PT-1673: pt-show-grants was incompatible with MariaDB 10+ (thanks Tim Birkett)
  • PT-1638: pt-online-schema-change erroneously mistook MariaDB 10.x for MySQL 8.0 and refused to work with it, in order to avoid the scope of the upstream bug #89441.
  • PT-1616: pt-table-checksum failed to resume on large tables with binary strings containing invalid UTF-8 characters.
  • PT-1573: pt-query-digest didn’t work in case of log_timestamps = SYSTEM my.cnf option.
  • PT-157: Specifying a non-primary-key index with the ‘i’ part of the --source argument made pt-archiver ignore the --primary-key-only option.

Improvements:

  • PT-1340: pt-stalk no longer calls the mysqladmin debug command by default, to avoid flooding the error log. Setting the CMD_MYSQLADMIN="mysqladmin debug" environment variable reverts pt-stalk to the previous behavior (see the example after this list).
  • PT-1637: A new --fail-on-stopped-replication option allows pt-table-checksum to detect failing slave nodes.
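As a sketch of the pt-stalk change mentioned above, setting the environment variable restores the old behavior; the pt-stalk options shown here (--no-stalk, --iterations) are standard options used only to illustrate a one-off collection run:

# collect diagnostics once, including the `mysqladmin debug` output that is now skipped by default
CMD_MYSQLADMIN="mysqladmin debug" pt-stalk --no-stalk --iterations=1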

Help us improve our software quality by reporting any bugs you encounter using our bug tracking system.

Jan
08
2019
--

Percona Live 2019 Tracks

Percona Live Open Source Database Conference 2019 in North America has moved to Austin, Texas: a cool place to be, and host to many big names in the tech space. Read what Dave Stokes, MySQL Community Manager for Oracle, has to say in favor of Austin.

If you need a conference ticket for Austin, put in your proposal now!

Those who are successful with their presentation or tutorial submissions will receive a pass to the full three days of the event. Closing date for the call for papers is Sunday, January 20.

Percona is adopting an industry trend by organizing the conference into 13 separate tracks with one Percona expert coordinating community input for each one. We believe subject-specific mini-committees of experts should provide better results than a single mega-committee covering everything.

The MySQL track is being led by Alkin Tezuysal, Senior Technical Manager.

MariaDB is the responsibility of Sveta Smirnova, Principal Support Escalation Specialist.

MongoDB is being driven by Consultant Doug Duncan.

PostgreSQL is being pushed forward by Avinash Vallarapu, PostgreSQL Support Tech Lead.

Other Open Source Databases: this important challenge has been handed to Senior Support Engineer Agustín Gallego.

Java Development for Open Source Databases might be of interest to developers and is being led by Rodrigo Trindade, Service Delivery Manager.

The Kubernetes track is being headed by Mykola Marzhan, our Kubernetes Technical Lead.

Database Security and Compliance will be overseen by Denis Farar, General Counsel and VP of HR (but make no mistake, this is still a track where tech content is very welcome)

Automation & AI topics, at the leading edge of database technology challenges, are the responsibility of Max Bubenick, Platform Lead.

Observability & Monitoring talk selection will be led by Roma Novikov, Director of Platform Engineering – so get those PMM and other OS monitoring proposals at the ready!

Polyglot Persistence is in the hands of our Senior Software Engineer Ibrar Ahmed who is waiting to hear all about your experiences with cross-database applications, data exchange and how to meet the challenges of a hybrid database world.

Migration to Open Source Databases, a similar-but-different track with challenges parallel to those of polyglot applications, is being watched over by Marco Tusa, Managing Consultant.

Business & Enterprise track will be driven by Brian Walters, Director of Solution Engineering who is keen to hear of your case studies and experiences of the impact of open source databases on your process and organizations.

Cloud is a special case, since it touches on virtually all aspects of open source database technology. If your talk has particular relevance to ‘cloud’ then please add this track with your submission. Similarly Innovative Technologies can apply across the board, and if you have something to share that is truly new, then add that to your track list. Those that are most exciting in the context of cloud or innovative in their approach may be selected for their cloud or innovation merit, whichever track they belong to.

Our track champions will engage with community experts to select papers and shape content. If you would like to contribute by taking on talk selection, please let me know.

New speakers, and those with less experience, are welcome: we are here to help, so first check out my community blog post with links to info and video workshops on how to put together a selection-worthy proposal. Even old hands might find some inspiration!

All in all, we think this is a great move, with the track champions contributing their passion, experience, and knowledge of contemporary open source issues to the development of excellent content. Although we’re changing several things at once, no one gets a prize for standing still. We hope you’ll continue to support this great open source database event and grow with us! Put a note in your diary to join us from May 28 to 30 in Austin, Texas.

Finally, if you would like to get in touch with any of our track champions, please let me know.

Jan
03
2019
--

MongoDB Engines: MMAPV1 Vs WiredTiger

In this post, we’ll take a look at the differences between the MMAPv1 and WiredTiger engines in MongoDB®. I’ve been asked about this by customers many times, so this blog is for you! We’ll cover the key features of these engines so that you can choose the right engine based on your requirements.

In MongoDB, we mainly use the MMAPv1 and WiredTiger engines. Other engines are also available, such as the in-memory and RocksDB engines with Percona Server for MongoDB (PSMDB), and the in-memory engine with MongoDB Enterprise. When MongoDB was introduced, MMAPv1 was the default engine, and it’s still part of the MongoDB releases, though it is planned to be removed from version 4.2 onwards. Those who remember the days of working with version 1.8 might miss it, even though they don’t use MMAP currently! MongoDB acquired WiredTiger Inc. (see https://www.mongodb.com/press/wired-tiger) and from version 3.2 made WiredTiger the default engine of MongoDB. This engine enabled the introduction of multi-document transactions, and is mainly chosen for features such as compression and document-level locking. Here we’ll see the key features of WiredTiger and MMAPv1, and also present them in a table at the end, because who doesn’t love a table for checking the differences quickly? It reminds me of my school days. My co-author and friend Aayushi feels the same!

Some differences in detail

Storage Engines

The MongoDB storage engines manage BSON data in memory and on disk to support read and write operations.

MMAPv1: This is the original storage engine for MongoDB, introduced in the first release, but it has been deprecated since version 4.0.

WiredTiger: This is the pluggable engine introduced by MongoDB in version 3.0, and it became the default storage engine from version 3.2.
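To check which engine a running mongod instance is using, you can query serverStatus from the mongo shell (a quick check; only the relevant field is shown):

> db.serverStatus().storageEngine.name
wiredTiger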

Data compression

MMAPv1: does not support data compression, as it is based on memory-mapped files. It works well when you can keep your write set in memory, and it excels at workloads with high-volume inserts, reads, and in-place updates.

WiredTiger: supports snappy and zlib compression. Consequently, MongoDB with WiredTiger takes up much less space compared with MMAPv1. It has its own write cache in addition to the filesystem cache. (A configuration sketch follows the list below.)

  • Snappy: this is the default algorithm; efficient computation with reasonable compression. See here.
  • Zlib: higher compression rate at the cost of more CPU. See here.
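As an illustration of how the compression method is chosen, the block compressor is set in mongod.conf; the values below are assumed examples (zlib for collection data, snappy for the journal), and they apply only to collections created after the setting is in place:

storage:
  engine: wiredTiger
  wiredTiger:
    collectionConfig:
      blockCompressor: zlib      # snappy is the default; zlib trades CPU for a higher compression rate
    engineConfig:
      journalCompressor: snappy  # compression used for the WiredTiger journal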

Data Directory

Let’s take a look at the data directory contents for the same data and replica set member under each of the engines.

MMAPV1:

total 1.2G
-rw-r--r-- 1 vagrant vagrant    5 Nov 28 04:41 mongod.lock
-rw-rw-r-- 1 vagrant vagrant   69 Nov 28 04:41 storage.bson
-rw------- 1 vagrant vagrant  16M Nov 28 04:58 local.0
drwxrwxr-x 2 vagrant vagrant 4.0K Nov 28 04:58 journal
-rw------- 1 vagrant vagrant  16M Nov 28 04:58 admin.ns
-rw------- 1 vagrant vagrant  16M Nov 28 04:58 admin.0
-rw------- 1 vagrant vagrant 512M Nov 28 04:59 local.2
drwxrwxr-x 2 vagrant vagrant 4.0K Nov 28 04:59 diagnostic.data
drwxrwxr-x 2 vagrant vagrant 4.0K Nov 28 05:16 _tmp
-rw------- 1 vagrant vagrant  16M Nov 28 05:17 test.ns
-rw------- 1 vagrant vagrant  16M Nov 28 05:17 test.0
-rw------- 1 vagrant vagrant  32M Nov 28 05:17 test.1
-rw------- 1 vagrant vagrant  16M Nov 28 09:09 local.ns
-rw------- 1 vagrant vagrant 512M Nov 28 09:09 local.1

WiredTiger:

total 5.4M
-rw-rw-r-- 1 vagrant vagrant   21 Nov 28 07:38 WiredTiger.lock
-rw-rw-r-- 1 vagrant vagrant   49 Nov 28 07:38 WiredTiger
drwxrwxr-x 2 vagrant vagrant 4.0K Nov 28 07:38 journal
-rw-rw-r-- 1 vagrant vagrant 4.0K Nov 28 07:38 WiredTigerLAS.wt
-rw-rw-r-- 1 vagrant vagrant   95 Nov 28 07:38 storage.bson
-rw-r--r-- 1 vagrant vagrant    5 Nov 28 07:38 mongod.lock
-rw-rw-r-- 1 vagrant vagrant  16K Nov 28 07:38 index-7--2134189858403062482.wt
-rw-rw-r-- 1 vagrant vagrant  16K Nov 28 07:38 index-5--2134189858403062482.wt
-rw-rw-r-- 1 vagrant vagrant  16K Nov 28 07:38 index-3--2134189858403062482.wt
-rw-rw-r-- 1 vagrant vagrant  16K Nov 28 07:38 index-1--2134189858403062482.wt
-rw-rw-r-- 1 vagrant vagrant  16K Nov 28 07:38 collection-4--2134189858403062482.wt
-rw-rw-r-- 1 vagrant vagrant  16K Nov 28 07:38 collection-2--2134189858403062482.wt
-rw-rw-r-- 1 vagrant vagrant  16K Nov 28 07:38 collection-0--2134189858403062482.wt
-rw-rw-r-- 1 vagrant vagrant  16K Nov 28 07:38 index-15--2134189858403062482.wt
-rw-rw-r-- 1 vagrant vagrant  16K Nov 28 07:38 index-14--2134189858403062482.wt
-rw-rw-r-- 1 vagrant vagrant 1.8M Nov 28 07:38 index-17--2134189858403062482.wt
-rw-rw-r-- 1 vagrant vagrant 3.2M Nov 28 07:39 collection-16--2134189858403062482.wt
-rw-rw-r-- 1 vagrant vagrant  16K Nov 28 07:39 collection-13--2134189858403062482.wt
-rw-rw-r-- 1 vagrant vagrant  32K Nov 28 07:39 _mdb_catalog.wt
-rw-rw-r-- 1 vagrant vagrant  36K Nov 28 09:09 sizeStorer.wt
-rw-rw-r-- 1 vagrant vagrant  36K Nov 28 09:09 collection-6--2134189858403062482.wt
-rw-rw-r-- 1 vagrant vagrant  52K Nov 28 09:09 collection-12--2134189858403062482.wt
-rw-rw-r-- 1 vagrant vagrant  76K Nov 28 09:09 WiredTiger.wt
-rw-rw-r-- 1 vagrant vagrant 1003 Nov 28 09:09 WiredTiger.turtle
drwxrwxr-x 2 vagrant vagrant 4.0K Nov 28 09:09 diagnostic.data

Journaling

MMAPv1: ensures that writes are atomic. If MongoDB goes down or terminates before committing changes to the data files, it can use the journal files to apply the write operations to the data files and maintain a consistent state.

WiredTiger: uses checkpoints, and the journal persists all data modifications between checkpoints. To recover from a database crash or abrupt termination, it replays the journal entries written since the last checkpoint. In most cases the journal is not strictly necessary for this engine: you enable it only if you need to be sure of recovering up to the last successful write before the crash. Otherwise, MongoDB can usually recover from the last valid checkpoint. A checkpoint occurs every minute by default.
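As a sketch, the journal and checkpoint behavior described above maps to a few mongod.conf settings; the values shown are the defaults, included here only for illustration:

storage:
  journal:
    enabled: true           # journaling is on by default for WiredTiger
    commitIntervalMs: 100   # maximum interval between journal commits
  syncPeriodSecs: 60        # checkpoint/flush interval, the "every minute" mentioned above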

Journal directory

This is how journal files appear in the data directory for the different engines:

MMAPV1:

vagrant@m103:/data/mongo1/journal$ ls -lrth
total 35M
-rw------- 1 vagrant vagrant  88 Nov 28 09:17 lsn
-rw------- 1 vagrant vagrant 35M Nov 28 09:17 j._0

WiredTiger:

-rw-rw-r-- 1 vagrant vagrant 100M Nov 28 07:38 WiredTigerPreplog.0000000001
-rw-rw-r-- 1 vagrant vagrant 100M Nov 28 07:38 WiredTigerPreplog.0000000002
-rw-rw-r-- 1 vagrant vagrant 100M Nov 28 09:16 WiredTigerLog.0000000001

Locks and concurrency

MMAPV1

  • Up until version 2.6: uses a readers-writer lock that allows concurrent read access to a database, but gives exclusive access to a single write operation. When a read lock exists, many read operations may use this lock. However, when a write lock exists, a single write operation holds the lock exclusively, and no other read or write operations may share the lock.
  • From 3.0: the MMAPv1 storage engine uses collection-level locking, an improvement on earlier versions in which the database lock was the finest-grained lock.

WiredTiger: supports document level locking. For most read and write operations, WiredTiger uses optimistic concurrency control. WiredTiger uses only intent locks at the global, database, and collection levels.

For example, deleting documents from the collection “testData” for a value of {x:1} will acquire a write lock at the collection level differently for each of the storage engines.
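For context, log lines like the ones below would be produced by a remove such as this (the shell command is a sketch added here for illustration, not taken from the original test):

// delete all documents with x equal to 1 from testDB.testData
> use testDB
> db.testData.remove( { x: 1 } )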

MMAPV1:

2018-12-17T10:09:46.830+0000 I COMMAND  [conn8] command
testDB.$cmd appName: "MongoDB Shell"
command: delete { delete: "testData",
deletes: [ { q: { x: 1.0 }, limit: 0.0 } ], ordered: true }
numYields:0 reslen:89 locks:{ Global: { acquireCount: { r: 100795, w: 100795 } },
MMAPV1Journal: { acquireCount: { w: 100796 }, acquireWaitCount: { w: 12 },
timeAcquiringMicros: { w: 46212 } }, Database: { acquireCount: { w: 100795 } }
, Collection: { acquireCount: { W: 795 } }

where W represents the Exclusive (X) lock

WiredTiger:

2018-12-17T10:17:38.340+0000 I COMMAND  [conn1] command
testDB.$cmd appName: "MongoDB Shell"
command: delete { delete: "testData",
deletes: [ { q: { x: 1.0 }, limit: 0.0 } ], ordered: true }
numYields:0 reslen:89 locks:{ Global: { acquireCount: { r: 100795, w: 100795 } },
Database: { acquireCount: { w: 100795 } }, Collection: { acquireCount: { w: 795 } }

where w represents the Intent Exclusive (IX) lock

Memory

MMAPv1: MongoDB automatically uses all free memory on the machine as its cache. System resource monitors show that MongoDB uses a lot of memory, but its usage is dynamic. If another process suddenly needs half the server’s RAM, MongoDB will yield cached memory to the other process.

Technically, the operating system’s virtual memory subsystem manages MongoDB’s memory. This means that MongoDB will use as much free memory as it can, swapping to disk as needed. Deployments with enough memory to fit the application’s working data set in RAM will achieve the best performance.

WiredTiger: MongoDB utilizes both the WiredTiger internal cache and the filesystem cache. Via the filesystem cache, MongoDB automatically uses all free memory that is not used by the WiredTiger cache or by other processes. Starting in 3.4, the WiredTiger internal cache, by default, will use the larger of either (a configuration sketch for capping it follows the list below):

  • 50% of (RAM – 1 GB), or
  • 256 MB.
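If you want to cap the WiredTiger internal cache explicitly, it can be set in mongod.conf; the value below is only an assumed example, not a recommendation:

storage:
  wiredTiger:
    engineConfig:
      cacheSizeGB: 4   # limit the WiredTiger internal cache to 4 GB (example value)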

Quick reference: MMAPV1 vs WiredTiger

Use this quick reference for the differences between MMAPv1 and WiredTiger:

  • Introduction & default engine
    • MMAPv1: introduced with the first MongoDB release and the default engine up to the 3.0 release series. Deprecated in 4.0 and will be removed in a future release.
    • WiredTiger: introduced in version 3.0 and the default engine from version 3.2.
  • Data compression
    • MMAPv1: doesn’t support compression.
    • WiredTiger: compression with the default snappy method or the zlib method, so it occupies less space than MMAPv1.
  • Journaling
    • MMAPv1: MongoDB writes the in-memory changes first to on-disk journal files. If MongoDB goes down or terminates before committing the changes to the data files, it can use the journal files to apply the write operations to the data files and maintain a consistent state.
    • WiredTiger: the journal persists all data modifications between checkpoints. If MongoDB exits between checkpoints, it uses the journal to replay all data modified since the last checkpoint.
  • Locks & concurrency
    • MMAPv1: until 2.6, a readers-writer lock that allows concurrent read access to a database but gives exclusive access to a single write operation; from 3.0, collection-level locking.
    • WiredTiger: supports document-level locking.
  • Transactions
    • MMAPv1: an operation on a single document is atomic.
    • WiredTiger: multi-document transactions are available for deployments from version 4.0.
  • CPU performance
    • MMAPv1: adding CPU cores does not improve performance much.
    • WiredTiger: performs better on multicore systems.
  • Encryption
    • MMAPv1: encryption is not possible.
    • WiredTiger: encryption at rest is available with MongoDB Enterprise and as beta in PSMDB 3.6.8.
  • Memory
    • MMAPv1: automatically uses all free memory on the machine as its cache.
    • WiredTiger: uses the WiredTiger internal cache and the filesystem cache.
  • Updates
    • MMAPv1: excels at workloads with high-volume inserts, reads, and in-place updates.
    • WiredTiger: does not support in-place updates; an update causes the whole document to be rewritten.
  • Tuning
    • MMAPv1: little opportunity to tune it.
    • WiredTiger: allows more tuning through different variables, e.g. cache size, read/write tickets, checkpoint interval, etc. (see the example below).
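As one example of the tuning variables mentioned in the last row, the read and write concurrency tickets can be changed at runtime from the mongo shell; the values here are assumed examples only, not recommendations:

// raise the number of concurrent read and write tickets (example values)
> db.adminCommand( { setParameter: 1, wiredTigerConcurrentReadTransactions: 256 } )
> db.adminCommand( { setParameter: 1, wiredTigerConcurrentWriteTransactions: 256 } )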

Conclusion

The above information does not cover every difference between MMAPv1 and WiredTiger, but it lists the key ones. If you have any key features to add, please feel free to mention them in the comments! Let’s share and let everyone know about them.


Photo by Mathew Schwartz on Unsplash

Dec
21
2018
--

Percona Server for MongoDB Authentication Using Active Directory

This article will walk you through using the SASL library to allow your Percona Server for MongoDB instance to authenticate with your company’s Active Directory server. Percona Server for MongoDB includes enterprise-level features, such as LDAP authentication, audit logging and, with the 3.6.8 release, a beta version of data encryption at rest, all in its open source offering.

Pre set-up assumptions

In this article we will make a couple of assumptions:

  1. You have an Active Directory server up and running and that it is accessible to the server that you have Percona Server for MongoDB installed on.
  2. These machines are installed behind a firewall, as the communications between the two servers will be in plain text. This is due to the fact that we can only use the PLAIN SASL mechanism when authenticating, so credentials will be sent in plain text.
  3. You have sudo privilege on the server you are going to install Percona Server for MongoDB on.

Installing Percona Server for MongoDB

The first thing you are going to need to do is to install the Percona Server for MongoDB package. You can get this in a couple of different ways. You can either install from the Percona repositories, or you can download the packages and install them manually.

Once you have Percona Server for MongoDB installed, we want to start the mongod service and make sure it is set to run on restart.

sudo systemctl start mongod
sudo systemctl enable mongod

Now that the service is up and running, we want to open the mongo shell and add a database administrator user. This user will be authenticated inside of the MongoDB server itself and will not have any interactions with the Active Directory server.

To start the mongo shell up, type mongo from a terminal window. Once you do this you will see something similar to the following:

Percona Server for MongoDB shell version v3.6.8-2.0
connecting to: mongodb://127.0.0.1:27017
Percona Server for MongoDB server version: v3.6.8-2.0
Server has startup warnings:
2018-12-11T17:48:47.471+0000 I STORAGE [initandlisten]
2018-12-11T17:48:47.471+0000 I STORAGE [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
2018-12-11T17:48:47.471+0000 I STORAGE [initandlisten] **          See http://dochub.mongodb.org/core/prodnotes-filesystem
2018-12-11T17:48:48.197+0000 I CONTROL [initandlisten]
2018-12-11T17:48:48.197+0000 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.
2018-12-11T17:48:48.197+0000 I CONTROL [initandlisten] **          Read and write access to data and configuration is unrestricted.
2018-12-11T17:48:48.197+0000 I CONTROL [initandlisten] **          You can use percona-server-mongodb-enable-auth.sh to fix it.
2018-12-11T17:48:48.197+0000 I CONTROL [initandlisten]

Notice the second warning that access control is not enabled for the database. Percona Server for MongoDB comes with a script that you can run that will enable authentication for you, but we can also do this manually.

We will go ahead and manually add a user in MongoDB that has the root role assigned to it. This user will have permission to do anything on the server, so you will want to make sure to keep the password safe. You will also not want to use this user for doing your day to day work inside of MongoDB.

This user needs to be created in the admin database as it needs to have access to the entire system. To do this run the following commands inside of the mongo shell:

> use admin
switched to db admin
> db.createUser({"user": "admin", "pwd": "$3cr3tP4ssw0rd", "roles": ["root"]})
Successfully added user: { "user" : "admin", "roles" : [ "root" ] }

Now that we have a user created in MongoDB we can go ahead and enable authorization. To do this we need to modify the /etc/mongod.conf file and add the following lines:

security:
  authorization: enabled
setParameter:
  authenticationMechanisms: PLAIN,SCRAM-SHA-1

Notice that we have two mechanisms set up for authentication. The first one, PLAIN, is used for authenticating with Active Directory. The second one, SCRAM-SHA-1 is used for internal authentication inside of MongoDB.

Once you’ve made the changes, you can restart the mongod service by running the following command:

sudo systemctl restart mongod

Now if you were to run the mongo shell again, you wouldn’t see the access control warning any more, and you would need to log in as your new user to be able to run any commands.

If you were to try to get a list of databases before logging in you would get an error:

> show dbs;
2018-12-11T21:50:39.551+0000 E QUERY [thread1] Error: listDatabases failed:{
"ok" : 0,
"errmsg" : "not authorized on admin to execute command { listDatabases: 1.0, $db: \"admin\" }",
"code" : 13,
"codeName" : "Unauthorized"
} :
_getErrorWithCode@src/mongo/shell/utils.js:25:13
Mongo.prototype.getDBs@src/mongo/shell/mongo.js:65:1
shellHelper.show@src/mongo/shell/utils.js:849:19
shellHelper@src/mongo/shell/utils.js:739:15
@(shellhelp2):1:1

Let’s go ahead and run the mongo shell and then log in with our admin user:

> use admin
switched to db admin
> db.auth("admin", "$3cr3tP4ssw0rd")
1

If you are successful, you will get a return value of 1. If authentication fails, you will get a return value of 0. Failure is generally due to a mistyped username or password, but you could also be trying to authenticate against the wrong database. In MongoDB, you must be in the database that the user was created in before trying to authenticate.
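As an aside, you can also authenticate against a database other than your current one by using getSiblingDB; this is a minimal sketch using the admin user created earlier:

// authenticate against the admin database from wherever you are in the shell
> db.getSiblingDB("admin").auth("admin", "$3cr3tP4ssw0rd")
1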

Now that we’ve logged in as the admin user, we will add a document that will be used to verify that our Active Directory based user can successfully access the data at the end of this post.

> use percona
switched to db percona
> db.test.insert({"message": "Active Directory user success!"})
WriteResult({ "nInserted" : 1 })

Install the Cyrus SASL packages

Now that we have a Percona Server for MongoDB instance set up and it is secured, we need to add some packages that will allow us to communicate properly with the Active Directory server.

For RedHat use the following command

sudo yum install -y cyrus-sasl cyrus-sasl-plain

For Ubuntu use this command

sudo apt install -y sasl2-bin

Next we need to update the SASL configuration to use LDAP instead of PAM, which is the default. To do this we need to edit the saslauthd configuration file (/etc/sysconfig/saslauthd on RedHat, /etc/default/saslauthd on Ubuntu), remembering to back up your original file first.

For RedHat we use the following commands

sudo cp /etc/sysconfig/saslauthd /etc/sysconfig/saslauthd.bak
sudo sed -i -e s/^MECH=pam/MECH=ldap/g /etc/sysconfig/saslauthd

For Ubuntu we use these commands instead

sudo cp /etc/default/saslauthd /etc/default/saslauthd.bak
sudo sed -i -e s/^MECHANISMS="pam"/MECHANISMS="ldap"/g /etc/default/saslauthd 
sudo sed -i -e s/^START=no/START=yes/g /etc/default/saslauthd

We also need to create the file /etc/saslauthd.conf with contents similar to the following (replace values as necessary for your Active Directory installation):

ldap_servers: ldap://LDAP.EXAMPLE.COM
ldap_mech: PLAIN
ldap_filter: cn=%u,CN=Users,DC=EXAMPLE,DC=COM
ldap_search_base:CN=Users,DC=EXAMPLE,DC=COM
ldap_filter:(cn=%u)
ldap_bind_dn:CN=ADADMIN,CN=Users,DC=EXAMPLE,DC=COM
ldap_password:ADADMINPASSWORD

Now that we’ve got SASL set up, we can start the saslauthd process and set it to run on restart.

sudo systemctl start saslauthd
sudo systemctl enable saslauthd

Next we need to allow the mongod process to write to the saslauthd mux socket and change the permissions on the owning directory to 755 so MongoDB can write to it. This is the default on RedHat, but not for Ubuntu.

On Ubuntu you can either change the permissions on the folder

sudo chmod 755 /run/saslauthd

Or you could add the mongod user to the sasl group

sudo usermod -a -G sasl mongod

Test the users

The SASL installation provides us with a tool to test that our Active Directory users can be logged in from this machine. Let’s go ahead and test to see if we can authenticate with our Active Directory user.

sudo testsaslauthd -u aduser -p ADP@assword1

You should see 0: OK "Success." if authentication worked.

Create a SASL config file for MongoDB

To allow MongoDB to use SASL to communicate with Active Directory, we need to create a configuration file.

Create the requisite directory if it doesn’t exist:

mkdir -p /etc/sasl2

And then we need to create the file /etc/sasl2/mongodb.conf and place the following contents into it:

pwcheck_method: saslauthd
saslauthd_path: /var/run/saslauthd/mux
log_level: 5
mech_list: plain

Add Active Directory user to MongoDB

Now we can finally add our Active Directory user to our MongoDB instance:

$ mongo
Percona Server for MongoDB shell version v3.6.8-2.0
connecting to: mongodb://127.0.0.1:27017
Percona Server for MongoDB server version: v3.6.8-2.0
> use admin
switched to db admin
> db.auth("admin", "$3cr3tP4ssw0rd")
1
> use $external
switched to db $external
> db.createUser({"user": "aduser", "roles": [{"role": "read", "db": "percona"}]})
Successfully added user: {
        "user" : "aduser",
        "roles" : [
                {
                        "role" : "read",
                        "db" : "percona"
                }
        ]
}

As you can see from the above, when we create the user that will be authenticated with Active Directory, we need to be in the special $external database and we don’t supply a password as we would when we create a MongoDB authenticated user.

Now let’s try to log in with our Active Directory based user. First we need to exit our current mongo shell and restart it, and then we can log in with our Active Directory user:

> exit
bye
$ mongo
Percona Server for MongoDB shell version v3.6.8-2.0
connecting to: mongodb://127.0.0.1:27017
Percona Server for MongoDB server version: v3.6.8-2.0
> use $external
switched to db $external
> db.auth({"mechanism": "PLAIN", "user": "aduser", "pwd": "adpassword", "digestPassword": false})
1
> use percona
switched to db percona
> db.test.find()
{ "_id" : ObjectId("5c12a47904a287e45fcb580e"), "message" : "Active Directory user success!" }

As you can see above our Active Directory based user was able to authenticate and then change over to the percona database and see the document we stored earlier.

You will notice that our auth() call above is different from the one we used to log in with MongoDB based users. In this case we need to pass in a document with not only the user and password, but also the mechanism to use. We also want to set digestPassword to false.

You can also log in directly from the command line with the following:

mongo percona --host localhost --port 27017 --authenticationMechanism PLAIN --authenticationDatabase \$external --username dduncan --password

There are a couple of things to note here if you’re not used to using the command line to log in:

  1. We place the --password option at the end of the command line and do not provide a password here. This will cause the application to prompt us for a password.
  2. You will also automatically be placed into the percona database, or whatever database name you provide after mongo.
  3. You need to escape the $external database name with a backslash (\) or the terminal will treat $external as an environment variable and you will most likely get an error.

Conclusion

In conclusion, it is easy to connect your Percona Server for MongoDB instance to your corporate Active Directory server. This allows your MongoDB users to use the same credentials to log into MongoDB as they do for their corporate email and workstations.


Photo by Steve Halama on Unsplash

Dec
20
2018
--

Percona Database Performance Blog 2018 Year in Review: Top Blog Posts

Let’s look at some of the most popular Percona Database Performance Blog posts in 2018.

The closing of a year lends itself to looking back. And making lists. With the Percona Database Performance Blog, Percona staff and leadership work hard to provide the open source community with insights, technical support, predictions and metrics around multiple open source database software technologies. We’ve had nearly 4 million visits to the blog in 2018: thank you! We look forward to providing you with even better articles, news and information in 2019.

As 2018 moves into 2019, let’s take a quick look back at some of the most popular posts on the blog this year.

Top 10 Most Read

These posts had the most views (working down from the highest):

When Should I Use Amazon Aurora and When Should I use RDS MySQL?

Now that Database-as-a-Service (DBaaS) is in high demand, there is one question regarding AWS services that cannot always be answered easily: when should I use Aurora and when should I use RDS MySQL?

About ZFS Performance

ZFS has many very interesting features, but I am a bit tired of hearing negative statements on ZFS performance. It feels a bit like people are telling me “Why do you use InnoDB? I have read that MyISAM is faster.” I found the comparison of InnoDB vs. MyISAM quite interesting, and I’ll use it in this post.

Linux OS Tuning for MySQL Database Performance

In this post we will review the most important Linux settings to adjust for performance tuning and optimization of a MySQL database server. We’ll note how some of the Linux parameter settings used for OS tuning may vary according to different system types: physical, virtual or cloud.

A Look at MyRocks Performance

As the MyRocks storage engine (based on the RocksDB key-value store http://rocksdb.org ) is now available as part of Percona Server for MySQL 5.7, I wanted to take a look at how it performs on a relatively high-end server and SSD storage.

How to Restore MySQL Logical Backup at Maximum Speed

The ability to restore MySQL logical backups is a significant part of disaster recovery procedures. It’s a last line of defense.

Why MySQL Stored Procedures, Functions and Triggers Are Bad For Performance

MySQL stored procedures, functions and triggers are tempting constructs for application developers. However, as I discovered, there can be an impact on database performance when using MySQL stored routines. Not being entirely sure of what I was seeing during a customer visit, I set out to create some simple tests to measure the impact of triggers on database performance. The outcome might surprise you.

AMD EPYC Performance Testing… or Don’t get on the wrong side of SystemD

Ever since AMD released their EPYC CPU for servers I wanted to test it, but I did not have the opportunity until recently, when Packet.net started offering bare metal servers for a reasonable price. So I started a couple of instances to test Percona Server for MySQL under this CPU. In this benchmark, I discovered some interesting discrepancies in performance between AMD and Intel CPUs when running under systemd.

Tuning PostgreSQL Database Parameters to Optimize Performance

Out of the box, the default PostgreSQL configuration is not tuned for any particular workload. Default values are set to ensure that PostgreSQL runs everywhere, with the least resources it can consume and so that it doesn’t cause any vulnerabilities. It is primarily the responsibility of the database administrator or developer to tune PostgreSQL according to their system’s workload. In this blog, we will establish basic guidelines for setting PostgreSQL database parameters to improve database performance according to workload.

Using AWS EC2 instance store vs EBS for MySQL: how to increase performance and decrease cost

If you are using large EBS GP2 volumes for MySQL (i.e. 10TB+) on AWS EC2, you can increase performance and save a significant amount of money by moving to local SSD (NVMe) instance storage. Interested? Then read on for a more detailed examination of how to achieve cost-benefits and increase performance from this implementation.

Why You Should Avoid Using “CREATE TABLE AS SELECT” Statement

In this blog post, I’ll provide an explanation why you should avoid using the CREATE TABLE AS SELECT statement. The SQL statement “create table <table_name> as select …” is used to create a normal or temporary table and materialize the result of the select. Some applications use this construct to create a copy of the table. This is one statement that will do all the work, so you do not need to create a table structure or use another statement to copy the structure.

Honorable Mention:

Is Serverless Just a New Word for Cloud-Based?

Top 10 Most Commented

These posts generated some healthy discussions (not surprisingly, this list overlaps with the first):

Posts Worth Revisiting

Don’t miss these great posts that have excellent information on important topics:

Have a great end of the year celebration, and we look forward to providing more great blog posts in 2019.

Dec
19
2018
--

Using Partial and Sparse Indexes in MongoDB

In this article I’m going to talk about partial and sparse indexes in MongoDB® and Percona Server for MongoDB®. I’ll show you how to use them, and look at cases where they can be helpful. Prior to discussing these indexes in MongoDB in detail, though, let’s talk about an issue on a relational database like MySQL®.

The boolean issue in MySQL

Suppose you have a very large table in MySQL with a boolean column. Typically you would create an ENUM(‘T’,’F’) field to store the boolean information, or a TINYINT column to store only 1s and 0s. This is fine so far. But think about what you can do if you need to run a lot of queries on the table with a condition on the boolean field, and no other relevant conditions on other indexed columns to filter the examined rows.

Why not create an index on the boolean field? Well, yes, you can, but in some cases this solution will be completely useless and will just introduce overhead for index maintenance.

Think about what happens if you have an even distribution of true and false values in the table, in more or less a 50:50 split. In this situation, the index on the boolean column cannot be used, because MySQL will prefer to do a full scan of the large table instead of selecting half of the rows using the BTREE entries. We can say that a boolean field like this one has a low cardinality, and it’s not highly selective.

Consider now the case in which you don’t have an even distribution of the values, let’s say 2% of the rows contain false and the remaining 98% contain true. In such a situation, a query to select the false values will most probably use the index. The queries to select the true values won’t use the index, for the same reason we have discussed previously. In this second case the index is very useful, but only for selecting the great minority of rows. The remaining 98% of the entries in the index are completely useless. This represents a great waste of disk space and resources, because the index must be maintained for each write.

It’s not just booleans that can have this problem in relation to index usage, but any field with a low cardinality.

Note: there are several workarounds to deal with this problem, I know. For example, you can create a multi-column index using a more selective field and the boolean. Or you could design your database differently. Here, I’m illustrating the nature of the problem in order to explain a MongoDB feature in a context. 

The boolean issue in MongoDB

How about MongoDB? Does MongoDB have the same problem?  The answer is: yes, MongoDB has the same problem. If you have a lot of documents in a collection with a boolean field or a low cardinality field, and you create an index on it, then you will have a very large index that’s not really useful. But more importantly you will have writes degradation for the index maintenance.

The only difference is that MongoDB will tend to use the index anyway, instead of doing the entire collection scan, but the execution time will be of the same magnitude as doing the COLLSCAN. In the case of very large indexes, a COLLSCAN should be preferable.

Fortunately MongoDB has an option that you can specify during index creation to define a Partial Index. Let’s see.

Partial Index

A partial index is an index that contains only a subset of values based on a filter rule. So, in the case of the unevenly distributed boolean field, we can create an index on it specifying that we want to consider only the false values. This way we avoid recording the remaining 98% of useless true entries. The index will be smaller, we’ll save disk and memory space, and the most frequent writes – when entering the true values – won’t initiate the index management activity. As a result, we won’t have lots of penalties during writes but we’ll have a useful index when searching the false values.

Let’s say that, when you have an uneven distribution, the most relevant searches are the ones for the minority of the values. This is in general the scenario for real applications.

Let’s see now how to create a Partial Index.

First, let’s create a collection with one million random documents. Each document contains a boolean field generated by the javascript function randomBool(). The function generates a false value in 5% of the documents, in order to have an uneven distribution. Then, test the number of false values in the collection.

> function randomBool() { var bool = true; var random_boolean = Math.random() >= 0.95; if(random_boolean) { bool = false }; return bool; }
> for (var i = 1; i <= 1000000; i++) { db.test.insert( { _id: i, name: "name"+i, flag: randomBool() } ) }
WriteResult({ "nInserted" : 1 })
> db.test.find().count()
1000000
> db.test.find( { flag: false } ).count()
49949

Create the index on the flag field and look at the index size using db.test.stats().

> db.test.createIndex( { flag: 1 } )
{ "createdCollectionAutomatically" : false,
"numIndexesBefore" : 1,
"numIndexesAfter" : 2,
"ok" : 1 }
> db.test.stats().indexSizes
{ "_id_" : 13103104, "flag_1" : 4575232 }

The index we created is 4575232 bytes.

Test some simple queries to extract the documents based on the flag value and take a look at the index usage and the execution times. (For this purpose, we use an explainable object)

// create the explainable object
> var exp = db.test.explain( "executionStats" )
// explain the complete collection scan
> exp.find( {  } )
{
	"queryPlanner" : {
		"plannerVersion" : 1,
		"namespace" : "test.test",
		"indexFilterSet" : false,
		"parsedQuery" : {
		},
		"winningPlan" : {
			"stage" : "COLLSCAN",
			"direction" : "forward"
		},
		"rejectedPlans" : [ ]
	},
	"executionStats" : {
		"executionSuccess" : true,
		"nReturned" : 1000000,
		"executionTimeMillis" : 250,
		"totalKeysExamined" : 0,
		"totalDocsExamined" : 1000000,
		"executionStages" : {
			"stage" : "COLLSCAN",
			"nReturned" : 1000000,
			"executionTimeMillisEstimate" : 200,
			"works" : 1000002,
			"advanced" : 1000000,
			"needTime" : 1,
			"needYield" : 0,
			"saveState" : 7812,
			"restoreState" : 7812,
			"isEOF" : 1,
			"invalidates" : 0,
			"direction" : "forward",
			"docsExamined" : 1000000
		}
	},
	"serverInfo" : {
		"host" : "ip-172-30-2-181",
		"port" : 27017,
		"version" : "4.0.4",
		"gitVersion" : "f288a3bdf201007f3693c58e140056adf8b04839"
	},
	"ok" : 1
}
// find the documents flag=true
> exp.find( { flag: true } )
{
	"queryPlanner" : {
		"plannerVersion" : 1,
		"namespace" : "test.test",
		"indexFilterSet" : false,
		"parsedQuery" : {
			"flag" : {
				"$eq" : true
			}
		},
		"winningPlan" : {
			"stage" : "FETCH",
			"inputStage" : {
				"stage" : "IXSCAN",
				"keyPattern" : {
					"flag" : 1
				},
				"indexName" : "flag_1",
				"isMultiKey" : false,
				"multiKeyPaths" : {
					"flag" : [ ]
				},
				"isUnique" : false,
				"isSparse" : false,
				"isPartial" : false,
				"indexVersion" : 2,
				"direction" : "forward",
				"indexBounds" : {
					"flag" : [
						"[true, true]"
					]
				}
			}
		},
		"rejectedPlans" : [ ]
	},
	"executionStats" : {
		"executionSuccess" : true,
		"nReturned" : 950051,
		"executionTimeMillis" : 1028,
		"totalKeysExamined" : 950051,
		"totalDocsExamined" : 950051,
		"executionStages" : {
			"stage" : "FETCH",
			"nReturned" : 950051,
			"executionTimeMillisEstimate" : 990,
			"works" : 950052,
			"advanced" : 950051,
			"needTime" : 0,
			"needYield" : 0,
			"saveState" : 7422,
			"restoreState" : 7422,
			"isEOF" : 1,
			"invalidates" : 0,
			"docsExamined" : 950051,
			"alreadyHasObj" : 0,
			"inputStage" : {
				"stage" : "IXSCAN",
				"nReturned" : 950051,
				"executionTimeMillisEstimate" : 350,
				"works" : 950052,
				"advanced" : 950051,
				"needTime" : 0,
				"needYield" : 0,
				"saveState" : 7422,
				"restoreState" : 7422,
				"isEOF" : 1,
				"invalidates" : 0,
				"keyPattern" : {
					"flag" : 1
				},
				"indexName" : "flag_1",
				"isMultiKey" : false,
				"multiKeyPaths" : {
					"flag" : [ ]
				},
				"isUnique" : false,
				"isSparse" : false,
				"isPartial" : false,
				"indexVersion" : 2,
				"direction" : "forward",
				"indexBounds" : {
					"flag" : [
						"[true, true]"
					]
				},
				"keysExamined" : 950051,
				"seeks" : 1,
				"dupsTested" : 0,
				"dupsDropped" : 0,
				"seenInvalidated" : 0
			}
		}
	},
	"serverInfo" : {
		"host" : "ip-172-30-2-181",
		"port" : 27017,
		"version" : "4.0.4",
		"gitVersion" : "f288a3bdf201007f3693c58e140056adf8b04839"
	},
	"ok" : 1
}
// find the documents with flag=false
> exp.find( { flag: false } )
{
	"queryPlanner" : {
		"plannerVersion" : 1,
		"namespace" : "test.test",
		"indexFilterSet" : false,
		"parsedQuery" : {
			"flag" : {
				"$eq" : false
			}
		},
		"winningPlan" : {
			"stage" : "FETCH",
			"inputStage" : {
				"stage" : "IXSCAN",
				"keyPattern" : {
					"flag" : 1
				},
				"indexName" : "flag_1",
				"isMultiKey" : false,
				"multiKeyPaths" : {
					"flag" : [ ]
				},
				"isUnique" : false,
				"isSparse" : false,
				"isPartial" : false,
				"indexVersion" : 2,
				"direction" : "forward",
				"indexBounds" : {
					"flag" : [
						"[false, false]"
					]
				}
			}
		},
		"rejectedPlans" : [ ]
	},
	"executionStats" : {
		"executionSuccess" : true,
		"nReturned" : 49949,
		"executionTimeMillis" : 83,
		"totalKeysExamined" : 49949,
		"totalDocsExamined" : 49949,
		"executionStages" : {
			"stage" : "FETCH",
			"nReturned" : 49949,
			"executionTimeMillisEstimate" : 70,
			"works" : 49950,
			"advanced" : 49949,
			"needTime" : 0,
			"needYield" : 0,
			"saveState" : 390,
			"restoreState" : 390,
			"isEOF" : 1,
			"invalidates" : 0,
			"docsExamined" : 49949,
			"alreadyHasObj" : 0,
			"inputStage" : {
				"stage" : "IXSCAN",
				"nReturned" : 49949,
				"executionTimeMillisEstimate" : 10,
				"works" : 49950,
				"advanced" : 49949,
				"needTime" : 0,
				"needYield" : 0,
				"saveState" : 390,
				"restoreState" : 390,
				"isEOF" : 1,
				"invalidates" : 0,
				"keyPattern" : {
					"flag" : 1
				},
				"indexName" : "flag_1",
				"isMultiKey" : false,
				"multiKeyPaths" : {
					"flag" : [ ]
				},
				"isUnique" : false,
				"isSparse" : false,
				"isPartial" : false,
				"indexVersion" : 2,
				"direction" : "forward",
				"indexBounds" : {
					"flag" : [
						"[false, false]"
					]
				},
				"keysExamined" : 49949,
				"seeks" : 1,
				"dupsTested" : 0,
				"dupsDropped" : 0,
				"seenInvalidated" : 0
			}
		}
	},
	"serverInfo" : {
		"host" : "ip-172-30-2-181",
		"port" : 27017,
		"version" : "4.0.4",
		"gitVersion" : "f288a3bdf201007f3693c58e140056adf8b04839"
	},
	"ok" : 1
}

As expected, MongoDB does a COLLSCAN when looking for db.test.find( {} ). The important thing here is that it takes 250 milliseconds for the entire collection scan.

In both the other cases – find ({flag:true}) and find({flag:false}) – MongoDB uses the index. But let’s have a look at the execution times:

  • for db.test.find({flag:true}) it is 1028 milliseconds. The execution time is longer than the COLLSCAN. The index in this case is not useful, and a COLLSCAN would be preferable.
  • for db.test.find({flag:false}) it is 83 milliseconds. This is good: the index in this case is very useful.

Now, create the partial index on the flag field. To do this we must use the partialFilterExpression option of the createIndex command.

// drop the existing index
> db.test.dropIndex( { flag: 1} )
{ "nIndexesWas" : 2, "ok" : 1 }
// create the partial index only on the false values
> db.test.createIndex( { flag : 1 }, { partialFilterExpression :  { flag: false }  } )
{
	"createdCollectionAutomatically" : false,
	"numIndexesBefore" : 1,
	"numIndexesAfter" : 2,
	"ok" : 1
}
// get the index size
> db.test.stats().indexSizes
{ "_id_" : 13103104, "flag_1" : 278528 }
// create the explainable object
> var exp = db.test.explain( "executionStats" )
// test the query for flag=false
> exp.find({ flag: false  })
{
	"queryPlanner" : {
		"plannerVersion" : 1,
		"namespace" : "test.test",
		"indexFilterSet" : false,
		"parsedQuery" : {
			"flag" : {
				"$eq" : false
			}
		},
		"winningPlan" : {
			"stage" : "FETCH",
			"inputStage" : {
				"stage" : "IXSCAN",
				"keyPattern" : {
					"flag" : 1
				},
				"indexName" : "flag_1",
				"isMultiKey" : false,
				"multiKeyPaths" : {
					"flag" : [ ]
				},
				"isUnique" : false,
				"isSparse" : false,
				"isPartial" : true,
				"indexVersion" : 2,
				"direction" : "forward",
				"indexBounds" : {
					"flag" : [
						"[false, false]"
					]
				}
			}
		},
		"rejectedPlans" : [ ]
	},
	"executionStats" : {
		"executionSuccess" : true,
		"nReturned" : 49949,
		"executionTimeMillis" : 80,
		"totalKeysExamined" : 49949,
		"totalDocsExamined" : 49949,
		"executionStages" : {
			"stage" : "FETCH",
			"nReturned" : 49949,
			"executionTimeMillisEstimate" : 80,
			"works" : 49950,
			"advanced" : 49949,
			"needTime" : 0,
			"needYield" : 0,
			"saveState" : 390,
			"restoreState" : 390,
			"isEOF" : 1,
			"invalidates" : 0,
			"docsExamined" : 49949,
			"alreadyHasObj" : 0,
			"inputStage" : {
				"stage" : "IXSCAN",
				"nReturned" : 49949,
				"executionTimeMillisEstimate" : 40,
				"works" : 49950,
				"advanced" : 49949,
				"needTime" : 0,
				"needYield" : 0,
				"saveState" : 390,
				"restoreState" : 390,
				"isEOF" : 1,
				"invalidates" : 0,
				"keyPattern" : {
					"flag" : 1
				},
				"indexName" : "flag_1",
				"isMultiKey" : false,
				"multiKeyPaths" : {
					"flag" : [ ]
				},
				"isUnique" : false,
				"isSparse" : false,
				"isPartial" : true,
				"indexVersion" : 2,
				"direction" : "forward",
				"indexBounds" : {
					"flag" : [
						"[false, false]"
					]
				},
				"keysExamined" : 49949,
				"seeks" : 1,
				"dupsTested" : 0,
				"dupsDropped" : 0,
				"seenInvalidated" : 0
			}
		}
	},
	"serverInfo" : {
		"host" : "ip-172-30-2-181",
		"port" : 27017,
		"version" : "4.0.4",
		"gitVersion" : "f288a3bdf201007f3693c58e140056adf8b04839"
	},
	"ok" : 1
}
// test the query for flag=true
> exp.find({ flag: true  })
{
	"queryPlanner" : {
		"plannerVersion" : 1,
		"namespace" : "test.test",
		"indexFilterSet" : false,
		"parsedQuery" : {
			"flag" : {
				"$eq" : true
			}
		},
		"winningPlan" : {
			"stage" : "COLLSCAN",
			"filter" : {
				"flag" : {
					"$eq" : true
				}
			},
			"direction" : "forward"
		},
		"rejectedPlans" : [ ]
	},
	"executionStats" : {
		"executionSuccess" : true,
		"nReturned" : 950051,
		"executionTimeMillis" : 377,
		"totalKeysExamined" : 0,
		"totalDocsExamined" : 1000000,
		"executionStages" : {
			"stage" : "COLLSCAN",
			"filter" : {
				"flag" : {
					"$eq" : true
				}
			},
			"nReturned" : 950051,
			"executionTimeMillisEstimate" : 210,
			"works" : 1000002,
			"advanced" : 950051,
			"needTime" : 49950,
			"needYield" : 0,
			"saveState" : 7812,
			"restoreState" : 7812,
			"isEOF" : 1,
			"invalidates" : 0,
			"direction" : "forward",
			"docsExamined" : 1000000
		}
	},
	"serverInfo" : {
		"host" : "ip-172-30-2-181",
		"port" : 27017,
		"version" : "4.0.4",
		"gitVersion" : "f288a3bdf201007f3693c58e140056adf8b04839"
	},
	"ok" : 1
}

We can notice the following:

  • db.test.find({flag:false}) uses the index, and the execution time is more or less the same as before
  • db.test.find({flag:true}) doesn’t use the index: MongoDB does a COLLSCAN, and the execution time is better than before
  • the index size is now only 278528 bytes: a great saving in comparison to the complete index on flag. There won’t be index maintenance overhead during writes for the great majority of the documents.

Partial option on other index types

You can use the partialFilterExpression option with compound indexes or other index types too. Let’s see an example of a compound index.

Insert some documents in the students collection

db.students.insert( [
{ _id:1, name: "John", class: "Math", grade: 10 },
{ _id: 2, name: "Peter", class: "English", grade: 6 },
{ _id: 3, name: "Maria" , class: "Geography", grade: 8 },
{ _id: 4, name: "Alex" , class: "Geography", grade: 5},
{ _id: 5, name: "George" , class: "Math", grade: 7 },
{ _id: 6, name: "Tony" , class: "English", grade: 9 },
{ _id: 7, name: "Sam" , class: "Math", grade: 6 },
{ _id: 8, name: "Tom" , class: "English", grade: 5 }
])

Create a partial compound index on the name and class fields for grades greater than or equal to 8.

> db.students.createIndex( { name: 1, class: 1  }, { partialFilterExpression: { grade: { $gte: 8} } } )
{
	"createdCollectionAutomatically" : false,
	"numIndexesBefore" : 1,
	"numIndexesAfter" : 2,
	"ok" : 1
}

Notice that the grade field doesn’t necessarily need to be part of the index.

Query coverage

Using the students collection, we now want to show when a partial index can be used.

The important thing to remember is that a partial index is “partial”. It means that it doesn’t contain all the entries.

In order for MongoDB to use it, the conditions in the query must include an expression on the filter field, and the selected documents must be a subset of the index.

Let’s see some examples.

The following query can use the index because we are selecting a subset of the partial index.

> db.students.find({name:"Tony", grade:{$gt:8}})
{ "_id" : 6, "name" : "Tony", "class" : "English", "grade" : 9 }
// let's look at the explain
> db.students.find({name:"Tony", grade:{$gt:8}}).explain()
{
	"queryPlanner" : {
		"plannerVersion" : 1,
		"namespace" : "test.students",
		"indexFilterSet" : false,
		"parsedQuery" : {
			"$and" : [
				{
					"name" : {
						"$eq" : "Tony"
					}
				},
				{
					"grade" : {
						"$gt" : 8
					}
				}
			]
		},
		"winningPlan" : {
			"stage" : "FETCH",
			"filter" : {
				"grade" : {
					"$gt" : 8
				}
			},
			"inputStage" : {
				"stage" : "IXSCAN",
				"keyPattern" : {
					"name" : 1,
					"class" : 1
				},
				"indexName" : "name_1_class_1",
				"isMultiKey" : false,
				"multiKeyPaths" : {
					"name" : [ ],
					"class" : [ ]
				},
				"isUnique" : false,
				"isSparse" : false,
				"isPartial" : true,
				"indexVersion" : 2,
				"direction" : "forward",
				"indexBounds" : {
					"name" : [
						"[\"Tony\", \"Tony\"]"
					],
					"class" : [
						"[MinKey, MaxKey]"
					]
				}
			}
		},
		"rejectedPlans" : [ ]
	},
	"serverInfo" : {
		"host" : "ip-172-30-2-181",
		"port" : 27017,
		"version" : "4.0.4",
		"gitVersion" : "f288a3bdf201007f3693c58e140056adf8b04839"
	},
	"ok" : 1
}

The following query cannot use the index, because the condition grade > 5 does not select a subset of the partial index. So a COLLSCAN is needed.

> db.students.find({name:"Tony", grade:{$gt:5}}).explain()
{
	"queryPlanner" : {
		"plannerVersion" : 1,
		"namespace" : "test.students",
		"indexFilterSet" : false,
		"parsedQuery" : {
			"$and" : [
				{
					"name" : {
						"$eq" : "Tony"
					}
				},
				{
					"grade" : {
						"$gt" : 5
					}
				}
			]
		},
		"winningPlan" : {
			"stage" : "COLLSCAN",
			"filter" : {
				"$and" : [
					{
						"name" : {
							"$eq" : "Tony"
						}
					},
					{
						"grade" : {
							"$gt" : 5
						}
					}
				]
			},
			"direction" : "forward"
		},
		"rejectedPlans" : [ ]
	},
	"serverInfo" : {
		"host" : "ip-172-30-2-181",
		"port" : 27017,
		"version" : "4.0.4",
		"gitVersion" : "f288a3bdf201007f3693c58e140056adf8b04839"
	},
	"ok" : 1
}

The following query cannot use the index either. As we said, the grade field is not part of the index key, so a condition on grade alone is not sufficient.

> db.students.find({grade:{$gt:8}}).explain()
{
	"queryPlanner" : {
		"plannerVersion" : 1,
		"namespace" : "test.students",
		"indexFilterSet" : false,
		"parsedQuery" : {
			"grade" : {
				"$gt" : 8
			}
		},
		"winningPlan" : {
			"stage" : "COLLSCAN",
			"filter" : {
				"grade" : {
					"$gt" : 8
				}
			},
			"direction" : "forward"
		},
		"rejectedPlans" : [ ]
	},
	"serverInfo" : {
		"host" : "ip-172-30-2-181",
		"port" : 27017,
		"version" : "4.0.4",
		"gitVersion" : "f288a3bdf201007f3693c58e140056adf8b04839"
	},
	"ok" : 1
}
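If you do need to serve queries on grade alone, one option is a partial index that uses grade itself as the key field, so that any query selecting a subset of the filtered range can use it. A minimal sketch, assuming you are willing to maintain one more index:

// additional index: grade is both the key and the filter field
> db.students.createIndex(
	{ grade: 1 },
	{ partialFilterExpression: { grade: { $gte: 8 } } }
)
// a query on a subset of the filtered range (grade > 8 implies grade >= 8) can use this index
> db.students.find( { grade: { $gt: 8 } } ).explain()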

Sparse Index

A sparse index is an index that contains entries only for the documents that have the indexed field.

Since MongoDB is a schemaless database, not all the documents in a collection will necessarily contain the same fields. So we have two options when creating an index:

  • create a regular “non-sparse” index
    • the index contains one entry for each document in the collection
    • documents without the indexed field are indexed with a null value
  • create a sparse index
    • the index contains entries only for the documents that have the indexed field

We call it “sparse” because it doesn’t contain all the documents of the collection.

The main advantage of the sparse option is to reduce the index size.

Here’s how to create a sparse index:

db.people.createIndex( { city: 1 }, { sparse: true } )
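As a minimal illustration, using a hypothetical people collection, only the documents that actually contain the city field get an entry in this index:

db.people.insert( [
	{ _id: 1, name: "Ann", city: "Rome" },
	{ _id: 2, name: "Bob" }
] )
// _id: 2 has no city field, so it gets no entry in the sparse index at all.
// For this reason MongoDB avoids the sparse index for operations that must return
// every document, such as db.people.find().sort( { city: 1 } ), unless you hint() it.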

Sparse indexes are a subset of partial indexes. In fact, you can emulate a sparse index using the following partial index definition.

db.people.createIndex(
	{ city: 1 },
	{ partialFilterExpression: { city: { $exists: true } } }
)

For this reason partial indexes are preferred over sparse indexes.

Conclusions

Partial indexing is a great feature in MongoDB. You should consider using it to achieve the following advantages:

  • have smaller indexes
  • save disk and memory space
  • improve write performance

You are strongly encouraged to consider partial indexes if you have one or more of these use cases:

  • you run queries on a boolean field with an uneven distribution, and you look mostly for the less frequent value
  • you have a low cardinality field and the majority of the queries look for a subset of the values
  • the majority of the queries look for a limited subset of the values in a field
  • you don’t have enough memory to store very large indexes – for example, you have a lot of page evictions from the WiredTiger cache

Further readings

Partial indexes: https://docs.mongodb.com/manual/core/index-partial/

Sparse indexes: https://docs.mongodb.com/manual/core/index-sparse/

Articles on query optimization and investigation:


Photo by Mike Greer from Pexels

Dec
18
2018
--

Percona Server for MongoDB 4.0.4-1 GA Is Now Available

Percona Server for MongoDB Operator

Percona announces the GA release of Percona Server for MongoDB 4.0.4-1 on December 18, 2018. Download the latest version from the Percona website or the Percona software repositories.

Date: December 18, 2018
Download: Percona website
Installation: Installing Percona Server for MongoDB

Percona Server for MongoDB is an enhanced, open source, and highly-scalable database that is a fully-compatible, drop-in replacement for MongoDB 4.0 Community Edition. It supports MongoDB 4.0 protocols and drivers.

Percona Server for MongoDB extends the functionality of the MongoDB 4.0 Community Edition by including the Percona Memory Engine storage engine, encrypted WiredTiger storage engine, audit logging, SASL authentication, hot backups, and enhanced query profiling. Percona Server for MongoDB requires no changes to MongoDB applications or code.

This release includes all features of MongoDB Community Edition 4.0. The most notable among these are:

Note that the MMAPv1 storage engine is deprecated in MongoDB Community Edition 4.0.

In Percona Server for MongoDB 4.0.4-1, data at rest encryption is considered BETA quality. Do not use this feature in a production environment.

Bugs Fixed

  • PSMDB-235: In some cases, hot backup did not back up the keydb directory; mongod could crash after restore.
  • PSMDB-233: When starting Percona Server for MongoDB with WiredTiger encryption options but using a different storage engine, the server started normally and produced no warnings that these options had been ignored.
  • PSMDB-239: The WiredTiger encryption was not disabled when using the Percona Memory Engine storage engine.
  • PSMDB-241: WiredTiger per-database encryption keys were not purged when the database was deleted.
  • PSMDB-243: A log message was added to indicate that the server is running with encryption.
  • PSMDB-245: KeyDB’s WiredTiger logs were not properly rotated without restarting the server.
  • PSMDB-266: When running the server with the --directoryperdb option, the user could add arbitrary collections to the keydb directory which is designated for data encryption.

Due to the fix of bug PSMDB-266, it is not possible to downgrade from version 4.0.4-1 to version 3.6.8-2.0 of Percona Server for MongoDB if using data at rest encryption (downgrading to PSMDB 3.6 will become possible as soon as PSMDB-266 is ported to that version).

Dec
13
2018
--

MongoDB Backup: How and When To Use PSMDB hotbackup and mongodb_consistent_backup

mongodb backup

We have many methods to back up a MongoDB database, using the native mongodump or external tools. In this article, however, we’ll take a look at the backup tools offered by Percona, keeping in mind the restoration scenarios for MongoDB replicaSet and Sharded Cluster environments. We’ll explore how and when to use the tool mongodb-consistent-backup from Percona lab to back up the database consistently in Sharded Cluster/replicaSet environments. We’ll also take a look at hotbackup, a tool that’s available in Percona Server for MongoDB (PSMDB) packages.

Backup is done – What about Restore?

Those who are responsible for data almost always think about the methods needed to back up the database and store the backups securely, but they often fail to foresee the scenario where the backup actually needs to be used to restore data. For example, I have unfortunately seen many companies schedule the backups of config servers and shard servers separately, starting and completing them at different times depending on data volumes. But can we use such a backup when we need to restore and start the cluster with it? The answer is no—well, maybe yes if you can tweak the metadata, but data inconsistency may occur. With this kind of schedule, the backup is not consistent for the whole cluster, and there is no single point to which we can restore the data for all shards/config dbs and start the cluster. That leaves us in a difficult situation exactly when we really need to use that backup!

Let’s explore the two tools/features available to backup MongoDB from Percona, and look at which method to choose based on your restoration plan. 

Hot backup for both replicaset and Sharded cluster:

The main problem with backup is maintaining consistency, as the application still writes to the DB while the backup is going on. So to maintain consistency throughout the backup, and to get a reliable full backup of all the data needed to restore the database, the backup tool needs to track changes via the oplog as well. Using the mongodump utility along with an oplog backup helps to achieve this easily in a replicaSet environment, since you only need consistency for that one replicaSet.
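For a single replicaSet, a sketch of that approach (with a hypothetical replicaSet name, host, and backup path) could look like this:

# dump the whole replicaSet and capture the oplog entries written while the dump runs
mongodump --host rs0/127.0.0.1:27017 --oplog --out /backup/rs0_dump

# later, restore the dump and replay the captured oplog to reach a consistent point in time
mongorestore --host rs0/127.0.0.1:27017 --oplogReplay /backup/rs0_dump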

But when we need a consistent backup of a Sharded Cluster, it is very difficult to achieve consistency for the whole cluster, as it involves backing up all shards and config servers together up to a particular point, so that the backup can be reused in failover cases. In this case, even if you use mongodump manually on each shard/config server separately and try to take a consistent backup of the whole cluster while writes are happening, it is a very tedious job: the backup of each shard ends at a different point based on factors such as load, data volume, etc.

To remedy this, we could take a consistent hot backup of the Sharded Cluster by using our utility mongodb-consistent-backup – in other words, a point-in-time backup for the sharded cluster environment. This utility internally uses mongodump and tails the oplog on each node until the backups from all data nodes and config servers are complete. This ensures consistency in the backup of the whole Sharded Cluster! You have to make sure you are using a replicaSet for your config servers too. In fact, this tool also helps you to take a consistent backup in a replicaSet environment.

This utility is available in our Percona lab but please note that it is not yet supported officially. To install this package, please make sure you install all the dependency packages, and follow the steps mentioned in this link to complete the installation process.

If you have enabled authentication in your environment, then create a user like below:

db.createUser({
	user: "backup_usr",
	pwd: "backup_pass",
	roles: [
	{ role: "clusterMonitor", db: "admin" }
	]
})

The backup can be taken as follows by connecting to one of the mongos nodes in the Sharded Cluster. Here mongos is running on port 27051, and the cluster has one config replicaSet cfg and two shards, s1 and s2.

[root@app mongodb_consistent_backup-master]# ./bin/mongodb-consistent-backup -H localhost \
> -P 27051 \
> -u backup_usr \
> -p backup_pass \
> -a admin \
> -n clusterFullBackup \
> -l backup/mongodb
[2018-12-05 18:57:38,863] [INFO] [MainProcess] [Main:init:144] Starting mongodb-consistent-backup version 1.4.0 
(git commit: unknown)
[2018-12-05 18:57:38,864] [INFO] [MainProcess] [Main:init:145] Loaded config: {"archive": {"method": "tar", "tar": 
{"binary": "tar", "compression": "gzip"}, "zbackup": {"binary": "/usr/bin/zbackup", "cache_mb": 128, "compression": "lzma"}}, 
"authdb": "admin", "backup": {"location": "backup/mongodb", "method": "mongodump", "mongodump": {"binary": "/usr/bin/mongodump", 
"compression": "auto"}, "name": "clusterFullBackup"}, "environment": "production", "host": "localhost", "lock_file": 
"/tmp/mongodb-consistent-backup.lock", "notify": {"method": "none"}, "oplog": {"compression": "none", "flush": {"max_docs": 100, 
"max_secs": 1}, "tailer": {"enabled": "true", "status_interval": 30}}, "password": "******", "port": 27051, "replication": 
{"max_lag_secs": 10, "max_priority": 1000}, "sharding": {"balancer": {"ping_secs": 3, "wait_secs": 300}}, "upload": {"method": 
"none", "retries": 5, "rsync": {"path": "/", "port": 22}, "s3": {"chunk_size_mb": 50, "region": "us-east-1", "secure": true}, 
"threads": 4}, "username": "backup_usr"}
...
...
[2018-12-05 18:57:40,715] [INFO] [MongodumpThread-5] [MongodumpThread:run:204] Starting mongodump backup of s2/127.0.0.1:27043
[2018-12-05 18:57:40,722] [INFO] [MongodumpThread-7] [MongodumpThread:run:204] Starting mongodump backup of cfg/127.0.0.1:27022
[2018-12-05 18:57:40,724] [INFO] [MongodumpThread-6] [MongodumpThread:run:204] Starting mongodump backup of s1/127.0.0.1:27032
[2018-12-05 18:57:40,800] [INFO] [MongodumpThread-5] [MongodumpThread:wait:130] s2/127.0.0.1:27043:	Enter password:
[2018-12-05 18:57:40,804] [INFO] [MongodumpThread-6] [MongodumpThread:wait:130] s1/127.0.0.1:27032:	Enter password:
[2018-12-05 18:57:40,820] [INFO] [MongodumpThread-7] [MongodumpThread:wait:130] cfg/127.0.0.1:27022:	Enter password:
...
...
[2018-12-05 18:57:54,880] [INFO] [MainProcess] [Mongodump:wait:105] All mongodump backups completed successfully
[2018-12-05 18:57:54,892] [INFO] [MainProcess] [Stage:run:95] Completed running stage mongodb_consistent_backup.Backup with task 
Mongodump in 14.21 seconds
[2018-12-05 18:57:54,913] [INFO] [MainProcess] [Tailer:stop:86] Stopping all oplog tailers
[2018-12-05 18:57:55,955] [INFO] [MainProcess] [Tailer:stop:118] Waiting for tailer s2/127.0.0.1:27043 to stop
[2018-12-05 18:57:56,889] [INFO] [TailThread-2] [TailThread:run:177] Done tailing oplog on s2/127.0.0.1:27043, 2 oplog changes, 
end ts: Timestamp(1544036268, 1)
[2018-12-05 18:57:59,967] [INFO] [MainProcess] [Tailer:stop:118] Waiting for tailer s1/127.0.0.1:27032 to stop
[2018-12-05 18:58:00,801] [INFO] [TailThread-3] [TailThread:run:177] Done tailing oplog on s1/127.0.0.1:27032, 3 oplog changes, 
end ts: Timestamp(1544036271, 1)
[2018-12-05 18:58:03,985] [INFO] [MainProcess] [Tailer:stop:118] Waiting for tailer cfg/127.0.0.1:27022 to stop
[2018-12-05 18:58:04,803] [INFO] [TailThread-4] [TailThread:run:177] Done tailing oplog on cfg/127.0.0.1:27022, 8 oplog changes, 
end ts: Timestamp(1544036279, 1)
[2018-12-05 18:58:06,989] [INFO] [MainProcess] [Tailer:stop:125] Oplog tailing completed in 27.85 seconds
...
...
[2018-12-05 18:58:09,478] [INFO] [MainProcess] [Rotate:symlink:83] Updating clusterFullBackup latest symlink to current backup 
path: backup/mongodb/clusterFullBackup/20181205_1857
[2018-12-05 18:58:09,480] [INFO] [MainProcess] [Main:run:461] Completed mongodb-consistent-backup in 30.49 sec

where:

-n – name of the backup (a directory with this name is created)
-l – backup location (base directory)
-H – hostname
-P – port
-p – password
-u – user
-a – authentication database
The log above shows the progress of the backup: the tool captures the state of the oplog and records the changes that happen while the dump is running. The same command can be used to connect to a replicaSet by supplying the proper hostname. The tool is also able to identify whether it is dealing with a replicaSet or a Sharded Cluster before proceeding with the backup. This can be seen in the log output written by the tool when running the backup, as shown below:

For a Sharded Cluster:

[2018-12-05 19:05:02,453] [INFO] [MainProcess] [Main:run:299] Running backup in sharding mode using seed node(s): localhost:27051

For a replicaSet:

[2018-12-05 19:23:05,070] [INFO] [MainProcess] [Main:run:257] Running backup in replset mode using seed node(s): localhost:27041

You can check out a couple of our blogs here and here for more details about the utility.

Hot but Cold backup

You may be wondering about the title Hot but Cold backup. Yes, the Percona Server for MongoDB (PSMDB) packages include a feature to take a binary hot backup, called hotbackup. Those who know the MySQL world will already know about Percona XtraBackup, which is our free and open source binary hot backup utility for MySQL; PSMDB hotbackup works in a similar way. When you back up with hotbackup, you get a binary backup that is ready to start an instance straight from the backup directory, so you don’t need to worry about restoring from scratch and recreating indexes. However, this solution works for replicaSet/standalone mongodb instances only.
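The backup is triggered with the createBackup command against the admin database (see the hot backup documentation linked in the references below); a minimal sketch, assuming a locally writable backup path:

// run on the node you want to back up
> use admin
> db.runCommand( { createBackup: 1, backupDir: "/data/backups/psmdb_node1" } )
// an { "ok" : 1 } reply means the binary backup was written to backupDir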

If you can plan well, then you could feasibly use this feature to back up a Sharded Cluster: bring down one of the secondaries from all shards/config servers at the same time (preferably when there is little or no write traffic), then start them on a different port and without the replSet option, so that those instances won’t rejoin their replicaSet. Now you can run hotbackup on all of these instances; once they have finished, you can revert the changes in the config files and allow the nodes to rejoin their replicaSets. A rough sketch of this flow for a single node is shown below.
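Hypothetical ports, paths, and replicaSet layout – a sketch of the per-node steps, to be repeated on one member of every shard and of the config server replicaSet:

# 1. cleanly shut down the chosen hidden/low-priority secondary (never kill -9)
mongod --shutdown --dbpath /data/shard1/db

# 2. restart it on a different port and without the replSet option, so it cannot rejoin its replicaSet
mongod --port 28017 --dbpath /data/shard1/db --fork --logpath /data/shard1/backup.log

# 3. take the binary hot backup from this temporary instance
mongo --port 28017 admin --eval 'db.runCommand( { createBackup: 1, backupDir: "/backup/shard1" } )'

# 4. shut it down again and restart it with the original replicaSet configuration so it rejoins and catches up
mongod --shutdown --dbpath /data/shard1/db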

Cautionary notes: Please make sure you use low-priority or hidden nodes for this purpose, so that no election is triggered when they leave and rejoin the replicaSet, and don’t use SIGKILL (kill -9) to stop the db, as it shuts down the database abruptly. Also, please plan to have at least as much free disk space as your shard occupies: a hotbackup takes approximately the same amount of space as your node.

My colleague Tim Vaillancourt has written a great blog post on this. See here.

Conclusion

So, of the two methods above, you can now choose the backup method that best suits your RTO and RPO, explained here. Hope this helps you! Please share your comments and feedback below, and tell me what you think!

REFERENCES:

https://www.percona.com/doc/percona-server-for-mongodb/LATEST/hot-backup.html
https://www.percona.com/forums/questions-discussions/percona-server-for-mongodb/53006-percona-mongodb-difference-between-hot-backup-and-backup-using-mongo-dump
https://www.percona.com/blog/2016/07/25/mongodb-consistent-backups/
https://www.percona.com/blog/2018/04/06/free-fast-mongodb-hot-backup-with-percona-server-for-mongodb/
https://www.bluelock.com/blog/rpo-rto-pto-and-raas-disaster-recovery-explained/
https://en.wikipedia.org/wiki/Disaster_recovery
https://www.druva.com/blog/understanding-rpo-and-rto/
https://www.percona.com/live/e17/sites/default/files/slides/Running%20MongoDB%20in%20Production%20-%20FileId%20-%20115299.pdf
https://major.io/2010/03/18/sigterm-vs-sigkill/
https://docs.mongodb.com/manual/core/sharded-cluster-config-servers


Photo by Designecologist from Pexels

 
