PMM 101: Troubleshooting MongoDB with Percona Monitoring and Management

Percona Monitoring and Management (PMM) is an open-source tool developed by Percona that allows you to monitor and manage your MongoDB, MySQL, and PostgreSQL databases. This blog will give you an overview of troubleshooting your MongoDB deployments with PMM.

Let’s start with a basic understanding of the architecture of PMM. PMM has two main architectural components:

  1. PMM Client – Client that lives on each database host in your environment.  Collects server metrics, system metrics, and database metrics used for Query Analytics
  2. PMM Server – Central part of PMM that the clients report all of their metric data to.   Also presents dashboards, graphs, and tables of that data in its web interface for visualization of your metric data.

For more details on the architecture of PMM, check out our docs.

Query Analytics

PMM Query Analytics (“QAN”) allows you to analyze MongoDB query performance over periods of time. In the screenshot below, you can see that the longest-running query was against the testData collection.

Percona Monitoring and Management query analytics

If we drill deeper by clicking on the query in PMM we can see exactly what it was running. In this case, the query was searching in the testData collection of the mpg database looking for records where the value of x is 987544.

Percona Monitoring and Management

This is very helpful in determining what each query is doing, how often it is running, and which queries make up the bulk of your load.

The output is from db.currentOp(), and it may not be clear at a glance what the application-side (or mongo shell) command was. This is a limitation of the MongoDB API in general: the drivers send the request with perfect functional accuracy, but it does not necessarily resemble what the user typed (or programmed). With that understanding, and focusing first on what the “command” field contains, it is not too hard to picture a likely original form. For example, the query above could have been sent by running “use mpg; db.testData.find({"x": { "$lte": …, "$gt": … }}).skip(0)” in the shell. The final “.skip(0)” is optional, as skip defaults to 0.
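To make that concrete, here is a toy Python sketch of that reconstruction. It is not part of PMM or any driver; the command_to_shell helper is invented for illustration, and it only handles the "find" command shape shown above.

```python
import json

def command_to_shell(db_name, command):
    """Rebuild an approximate mongo shell find() from a wire-protocol command doc."""
    if "find" not in command:
        raise ValueError("only 'find' commands are handled in this sketch")
    parts = ['db.{}.find({})'.format(command["find"], json.dumps(command.get("filter", {})))]
    if command.get("skip"):
        parts.append("skip({})".format(command["skip"]))
    if command.get("limit"):
        parts.append("limit({})".format(command["limit"]))
    return "use {}; ".format(db_name) + ".".join(parts)

# Shaped like the "command" field QAN shows for the query in the screenshot.
op = {"find": "testData", "filter": {"x": {"$lte": 987544, "$gt": 987543}}}
print(command_to_shell("mpg", op))
# use mpg; db.testData.find({"x": {"$lte": 987544, "$gt": 987543}})
```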

Additionally, you can see the full explain plan for your query just as you would by adding .explain() to your query.   In the below example we can see that the query did a full collection scan on the mpg.testData collection and we should think about adding an index to the ‘x’ field to improve the performance of this query.
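If you want to check for this programmatically, a small sketch like the following can walk an explain() winning plan and flag collection scans. The has_collscan helper is hypothetical, and the plan documents are hand-built to mirror the shape of explain("queryPlanner") output.

```python
def has_collscan(stage):
    """Recursively search an explain() plan tree for a COLLSCAN stage."""
    if stage.get("stage") == "COLLSCAN":
        return True
    children = []
    if "inputStage" in stage:
        children.append(stage["inputStage"])
    children.extend(stage.get("inputStages", []))
    return any(has_collscan(child) for child in children)

# Roughly what the winning plan looks like for an unindexed find on mpg.testData.
plan = {"stage": "COLLSCAN", "filter": {"x": {"$eq": 987544}}, "direction": "forward"}
print(has_collscan(plan))     # True: candidate for an index on 'x'
indexed = {"stage": "FETCH", "inputStage": {"stage": "IXSCAN", "indexName": "x_1"}}
print(has_collscan(indexed))  # False
```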

Metrics Monitor

Metrics Monitor allows you to monitor, alert, and visualize different metrics related to your database overall, its internal metrics, and the systems they are running on.

Overall System Performance View

The first view that is helpful is the overall system performance view. Here you can see, at a high level, how much CPU and memory are being used, the amount of reads and writes from disk, network bandwidth, the number of database connections, database queries per second, RAM, and the uptime for both the host and the database. This view can often lead you to the problematic node(s) if you’re experiencing any issues, and can also give you a high-level picture of the overall health of your monitored environment.

Percona Monitoring and Management system overview

WiredTiger Metrics

Next, we’ll start digging into some of the database internal metrics that are helpful for troubleshooting MongoDB. These metrics mostly come from the WiredTiger storage engine, which was introduced in MongoDB 3.0 and has been the default storage engine since MongoDB 3.2. In addition to the metrics I cover, there are more documented here.

The WiredTiger storage engine uses tickets as a way to handle concurrency. The default is for WiredTiger to have 128 read and 128 write tickets. PMM allows you to alert when your available tickets are getting low, and you can correlate with other metrics to see why so many tickets are being utilized. The graph sample below shows a low-load situation: only ~1 ticket out of 128 was checked out at any time.
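As a sketch of the kind of check PMM’s graph automates, the snippet below reads ticket counts from a hand-built document shaped like serverStatus().wiredTiger.concurrentTransactions. The low_ticket_alert helper is invented for illustration; verify the field names against your MongoDB version before relying on them.

```python
def low_ticket_alert(server_status, threshold=32):
    """Return the ticket kinds (read/write) whose available count is below threshold."""
    alerts = {}
    ct = server_status["wiredTiger"]["concurrentTransactions"]
    for kind in ("read", "write"):
        if ct[kind]["available"] < threshold:
            alerts[kind] = ct[kind]["available"]
    return alerts

# Hand-built sample: reads are idle, but writes have nearly exhausted their tickets.
status = {"wiredTiger": {"concurrentTransactions": {
    "read": {"out": 1, "available": 127, "totalTickets": 128},
    "write": {"out": 120, "available": 8, "totalTickets": 128}}}}
print(low_ticket_alert(status))  # {'write': 8}
```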

Percona Monitoring and Management wiredtiger

One of the metrics that could be causing you to use a large number of tickets is a high checkpoint time. WiredTiger, by default, does a full checkpoint at least every 60 seconds; this is controlled by the WiredTiger parameter checkpoint=(wait=60). Checkpointing flushes all the dirty pages to disk. (By the way, ‘dirty’ is not as bad as it sounds; it’s just a storage engine term meaning ‘not yet committed to disk’.) High checkpoint times can lead to more tickets being in use.
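The dirty-page share of the cache can also be derived from serverStatus(). The sketch below uses hand-built numbers, and the wiredTiger.cache key names are written from memory, so treat this as illustrative rather than authoritative.

```python
def dirty_cache_pct(cache_stats):
    """Percentage of the configured WiredTiger cache currently holding dirty bytes."""
    dirty = cache_stats["tracked dirty bytes in the cache"]
    maximum = cache_stats["maximum bytes configured"]
    return 100.0 * dirty / maximum

# Hand-built sample: 512 MB dirty out of a 4 GB cache.
cache = {"tracked dirty bytes in the cache": 512 * 1024 * 1024,
         "maximum bytes configured": 4 * 1024 * 1024 * 1024}
print(f"{dirty_cache_pct(cache):.1f}% dirty")  # 12.5% dirty
```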

Finally, we have WiredTiger Cache Activity metrics. WiredTiger Cache activity indicates the level of data that is being read into or written from the cache.  These metrics can help you baseline your normal cache activity, so you can notice if you have a large amount of data being read into the cache, perhaps from a poorly tuned query, or a lot of data being written from the cache.

WiredTiger Cache Activity

Database Metrics

PMM also has database metrics that are not WiredTiger-specific. Here we can see the uptime for the node, queries per second, latency, connections, and the number of cursors. These are higher-level metrics which can be indicative of a larger problem, such as connection storms, storage latency, and excessive queries per second. These can help you home in on potential issues with your database.

Percona Monitoring and Management database metrics

Node Overview Metrics

System metrics can point you toward an issue at the O/S level that may or may not correlate with your database. CPU, CPU saturation, core usage, disk I/O, swap activity, and network traffic are some of the metrics that can help you find issues that start at the O/S level or below. Metrics beyond those shown below can be found in our documentation.

Node Overview Metrics


In this blog, we’ve discussed how PMM can help you troubleshoot your MongoDB deployment. Whether you’re looking at WiredTiger-specific metrics, system-level metrics, or database-level metrics, PMM has you covered. Thanks for reading!

Additional Resources:

Download Percona Monitoring and Management

PMM for MongoDB Quick Start Guide

PMM Blog Topics

MongoDB Blog Topics


5 Things Developers Should Know Before Deploying MongoDB

MongoDB is one of the most popular databases and is one of the easiest NoSQL databases to set up. Oftentimes, developers want a quick environment to test out an idea they have for an application, or to figure out a good data model for their data, without waiting for their Operations team to spin up the infrastructure. What can sometimes happen is that these quick, one-off instances grow, and before you know it that little test DB is your production DB supporting your new app. For anyone who finds themselves in this situation, I encourage you to check out our Percona blogs, as we have lots of great information for those both new and experienced with MongoDB. Don’t let the ease of installing MongoDB fool you into a false sense of security; there are things you need to consider as a developer before deploying MongoDB. Here are five things developers should know before deploying MongoDB in production.

1) Enable Authentication and Authorization

Security is of utmost importance to your database. Gone are the days when security was disabled by default for MongoDB, but it’s still easy to start MongoDB without security. Without security and with your database bound to a public IP, anyone can connect to your database and steal your data. By simply adding some important security configuration options to your configuration file, you can ensure that your data is protected. You can also configure MongoDB to utilize native LDAP or Kerberos for authentication. Setting up authentication and authorization is one of the simplest ways to ensure that your MongoDB database is secure. The most important configuration option is turning on authorization, which enables users and roles and requires you to authenticate and have the proper roles to access your data.

  security:
    authorization: enabled
    keyFile: /path/to/our.keyfile


2) Connect to a Replica Set/Multiple Mongos, Not Individual Nodes

MongoDB’s drivers all support connecting directly to a standalone node, a replica set, or a mongos for sharded clusters. Sometimes your database starts off with one specific node that is always your primary, and it’s easy to set your connection string to only connect to that one node. But what happens when that one node goes down? If you don’t have a highly available connection string in your application configuration, then you’re missing out on a key advantage of MongoDB replica sets: connecting to the primary no matter which node it is. All of MongoDB’s supported language drivers support the MongoDB URI connection string format and implement the failover logic. Below are some example connection strings for PyMongo, MongoDB’s Python driver: a standalone connection, a replica set connection, and an SRV record connection. If you have the ability to set up SRV DNS records, you can standardize your connection string to point to an address without needing to worry about the underlying infrastructure changing.

Standalone Connection String:

client = MongoClient('mongodb://hostabc.example.com:27017/?authSource=admin')


Replica Set Connection String:

client = MongoClient('mongodb://hostabc.example.com:27017,hostdef:27017,hostxyz.example.com/?replicaSet=foo&authSource=admin')


SRV Connection String:

client = MongoClient('mongodb+srv://host.example.com/')

Post-script for clusters: If you’re just starting out, you’re usually not setting up a sharded cluster. But if it is a cluster, then instead of using a replica set connection you will connect to a mongos node. To get automatic failover in the event of a mongos node being restarted (or otherwise being down), start mongos on multiple hosts and put them, comma-separated, in your connection string’s host list. As with replica sets, you can use SRV records for these too.
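Connection strings like these are plain text, so a small helper shows how the multi-host list and options fit together. The build_uri function is invented here for illustration; it is not a driver API, just string assembly.

```python
def build_uri(hosts, replica_set=None, auth_source="admin"):
    """Assemble a highly available MongoDB URI from a list of host:port strings."""
    options = [f"authSource={auth_source}"]
    if replica_set:
        options.append(f"replicaSet={replica_set}")
    return "mongodb://" + ",".join(hosts) + "/?" + "&".join(options)

# Multiple mongos hosts, comma-separated, so the driver can fail over between them.
mongos_uri = build_uri(["mongos1.example.com:27017", "mongos2.example.com:27017"])
print(mongos_uri)
# mongodb://mongos1.example.com:27017,mongos2.example.com:27017/?authSource=admin

# Replica set variant, matching the PyMongo example above.
rs_uri = build_uri(["hostabc:27017", "hostdef:27017"], replica_set="foo")
```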

3) Sharding Can Help Performance But Isn’t Always a Silver Bullet

Sharding is how MongoDB handles the partitioning of data. This practice is used to distribute load across more replica sets for a variety of reasons, such as write performance, low-latency geographical writes, and archiving data to shards utilizing slower and cheaper storage. These sharding approaches are helpful in keeping your working set in memory because they lower the amount of data each shard has to deal with.

As previously mentioned, sharding can also be used to reduce latency by separating your shards by geographic region; a common example is having a US-based shard, an EU-based shard, and a shard in Asia, where the data is kept local to its origin. Although it is not the only application of shard zoning, “geo-sharding” like this is a common one. This approach can also help applications comply with the various data regulations that are becoming more important and more strict throughout the world.

While sharding can oftentimes help write performance, that sometimes comes at the detriment of read performance. An easy example of poor read performance would be a query to find all orders regardless of their origin: that find would need to be sent to the US shard, the EU shard, and the shard in Asia, with all the network latency that comes with reading from the non-local regions, and then the mongos query router would need to sort all the returned records before returning them to the client. This kind of give-and-take should inform how you choose a shard key and how you weigh its impact on your typical query patterns.
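A toy routing sketch makes the tradeoff visible. This is not how mongos actually works internally; it is just an illustration of targeted vs. scatter-gather queries, assuming 'region' is the shard key and the shard names are made up.

```python
# Hypothetical zone-to-shard mapping for the geo-sharding example above.
SHARDS = {"US": "shard-us", "EU": "shard-eu", "ASIA": "shard-asia"}

def shards_for_query(query):
    """Return which shards a query must visit, given 'region' as the shard key."""
    region = query.get("region")
    if region in SHARDS:
        return [SHARDS[region]]        # targeted: exactly one shard
    return sorted(SHARDS.values())     # scatter-gather: every shard, merged on mongos

print(shards_for_query({"region": "EU", "order_id": 42}))  # ['shard-eu']
print(shards_for_query({"order_id": 42}))  # all three shards, plus a merge-sort step
```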

4) Replication ≠ Backups

MongoDB Replication, while powerful and easy to set up, is not a substitution for a good backup strategy.  Some might think that their replica set members in a DR data center will be sufficient to keep them up in a data loss scenario.   While a replica set member in a DR center will surely help you in a DR situation, it will not help you if you accidentally drop a database or a collection in production as that delete will quickly be replicated to your secondary in your DR data center.

Another common misconception is that delayed replica set members keep you safe. Delayed members still rely on you finding the issue you want to restore from before it gets applied to your delayed member. Are your processes so rock-solid that you can guarantee you’ll find the issue before it reaches your delayed member?

Backups are just as important with MongoDB as they are with any other database. There are tools like mongodump, mongoexport, Percona Backup for MongoDB, and Ops Manager (Enterprise Edition only) that support point-in-time recovery, oplog backups, hot backups, and full and incremental backups. Backups can be run from any node in your replica set; the best practice is to run your backup from a secondary node so you don’t put unnecessary pressure on your primary node. In addition to the above methods, you can also take snapshots of your data, as long as you pause writes to the node that you’re snapshotting by freezing the file system, to ensure a consistent snapshot of your MongoDB database.

5) Schemaless is a Myth, Schemas Still Matter

MongoDB was originally touted as a schemaless database; this was attractive to developers who had long struggled to update and maintain their schemas in relational databases. But those schemas succeeded for good reasons in the early days of databases, and while MongoDB allowed you the flexibility to skip schema design and create your schema on the fly, this often led to poor-performing schema designs and anti-patterns. There are lots of stories out in the wild of users not enforcing any structure on their MongoDB data models and running into various performance problems as their schemas became unwieldy. Today, MongoDB supports JSON schema and schema validation. These approaches allow you to apply as much or as little structure to your schemas as is needed, so you still have the flexibility of MongoDB’s looser schema structure while enforcing schema rules that will keep your application performing well and your data model consistent.
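To illustrate the idea, here is a plain-Python approximation of what a $jsonSchema validator enforces. On a real deployment you would attach the schema to the collection on the server side rather than checking client-side; the validate helper and the schema document here are simplified for illustration.

```python
def validate(doc, schema):
    """Minimal $jsonSchema-style check: required fields plus per-field bsonType."""
    for field in schema.get("required", []):
        if field not in doc:
            return False
    types = {"string": str, "int": int, "object": dict, "array": list}
    for field, rules in schema.get("properties", {}).items():
        if field in doc and "bsonType" in rules:
            if not isinstance(doc[field], types[rules["bsonType"]]):
                return False
    return True

user_schema = {"required": ["name"],
               "properties": {"name": {"bsonType": "string"},
                              "phone": {"bsonType": "array"}}}
print(validate({"name": "John Doe", "phone": []}, user_schema))  # True
print(validate({"phone": "+1-585-555-5555"}, user_schema))       # False: name missing
```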

Another aspect that is affected by poor schema design in MongoDB is its aggregation framework.   The aggregation framework lets you do more analytical query patterns such as sorting, grouping, and some useful things such as unwinding of arrays and supporting joins and a whole lot more.  Without a good schema, these sorts of queries can really suffer poor performance.

MongoDB was also popular for its lack of support for joins; joins can be expensive, and avoiding them allowed MongoDB to run quite fast. Though MongoDB has since added $lookup to support left outer joins, embedded documents are the typical workaround, and this approach comes with its pros and cons. As with relational databases, embedding documents essentially creates a one-to-N relationship; this is covered in greater detail in this blog. In MongoDB, the value of N matters: if it’s one-to-few (2-10) or one-to-many (10-1000), this can still be a good schema design as long as your indexes support your queries. When you get to one-to-tons (10,000+), you need to consider things like MongoDB’s 16 MB limit per document, or use references to the parent document.

Examples of each of these approaches:

One-to-Few, consider having multiple phone numbers for a user:

{  "_id" : ObjectId("1234567890"),
  "name" :  "John Doe",
  "phone" : [     
     { "type" : "mobile", "number" : "+1-585-555-5555" }, 
     { "type" : "work", "number" : "+1-585-555-1111"}  

One-to-Many, consider a parts list for a product with multiple items:

{ "_id" : ObjectId("123"),
 “Item” : “Widget”,
 “Price” : 100 

{  "_id" : ObjectId("0123456789"), 
   "manufacturer" : "Percona",
   "catalog_number" : 123456,
   "parts" : [    
      { “item”: ObjectID("123")},  
      { “item”: ObjectID("456")},
      { “item”: ObjectID("789")},

One-to-Tons, consider a social network type application:

{  "_id" : ObjectId("123"),
   "username" : "Jane Doe" 

{  "_id" : ObjectId("456"),
   "username" : "Eve DBA"

{  "_id" : ObjectId("9876543210"),
   "username" : "Percona",
   "followers" : [     


Bonus Topic: Transactions

MongoDB has supported multi-document transactions since MongoDB 4.0 (replica sets) and MongoDB 4.2 (sharded clusters). Transactions in MongoDB work quite similarly to how they work in relational databases: either all actions in the transaction succeed or they all fail. Here’s an example of a transaction in MongoDB:

rs1:PRIMARY> session = db.getMongo().startSession()
rs1:PRIMARY> session.startTransaction()
rs1:PRIMARY> session.getDatabase("percona").test.insert({today : new Date()})
WriteResult({ "nInserted" : 1 })
rs1:PRIMARY> session.getDatabase("percona").test.insert({some_value : "abc"})
WriteResult({ "nInserted" : 1 })
rs1:PRIMARY> session.commitTransaction()

Transactions can be quite powerful if they are truly needed for your application, but be aware of the performance implications: none of a transaction’s writes take effect until the whole transaction succeeds, and they are all discarded if it fails.
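The all-or-nothing behavior can be sketched with a toy in-memory session. This has nothing to do with the server’s actual implementation; it only models the commit vs. abort semantics of the shell example above.

```python
class ToySession:
    """Toy model of transaction semantics: staged writes apply only on commit."""
    def __init__(self, collection):
        self.collection = collection
        self.staged = None
    def start_transaction(self):
        self.staged = []
    def insert(self, doc):
        self.staged.append(doc)          # staged, not yet visible
    def commit_transaction(self):
        self.collection.extend(self.staged)  # all writes land together
        self.staged = None
    def abort_transaction(self):
        self.staged = None               # all writes vanish together

test_coll = []
s = ToySession(test_coll)
s.start_transaction(); s.insert({"today": "2020-01-01"}); s.abort_transaction()
print(len(test_coll))  # 0: aborted writes never land
s.start_transaction(); s.insert({"some_value": "abc"}); s.commit_transaction()
print(test_coll)  # [{'some_value': 'abc'}]
```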


While MongoDB is easy to get started with and has a low barrier to entry, just like any other database there are some key things that you, as a developer, should consider before deploying MongoDB. We’ve covered enabling authentication and authorization to ensure you have a secure application and don’t leak data. We’ve highlighted using highly available connection strings, whether to your replica set, a mongos node list, or via SRV records, to ensure you’re always connecting to the appropriate nodes. We’ve discussed the balancing act of selecting a shard key, considering the impact on both reads and writes and understanding the tradeoffs you are making. The importance of backups, and of not relying on replication as a backup method, was also covered. Finally, we covered the fact that schemas still matter with MongoDB, but that you have flexibility in defining how rigid yours is. We hope this gives you a better idea of what to consider when deploying MongoDB for your next application. Thanks for reading!


5 Things DBAs Should Know Before Deploying MongoDB

MongoDB is one of the most popular databases and is one of the easiest NoSQL databases to set up for a DBA. Oftentimes, relational DBAs will inherit MongoDB databases without knowing all there is to know about MongoDB. I encourage you to check out our Percona blogs as we have lots of great information for those both new and experienced with MongoDB. Don’t let the ease of installing MongoDB fool you; there are things you need to consider before deploying MongoDB. Here are five things DBAs should know before deploying MongoDB in production.

1) Enable Authentication and Authorization

Security is of utmost importance to your database.  Gone are the days when security was disabled by default for MongoDB, but it’s still easy to start MongoDB without security.  Without security and with your database bound to a public IP, anyone can connect to your database and steal your data.  By simply adding some important MongoDB security configuration options to your configuration file, you can ensure that your data is protected.  You can also configure MongoDB to utilize native LDAP for authentication.  Setting up authentication and authorization is one of the simplest ways to ensure that your MongoDB database is secure.  The most important configuration option is turning on authorization which enables users and roles and requires you to authenticate and have the proper roles to access your data.

  security:
    authorization: enabled
    keyFile: /path/to/your.keyfile


2) Deploy Replica Sets at a Minimum

A great feature of MongoDB is how easy it is to set up replication.  Not only is it easy but it gives you built-in High Availability.  When your primary crashes or goes down for maintenance, MongoDB automatically fails over to one of your secondaries and your application is back online without any intervention. Replica Sets allow you to offload reads, as long as your application can tolerate eventual consistency.  Additionally, replica sets allow you to offload your backups to a secondary.  Another important feature to know about Replica Sets is that only one member of the Replica Set can accept client writes at a time; if you want multiple nodes to accept writes, then you should look into sharded clusters.  There are some tradeoffs with replica sets that you should also be aware of; when reading from a secondary, depending on your read preference, you may be reading stale data or data that hasn’t been acknowledged by all members of the replica set.

3) Backups

Backups are just as important with MongoDB as they are with any other database. There are tools like mongodump, mongoexport, Percona Backup for MongoDB, and Ops Manager (Enterprise Edition only) that support point-in-time recovery, oplog backups, hot backups, and full and incremental backups. Backups can be run from any node in your replica set; the best practice is to run your backup from a secondary node so you don’t put unnecessary pressure on your primary node. In addition to the above methods, you can also take snapshots of your data, as long as you pause writes to the node that you’re snapshotting before freezing the file system, to ensure a consistent snapshot of your MongoDB database.

4) Indexes Are Just as Important

Indexes are just as important for MongoDB as they are for relational databases. Just like with relational databases, indexes can help speed up your queries by reducing the amount of data your query has to scan and return. Indexes also help your working set fit more easily into memory. Pay attention to your query patterns and look for full collection scans in your logfiles to find queries that could benefit from an index. In addition, make sure that you follow the ESR rule when creating your indexes and examining your query patterns. Just like in relational databases, indexes aren’t free: MongoDB has to update indexes every time there’s an insert, delete, or update, so make sure your indexes are being used and aren’t unnecessarily slowing down your writes.
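The ESR rule is mechanical enough to sketch: order candidate index fields by how the query uses them, equality first, then sort, then range. The esr_index helper below is invented for illustration, not a MongoDB API.

```python
def esr_index(equality, sort, range_fields):
    """Order candidate index fields per the ESR rule: Equality, Sort, Range."""
    seen, ordered = set(), []
    for group in (equality, sort, range_fields):
        for field in group:
            if field not in seen:
                seen.add(field)
                ordered.append(field)
    return ordered

# For find({status: "A", qty: {$lt: 30}}).sort({order_date: -1}),
# equality on status, sort on order_date, range on qty:
print(esr_index(equality=["status"], sort=["order_date"], range_fields=["qty"]))
# ['status', 'order_date', 'qty']
```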

5) Whenever Possible, Working Set < RAM

As with any database, fitting your data into RAM will allow for faster reads than from disk, and MongoDB is no different. Knowing how much data MongoDB has to read for your queries can help you determine how much RAM you should allocate to your database. For example, if your working set is 50 GB and you only have 32 GB of RAM allocated to the WiredTiger cache, MongoDB is going to constantly read in more of the working set from disk and page data out to make room, and this will lead to slower query performance and a system that is constantly using all of its available memory for its cache. Conversely, if you have a 50 GB working set and 100 GB of RAM for your WiredTiger cache, the working set will fit completely into memory, and as long as other data doesn’t page it out of the cache, MongoDB should serve reads much faster as everything will be in memory.
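The sizing arithmetic is worth doing explicitly. By default, WiredTiger’s cache is the larger of 50% of (RAM − 1 GB) and 256 MB; the sketch below applies that formula, but double-check the default against your MongoDB version’s documentation.

```python
GIB = 1024 ** 3

def default_wt_cache_bytes(ram_bytes):
    """Default WiredTiger cache size: the larger of 50% of (RAM - 1 GiB) and 256 MiB."""
    return max(int(0.5 * (ram_bytes - GIB)), 256 * 1024 * 1024)

# A 32 GB host leaves a ~15.5 GB cache by default: too small for a 50 GB working set.
print(default_wt_cache_bytes(32 * GIB) / GIB)        # 15.5
# Tiny hosts bottom out at the 256 MB floor.
print(default_wt_cache_bytes(1 * GIB) / (1024 ** 2))  # 256.0
```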

Avoiding reads from disk is not always possible or practical. To assess how many reads you’re doing from disk, you can use commands like db.serverStatus() or measure it with tools like Percona Monitoring and Management. Use the MongoDB-recommended XFS file system, and whenever possible use solid-state drives (SSDs) to speed up disk performance. Because of database workloads, you should also make sure to provision enough IOPS for your database servers to avoid disk bottlenecks as much as possible. Systems with multiple cores also tend to work better, as this allows faster checkpointing for the WiredTiger storage engine, but you will still be limited by how fast your disks can go.


While MongoDB is easy to get started with and has a lower barrier to entry, just like any other database there are some key things that you, as a DBA, should consider before deploying MongoDB.   We’ve covered enabling authentication and authorization to ensure you have a secure deployment.  We’ve also covered deploying replica sets to ensure high availability, backups to ensure your data stays safe, the importance of indexes to your query performance and having sufficient memory to cover your queries, and what to do if you don’t. We hope this helps you have a better idea of how to deploy MongoDB and to be able to support it better.  Thanks for reading!


Ask An Expert On Percona’s Community Forum!

Have you recently visited Percona’s Community Forum? It’s your hub for direct Q&A with top database experts, including Percona CEO Peter Zaitsev and CTO Vadim Tkachenko. Last quarter over 450 users participated, including 45 engineers from Percona’s staff. Since it was first launched in 2006 and revamped earlier this year, our Forum has built up a mountain of easily-searched database expertise.

This free online community is open to everyone from newbies to maestros. Register as a user and then ask your question, or visit the unanswered question list and help someone else. You’ll feel great doing so, plus you’ll earn points and advance in rank, building your online reputation as users like vaibhav_upadhyay40, Fan, Federico Razzoli, Ghan, djp, Björn, rdab100, Stateros, gordan, Venkat, and others have done.

Our Forum Q&A covers all flavors of MySQL, MongoDB, and PostgreSQL, as well as key utilities like ProxySQL and PMM. Plus it’s the only site announcing all of Percona’s new software releases and Percona’s occasional urgent software alerts. You can even personalize email notifications to track only certain categories and skip the rest.  And we promise to never spam you!


So what’s the fine print? Most importantly, remember that it’s volunteers who answer, so answers may not be timely, might not be in-depth, or occasionally might never arrive at all. Response time currently averages five days. Remember never to share confidential details, as everything on the Forum is public.

The Forum operates on a spirit of self-help, so do a reasonable amount of your own research before popping off a question. And if you get help, try to give back help too. Everything depends on a spirit of reciprocity and goodwill. The Forum is for those who love databases and want to share their enthusiasm with others, and for those who want to master the database craft.

Finally, as our lawyers make us say, answers are “as-is” meaning Percona does not guarantee accuracy and disclaims all liability. Our Help Articles, Code of Conduct, and Terms of Service explain it all.

So register as a user and give the Percona Forum a try! As always we welcome your suggestions and feedback to Community-Team@Percona.com.


The Criticality of a Kubernetes Operator for Databases

As a Solutions Engineer at Percona, one of my responsibilities is to support our customers as they investigate new and emerging technologies. This affords me the opportunity to speak with many current and new customers who partner with Percona. The topic of Kubernetes is becoming more popular as companies investigate and adopt this technology. The issue most companies encounter is architecting a stateful database that doesn’t fall victim to an environment tuned for ephemeral workloads. This introduces a level of complexity: how do you run a stateful database in an inherently stateless world, when databases are not natively designed for that?

To make your life easier, as a part of the Percona Cloud-Native Autonomous Database Initiative, our engineering teams have built two Kubernetes Operators: the Percona Kubernetes Operator for Percona XtraDB Cluster and the Percona Kubernetes Operator for Percona Server for MongoDB, which allow Kubernetes Pods to be destroyed, moved, or created with no impact to the application. For an overview of Kubernetes, you can read my previous blog, Introduction to Percona Kubernetes Operator for Percona XtraDB Cluster, which covers this topic. It’s common for companies new to Kubernetes to attempt to run their databases in Kubernetes the same way they would in a traditional environment. But this is not advised, as it introduces the possibility of data loss and is not recommended for production workloads. Why is this dangerous, and how has Percona solved it?

Appropriate Workloads for Kubernetes

Kubernetes is not the answer for everyone. It’s not even the answer for most people. Do not be misled into thinking that moving a database into Kubernetes is going to solve any of your problems. Before you consider moving your database into Kubernetes, ensure the rest of your application is cloud-native and can be used with Kubernetes. Moving your database to Kubernetes should happen after you have started scaling elastically, both vertically and horizontally, and need to orchestrate that scaling to control costs.

As more companies are moving to Kubernetes something has to happen to the legacy workloads. Oftentimes we see a lift and shift mentality into Kubernetes, which can be dangerous or cause more work than expected. We have seen two primary ideal use cases for moving database workloads to Kubernetes: Microservices and Unified Abstraction Layer.

Monolithic, large datasets can undermine some of Kubernetes’ strong points: self-healing and availability. This can be an issue due to the time it takes to physically transmit data to a new Pod instance as it joins the database cluster. If your dataset is too large, this process is slow due to physical limitations and hurts both the performance and the availability of your database. Microservices are a great fit due to their relatively smaller datasets, which allows Kubernetes automation to work well with the dataset size.

Companies looking to take full advantage of cloud-native applications and databases can be a really good fit for Kubernetes as well. If you truly want the ability to deploy and run your databases anywhere utilizing the concept of a Unified Abstraction Layer, Kubernetes is a great option. You can move your databases to anywhere that is running Kubernetes and know it will work.

We talked about large unsharded datasets and the limitations Kubernetes presents when handling them, but we should mention a few more workloads better suited to traditional platforms. Applications with throughput sensitivity may not do well on Kubernetes, or may not be cost-effective there. Kubernetes is fundamentally designed for container orchestration and is not designed to handle highly performant databases that require low latency. This may be possible to achieve, but at what cost? The same applies to highly performant distributed applications. Lowest latency across all nodes is not a core tenet of Kubernetes, so ensure you have planned and tested against this before you move everything over to Kubernetes.

Pods Are Cattle, Not Pets

If you’re not familiar with Pets vs. Cattle, it’s a DevOps concept that differentiates deployment methodologies: unique servers that require attention when issues arise (pets) versus servers that can be replaced with a copy if issues arise (cattle). Due to the nature of how Kubernetes operates, Pods can be destroyed, spun up, and moved at any time due to factors outside of the application’s control, much like how cattle are treated. Kubernetes uses a scheduler which, by design, can destroy and recreate Pods to meet the configuration needs of your Kubernetes cluster. This is great for stateless applications, as any failure will result in the Pod containing the application being destroyed and recreated, eliminating the need for human interaction and greatly speeding up the path to resolution. This isn’t ideal for databases, as you don’t want your database to suddenly stop working, halt the application, and introduce the potential for lost or corrupted data. One of the tools Kubernetes provides to help combat this is the StatefulSet, which keeps a Pod’s identity assigned to it as it is destroyed and re-created. This helps facilitate stateful workloads, but how does this come into play with high availability and utilizing the automation aspects of Kubernetes?

Databases Are Pets, Not Cattle

Databases by design need to keep their identity, information, and, most importantly, their data safe and accessible at all times. They are the backbone of the application, as they are the source of truth an application relies on for normal processing. Any errors in their operation will quickly stop an application from functioning. They are important, to say the least. So how can we safely run databases in Kubernetes and still ensure highly available database deployments? By using StatefulSets and Persistent Volumes we can maintain data integrity, but we need an additional set of hands to take on database administrator tasks such as ensuring failover happens and that database members are recovered and re-join the highly available architecture, along with other technology-specific functions. Fortunately, Kubernetes is extensible and has Operators, which aim to automate the key tasks of a human operator managing a service or set of services.

Automation, Automation, Automation

We know the complexities of running a database (safely) in Kubernetes and some of the concepts used to help bridge the gaps between automation and traditional human functions. With the help of Percona’s Kubernetes Operators, we can safely run databases the way they were intended to run. Percona’s Kubernetes Operators are able to automate tasks that are usually done by a database administrator such as:

  • Fully automated deployments with strict consistency and no single point of failure
  • Automated scaling with the ability to change the size parameter to add or remove members of a Cluster or Replica-Set
  • Fully automated backups and restores
  • Fully automated self-healing by automatically recovering from the failure of a single Cluster or Replica-Set member
  • Automated password rotation for system users
  • Simplified updates
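For example, scaling is driven by the size field in the cluster's custom resource; the Operator reconciles the running members to match it. A hedged sketch of such a cr.yaml fragment for the PXC Operator (the apiVersion and values are illustrative; check the Operator documentation for your version):

```yaml
# Fragment of a Percona XtraDB Cluster custom resource (cr.yaml).
# Changing spec.pxc.size and re-applying the manifest tells the
# Operator to add or remove cluster members automatically.
apiVersion: pxc.percona.com/v1
kind: PerconaXtraDBCluster
metadata:
  name: cluster1
spec:
  pxc:
    size: 5   # scale the cluster, e.g. from the default 3 to 5 members
```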

Always Use a Kubernetes Operator

With the complexities of running a highly available database environment and the inherent dangers introduced by using the dynamic Kubernetes environment, an Operator should always be used when deploying databases in Kubernetes. Fortunately, Percona has already solved this by providing Percona Kubernetes Operator for Percona XtraDB Cluster and Percona Kubernetes Operator for Percona Server for MongoDB. Percona provides full support for databases running in Kubernetes with the Percona Operators. If you are interested in learning more or obtaining support or professional services to maximize your database deployments, please reach out to us.


Grab Your Percona Swag – For Free!


Would you like to get the latest in Percona gear 100% free, shipped to you anywhere in the world? Maybe that sounds too good to be true, but it’s true!  It’s easy and takes as little as 20 minutes to earn your swag. Here are some examples of the swag items you can claim:


So what’s the catch? Percona software products are now listed on four online software directories, but our listings are too new to have accumulated many user reviews. We need reviews!

So our offer is simple. You write one review, you pick one Percona swag item. You write two reviews, you pick two. Seven reviews, pick seven pieces of swag, our limit. But you must post your reviews by November 15, 2020!

Any meaningful review earns swag, be it positive, negative, or mixed. Write whatever you believe; only write something! There’s no swag for a review that gives a rating but says nothing at all or nothing meaningful, so make those reviews count!

Here’s where to post reviews:

Product Capterra   G2           TrustRadius   SourceForge
Percona Monitoring and Management Capterra G2 TrustRadius SourceForge
Percona Server For MySQL Capterra G2 TrustRadius SourceForge
Percona XtraDB Cluster Capterra G2 TrustRadius SourceForge
Percona XtraBackup Capterra G2 TrustRadius SourceForge
Percona Distribution for PostgreSQL n/a G2 TrustRadius SourceForge
Percona Backup for MongoDB n/a G2 TrustRadius SourceForge
Percona Server for MongoDB n/a n/a TrustRadius SourceForge
Percona Kubernetes Operator for Percona XtraDB Cluster n/a G2 TrustRadius SourceForge

You can review several different products and post them on one site, or you can write one product review and post it on multiple sites.  Or post any combination of reviews, up to a max of seven.  The more reviews you post, the more swag delivered to your home address for free, courtesy of Percona.

To claim your swag, write to <community-team@percona.com>.  Include:

  • Links to each review you posted.
  • Your postal mailing address.
  • Your phone number (for delivery use only, never for marketing)

For t-shirt orders, also state:

  • Color (White, Black, or Blue)
  • Size (Small, Medium, Large, or Extra Large)

It’s that simple! Start writing now!


Deploying Percona Kubernetes Operators with OpenEBS Local Storage


Network volumes in Kubernetes provide great flexibility, but still, nothing beats local volumes from direct-attached storage when it comes to database performance.

I want to explore ways to deploy both Percona Kubernetes Operators (Percona Kubernetes Operator for Percona XtraDB Cluster and Percona Kubernetes Operator for Percona Server for MongoDB) using local volumes, both on bare-metal and in cloud deployments.

Simple Ways

There are two ways available out of the box to deploy using local storage, which you can use immediately.

You can specify the volume specification in the cluster deployment yaml, using either hostPath:

        hostPath:
          path: /data
          type: Directory

or emptyDir (which is somewhat equivalent to ephemeral storage in EC2):

      emptyDir: {}

While this will work, it is a very rigid way to force local storage, and it is not really the “Kubernetes way”, as we will lose the capability to manage data volumes.  We want to see a more uniform way with Persistent Volumes and Persistent Volumes Claims.

Persistent Volumes

Recognizing limitations with hostPath and emptyDir, Kubernetes introduced Local Persistent Volumes.

Unfortunately, while this will work for the simple deployment of single pods, it does not work with dynamically created volumes which we need for Operators. We need the support of Dynamic Volume Provisioning.

There are several projects to combine Dynamic Volume Provisioning with Local Persistent Volumes, but I did not have much success with them, and the only project which worked for me is OpenEBS.

OpenEBS provides much more than just Local Persistent Volumes, but in this blog, I want to cover only OpenEBS Local PV Hostpath.

OpenEBS Local PV Hostpath

This is actually quite simple, and this is what I like about OpenEBS. First, we will define storage classes:

For the local nvme storage:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: localnvme
  annotations:
    openebs.io/cas-type: local
    cas.openebs.io/config: |
      - name: StorageType
        value: hostpath
      - name: BasePath
        value: /data/openebs
provisioner: openebs.io/local
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer

And for the local ssd:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: localssd
  annotations:
    openebs.io/cas-type: local
    cas.openebs.io/config: |
      - name: StorageType
        value: hostpath
      - name: BasePath
        value: /mnt/data/ebs
provisioner: openebs.io/local
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer

And after that, we can deploy Operators using the StorageClass, for example in the cluster deployment yaml:

        storageClassName: localnvme
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 200Gi

The cluster will be deployed using localnvme StorageClass, and using space on /data/openebs.

Now we can observe used volumes with kubectl get pv:

NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                        STORAGECLASS       REASON   AGE
pvc-325e11ff-9082-432c-b484-c0c7a3d1c949   200Gi      RWO            Delete           Bound    pxc/datadir-cluster2-pxc-2   localssd                    2d18h
pvc-3f940d14-8363-450e-a238-a9ff09bee5bc   200Gi      RWO            Delete           Bound    pxc/datadir-cluster3-pxc-1   localnvme                   43h
pvc-421cbf46-73a6-45f1-811e-1fca279fe22d   200Gi      RWO            Delete           Bound    pxc/datadir-cluster2-pxc-1   localssd                    2d18h
pvc-53ee7345-bc53-4808-97d6-2acf728a57d7   200Gi      RWO            Delete           Bound    pxc/datadir-cluster3-pxc-0   localnvme                   43h
pvc-b98eca4a-03b0-4b5f-a152-9b45a35767c6   200Gi      RWO            Delete           Bound    pxc/datadir-cluster2-pxc-0   localssd                    2d18h
pvc-fbe499a9-ecae-4196-b29e-c4d35d69b381   200Gi      RWO            Delete           Bound    pxc/datadir-cluster3-pxc-2   localnvme                   43h

There we can see data volumes from two deployed clusters. OpenEBS Local PV Hostpath is now my go-to method to deploy clusters with the local storage.


Webinar October 12: Percona Server for MongoDB – A Solid and Compatible Alternative to MongoDB


This webinar will be delivered in Italian.

MongoDB is the most used document-oriented database, but some of the most requested features are available only in the Enterprise version. Percona Server for MongoDB (PSMDB) is an open source, solid, and 100% compatible alternative to MongoDB. In this webinar, delivered in Italian, you will learn about PSMDB, and our Italian financial customer, Fabrick, will discuss its experience with Percona Support Services.

Please join Corrado Pandiani, Sr. Consultant, and Maurizio La Plagia, Head of IT PFM & Smart Banking at Fabrick, on Monday, October 12, 2020, at 9 am EDT / 3 pm CET for the webinar “Percona Server for MongoDB: A Solid and Compatible Alternative to MongoDB“.

Register for Webinar

If you can’t attend, sign up anyway and we’ll send you the slides and recording afterward.


New MongoDB Exporter Released with Percona Monitoring and Management 2.10.0


With Percona Monitoring and Management (PMM) 2.10.0, Percona is releasing a new MongoDB exporter for Prometheus. It is a complete rewrite from scratch with a totally new approach to collecting and exposing metrics from MongoDB diagnostic commands.

The MongoDB exporter in the 0.11.x branch exposes only a static list of handpicked metrics with custom names and labels. The new exporter uses a totally different approach: it exposes ALL available metrics returned by MongoDB internal diagnostic commands and the metric naming (or renaming) follows concrete rules that apply the same for all metrics.

For example, if we run

db.getSiblingDB('admin').runCommand({"getDiagnosticData": 1});

 the command returns a structure that looks like this:

{
    "data" : {
        "start" : ISODate("2020-08-23T22:25:26Z"),
        "serverStatus" : {
            "start" : ISODate("2020-08-23T22:25:26Z"),
            "host" : "f9cd25606ada",
            "version" : "4.2.8",
            "process" : "mongod",
            "pid" : NumberLong(1),
            "uptime" : 186327,
            "uptimeMillis" : NumberLong(186327655),
            "uptimeEstimate" : NumberLong(186327),
            "localTime" : ISODate("2020-08-23T22:25:26Z"),
            "asserts" : {
                "regular" : 0,
                "warning" : 0,
                "msg" : 0,
                "user" : 62,
                "rollovers" : 0
            },
            "connections" : {
                "current" : 25,
                "available" : 838835,
                "totalCreated" : 231,
                "active" : 1
            },
            "electionMetrics" : {
                "stepUpCmd" : {
                    "called" : NumberLong(0),
                    "successful" : NumberLong(0)
                },
                "priorityTakeover" : {
                    "called" : NumberLong(0),
                    "successful" : NumberLong(0)
                },
                "catchUpTakeover" : {
                    "called" : NumberLong(0),
                    "successful" : NumberLong(0)
                },
                ...

In the new exporter, the approach to exposing all metrics is to traverse the result of the diagnostic commands looking for values to expose. In the example above, we have serverStatus; inside it we find asserts, and inside asserts there are metrics to expose (because they are numbers): regular, warning, msg, user, and rollovers. In this method of metric gathering, the metric name is the composition of the metric keys we had to follow, so the traversal produces names like serverStatus.asserts.regular.
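As a hypothetical sketch of this traversal idea (in Python for illustration, not the exporter's actual Go code), we can walk the diagnostic-command result and emit a dotted key path for every numeric leaf value:

```python
# Sketch only: recursively walk a diagnostic-command result and
# collect a dotted key path for every numeric leaf value.
def walk(doc, path=""):
    metrics = {}
    for key, value in doc.items():
        full = f"{path}.{key}" if path else key
        if isinstance(value, dict):
            metrics.update(walk(value, full))
        elif isinstance(value, (int, float)) and not isinstance(value, bool):
            metrics[full] = value
    return metrics

server_status = {"serverStatus": {"asserts": {"regular": 0, "user": 62}}}
print(walk(server_status))
# {'serverStatus.asserts.regular': 0, 'serverStatus.asserts.user': 62}
```

This is why new metrics added by future MongoDB versions show up automatically: anything numeric in the command output becomes a metric.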
If we open the web interface for the exporter at http://localhost:9216, we won’t find the metric I just mentioned. Why? Because, to make the metric names shorter and to be able to group some metrics under the same name, the new exporter implements a metric name prefix rename and converts some metric suffixes to labels.

Prefix Renaming Table

The string mongodb is prepended to all metrics as the Prometheus job name. Unlike the < v2.0 mongodb_exporter, there won’t be mongod or mongos in the job name.

Metric Prefix New Prefix
serverStatus.wiredTiger.transaction ss_wt_txn
serverStatus.wiredTiger ss_wt
serverStatus ss
replSetGetStatus rs
systemMetrics sys
local.oplog.rs.stats.wiredTiger oplog_stats_wt
local.oplog.rs.stats oplog_stats
collstats_storage.wiredTiger collstats_storage_wt
collstats_storage.indexDetails collstats_storage_idx
collStats.storageStats collstats_storage
collStats.latencyStats collstats_latency

Prefix Labeling Table

Metric Prefix Label
collStats.storageStats.indexDetails. index_name
globalLock.activeQueue. count_type
globalLock.locks. lock_type
serverStatus.asserts. assert_type
serverStatus.connections. conn_type
serverStatus.globalLock.currentQueue. count_type
serverStatus.metrics.commands. cmd_name
serverStatus.metrics.cursor.open. csr_type
serverStatus.metrics.document. doc_op_type
serverStatus.opLatencies. op_type
serverStatus.opReadConcernCounters. concern_type
serverStatus.opcounters. legacy_op_type
serverStatus.opcountersRepl. legacy_op_type
serverStatus.transactions.commitTypes. commit_type
serverStatus.wiredTiger.concurrentTransactions. txn_rw_type
serverStatus.wiredTiger.perf. perf_bucket
systemMetrics.disks. device_name

Because of the metric renaming and labeling, we will find that the metric serverStatus.asserts.regular will become mongodb_ss_asserts, and the last part of the metric name will be used as a label:

# HELP mongodb_ss_asserts serverStatus.asserts.
# TYPE mongodb_ss_asserts untyped
mongodb_ss_asserts{assert_type="msg"} 0
mongodb_ss_asserts{assert_type="regular"} 0
mongodb_ss_asserts{assert_type="rollovers"} 0
mongodb_ss_asserts{assert_type="user"} 62
mongodb_ss_asserts{assert_type="warning"} 0
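The renaming and labeling rules above can be illustrated with a short sketch (helper names and the truncated tables here are invented for this example; they are not the exporter's real identifiers):

```python
# Illustration of the rules described above: apply the longest matching
# entry from the prefix renaming table, and (for serverStatus.asserts.*)
# convert the last key into an assert_type label.
PREFIXES = {
    "serverStatus.wiredTiger.transaction": "ss_wt_txn",
    "serverStatus.wiredTiger": "ss_wt",
    "serverStatus": "ss",
}

def rename(raw):
    # longest prefix first so serverStatus.wiredTiger wins over serverStatus
    for old, new in sorted(PREFIXES.items(), key=lambda kv: -len(kv[0])):
        if raw.startswith(old):
            raw = "mongodb_" + new + raw[len(old):]
            break
    return raw.replace(".", "_")

def to_labeled(raw):
    # suffix-to-label conversion, per the prefix labeling table
    if raw.startswith("serverStatus.asserts."):
        return "mongodb_ss_asserts", {"assert_type": raw.rsplit(".", 1)[1]}
    return rename(raw), {}

print(to_labeled("serverStatus.asserts.regular"))
# ('mongodb_ss_asserts', {'assert_type': 'regular'})
```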

Advantages Of The New Exporter

Since the new exporter will automatically collect all available metrics, it is now possible to collect new metrics in the PMM dashboards and as new MongoDB versions expose new metrics, they will automatically become available without the need to manually add metrics and upgrade the exporter. Also, since there are clear rules for metric renaming and how labels are created, metric names are more consistent even when new metrics are added.

How It Works

As mentioned previously, this new exporter exposes all metrics by traversing the JSON output of each MongoDB diagnostic command.
Those commands are:

{"getDiagnosticData": 1}

which includes:

  • replSetGetStatus (will be fetched separately if MongoDB <= v3.6)
  • Oplog collection stats
  • OS system metrics (e.g. disk usage)

{"replSetGetStatus": 1}

{"serverStatus": 1}

It is also possible to specify a list of database.collection pairs to get stats for collection usage and indexes by running these commands for each collection:

{"$collStats": {"latencyStats": {"histograms": true}}}


Enabling Compatibility Mode

The new exporter has a parameter that enables a special compatibility mode. In this mode, the old exporter metrics are also exposed along with the new metrics. This way, existing dashboards should work without requiring any change, and it is the default mode in PMM 2.10.0.

Example: in compatibility mode, a new-style metric like

# HELP mongodb_ss_wt_txn_transaction_checkpoint_min_time_msecs serverStatus.wiredTiger.transaction.
# TYPE mongodb_ss_wt_txn_transaction_checkpoint_min_time_msecs untyped
mongodb_ss_wt_txn_transaction_checkpoint_min_time_msecs 14

will be also exposed using the old naming convention as

# HELP mongodb_mongod_wiredtiger_transactions_checkpoint_milliseconds mongodb_mongod_wiredtiger_transactions_checkpoint_milliseconds
# TYPE mongodb_mongod_wiredtiger_transactions_checkpoint_milliseconds untyped
mongodb_mongod_wiredtiger_transactions_checkpoint_milliseconds{type="max"} 71
mongodb_mongod_wiredtiger_transactions_checkpoint_milliseconds{type="min"} 14

and the suffix is used as a label.


When starting the exporter in debug mode, it will output the result of each diagnostic command to the standard error. This makes it easier to check the values returned by each command and to verify the metric renaming and values.


This exporter is going to be released as part of PMM starting with version 2.10.0, and will also be released as an independent exporter on the repo’s releases page.

Currently, the new exporter resides in its own branch and the old exporter is in the master branch, but the old exporter code will be moved to a release branch and the master branch will be used for the new exporter code.

How to Contribute

Using the Makefile

In the main directory, there is a Makefile to help you with development and testing tasks. Run make without parameters to get help. These are the available options:

Command Description
init Install linters
build Build the binaries
format Format source code
check Run checks/linters
check-license Check license in headers.
help Display this help message.
test Run all tests (need to start the sandbox first)
test-cluster Starts MongoDB test cluster. Use env var TEST_MONGODB_IMAGE to set flavor and version.
Example: TEST_MONGODB_IMAGE=mongo:3.6 make test-cluster
test-cluster-clean Stops MongoDB test cluster

Initializing the Development Environment

First, you need to have Go and Docker installed on your system, and then in order to install tools to format, test, and build the exporter, you need to run this command:

make init

It will install the Go tools the project uses to format, lint, and build the exporter.

Starting the Sandbox

The testing sandbox starts MongoDB instances as follows:

  • 3 instances for shard 1 at ports 17001, 17002, 17003
  • 3 instances for shard 2 at ports 17004, 17005, 17006
  • 3 config servers at ports 17007, 17008, 17009
  • 1 mongos server at port 17000
  • 1 stand-alone instance at port 27017

All instances are currently running without user and password so, for example, to connect to the mongos you can just use:

mongo mongodb://localhost:17000

The sandbox can be started using the provided Makefile target:

make test-cluster

and it can be stopped using:

make test-cluster-clean


Running Tests

To run the unit tests, just run

make test


Formatting Code

Before submitting code, please run make format to format the code according to the standards.

Known Issues

  • Replicaset lag sometimes shows strange values.
  • Elements that used metrics which no longer exist have been removed from dashboards

So, these dashboards have been updated:

  • dashboard “MongoDB RocksDB Details” –> removed dashboard completely
  • dashboard “MongoDB MMAPv1 Details”, element “MMAPv1 Journaling Time” –> removed the element from the dashboard
  • dashboard “MongoDB MMAPv1 Details”, element “MMAPv1 Lock Ratios”, parameter “Lock (pre-3.2 only)” –> removed the chart from the element on the dashboard

Final Thoughts

This new exporter shouldn’t affect any existing dashboard since the compatibility mode exposes all old-style metrics along with the new ones. We deprecated only a few metrics that were already meaningless because they are only valid and exposed for old MongoDB versions like mongodb_mongod_global_lock_ratio and mongodb_version_info.

At Percona, we built this new MongoDB exporter with the idea in mind of having an exporter capable of exposing all available metrics, with no hard-coded metrics and not tied to any particular MongoDB version. We would like to encourage users to help us by using this version and providing feedback. We also accept (and encourage) code fixes and improvements.

Also, learn more about the new Percona Customer Portal rolling out starting with the 2.10.0 release of Percona Monitoring and Management.


MongoDB 101: 5 Configuration Options That Impact Security (And How to Set Them)


As with any database platform, MongoDB security is of paramount importance to keeping your data safe.  MongoDB and other data platforms like Redis and Elasticsearch are often in the news for data breaches caused by misconfigured settings in the database.  So how do you keep your company’s data from being compromised and avoid becoming another statistic?

We’ll show you five configuration options for your MongoDB deployment, along with the supporting options they require, that will help keep your data secure: least-privileged access for users and applications, modern authentication methods, data encrypted on disk and over the wire, and visibility into who is accessing your data. These configuration options span the following security areas: authentication, authorization, encryption, and auditing.

MongoDB Security Overview

MongoDB security is composed of four main areas of focus: authentication (who), authorization (what), encryption (how), and auditing (when).  This section is intended to give you a high-level overview of the different security focus areas for MongoDB.

Authentication is how you identify yourself to MongoDB.  There are many ways to authenticate oneself to a MongoDB database, from standard username and password using the SCRAM (Salted Challenge Response Authentication Mechanism) protocol, certificate-based authentication to tying into an identity management solution such as LDAP (Lightweight Directory Access Protocol), Active Directory and Kerberos.

Authorization is how MongoDB determines what you, as an authenticated user, can do.  MongoDB supports authorization using the RBAC (Role-Based Access Control) method.  MongoDB lets you create roles which are groupings of privileges that any user granted that role can do.  MongoDB comes with a comprehensive set of built-in roles as well as giving you the flexibility to create your own custom roles.  Common roles like read-only and write are there of course, but also ones useful for monitoring any node, backup and restore, and user administration. Additionally, MongoDB also supports LDAP authorization which allows you to sync LDAP groups with roles to simplify management.

Encryption is how your data can be encrypted while in flight (Transport) and while on disk (At Rest).   Transport encryption keeps your data encrypted while it is sent to and from your application to MongoDB.  MongoDB supports TLS/SSL encryption for data in-flight using x.509 certificates, and here’s an example of setting up Transport Encryption.  Encryption at Rest keeps your data safe from an external party who might get a copy of your data files, as they’ll be completely unreadable in their encrypted form.

Auditing shows you when users connected, when privileges were changed, various admin events, when users attempt something they shouldn’t, etc., based on filter criteria you can set.  This is helpful in compliance situations where you have to be able to show who was on the database at what time, what privileges they had, and when privileges were changed.

Tip: Don’t confuse auditing with a way to track users’ activities in real time; rather, think of it as a way to create a tamper-proof, append-only log file that you can go back to and see what was happening, and by whom, during a specific time.

Tip:  Auditing is an expensive operation and will impact performance; be sure that you’re getting value from it and that your IT Compliance team is able to actively use it before setting it up.

Configuration Options

So while knowing the important areas of MongoDB Security is great, how do we ensure they are enabled or set up correctly?  And which ones are the most important?  We’ll now go through 5 configuration options that will help you secure your MongoDB environment!  We’ll also list some required configuration options that will work in conjunction with our 5 most important configuration options to keep your data safe.  We’ll break these configuration options into their security focus areas.

MongoDB uses a configuration file in the YAML file format.   The configuration file is usually found in the following locations, depending on your Operating System:


  • On Linux, a default /etc/mongod.conf configuration file is included when using a package manager to install MongoDB.
  • On Windows, a default <install directory>/bin/mongod.cfg configuration file is included during the installation.
  • On macOS, a default /usr/local/etc/mongod.conf configuration file is included when installing from MongoDB’s official Homebrew tap.


Our first configuration option, security.authorization, is perhaps the most important for enabling security on your MongoDB deployment.  This configuration option not only enforces authentication, so that a user must at least log in using credentials, but it also simultaneously engages role-based access control, which limits what a user can do.

security:
  authorization: enabled


Tip:  If you set this configuration option up before creating a user in MongoDB, you could use the localhost exception in order to create your first user.  This is especially helpful in cases of automation or other situations where you want to have all your configuration options configured only once and then come in and add users.

There are several other authentication configuration options that are required for your MongoDB deployment:

  • security.clusterAuthMode – The authentication mode used between replica set or sharded cluster nodes to authenticate.  This will typically be either keyFile or x509.  Acceptable values are:
    • keyFile – default, only accepts keyfiles
    • x509 – uses only x509 certificates for cluster authentication
    • sendKeyFile – only used when transitioning from keyFile to x509 certificate authentication.   Accepts keyFiles and x509 certificates
    • sendX509 – only used when transitioning from x509 certificate authentication to keyFile authentication.   Accepts x509 certificates and keyFiles
  • security.keyFile – sets the destination of the keyFile if using keyFile based authentication.  Note that the user MongoDB is running as must have read-only or read/write level permissions on the keyfile, with no permissions granted to other users.
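Putting the options above together, a hedged mongod.conf fragment for keyFile-based cluster authentication might look like this (the keyfile path is a placeholder):

```yaml
security:
  authorization: enabled
  clusterAuthMode: keyFile
  keyFile: /etc/mongodb/mongodb-keyfile   # readable only by the mongod user
```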


The security.authorization configuration option that enables authentication is also the most important configuration option for setting up authorization, since it also gives us roles that allow us to authorize users to have specific permissions.   While there are some authorization configuration options for the built-in authorization system, most of the options are useful for integrating LDAP or other external authentication mechanisms with your roles.  Read more about setting up LDAP Authorization, as well as a great blog post discussing how to set it up.


Transport Encryption ensures that your data is encrypted between your application and the database; it can also be used to encrypt data between members of your replica set or sharded cluster.  The most important configuration option here is net.tls.mode. This configuration option is new in MongoDB 4.2; prior to MongoDB 4.2, it was named net.ssl.mode.   It decides how strictly you want to enforce TLS encryption.   The options for this configuration option are:

  • disabled – signifies that there is no encryption whatsoever.
  • requireTLS – signifies that all traffic, regardless of origin, is encrypted.  requireTLS is the most secure setting for this configuration option.
  • allowTLS – signifies that there is no encryption going on between members of the replica set or sharded cluster, but the DB server will accept both encrypted and non-encrypted traffic from the application hosts.  Only used for transitioning between disabled to requireTLS in a rolling restart fashion.
  • preferTLS – signifies that there is encryption going on between members of the replica set or sharded cluster and that the DB server will accept both encrypted and non-encrypted traffic from the application hosts. Only used for transitioning between disabled to requireTLS in a rolling restart fashion.
net:
  tls:
    mode: requireTLS


Additional required configuration options for transport encryption are:

  • net.tls.certificateKeyFile – location of the .pem file with the certificate and it’s key to be used for application connections.  Note that the user MongoDB is running as must have read permissions on this file.
  • net.tls.clusterFile – location of the .pem file used for transport encryption between replica set or cluster members.   Note that the user MongoDB is running as must have read permissions on this file.
  • net.tls.CAFile – location of the .pem file with the root certificate chain from the Certificate Authority.  Note that the user MongoDB is running as must have read permissions on this file.
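Combining these, a sketch of the net.tls block in mongod.conf could look like the following (the file paths are placeholders):

```yaml
net:
  tls:
    mode: requireTLS
    certificateKeyFile: /etc/mongodb/tls/mongod.pem   # server certificate + key
    clusterFile: /etc/mongodb/tls/cluster.pem         # intra-cluster certificate
    CAFile: /etc/mongodb/tls/ca.pem                   # CA certificate chain
```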

Data at Rest Encryption ensures that your data can’t be read by someone who steals your database’s data files unless they also steal the key.  This prevents someone from reading your MongoDB data files at the file system level.  Here the most important configuration option is security.enableEncryption.

security:
  enableEncryption: true

Additional required configuration options for Data At Rest Encryption are:

  • security.encryptionCipherMode – form of encryption to use, options are AES256-CBC and AES256-GCM
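Together, a minimal Data at Rest Encryption fragment might be (the cipher value shown is one of the two options listed above):

```yaml
security:
  enableEncryption: true
  encryptionCipherMode: AES256-CBC   # or AES256-GCM
```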

Percona Server for MongoDB Specific Configuration Options:

Percona Server for MongoDB has integration with HashiCorp Vault for secret management for your Data at Rest Encryption. Read the documentation for Vault and Using Vault to Store the Master Key for Data at Rest Encryption on Percona Server for MongoDB.

Important configuration options for the Vault Integration are:

  • security.vault.serverName – server name that your vault server is on
  • security.vault.port – port for vault connectivity
  • security.vault.tokenFile – location of file with vault token
  • security.vault.secret – location for secrets; since these are set up per node, this should have a distinguishing characteristic such as the node name in it
  • security.vault.serverCAFile – location of CAFile (Certificate Authority) on your local mongodb node
  • security.vault.rotateMasterKey – only used to rotate the master key
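As an illustrative sketch, the Vault options above could be combined like this (server name, port, and paths are placeholder values):

```yaml
security:
  enableEncryption: true
  vault:
    serverName: vault.example.com
    port: 8200
    tokenFile: /etc/mongodb/vault/token
    secret: secret_v2/data/psmdb-cluster/node1   # should be unique per node
    serverCAFile: /etc/mongodb/vault/ca.crt
```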

MongoDB Enterprise Specific Data At Rest Encryption Configuration Options:

Currently, MongoDB Enterprise does not have Vault Integration for Encryption at rest except in MongoDB Atlas.  MongoDB Enterprise does support the KMIP protocol and you can integrate MongoDB with any Key Management tool that utilizes the KMIP protocol.  Documentation can be found here.

Important configuration options to support Key Management through the KMIP protocol are:

  • security.kmip.serverName – server name where your Key Management tool resides
  • security.kmip.port – port for your key management tool
  • security.kmip.serverCAFile – path on your MongoDB hosts of a CA file (Certificate Authority) for secure connection to your Key Management Tool
  • security.kmip.clientCertificateFile – path to the client certificate used for authentication to your Key Management tool
  • security.kmip.rotateMasterKey – only used to rotate the master key


Auditing allows IT security compliance teams to track and log activities run against the MongoDB database.  There are several important auditing configuration options for MongoDB; auditLog.filter is the most important, as it determines exactly what is captured in your audit log.

  filter:  { <field1>: <expression1>, ... }


For example, if we only wanted an audit log entry created every time someone created or dropped a collection, we would set auditLog.filter as follows:

  filter: { atype: { $in: [ "createCollection", "dropCollection" ] } }


If we wanted to audit everyone with a specific role, we could set auditLog.filter as follows:

  filter:{ roles: { role: "readWrite", db: "appdb" } }


Additional required configuration options for auditing are:

  • auditLog.destination – whether the audit log is written to a file, the console, or syslog
  • auditLog.path – if outputting to a file, the destination directory and file name of the audit log.  Note that the user MongoDB runs as must have read and write permissions to this directory.
  • auditLog.format – the output format of the audit log; options are JSON and BSON, with JSON being the more commonly used format.
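
Tying these options together, an auditLog section of mongod.conf might look like the sketch below. The path is illustrative; note that in the YAML configuration file the filter document is enclosed in quotes as a string.

```yaml
# Hypothetical mongod.conf fragment auditing collection creation and removal
auditLog:
  destination: file
  format: JSON
  path: /var/log/mongodb/auditLog.json   # mongod user needs read/write here
  filter: '{ atype: { $in: [ "createCollection", "dropCollection" ] } }'
```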

Finally, while auditing is important for tracking and logging activity in your database, including access to PII or other sensitive data, you don’t want to expose PII in your audit or other log files.  To accomplish this you must set up log redaction on your MongoDB replica set or sharded cluster.  The important configuration option for log redaction is security.redactClientLogData, which accepts the values true and false.

  redactClientLogData: true


In this blog post, we’ve gone over five important MongoDB configuration options to help you build a more secure MongoDB deployment, as well as some supporting configuration options that help those five keep your data secure.  We hope these configuration options help you build more secure MongoDB deployments and avoid becoming a data breach statistic.  Thanks for reading!

Additional Resources:

MongoDB Top Five Security Concerns

Securing MongoDB Webinar

Our latest resource, Using Open Source Software to Ensure the Security of Your MongoDB Database, documents how to deploy a secure, enterprise-grade, MongoDB deployment without worrying about license fees, giving organizations the flexibility to deploy consistent models across their entire infrastructure.

Download “Using Open Source Software to Ensure the Security of Your MongoDB Database”
