Sep 21, 2018

End-of-Life Reminder for Percona Server for MongoDB 3.2

As noted on our Software and Platform Lifecycle page, Percona Server for MongoDB 3.2 will reach end of life on September 30, 2018. According to the Percona Services Lifecycle Policy, customers with an active support contract with Percona will continue to receive support; however, end-of-life software receives no new builds, bug fixes, security fixes, or features.

Each software package Percona produces has a defined migration pathway for upgrading to a newer version. If you are currently running Percona Server for MongoDB 3.2, we encourage you to upgrade to version 3.4 or newer. Please reach out to us if you need assistance migrating to a newer release; Percona is happy to help.

Sep 21, 2018

This Week in Data with Colin Charles 53: It’s MariaDB Week PLUS Percona Live Europe Update

Join Percona Chief Evangelist Colin Charles as he covers happenings, gives pointers, and provides musings on the open source database community.

This week is clearly what I’d call a “MariaDB week” — plenty of announcements coming from MariaDB Corporation and MariaDB Foundation.

It started with Alibaba Cloud and MariaDB Announce the Launch of ApsaraDB RDS for MariaDB TX, which makes Alibaba Cloud the first public cloud to offer the enterprise offering of MariaDB, MariaDB TX 3.0. As of the announcement it is not yet available to roll out from the interface, but I expect it will be soon. Exciting, as you can already get MariaDB Server on Amazon RDS for MariaDB, and you can join the waitlist preview for Azure.

MariaDB Corporation has received more funding from ServiceNow Ventures in the Series C round, and has gained a new board member in Pat Casey. ServiceNow is a user of MariaDB, and “ServiceNow’s platform runs on up to 85,000 MariaDB databases that serve more than 25 billion queries per hour.” There was an excellent keynote session at M|18 about how ServiceNow uses MariaDB. The Register refers to this as “protecting ServiceNow’s toolchain”.

For good measure, MariaDB acquired Clustrix as well. This is the second acquisition after MammothDB earlier in the year. It is worth reading the TechCrunch take on this. Clustrix, a Y Combinator company, has been around since 2006 and raised $72 million. The price of the acquisition was not announced. For a bit of behind-the-scenes chatter from ex-employee shareholders, Hacker News delivers.

From a MariaDB Foundation standpoint, we see Otto Kekäläinen, the MariaDB Foundation CEO, stepping down. Thanks for all your hard work, Otto! And maybe you missed it, but not long ago, Percona Became a Bronze Sponsor of MariaDB Foundation.

Speaking of conferences, the tutorial schedule and a sneak peek of sessions for Percona Live Europe Frankfurt have been announced. In addition, the Call for Papers – 2019 Annual MariaDB User Conference closes October 31, 2018.

Releases

Link List

Upcoming Appearances

Feedback

I look forward to feedback/tips via e-mail at colin.charles@percona.com or on Twitter @bytebot.

Sep 18, 2018

Percona Server for MongoDB 3.2.21-3.12 Is Now Available

Percona announces the release of Percona Server for MongoDB 3.2.21-3.12 on September 18, 2018. Download the latest version from the Percona web site or the Percona Software Repositories.

Percona Server for MongoDB is an enhanced, open source, and highly-scalable database that is a fully-compatible, drop-in replacement for MongoDB 3.2 Community Edition. It supports MongoDB 3.2 protocols and drivers.

Percona Server for MongoDB extends MongoDB Community Edition functionality by including the Percona Memory Engine, as well as several enterprise-grade features. It requires no changes to MongoDB applications or code.

This release updates the MongoDB portion of the Percona Server for MongoDB code base to 3.2.21.

The Percona Server for MongoDB 3.2.21-3.12 release notes are available in the official documentation.

Sep 17, 2018

Upcoming Webinar Wed 9/19: Percona Server for MongoDB vs MongoDB Enterprise

Please join Percona’s Senior Support Engineer, Adamo Tonete, as he presents Percona Server for MongoDB vs MongoDB Enterprise on Wednesday, September 19th, 2018, at 12:30 PM PDT (UTC-7) / 3:30 PM EDT (UTC-4).

In this webinar we will evaluate MongoDB Community, MongoDB Enterprise Advanced, and Percona Server for MongoDB side by side to better inform the decision-making process. Percona Server for MongoDB offers enterprise-grade features and works with tools like Percona Monitoring and Management to monitor MongoDB. What’s more, it is 100% free and open source. It is also worth mentioning that MongoDB Enterprise Advanced comes with encryption and LDAP authorization. We will walk through those features and much more.

Register for this webinar to learn more about Percona Server for MongoDB vs MongoDB Enterprise.

Sep 17, 2018

Percona Server for MongoDB 3.6.7-1.5 Is Now Available

Percona announces the release of Percona Server for MongoDB 3.6.7-1.5 on September 17, 2018. Download the latest version from the Percona web site or the Percona Software Repositories.

Percona Server for MongoDB is an enhanced, open source, and highly-scalable database that is a fully-compatible, drop-in replacement for MongoDB 3.6 Community Edition. It supports MongoDB 3.6 protocols and drivers.

Percona Server for MongoDB extends MongoDB Community Edition functionality by including the Percona Memory Engine, as well as several enterprise-grade features. Percona Server for MongoDB requires no changes to MongoDB applications or code.

This release is based on MongoDB 3.6.7 and does not include any additional changes.

The Percona Server for MongoDB 3.6.7-1.5 release notes are available in the official documentation.

Sep 13, 2018

Percona Toolkit 3.0.12 Is Now Available

Percona announces the release of Percona Toolkit 3.0.12 on September 13, 2018.

Percona Toolkit is a collection of advanced open source command-line tools, developed and used by the Percona technical staff, that are engineered to perform a variety of MySQL®, MongoDB® and system tasks that are too difficult or complex to perform manually. With over 1,000,000 downloads, Percona Toolkit supports Percona Server for MySQL, MySQL®, MariaDB®, Percona Server for MongoDB and MongoDB.

Percona Toolkit, like all Percona software, is free and open source. You can download packages from the website or install from official repositories.
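For example, on a Debian or Ubuntu system with the Percona repository already configured, installing the toolkit is a single package (a minimal sketch; the percona-toolkit package is also available for yum-based systems):

$ sudo apt-get install percona-toolkit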

This release includes the following changes:

Fixed bugs:

  • PT-1611: pt-archiver failed to output UTF-8 characters.
  • PT-1603: pt-table-sync incorrectly calculated chunk boundaries in case of unsorted ENUM fields in indexes.
  • PT-1574: pt-online-schema-change failed on tables with a nullable unique key and a row with NULL values.
  • PT-1572: Usage of ENUM fields in keys was improved, resulting in higher speed for expressions with sorted ENUM items.
  • PT-1422: pt-mysql-summary could hang when NULL values appear in the processlist Time column.

Documentation change:

  • PT-1321: The required MySQL privileges are now detailed in the pt-online-schema-change documentation.

Help us improve our software quality by reporting any bugs you encounter using our bug tracking system.

Sep 8, 2018

Percona Monitoring and Management (PMM) 1.14.1 Is Now Available

Percona Monitoring and Management (PMM) is a free and open-source platform for managing and monitoring MySQL® and MongoDB® performance. You can run PMM in your own environment for maximum security and reliability. It provides thorough time-based analysis for MySQL® and MongoDB® servers to ensure that your data works as efficiently as possible.

We’re releasing hotfix 1.14.1 to address three issues found post-release of 1.14.0:

  • PMM-2963: Upgrading to PMM 1.14.0 fails due to attempting to create already existing Dashboard
    • Our upgrade script incorrectly tried to create dashboards that already existed, generating the failure message:
      A folder or dashboard in the general folder with the same name already exists
  • PMM-2958: Grafana did not update to 5.1 when upgrading from versions older than 1.11
    • We identified a niche case where PMM installations that were upgraded from < 1.11 would fail to upgrade Grafana to the correct release, 5.1 (users were left on Grafana 5.0)

Help us improve our software quality by reporting any Percona Monitoring and Management bugs you encounter using our bug tracking system.

Sep 6, 2018

MongoDB: Investigate Queries with explain() and Index Usage (part 2)

This is the second part of a two-part series. In MongoDB: index usage and MongoDB explain() we introduced the main index types supported by MongoDB, and how to create and use them. In this second article, we are going to look at some examples of how to use the explain() method to investigate queries. Do you need to optimize MongoDB queries? You’ll see how to use explain() to find out how your query uses indexes. Or, perhaps, that it doesn’t!

What is explain()

explain() is a method that you can apply to simple queries or to cursors to investigate the query execution plan, that is, how MongoDB resolves a query. Looking at all the information returned by explain(), we can find out things like:

  • how many documents were scanned
  • how many documents were returned
  • which index was used
  • how long the query took to be executed
  • which alternative execution plans were evaluated

…and other useful information.

The aim of using explain() is to find out how to improve the query, for example by creating missing indexes, or by rewriting it to make better use of existing indexes. If you are familiar with the EXPLAIN command in MySQL, the goals of MongoDB’s explain() method are exactly the same.

Explainable object

You can apply the explain() method to a query or a cursor in the following way, as per any other method:

MongoDB > db.mycollection.find().explain()

However, the preferred way to investigate queries in the mongo shell is first to create the explainable object.

We can create an explainable object like this:

MongoDB > var myexp = db.mycollection.explain()

Once you have created the explainable object, then any kind of operation can be run against it to investigate a query or cursor execution plan. For example:

MongoDB > myexp.find()
MongoDB > myexp.update()
MongoDB > myexp.remove()
MongoDB > myexp.aggregate()

Restaurants test database

To see some examples we need a collection with some data. For our purposes we can use the New York restaurants database, which you can download from the following URL:

https://www.w3resource.com/mongodb-exercises/retaurants.zip

Unzip the archive, and import the JSON file into MongoDB:

$ unzip retaurants.zip
$ mongoimport --host 127.0.0.1 -d test -c restaurants --file retaurants.json
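Once imported (and after switching to the test database, as shown below), a quick count should confirm that everything loaded:

MongoDB > db.restaurants.count()
3772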

This collection contains 3772 documents, all the restaurants in New York City. Here is a sample document.

MongoDB > use test
switched to db test
MongoDB > db.restaurants.find().pretty().limit(1)
{
	"_id" : ObjectId("5b71b3281979e24aa18c0121"),
	"address" : {
		"building" : "1007",
		"coord" : [
			-73.856077,
			40.848447
		],
		"street" : "Morris Park Ave",
		"zipcode" : "10462"
	},
	"borough" : "Bronx",
	"cuisine" : "Bakery",
	"grades" : [
		{
			"date" : ISODate("2014-03-03T00:00:00Z"),
			"grade" : "A",
			"score" : 2
		},
		{
			"date" : ISODate("2013-09-11T00:00:00Z"),
			"grade" : "A",
			"score" : 6
		},
		{
			"date" : ISODate("2013-01-24T00:00:00Z"),
			"grade" : "A",
			"score" : 10
		},
		{
			"date" : ISODate("2011-11-23T00:00:00Z"),
			"grade" : "A",
			"score" : 9
		},
		{
			"date" : ISODate("2011-03-10T00:00:00Z"),
			"grade" : "B",
			"score" : 14
		}
	],
	"name" : "Morris Park Bake Shop",
	"restaurant_id" : "30075445"
}

Explain() verbosity

The explain() method has three verbosity modes.

  • queryPlanner – this is the default mode. At this level, explain() provides information about the winning plan, including the index used, or whether a collection scan is needed (COLLSCAN).
  • executionStats – this mode includes all the information provided by queryPlanner, plus the statistics. Statistics include details such as the number of documents examined and returned, the execution time in milliseconds, and so on.
  • allPlansExecution – this mode includes all the information provided by executionStats, plus information about the discarded execution plans.

We’ll see the explain() output in the following examples.
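As a quick reference, the verbosity mode is passed as an argument when creating the explainable object (a sketch, reusing the restaurants collection imported above):

MongoDB > db.restaurants.explain("queryPlanner").find( { borough: "Queens" } )
MongoDB > db.restaurants.explain("executionStats").find( { borough: "Queens" } )
MongoDB > db.restaurants.explain("allPlansExecution").find( { borough: "Queens" } )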

Example 1

It’s time to use the restaurants collection to run our first example: find out all the restaurants in the Manhattan borough.

Let’s first create the explainable object with the executionStats mode.

MongoDB > var exp = db.restaurants.explain("executionStats")

Then let’s investigate the query.

MongoDB > exp.find( { borough: "Manhattan"} )
{
	"queryPlanner" : {
		"plannerVersion" : 1,
		"namespace" : "test.restaurants",
		"indexFilterSet" : false,
		"parsedQuery" : {
			"borough" : {
				"$eq" : "Manhattan"
			}
		},
		"winningPlan" : {
			"stage" : "COLLSCAN",
			"filter" : {
				"borough" : {
					"$eq" : "Manhattan"
				}
			},
			"direction" : "forward"
		},
		"rejectedPlans" : [ ]
	},
	"executionStats" : {
		"executionSuccess" : true,
		"nReturned" : 1883,
		"executionTimeMillis" : 1,
		"totalKeysExamined" : 0,
		"totalDocsExamined" : 3772,
		"executionStages" : {
			"stage" : "COLLSCAN",
			"filter" : {
				"borough" : {
					"$eq" : "Manhattan"
				}
			},
			"nReturned" : 1883,
			"executionTimeMillisEstimate" : 0,
			"works" : 3774,
			"advanced" : 1883,
			"needTime" : 1890,
			"needYield" : 0,
			"saveState" : 29,
			"restoreState" : 29,
			"isEOF" : 1,
			"invalidates" : 0,
			"direction" : "forward",
			"docsExamined" : 3772
		}
	},
	"serverInfo" : {
		"host" : "Admins-MBP",
		"port" : 27017,
		"version" : "3.6.4",
		"gitVersion" : "d0181a711f7e7f39e60b5aeb1dc7097bf6ae5856"
	},
	"ok" : 1
}

Here we can see the output of explain(). First of all, we can clearly distinguish the “queryPlanner” and the “executionStats” sections. We won’t describe every value, as some are really intuitive. Let’s have a look at just some of them:

queryPlanner.winningPlan.stage = “COLLSCAN”
This provides very important information about the winning plan: it means that MongoDB needs to do a collection scan. The query is not optimized, because all the documents must be read.

queryPlanner.winningPlan.rejectedPlans = []
It’s empty: there are no rejected plans. When the query needs to be executed with COLLSCAN, the only execution plan is the winning plan. We don’t have any indexes in the collection, apart from the one on _id, so there are no other execution plans.

executionStats.nReturned = 1883
The number of documents returned is 1883, the number of restaurants located in Manhattan.

executionStats.totalDocsExamined = 3772
The number of documents examined is exactly the number of documents in the collection. This was expected, because the query uses COLLSCAN.

executionStats.executionTimeMillis = 1
The execution time of the query: just 1 millisecond. This might seem good, but remember that this is the time needed to scan just 3772 documents, a very small test collection. Think about what this time could become for a collection with millions of documents!

How can we improve the query?

In this case it’s simple. Let’s try to create a single field index on borough, the only condition we have in the find(). Then let’s try to explain the same query again.

MongoDB > db.restaurants.createIndex( {borough: 1} )
{
	"createdCollectionAutomatically" : false,
	"numIndexesBefore" : 1,
	"numIndexesAfter" : 2,
	"ok" : 1
}
MongoDB > exp.find( { borough: "Manhattan"} )
{
	"queryPlanner" : {
		"plannerVersion" : 1,
		"namespace" : "test.restaurants",
		"indexFilterSet" : false,
		"parsedQuery" : {
			"borough" : {
				"$eq" : "Manhattan"
			}
		},
		"winningPlan" : {
			"stage" : "FETCH",
			"inputStage" : {
				"stage" : "IXSCAN",
				"keyPattern" : {
					"borough" : 1
				},
				"indexName" : "borough_1",
				"isMultiKey" : false,
				"multiKeyPaths" : {
					"borough" : [ ]
				},
				"isUnique" : false,
				"isSparse" : false,
				"isPartial" : false,
				"indexVersion" : 2,
				"direction" : "forward",
				"indexBounds" : {
					"borough" : [
						"[\"Manhattan\", \"Manhattan\"]"
					]
				}
			}
		},
		"rejectedPlans" : [ ]
	},
	"executionStats" : {
		"executionSuccess" : true,
		"nReturned" : 1883,
		"executionTimeMillis" : 1,
		"totalKeysExamined" : 1883,
		"totalDocsExamined" : 1883,
		"executionStages" : {
			"stage" : "FETCH",
			"nReturned" : 1883,
			"executionTimeMillisEstimate" : 0,
			"works" : 1884,
			"advanced" : 1883,
			"needTime" : 0,
			"needYield" : 0,
			"saveState" : 14,
			"restoreState" : 14,
			"isEOF" : 1,
			"invalidates" : 0,
			"docsExamined" : 1883,
			"alreadyHasObj" : 0,
			"inputStage" : {
				"stage" : "IXSCAN",
				"nReturned" : 1883,
				"executionTimeMillisEstimate" : 0,
				"works" : 1884,
				"advanced" : 1883,
				"needTime" : 0,
				"needYield" : 0,
				"saveState" : 14,
				"restoreState" : 14,
				"isEOF" : 1,
				"invalidates" : 0,
				"keyPattern" : {
					"borough" : 1
				},
				"indexName" : "borough_1",
				"isMultiKey" : false,
				"multiKeyPaths" : {
					"borough" : [ ]
				},
				"isUnique" : false,
				"isSparse" : false,
				"isPartial" : false,
				"indexVersion" : 2,
				"direction" : "forward",
				"indexBounds" : {
					"borough" : [
						"[\"Manhattan\", \"Manhattan\"]"
					]
				},
				"keysExamined" : 1883,
				"seeks" : 1,
				"dupsTested" : 0,
				"dupsDropped" : 0,
				"seenInvalidated" : 0
			}
		}
	},
	"serverInfo" : {
		"host" : "Admins-MBP",
		"port" : 27017,
		"version" : "3.6.4",
		"gitVersion" : "d0181a711f7e7f39e60b5aeb1dc7097bf6ae5856"
	},
	"ok" : 1
}

Interpreting the output

Now, the output is completely different. Let’s have a look at some of the most relevant values.

queryPlanner.winningPlan.inputStage.stage = “IXSCAN”
This is very important. IXSCAN means that MongoDB no longer needs to do a collection scan: an index can be used to find the documents.

queryPlanner.winningPlan.inputStage.indexName = “borough_1”
The name of the index used. This is the default name of an index: the name of the field plus _1 for ascending or _-1 for descending order.

queryPlanner.winningPlan.inputStage.direction = “forward”
MongoDB traverses the index in a forward direction.

executionStats.nReturned = 1883
The number of documents returned. Obviously this is the same as before.

executionStats.totalKeysExamined = 1883
The number of keys examined in the index.

executionStats.totalDocsExamined = 1883
Now the number of documents examined corresponds to the number of elements examined in the index.

We have optimized the query.

Example 2

Now we would like to examine a query that finds all the restaurants with Italian cuisine and a grade score greater than 50.

MongoDB > var exp = db.restaurants.explain()
MongoDB > exp.find({$and: [  {"cuisine" : {$eq :"Italian"}},  {"grades.score" : {$gt : 50}}  ]  })
{
	"queryPlanner" : {
		"plannerVersion" : 1,
		"namespace" : "test.restaurants",
		"indexFilterSet" : false,
		"parsedQuery" : {
			"$and" : [
				{
					"cuisine" : {
						"$eq" : "Italian"
					}
				},
				{
					"grades.score" : {
						"$gt" : 50
					}
				}
			]
		},
		"winningPlan" : {
			"stage" : "COLLSCAN",
			"filter" : {
				"$and" : [
					{
						"cuisine" : {
							"$eq" : "Italian"
						}
					},
					{
						"grades.score" : {
							"$gt" : 50
						}
					}
				]
			},
			"direction" : "forward"
		},
		"rejectedPlans" : [ ]
	},
	"serverInfo" : {
		"host" : "Admins-MBP",
		"port" : 27017,
		"version" : "3.6.4",
		"gitVersion" : "d0181a711f7e7f39e60b5aeb1dc7097bf6ae5856"
	},
	"ok" : 1
}

We have a COLLSCAN again. Let’s try to improve the query by creating an index on the cuisine field.

MongoDB > var exp = db.restaurants.explain("executionStats")
MongoDB > db.restaurants.createIndex({cuisine:1})
{
	"createdCollectionAutomatically" : false,
	"numIndexesBefore" : 2,
	"numIndexesAfter" : 3,
	"ok" : 1
}
MongoDB > exp.find({$and: [  {"cuisine" : {$eq :"Italian"}},  {"grades.score" : {$gt : 50}}  ]  })
{
	"queryPlanner" : {
		"plannerVersion" : 1,
		"namespace" : "test.restaurants",
		"indexFilterSet" : false,
		"parsedQuery" : {
			"$and" : [
				{
					"cuisine" : {
						"$eq" : "Italian"
					}
				},
				{
					"grades.score" : {
						"$gt" : 50
					}
				}
			]
		},
		"winningPlan" : {
			"stage" : "FETCH",
			"filter" : {
				"grades.score" : {
					"$gt" : 50
				}
			},
			"inputStage" : {
				"stage" : "IXSCAN",
				"keyPattern" : {
					"cuisine" : 1
				},
				"indexName" : "cuisine_1",
				"isMultiKey" : false,
				"multiKeyPaths" : {
					"cuisine" : [ ]
				},
				"isUnique" : false,
				"isSparse" : false,
				"isPartial" : false,
				"indexVersion" : 2,
				"direction" : "forward",
				"indexBounds" : {
					"cuisine" : [
						"[\"Italian\", \"Italian\"]"
					]
				}
			}
		},
		"rejectedPlans" : [ ]
	},
	"executionStats" : {
		"executionSuccess" : true,
		"nReturned" : 6,
		"executionTimeMillis" : 4,
		"totalKeysExamined" : 325,
		"totalDocsExamined" : 325,
		"executionStages" : {
			"stage" : "FETCH",
			"filter" : {
				"grades.score" : {
					"$gt" : 50
				}
			},
			"nReturned" : 6,
			"executionTimeMillisEstimate" : 0,
			"works" : 326,
			"advanced" : 6,
			"needTime" : 319,
			"needYield" : 0,
			"saveState" : 2,
			"restoreState" : 2,
			"isEOF" : 1,
			"invalidates" : 0,
			"docsExamined" : 325,
			"alreadyHasObj" : 0,
			"inputStage" : {
				"stage" : "IXSCAN",
				"nReturned" : 325,
				"executionTimeMillisEstimate" : 0,
				"works" : 326,
				"advanced" : 325,
				"needTime" : 0,
				"needYield" : 0,
				"saveState" : 2,
				"restoreState" : 2,
				"isEOF" : 1,
				"invalidates" : 0,
				"keyPattern" : {
					"cuisine" : 1
				},
				"indexName" : "cuisine_1",
				"isMultiKey" : false,
				"multiKeyPaths" : {
					"cuisine" : [ ]
				},
				"isUnique" : false,
				"isSparse" : false,
				"isPartial" : false,
				"indexVersion" : 2,
				"direction" : "forward",
				"indexBounds" : {
					"cuisine" : [
						"[\"Italian\", \"Italian\"]"
					]
				},
				"keysExamined" : 325,
				"seeks" : 1,
				"dupsTested" : 0,
				"dupsDropped" : 0,
				"seenInvalidated" : 0
			}
		}
	},
	"serverInfo" : {
		"host" : "Admins-MBP",
		"port" : 27017,
		"version" : "3.6.4",
		"gitVersion" : "d0181a711f7e7f39e60b5aeb1dc7097bf6ae5856"
	},
	"ok" : 1
}

The query has improved: the newly created index cuisine_1 is used. But we still have 325 documents examined for only 6 documents returned. Let’s see if we can do better by creating a compound index on both fields in the condition: cuisine and grades.score.

MongoDB > db.restaurants.createIndex({cuisine:1, "grades.score":1})
{
	"createdCollectionAutomatically" : false,
	"numIndexesBefore" : 3,
	"numIndexesAfter" : 4,
	"ok" : 1
}
MongoDB > exp.find({$and: [  {"cuisine" : {$eq :"Italian"}},  {"grades.score" : {$gt : 50}}  ]  })
{
	"queryPlanner" : {
		"plannerVersion" : 1,
		"namespace" : "test.restaurants",
		"indexFilterSet" : false,
		"parsedQuery" : {
			"$and" : [
				{
					"cuisine" : {
						"$eq" : "Italian"
					}
				},
				{
					"grades.score" : {
						"$gt" : 50
					}
				}
			]
		},
		"winningPlan" : {
			"stage" : "FETCH",
			"inputStage" : {
				"stage" : "IXSCAN",
				"keyPattern" : {
					"cuisine" : 1,
					"grades.score" : 1
				},
				"indexName" : "cuisine_1_grades.score_1",
				"isMultiKey" : true,
				"multiKeyPaths" : {
					"cuisine" : [ ],
					"grades.score" : [
						"grades"
					]
				},
				"isUnique" : false,
				"isSparse" : false,
				"isPartial" : false,
				"indexVersion" : 2,
				"direction" : "forward",
				"indexBounds" : {
					"cuisine" : [
						"[\"Italian\", \"Italian\"]"
					],
					"grades.score" : [
						"(50.0, inf.0]"
					]
				}
			}
		},
		"rejectedPlans" : [
			{
				"stage" : "FETCH",
				"filter" : {
					"grades.score" : {
						"$gt" : 50
					}
				},
				"inputStage" : {
					"stage" : "IXSCAN",
					"keyPattern" : {
						"cuisine" : 1
					},
					"indexName" : "cuisine_1",
					"isMultiKey" : false,
					"multiKeyPaths" : {
						"cuisine" : [ ]
					},
					"isUnique" : false,
					"isSparse" : false,
					"isPartial" : false,
					"indexVersion" : 2,
					"direction" : "forward",
					"indexBounds" : {
						"cuisine" : [
							"[\"Italian\", \"Italian\"]"
						]
					}
				}
			}
		]
	},
	"executionStats" : {
		"executionSuccess" : true,
		"nReturned" : 6,
		"executionTimeMillis" : 1,
		"totalKeysExamined" : 7,
		"totalDocsExamined" : 6,
		"executionStages" : {
			"stage" : "FETCH",
			"nReturned" : 6,
			"executionTimeMillisEstimate" : 0,
			"works" : 9,
			"advanced" : 6,
			"needTime" : 1,
			"needYield" : 0,
			"saveState" : 0,
			"restoreState" : 0,
			"isEOF" : 1,
			"invalidates" : 0,
			"docsExamined" : 6,
			"alreadyHasObj" : 0,
			"inputStage" : {
				"stage" : "IXSCAN",
				"nReturned" : 6,
				"executionTimeMillisEstimate" : 0,
				"works" : 8,
				"advanced" : 6,
				"needTime" : 1,
				"needYield" : 0,
				"saveState" : 0,
				"restoreState" : 0,
				"isEOF" : 1,
				"invalidates" : 0,
				"keyPattern" : {
					"cuisine" : 1,
					"grades.score" : 1
				},
				"indexName" : "cuisine_1_grades.score_1",
				"isMultiKey" : true,
				"multiKeyPaths" : {
					"cuisine" : [ ],
					"grades.score" : [
						"grades"
					]
				},
				"isUnique" : false,
				"isSparse" : false,
				"isPartial" : false,
				"indexVersion" : 2,
				"direction" : "forward",
				"indexBounds" : {
					"cuisine" : [
						"[\"Italian\", \"Italian\"]"
					],
					"grades.score" : [
						"(50.0, inf.0]"
					]
				},
				"keysExamined" : 7,
				"seeks" : 1,
				"dupsTested" : 7,
				"dupsDropped" : 1,
				"seenInvalidated" : 0
			}
		}
	},
	"serverInfo" : {
		"host" : "Admins-MBP",
		"port" : 27017,
		"version" : "3.6.4",
		"gitVersion" : "d0181a711f7e7f39e60b5aeb1dc7097bf6ae5856"
	},
	"ok" : 1
}

Now, the winning plan uses the new compound index cuisine_1_grades.score_1, and we have only 6 documents examined. Note also that we now have a rejected plan: the one that uses the single field index cuisine_1 created earlier.

We have optimized the query.

Example 3

Let’s find all the restaurants in Brooklyn that don’t serve “American” cuisine and achieved a grade of “A”. We want to see the results ordered by cuisine, descending.

At this point you should be a little familiar with the explain() output, so in the next box we truncate it for the sake of simplicity, leaving only the relevant parts.

MongoDB > var exp = db.restaurants.explain("executionStats")
MongoDB > exp.find( {"cuisine" : {$ne : "American "},
... "grades.grade" :"A",
... "borough": "Brooklyn"}).sort({"cuisine":-1})
{
	"queryPlanner" : {
...
...
		"winningPlan" : {
			"stage" : "FETCH",
			"filter" : {
				"$and" : [
					{
						"borough" : {
							"$eq" : "Brooklyn"
						}
					},
					{
						"grades.grade" : {
							"$eq" : "A"
						}
					}
				]
			},
			"inputStage" : {
				"stage" : "IXSCAN",
				"keyPattern" : {
					"cuisine" : 1
				},
				"indexName" : "cuisine_1",
				"isMultiKey" : false,
				"multiKeyPaths" : {
					"cuisine" : [ ]
				},
				"isUnique" : false,
				"isSparse" : false,
				"isPartial" : false,
				"indexVersion" : 2,
				"direction" : "backward",
				"indexBounds" : {
					"cuisine" : [
						"[MaxKey, \"American \")",
						"(\"American \", MinKey]"
					]
				}
			}
		},
...
...
	"executionStats" : {
		"executionSuccess" : true,
		"nReturned" : 493,
		"executionTimeMillis" : 9,
		"totalKeysExamined" : 2518,
		"totalDocsExamined" : 2517,
...
...
	"serverInfo" : {
		"host" : "Admins-MBP",
		"port" : 27017,
		"version" : "3.6.4",
		"gitVersion" : "d0181a711f7e7f39e60b5aeb1dc7097bf6ae5856"
	},
	"ok" : 1
}

Looking at the winning plan, we see that we have IXSCAN on the index cuisine_1. The important thing to notice here is the choice of the index on the cuisine field: it was chosen because we used sort({cuisine:-1}). There is no SORT stage, because the documents are already extracted in order using the index. Notice, too, direction:”backward”: this is because we specified descending order in the query. If we execute a slightly different query, changing the sort to name:1 instead of cuisine:-1, we’ll see a completely different winning plan.

MongoDB > exp.find( {"cuisine" : {$ne : "American "},
... "grades.grade" :"A",
... "borough": "Brooklyn"}).sort({"name":1})
...
		"winningPlan" : {
			"stage" : "SORT",
			"sortPattern" : {
				"name" : -1
			},
			"inputStage" : {
				"stage" : "SORT_KEY_GENERATOR",
				"inputStage" : {
					"stage" : "FETCH",
					"filter" : {
						"$and" : [
							{
								"grades.grade" : {
									"$eq" : "A"
								}
							},
							{
								"$nor" : [
									{
										"cuisine" : {
											"$eq" : "American "
										}
									}
								]
							}
						]
					},
					"inputStage" : {
						"stage" : "IXSCAN",
						"keyPattern" : {
							"borough" : 1
						},
						"indexName" : "borough_1",
						"isMultiKey" : false,
						"multiKeyPaths" : {
							"borough" : [ ]
						},
						"isUnique" : false,
						"isSparse" : false,
						"isPartial" : false,
						"indexVersion" : 2,
						"direction" : "forward",
						"indexBounds" : {
							"borough" : [
								"[\"Brooklyn\", \"Brooklyn\"]"
							]
						}
					}
				}
			}
		},
...
...
	"executionStats" : {
		"executionSuccess" : true,
		"nReturned" : 493,
		"executionTimeMillis" : 13,
		"totalKeysExamined" : 684,
		"totalDocsExamined" : 684,
...
...

In this case, we have fewer documents examined, but since the cuisine_1 index cannot be used, a SORT stage is needed, and the index used to fetch the documents is borough_1. Although MongoDB has examined fewer documents, the execution time is worse because of the extra stage used to sort the documents.

Let’s return now to the original query. Notice that the number of documents examined is still too high (2517) compared to the number of documents returned (493). That’s not optimal. Let’s see if we can further improve the query by adding a compound index on (cuisine, borough, grades.grade).

MongoDB > db.restaurants.createIndex({cuisine:1,borough:1,"grades.grade":1})
{
	"createdCollectionAutomatically" : false,
	"numIndexesBefore" : 4,
	"numIndexesAfter" : 5,
	"ok" : 1
}
MongoDB > exp.find( {"cuisine" : {$ne : "American "},
... "grades.grade" :"A",
... "borough": "Brooklyn"}).sort({"cuisine":-1})
...
...		"winningPlan" : {
			"stage" : "FETCH",
			"filter" : {
				"$nor" : [
					{
						"cuisine" : {
							"$eq" : "American "
						}
					}
				]
			},
			"inputStage" : {
				"stage" : "IXSCAN",
				"keyPattern" : {
					"cuisine" : 1,
					"borough" : 1,
					"grades.grade" : 1
				},
				"indexName" : "cuisine_1_borough_1_grades.grade_1",
				"isMultiKey" : true,
				"multiKeyPaths" : {
					"cuisine" : [ ],
					"borough" : [ ],
					"grades.grade" : [
						"grades"
					]
				},
				"isUnique" : false,
				"isSparse" : false,
				"isPartial" : false,
				"indexVersion" : 2,
				"direction" : "backward",
				"indexBounds" : {
					"cuisine" : [
						"[MaxKey, \"American \")",
						"(\"American \", MinKey]"
					],
					"borough" : [
						"[\"Brooklyn\", \"Brooklyn\"]"
					],
					"grades.grade" : [
						"[\"A\", \"A\"]"
					]
				}
			}
		},
...
...
	"executionStats" : {
		"executionSuccess" : true,
		"nReturned" : 493,
		"executionTimeMillis" : 6,
		"totalKeysExamined" : 591,
		"totalDocsExamined" : 493,
...
...

Now, MongoDB uses the new index and does not need the extra sorting stage. The number of documents examined is the same as the number of documents returned. Also, the execution time is better.

We have optimized the query.

Example 4

This is the final example. Let’s find the restaurants whose grades array contains a grade of “A” and a score of 9 for a specific date.

MongoDB > exp.find({"grades.date": ISODate("2014-08-11T00:00:00Z"),
... "grades.grade":"A" ,
... "grades.score" : 9})
{
	"queryPlanner" : {
...
...
		"winningPlan" : {
			"stage" : "COLLSCAN",
			"filter" : {
				"$and" : [
					{
						"grades.date" : {
							"$eq" : ISODate("2014-08-11T00:00:00Z")
						}
					},
					{
						"grades.grade" : {
							"$eq" : "A"
						}
					},
					{
						"grades.score" : {
							"$eq" : 9
						}
					}
				]
			},
			"direction" : "forward"
		},
...

A COLLSCAN again. In the query we notice that all the conditions refer to embedded fields in an array object, so let’s try to create a multikey index. Let’s create the index on the date field only and see what happens.

MongoDB > db.restaurants.createIndex({"grades.date":1})
{
	"createdCollectionAutomatically" : false,
	"numIndexesBefore" : 5,
	"numIndexesAfter" : 6,
	"ok" : 1
}
MongoDB > exp.find({"grades.date": ISODate("2014-08-11T00:00:00Z"),
... "grades.grade":"A" ,
... "grades.score" : 9})
{
	"queryPlanner" : {
...
...
		"winningPlan" : {
			"stage" : "FETCH",
			"filter" : {
				"$and" : [
					{
						"grades.grade" : {
							"$eq" : "A"
						}
					},
					{
						"grades.score" : {
							"$eq" : 9
						}
					}
				]
			},
			"inputStage" : {
				"stage" : "IXSCAN",
				"keyPattern" : {
					"grades.date" : 1
				},
				"indexName" : "grades.date_1",
				"isMultiKey" : true,
				"multiKeyPaths" : {
					"grades.date" : [
						"grades"
					]
				},
				"isUnique" : false,
				"isSparse" : false,
				"isPartial" : false,
				"indexVersion" : 2,
				"direction" : "forward",
				"indexBounds" : {
					"grades.date" : [
						"[new Date(1407715200000), new Date(1407715200000)]"
					]
				}
			}
		},
		"rejectedPlans" : [ ]
	},
	"executionStats" : {
		"executionSuccess" : true,
		"nReturned" : 15,
		"executionTimeMillis" : 0,
		"totalKeysExamined" : 22,
		"totalDocsExamined" : 22,
...
...

MongoDB uses the index, and the winning plan is already good enough. As an exercise, you can try to create a compound index including the other embedded fields of the array, like {“grades.date”:1, “grades.grade”:1, “grades.score”:1}, and see what happens; a sketch of the command follows. You will probably see that the index we created on date alone is good enough. Enlarging the compound index will only generate rejected plans. This is because the date field is the most selective one.
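For reference, the exercise index could be created like this (the index name is generated automatically by MongoDB):

MongoDB > db.restaurants.createIndex( { "grades.date": 1, "grades.grade": 1, "grades.score": 1 } )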

Hint: when dealing with compound indexes, remember that the order of the fields is important. The first field should be the most selective one, and the last should be the least selective. Or, in cases where you don’t need to put a lot of fields in the index, the most selective ones are probably all you need to improve the queries.

Conclusion

That’s all folks. In this two-part series we have seen the indexes available in MongoDB and how to use explain() to investigate queries and find out how to improve their performance. We don’t expect that you’ll know everything by now, but we hope this will be a good starting point for you to practice MongoDB query optimization using explain(). You’ll find plenty more information about indexes and explain() in the manual and on the internet, if you are motivated to find out more.

While you are here

If you found this article interesting, you might like to check out some of our other resources. For example, you might find this recorded webinar MongoDB Sharding 101 by my colleague Adamo Tonete to be useful. Or perhaps Tim Vaillancourt’s MongoDB backup and recovery field guide would provide some useful information. If you want to catch our webinars as they happen, please subscribe to receive notifications in good time.

Sep 5, 2018

Percona Monitoring and Management (PMM) 1.14.0 Is Now Available

Percona Monitoring and Management (PMM) is a free and open-source platform for managing and monitoring MySQL® and MongoDB® performance. You can run PMM in your own environment for maximum security and reliability. It provides thorough time-based analysis for MySQL® and MongoDB® servers to ensure that your data works as efficiently as possible.

We’ve included a plethora of visual improvements in this release, including:

  • PostgreSQL Metrics Collection – Visualize PostgreSQL performance!
  • Identify New Queries in Query Analytics
  • New Dashboard: Compare System Parameters
  • New Dashboard: PERFORMANCE_SCHEMA Wait Events Analysis
  • Dashboard Updates – Advanced Data Exploration, MyRocks, TokuDB, InnoDB Metrics
  • Disable SSL between Prometheus and Exporters
  • Dashboards grouped by Folder – We’ve organized the Dashboard drop-down to present a cleaner interface

We addressed 16 new features and improvements, and fixed 20 bugs.

PostgreSQL Metrics Collection

The PMM team is very proud to bring you native support for PostgreSQL! We’ve shipped a new dashboard called PostgreSQL Overview, and we now provide the ability to add PostgreSQL instances as native, first-class citizens in PMM. This means you can add PostgreSQL + Linux monitoring capabilities through the standard pmm-admin add postgresql syntax; see our documentation links for more details!
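For example, adding a locally running PostgreSQL instance might look like this (a minimal sketch; connection flags such as host and credentials vary by environment, so check the documentation):

pmm-admin add postgresql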

Identify New Queries in Query Analytics

A long-awaited feature is the ability to visually identify new queries that have appeared in Query Analytics: those queries whose first seen time is within the selected time range. New queries are highlighted in a soft blue band for quick identification, and we’ve provided a button called First Seen which you can toggle to display only newly seen queries. A common use case for this feature is during code releases / deployments, when you want to review which new queries have been deployed and examine their performance characteristics.

New Dashboard: Compare System Parameters

We’ve introduced a new dashboard to let you compare System Parameters across multiple servers so at a glance you can understand provisioning or configuration differences. This might be of help when comparing a pool of identical slaves or other logical groups of instances.

New Dashboard: PERFORMANCE_SCHEMA Wait Events Analysis

We’ve added a new dashboard that lets you drill down into great detail on one or several PERFORMANCE_SCHEMA wait event categories in order to visualize them over time.

Dashboards grouped by Folder

At long last we’ve addressed the sprawl of the long list of 30+ Dashboards, and grouped them into categories which match the pre-existing right-side navigation system. This should leave you with a more organized, less cluttered list of Dashboards.

Dashboard Updates – Advanced Data Exploration, MyRocks, TokuDB, InnoDB Metrics

We’ve improved four dashboards with minor but helpful improvements:

  • Advanced Data Exploration dashboard with the addition of a graph element plotting the Metric Rates, which will help you understand the scraping efficiency of this metric series, or whether scrapes have failed / are failing.
  • InnoDB Metrics to present the graph elements in two columns – previously we’d inconsistently use three columns or two columns, making it hard to visualize trends across graphs.
  • MyRocks formulas were improved to be more precise
  • TokuDB has many new graphs to expand our coverage of this storage engine

Disable SSL between PMM Server and Exporters

Lastly, we’ve delivered on a feature request from a Percona Customer to optionally disable SSL between PMM Server and Exporters, with the advantage that if you do not need encrypted traffic for your metric series, you can reduce the CPU overhead on PMM Server. We’d love to hear your feedback on this feature!

pmm-admin add mysql --disable-ssl ...

New Features & Improvements

  • PMM-1362: Update descriptions on MySQL InnoDB Metrics (Advanced) Dashboard – thanks to Yves Trudeau
  • PMM-2304: New Dashboard: Compare System Parameters
  • PMM-2331: Advanced Data Exploration: add graph for showing exporter scrapers over time intervals
  • PMM-2356: Grouping dashboards in folders with Grafana5
  • PMM-2472: Identify new queries in QAN
  • PMM-2486: Allow the disabling of SSL by means of an option – thanks to Dongchan Sung
  • PMM-2597: Improve MyRocks dashboard – thanks to Przemek Malkowski for the valuable ideas
  • PMM-2704: PostgreSQL Metrics Collection
  • PMM-2772: Display InnoDB Metrics dashboard using consistent two column view
  • PMM-2775: Display PERFORMANCE_SCHEMA Wait Events Analysis
  • PMM-2769: Display TokuDB Dashboard Improvements
  • PMM-2797: MySQL Performance Schema – Filter HOSTS
  • PMM-2798: Filter hosts on NUMA dashboard
  • PMM-2833: Added granularity interval for scraping AWS API – thanks to Aleksandr Stepanov
  • PMM-2846: Increase MySQL Max Connections in PMM Server

Fixed Bugs

  • PMM-946: QAN sparklines drop to zero when data is not available
  • PMM-1987: pt-archiver rule for agent_log is not correct – thanks to Yves Trudeau for providing a fix
  • PMM-2013: Styling of QAN allows overlapping content
  • PMM-2028: nginx shows “414 Request-URI Too Large” for 150 hosts – thanks to Nickolay Ihalainen for the bug report and fix
  • PMM-2166: Add RDS instance page refresh will head to “Page Not Found” error
  • PMM-2457: Improve External Exporter help documentation for duration interval
  • PMM-2459: Cross-Graph Crosshair not enabled on the PXC/Galera Cluster
  • PMM-2477: Frequent Access Denied prompts while using AWS Marketplace image
  • PMM-2566: CPU busy graph shows incorrect values
  • PMM-2763: Unknown version is available on Update widget
  • PMM-2784: What’s new link on Update widget has wrong URL
  • PMM-2793: Network Overview needs to be in OS menu, not insights
  • PMM-2796: Overview NUMA Metrics dashboard should be renamed to NUMA Overview
  • PMM-2801: Prometheus Exporters Overview – CPU metrics are strange
  • PMM-2804: Prometheus Graph is empty with PMM 1.13
  • PMM-2811: SQL to get Hosts in QAN – thanks to Forums member Fan
  • PMM-2821: Clean local storage if status is “You are up to date” and use animation for refresh button
  • PMM-2828: Weird Latency Graphs
  • PMM-2841: Change memory defaults for Prometheus 1.8 and use additional environment variable
  • PMM-2856: RDS/Aurora disk related graphs are empty
  • PMM-2885: System Overview dashboard has incorrect values

Help us improve our software quality by reporting any Percona Monitoring and Management bugs you encounter using our bug tracking system.

Sep 4, 2018

MongoDB: index usage and MongoDB explain() (part 1)

In this two-part series, I’m going to explain explain. No, the repetition is not a pun or a typo. I’m talking about explain(), the most useful MongoDB feature for investigating a query. Here, I’ll introduce the index types available in MongoDB, their properties, and how to create and use them. In the next article we’ll see explain() in action on some real examples.

Using explain(), questions like these will find a proper answer:

  • Is the query using an index?
  • How is it using the index?
  • How many documents are scanned?
  • How many documents are returned?
  • For how many milliseconds does the query run?

And many others. The explain() command provides a lot of information.

If you are familiar with MySQL, then you probably know about the command EXPLAIN. In MongoDB we have a similar feature, and the goal of using it is exactly the same: find out how we can improve a query in order to reduce the execution time as far as possible. Based on the information provided, we can decide to create a new index or to rewrite the query to take advantage of indexes that already exist. The same as you do when using MySQL.

Indexes supported by MongoDB

The concept of an index in MongoDB is the same as in relational databases. An index is generally a small structure, in comparison to the collection size, that provides a better way to access documents quickly. Without an index, the only way MongoDB can retrieve the documents is to do a collection scan: reading sequentially all the documents in the collection. This is exactly the same as a full table scan in MySQL. Another similarity to MySQL is that MongoDB indexes have a structure based on the well known B-Tree. To understand explain() you’ll need to understand the index structures.

Let’s have an overview of the index types available in MongoDB, and their features and properties. We focus in particular on the single field, compound, and multikey index types, which are by far the most used and most useful in the majority of cases. We’ll also present the other types, but in the explain() examples in the second part of this series we’ll use only single field and compound indexes.

Single Field Indexes

This is an index built on a single field of the documents. The entries could be a single value, such as a string or a number, but could also be an embedded document.

By default each collection has a single field index automatically created on the _id field, the primary key.

The index can be defined in ascending or descending order.

Let’s see some examples.

Assume we have a collection people containing the following type of document:

{
  "_id": 1,
  "person": { name: "John", surname: "Brown" },
  "age": 34,
  "city": "New York"
}

We can define, for example, a single field index on the age field.

db.people.createIndex( { age : 1} )

In this case we have created an ascending index. If we had wanted to create a descending index we would have used this syntax:

db.people.createIndex( {age : -1} )

With this kind of index we can improve all the queries that find documents with a condition on the age field, like the following:

db.people.find( { age : 20 } ) 
db.people.find( { name : "Antony", age : 30 } )
db.people.find( { age : { $gt : 25} } )

It’s interesting to highlight that the index can also be used to improve the sorting of the results. In this case, it doesn’t matter whether you have defined the index as ascending or descending: MongoDB can traverse the items in the index in both directions.

db.people.find().sort( { age: 1} )
db.people.find().sort( {age : -1} )

Both these queries use the index to retrieve all of the documents in the specified order.

The next query can use the index to retrieve the documents in the order specified by the sort.

db.people.find( { age : { $lt : 25 } } ).sort( { age : -1} )

Indexes on embedded documents

We can even define an index on an embedded document.

db.people.createIndex( { person: 1 } )

In this case, each item in the index is the embedded document as a whole. The index can be used when the condition in the query matches exactly the embedded document. This query can retrieve the document using the index:

db.people.find( { person : {name : "John", surname: "Brown" } } )

but the next two examples are not able to use the index, and MongoDB will do a collection scan:

db.people.find( { person : {surname : "Brown", name: "John" } } )
db.people.find( { "person.name": "John", "person.surname": "Brown" } )

It is more useful to create indexes on embedded fields than on entire embedded documents. To do this we can use dot notation.

db.people.createIndex( { "person.name": 1} )
db.people.createIndex( { "person.surname": 1 } )

We have created two separate ascending indexes on the name and surname embedded fields. Now, queries like the following can rely on the new indexes to be resolved.

db.people.find( { "person.name": "John" } )
db.people.find( { "person.surname": "Brown, "person.name": "John" } )

Compound indexes

A compound index is an index on multiple fields. Using the same people collection, we can create a compound index combining the city, age, and person.surname fields.

db.people.createIndex( { city: 1, age: 1, "person.surname": 1 } )

In this case, we have created a compound index where the first entry is the value of the city field, the second is the value of the age field, and the third is the value of the person.surname field. All the fields here are defined in ascending order.

Queries such as the following can benefit from the index:

db.people.find( { city: "Miami", age: { $gt: 50 } } )
db.people.find( { city: "Boston" } )
db.people.find( { city: "Atlanta", age: {$lt: 25}, "person.surname": "Green" } )

The order in which we define the fields is important. In fact, only a left prefix of the index can be used to retrieve the documents. In the examples above, the index can be used because the fields in the find() are always a left prefix of the index definition. The following queries will not use the index, because the field city is not specified in the query. A collection scan is required to solve them.

db.people.find( { age: 20 } )
db.people.find( { age: { $gt: 40 }, "person.surname": "Brown" } )
db.people.find( { "person.surname": "Green" }

The ascending or descending order matters more in the case of compound indexes. In fact, we need to pay attention when defining the order in the index, because not all queries can rely on it, in particular when using the sort() function.

Assume we have the compound index defined as follows:

db.people.createIndex( {city: 1, age: -1} )

The sort direction specified in sort() must match the pattern of the index definition, or its inverse. Any other pattern will not be supported by the index.

So, the following queries will use the index for sorting:

db.people.find().sort( { city: 1, age: -1 } )
db.people.find().sort( { city: -1, age: 1 } )

The following queries will not use the index:

db.people.find( ).sort( { city: 1, age: 1} )
db.people.find( ).sort( { city: -1, age: -1} )

The following queries will use the index both for retrieving the documents and for sorting.

db.people.find( { city: "New York" } ).sort( city: 1, age: -1)
db.people.find( { city: "Miami", age:{ $gt: 50 } } ).sort( city: -1, age: 1)

They use the left prefix, and the pattern of the sort() stage matches exactly the index pattern or its inverse.

Multikey indexes

This is the index type for arrays. When creating an index on an array field, MongoDB creates an index entry for every element of the array.

Let’s assume we have these documents:

{
   "_id": 1,
   "person": { name: "John", surname: "Brown" },
   "age": 34,
   "city": "New York",
   "hobbies": [ "music", "gardening", "skiing" ]
 }

The multikey index can be created as:

db.people.createIndex( { hobbies: 1} )

Queries such as these next examples will use the index:

db.people.find( { hobbies: "music" } )
db.people.find( { hobbies: "music", hobbies: "gardening" } )

Geospatial and text indexes

MongoDB also supports specialized indexes for geospatial data and for text searches. They are out of the scope of this article, but worth mentioning; we may look at them in a future post.

Index options

TTL

The acronym stands for Time To Live. This is a special option that we can apply only to a single field index to permit the automatic deletion of documents after a certain time.

During index creation we can define an expiration time. After that time, all the documents that are older than the expiration time will be removed from the collection. This kind of feature is very useful when we are dealing with data that don’t need to persist in the database. A good example of this is session data.

Let’s see how to define the TTL option on the sessionlog collection.

db.sessionlog.createIndex( { "lastUpdateTime": 1 }, { expireAfterSeconds: 1800 } )

In this case, MongoDB will automatically drop documents from the collection once half an hour (1800 seconds) has passed since the value in the lastUpdateTime field.
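For example, a session document inserted like this (a sketch; the sessionId field is hypothetical) would be removed automatically about 30 minutes after its lastUpdateTime:

db.sessionlog.insert( { sessionId: "abc123", lastUpdateTime: new Date() } )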

MongoDB runs a background process every 60 seconds to drop expired documents. This means that the actual deletion of a document can happen with a short delay, not precisely at expireAfterSeconds.

There are some restrictions:

  • as already mentioned only single field indexes can have the TTL option
  • the _id single field index cannot support the TTL option
  • the indexed field must be a date type
  • a capped collection cannot have a TTL index

In a replica set, documents are deleted only on the primary node; the background process runs only on the primary, with deletions replicated to the secondaries as with any other event.

Partial indexes

A partial index is an index that contains only a subset of the values based on a filter rule. They are useful in cases where:

  • the index size can be reduced
  • we want to index the most relevant and used values in the query conditions
  • we want to index the most selective values of a field

Here’s how to create a partial index using the clause partialFilterExpression.

db.people.createIndex(
   { "city": 1, "person.surname": 1 },
   { partialFilterExpression: { age : { $lt: 30 } } }
)

We have created a compound index on city and person.surname, but only for the documents with age less than 30.

In order for the partial index to be used, the queries must contain a condition on the age field.

db.people.find( { city: "New Tork", age: { $eq: 20} } )

The following queries cannot use the partial index because the results would be incomplete. In these examples, MongoDB will use a collection scan:

db.people.find( { city: "Miami" } )
db.people.find( { city: "Orlando", age : { $gt: 25 } } )

Sparse indexes

Sparse indexes are a subset of partial indexes. A sparse index only contains entries for the documents that have the indexed field, even if it is null.

Since MongoDB is a schemaless database, the documents in a collection can have different fields, so an indexed field may not be present in some of them.

To create such an index use the sparse option:

db.people.createIndex( { city: 1 }, { sparse: true } )

In this case we are assuming there could be documents in the collection with the field city missing.

Regular indexes (without the sparse option) contain an entry for every document in the collection, with a null value for documents that don’t contain the indexed field.

Sparse indexes are based on the existence of a field in the documents, and are useful to reduce the size of the index. Since they are a subset of partial indexes, if you have to decide between the two you should choose partial indexes.

Defining a partial index with the following filter is the same as having a sparse index:

db.people.createIndex(
  { city: 1 },
  { partialFilterExpression: { city: { $exists: true }  } }
)

Unique indexes

MongoDB can create an index as unique. An index defined this way cannot contain duplicate entries.

To create such an index use the unique option.

db.people.createIndex( { city: 1 }, { unique: true } )

Uniqueness can be defined for compound indexes too.

db.people.createIndex( { city: 1, "person.surname": 1 }, { unique: true } )

By default the index on _id is automatically created as unique.

You cannot create the index as unique if the collection already contains documents with duplicate values for the proposed index.

As we know, missing fields in the documents are inserted in the index with a null value, even in the case of a unique index. Consequently, only one null value is permitted in the index, because of the uniqueness constraint. Our suggestion is to avoid missing fields when creating a unique index.
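For example (a sketch, assuming a fresh people collection): with a unique index on city, only one document missing the city field can exist, because a second one would collide on the indexed null value.

db.people.createIndex( { city: 1 }, { unique: true } )
db.people.insert( { person: { name: "Ann", surname: "Lee" } } )   // no city field: indexed as null
db.people.insert( { person: { name: "Bob", surname: "Ray" } } )   // fails with a duplicate key error on city: null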

A couple of useful commands

Here are a couple of useful commands when dealing with indexes.

List of the indexes in a collection

Use the getIndexes() method.

MongoDB > db.restaurants.getIndexes()
[
	{
		"v" : 2,
		"key" : {
			"_id" : 1
		},
		"name" : "_id_",
		"ns" : "test.restaurants"
	},
	{
		"v" : 2,
		"key" : {
			"borough" : 1
		},
		"name" : "borough_1",
		"ns" : "test.restaurants"
	},
	{
		"v" : 2,
		"key" : {
			"cuisine" : 1
		},
		"name" : "cuisine_1",
		"ns" : "test.restaurants"
	},
	{
		"v" : 2,
		"key" : {
			"cuisine" : 1,
			"grades.score" : 1
		},
		"name" : "cuisine_1_grades.score_1",
		"ns" : "test.restaurants"
	}
]

In this sample collection we have 4 indexes, including the _id.

Drop an existing index

Use the dropIndex() method passing the name of the index to be dropped.

MongoDB > db.restaurants.dropIndex("cuisine_1_grades.score_1")
{ "nIndexesWas" : 4, "ok" : 1 }

Conclusions

In this first part of our two-part series, I’ve described the indexes available in MongoDB, how you can create and use them, and some of their main features. In the next part, we’ll focus on the explain() method and, using some examples, we’ll show how we can investigate queries and, if possible, how we can optimize them.

You won’t have to wait long! Come back on Thursday to read about MongoDB explain().

While you are here

If you found this article interesting, you might like to check out some of our other resources. For example, you might find this recorded webinar MongoDB Sharding 101 by my colleague Adamo Tonete to be useful. Or perhaps Tim Vaillancourt’s MongoDB backup and recovery field guide would provide some useful information. If you want to catch our webinars as they happen, please subscribe to receive notifications in good time.
