Jun 15, 2022

Moving MongoDB Cluster to a Different Environment with Percona Backup for MongoDB

Percona Backup for MongoDB (PBM) is a distributed backup and restore tool for sharded and non-sharded clusters. In 1.8.0, we added the replset-remapping functionality that allows you to restore data on a new, compatible cluster topology.

The new environment can have different replset names and/or serve on different hosts and ports. PBM handles this hard work for you, making such a migration indistinguishable from a usual restore. In this blog post, I'll show you how to migrate to a new cluster in practice.

The Problem

Changing a cluster topology usually requires many manual steps. PBM shortens the process.

Let’s have a look at a case where we will have an initial cluster and a desired one.

Initial cluster:

configsrv: "configsrv/conf:27017"
shards:
  - "rs0/rs0:27017,rs1:27017,rs2:27017"
  - "extra-shard/extra:27018"

The cluster consists of the configsrv replset (a single-node configsvr) and two shards: rs0 (a 3-node replset) and extra-shard (a single-node replset). The names, hosts, and ports are not consistent across the cluster, but we will resolve this.

Target cluster:

configsrv: "cfg/cfg0:27019"
shards:
  - "rs0/rs00:27018,rs01:27018,rs02:27018"
  - "rs1/rs10:27018,rs11:27018,rs12:27018"
  - "rs2/rs20:27018,rs21:27018,rs22:27018"

Here we have the cfg configsvr replset with a single node and three shards, rs0 through rs2, where each shard is a 3-node replset.

Think about how you can do this.

With PBM, all we need is a deployed cluster and a logical backup made with PBM 1.5.0 or later. The following simple command will do the rest:

pbm restore $BACKUP_NAME --replset-remapping "cfg=configsrv,rs1=extra-shard"

Migration in Action

Let me show you how it looks in practice. I’ll provide details at the end of the post. In the repo, you can find all configs, scripts, and output used here.

As mentioned above, we need a backup. For this, we will deploy a cluster, seed data, and then make the backup.

Deploying the initial cluster

$> initial/deploy >initial/deploy.out
$> docker compose -f "initial/compose.yaml" exec pbm-conf \
     pbm status -s cluster
 
Cluster:
========
configsvr:
  - configsvr/conf:27019: pbm-agent v1.8.0 OK
rs0:
  - rs0/rs00:27017: pbm-agent v1.8.0 OK
  - rs0/rs01:27017: pbm-agent v1.8.0 OK
  - rs0/rs02:27017: pbm-agent v1.8.0 OK
extra-shard:
  - extra-shard/extra:27018: pbm-agent v1.8.0 OK

links: initial/deploy, initial/deploy.out

The cluster is ready and we can add some data.

Seed data

We will insert the first 1000 numbers in a natural number sequence: 1 – 1000.

$> mongosh "mongo:27017/rsmap" --quiet --eval "
     for (let i = 1; i <= 1000; i++)
       db.coll.insertOne({ i })" >/dev/null

Getting the data state

These documents should be partitioned across all shards at insert time. Let's see, in general, how. We will use the dbHash command on all shards to capture the collections' state. It will be useful for verification later.

We will also do a quick check on shards and mongos.

$> initial/dbhash >initial/dbhash.out && cat initial/dbhash.out
 
# rs00:27017  db.getSiblingDB("rsmap").runCommand("dbHash").collections
{ "coll" : "550f86eb459b4d43de7999fe465e39e0" }
# rs01:27017  db.getSiblingDB("rsmap").runCommand("dbHash").collections
{ "coll" : "550f86eb459b4d43de7999fe465e39e0" }
# rs02:27017  db.getSiblingDB("rsmap").runCommand("dbHash").collections
{ "coll" : "550f86eb459b4d43de7999fe465e39e0" }
# extra:27018  db.getSiblingDB("rsmap").runCommand("dbHash").collections
{ "coll" : "4a79c07e0cbf3c9076d6e2d81eb77f0a" }
# rs00:27017  db.getSiblingDB("rsmap").coll
    .find().sort({ i: 1 }).toArray()
    .reduce(([count = 0, seq = true, next = 1], { i }) =>
             [count + 1, seq && next == i, i + 1], [])
    .slice(0, 2)
[ 520, false ]
# extra:27018  db.getSiblingDB("rsmap").coll
    .find().sort({ i: 1 }).toArray()
    .reduce(([count = 0, seq = true, next = 1], { i }) =>
             [count + 1, seq && next == i, i + 1], [])
    .slice(0, 2)
[ 480, false ]
# mongo:27017
[ 1000, true ]

links: initial/dbhash, initial/dbhash.out

All rs0 members have the same data, so the secondaries replicate from the primary correctly.

The quickcheck.js used in the initial/dbhash script summarizes our documents: it returns the number of documents and whether they form the natural number sequence.
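For reference, you can reproduce the same quick check ad hoc with mongosh. The snippet below is just a sketch equivalent to the reduce expression shown in the output above, not the author's actual quickcheck.js:

$> mongosh "mongo:27017/rsmap" --quiet --eval '
     db.coll.find().sort({ i: 1 }).toArray()
       .reduce(([count = 0, seq = true, next = 1], { i }) =>
                [count + 1, seq && next == i, i + 1], [])
       .slice(0, 2)'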

We have data for the backup. Time to make the backup.

Making a backup

$> docker compose -f initial/compose.yaml exec pbm-conf bash
pbm-conf> pbm backup --wait
 
Starting backup '2022-06-15T08:18:44Z'....
Waiting for '2022-06-15T08:18:44Z' backup.......... done
 
pbm-conf> pbm status -s backups
 
Backups:
========
FS  /data/pbm
  Snapshots:
    2022-06-15T08:18:44Z 28.23KB <logical> [complete: 2022-06-15T08:18:49Z]

We have a backup. It’s enough for migration to the new cluster.

Let’s destroy the initial cluster and deploy the target environment. (Destroying the initial cluster is not a requirement. I just don’t want to waste resources on it.)

Deploying the target cluster

pbm-conf> exit
$> docker compose -f initial/compose.yaml down -v >/dev/null
$> target/deploy >target/deploy.out

links: target/deploy, target/deploy.out

Let’s check the PBM status.

PBM Status

$> docker compose -f target/compose.yaml exec pbm-cfg0 bash
pbm-cfg0> pbm config --force-resync  # ensure agents sync from storage
 
Storage resync started
 
pbm-cfg0> pbm status -s backups
 
Backups:
========
FS  /data/pbm
  Snapshots:
    2022-06-15T08:18:44Z 28.23KB <logical> [incompatible: Backup doesn't match current cluster topology - it has different replica set names. Extra shards in the backup will cause this, for a simple example. The extra/unknown replica set names found in the backup are: extra-shard, configsvr. Backup has no data for the config server or sole replicaset] [2022-06-15T08:18:49Z]

As expected, it is incompatible with the new deployment.

Let's see how to make it work.

Resolving PBM Status

pbm-cfg0> export PBM_REPLSET_REMAPPING="cfg=configsvr,rs1=extra-shard"
pbm-cfg0> pbm status -s backups
 
Backups:
========
FS  /data/pbm
  Snapshots:
    2022-06-15T08:18:44Z 28.23KB <logical> [complete: 2022-06-15T08:18:49Z]

Nice. Now we can restore.

Restoring

pbm-cfg0> pbm restore '2022-06-15T08:18:44Z' --wait
 
Starting restore from '2022-06-15T08:18:44Z'....Started logical restore.
Waiting to finish.....Restore successfully finished!

The --wait flag blocks the shell session until the restore completes. You can also skip waiting and check the result later:

pbm-cfg0> pbm list --restore
 
Restores history:
  2022-06-15T08:18:44Z

Everything is going well so far. We are almost done.

Let’s verify the data.

Data verification

pbm-cfg0> exit
$> target/dbhash >target/dbhash.out && cat target/dbhash.out
 
# rs00:27018  db.getSiblingDB("rsmap").runCommand("dbHash").collections
{ "coll" : "550f86eb459b4d43de7999fe465e39e0" }
# rs01:27018  db.getSiblingDB("rsmap").runCommand("dbHash").collections
{ "coll" : "550f86eb459b4d43de7999fe465e39e0" }
# rs02:27018  db.getSiblingDB("rsmap").runCommand("dbHash").collections
{ "coll" : "550f86eb459b4d43de7999fe465e39e0" }
# rs10:27018  db.getSiblingDB("rsmap").runCommand("dbHash").collections
{ "coll" : "4a79c07e0cbf3c9076d6e2d81eb77f0a" }
# rs11:27018  db.getSiblingDB("rsmap").runCommand("dbHash").collections
{ "coll" : "4a79c07e0cbf3c9076d6e2d81eb77f0a" }
# rs12:27018  db.getSiblingDB("rsmap").runCommand("dbHash").collections
{ "coll" : "4a79c07e0cbf3c9076d6e2d81eb77f0a" }
# rs20:27018  db.getSiblingDB("rsmap").runCommand("dbHash").collections
{ }
# rs21:27018  db.getSiblingDB("rsmap").runCommand("dbHash").collections
{ }
# rs22:27018  db.getSiblingDB("rsmap").runCommand("dbHash").collections
{ }
# rs00:27018  db.getSiblingDB("rsmap").coll
    .find().sort({ i: 1 }).toArray()
    .reduce(([count = 0, seq = true, next = 1], { i }) =>
             [count + 1, seq && next == i, i + 1], [])
    .slice(0, 2)
[ 520, false ]
# rs10:27018  db.getSiblingDB("rsmap").coll
    .find().sort({ i: 1 }).toArray()
    .reduce(([count = 0, seq = true, next = 1], { i }) =>
             [count + 1, seq && next == i, i + 1], [])
    .slice(0, 2)
[ 480, false ]
# rs20:27018  db.getSiblingDB("rsmap").coll
    .find().sort({ i: 1 }).toArray()
    .reduce(([count = 0, seq = true, next = 1], { i }) =>
             [count + 1, seq && next == i, i + 1], [])
    .slice(0, 2)
[ ]
# mongo:27017
[ 1000, true ]

links: target/dbhash, target/dbhash.out

As you can see, the rs2 shard is empty. The other two have the same dbHash and quickcheck results as in the initial cluster. The balancer can tell us something about this.

Balancer status

$> mongosh "mongo:27017" --quiet --eval "sh.balancerCollectionStatus('rsmap.coll')"
 
{
  balancerCompliant: false,
  firstComplianceViolation: 'chunksImbalance',
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1655281436, i: 1 }),
    signature: {
      hash: Binary(Buffer.from("0000000000000000000000000000000000000000", "hex"), 0),
      keyId: Long("0")
    }
  },
  operationTime: Timestamp({ t: 1655281436, i: 1 })
}

We know what to do. Let's start the balancer and check the status again.

$> mongosh "mongo:27017" --quiet --eval "sh.startBalancer().ok"

1
 
$> mongosh "mongo:27017" --quiet --eval "sh.balancerCollectionStatus('rsmap.coll')"
 
{
  balancerCompliant: true,
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1655281457, i: 1 }),
    signature: {
      hash: Binary(Buffer.from("0000000000000000000000000000000000000000", "hex"), 0),
      keyId: Long("0")
    }
  },
  operationTime: Timestamp({ t: 1655281457, i: 1 })
}
 
$> target/dbhash >target/dbhash-2.out && cat target/dbhash-2.out

# rs00:27018  db.getSiblingDB("rsmap").runCommand("dbHash").collections
{ "coll" : "550f86eb459b4d43de7999fe465e39e0" }
# rs01:27018  db.getSiblingDB("rsmap").runCommand("dbHash").collections
{ "coll" : "550f86eb459b4d43de7999fe465e39e0" }
# rs02:27018  db.getSiblingDB("rsmap").runCommand("dbHash").collections
{ "coll" : "550f86eb459b4d43de7999fe465e39e0" }
# rs10:27018  db.getSiblingDB("rsmap").runCommand("dbHash").collections
{ "coll" : "4a79c07e0cbf3c9076d6e2d81eb77f0a" }
# rs11:27018  db.getSiblingDB("rsmap").runCommand("dbHash").collections
{ "coll" : "4a79c07e0cbf3c9076d6e2d81eb77f0a" }
# rs12:27018  db.getSiblingDB("rsmap").runCommand("dbHash").collections
{ "coll" : "4a79c07e0cbf3c9076d6e2d81eb77f0a" }
# rs20:27018  db.getSiblingDB("rsmap").runCommand("dbHash").collections
{ "coll" : "6a54e10a5526e0efea0d58b5e2fbd7c5" }
# rs21:27018  db.getSiblingDB("rsmap").runCommand("dbHash").collections
{ "coll" : "6a54e10a5526e0efea0d58b5e2fbd7c5" }
# rs22:27018  db.getSiblingDB("rsmap").runCommand("dbHash").collections
{ "coll" : "6a54e10a5526e0efea0d58b5e2fbd7c5" }
# rs00:27018  db.getSiblingDB("rsmap").coll
    .find().sort({ i: 1 }).toArray()
    .reduce(([count = 0, seq = true, next = 1], { i }) =>
             [count + 1, seq && next == i, i + 1], [])
    .slice(0, 2)
[ 520, false ]
# rs10:27018  db.getSiblingDB("rsmap").coll
    .find().sort({ i: 1 }).toArray()
    .reduce(([count = 0, seq = true, next = 1], { i }) =>
             [count + 1, seq && next == i, i + 1], [])
    .slice(0, 2)
[ 480, false ]
# rs20:27018  db.getSiblingDB("rsmap").coll
    .find().sort({ i: 1 }).toArray()
    .reduce(([count = 0, seq = true, next = 1], { i }) =>
             [count + 1, seq && next == i, i + 1], [])
    .slice(0, 2)
[ 229, false ]
# mongo:27017
[ 1000, true ]

links: target/dbhash-2.out

Interesting. The rs2 shard now has some data. However, rs0 and rs1 haven't changed. This is expected: mongos moves some chunks to rs2 and updates the router config, while the physical deletion of chunks on a shard is a separate step. That's why querying data directly on a shard is inaccurate: the data could disappear at any time, and the cursor returns all documents available in a replset at the moment, regardless of the router config.

Anyway, we shouldn't care about it anymore. It is now the responsibility of mongos/mongod to update the router config, query the right shards, and remove moved chunks from shards on demand. In the end, we get valid data through mongos.

That’s it.

But wait, we didn’t make a backup! Never forget to make another solid backup.

Making a new backup

It is better to change the storage so that backups for the new deployment live in a different place, and we no longer see errors about incompatible backups from the initial cluster.
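For example, a minimal filesystem-storage config for the new deployment could look like the sketch below; the file name and path are hypothetical, and any other storage type supported by PBM (e.g., S3) is configured the same way:

$> cat > /tmp/new-pbm-config.yaml <<'EOF'
storage:
  type: filesystem
  filesystem:
    path: /data/pbm-target   # hypothetical location for the new deployment's backups
EOF
$> export NEW_PBM_CONFIG=/tmp/new-pbm-config.yaml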

$> pbm config --file "$NEW_PBM_CONFIG" >/dev/null
$> pbm config --force-resync >/dev/null
$> pbm backup -w >/dev/null
pbm-cfg0> pbm status -s backups
 
Backups:
========
FS  /data/pbm
  Snapshots:
    2022-06-15T08:25:44Z 165.34KB <logical> [complete: 2022-06-15T08:25:49Z]

Now we're done and can sleep better.

One More Thing: Possible Misconfiguration

Let's review another, imaginary case to explain the possible errors.

Initial cluster: cfg, rs0, rs1, rs2, rs3, rs4, rs5

Target cluster: cfg, rs0, rs1, rs2, rs3, rs4, rs6

If we apply the remapping rs0=rs0,rs1=rs2,rs2=rs1,rs3=rs4, we will get an error like "missed replsets: rs3, rs5" and nothing about rs6.

The missed rs5 should be obvious: the backup topology has the rs5 replset, but it is missing on the target. And the target rs6 does not have data to restore from. Adding rs6=rs5 fixes this.

But the missed rs3 could be confusing. Let’s visualize:

init | curr
-----+-----
cfg     cfg  # unchanged
rs0 --> rs0  # mapped. unchanged
rs1 --> rs2
rs2 --> rs1
rs3 -->      # err: no shard
rs4 --> rs3
     -> rs4  # ok: no data
rs5 -->      # err: no shard
     -> rs6  # ok: no data

When we remap the backup's rs4 to the target rs3, the target rs3 becomes reserved, so the rs3 in the backup no longer has a target replset. Remapping the backup's rs3 to the still-available rs4 (rs4=rs3) fixes this too.

This reservation avoids data duplication. That’s why we use the quick check via mongos.

Details

Compatible topology

Simply speaking, a compatible topology means the target deployment has an equal or larger number of shards. In our example, we had two shards initially but restored to a three-shard cluster; PBM restored data to two shards only, and MongoDB can distribute it to the remaining shards later once the balancer is enabled (sh.startBalancer()). The number of replset members does not matter because PBM takes a backup from one member per replset and restores it to the primary only; the other data-bearing members replicate the data from the primary. So you could make a backup from a multi-member replset and then restore it to a single-member replset.

You cannot restore to a different replset type like from shardsvr to configsvr.

Preconfigured environment

The cluster should be deployed with all shards added. Users and permissions should be created and assigned in advance. PBM agents should be configured to use the same storage, which must be accessible from the new cluster.

Note: PBM agents store backup metadata on the storage and keep a cache in MongoDB. pbm config --force-resync lets you refresh the cache from the storage. Do it on the new cluster right after deployment to see the backups/oplog chunks made from the initial cluster.

Understanding replset remapping

You can remap replset names with the --replset-remapping flag or the PBM_REPLSET_REMAPPING environment variable. If both are set, the flag takes precedence.
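As a quick sketch based on the commands used earlier in this post, the two forms look like this; when both are present, the flag wins for that invocation:

# Environment variable: picked up by subsequent pbm invocations in this shell
export PBM_REPLSET_REMAPPING="cfg=configsrv,rs1=extra-shard"
pbm status -s backups

# Flag: overrides the environment variable for this command only
pbm status -s backups --replset-remapping "cfg=configsrv,rs1=extra-shard"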

For full restore, point-in-time recovery, and oplog replay, the PBM CLI sends the mapping as a parameter in the command. Each command gets a separate explicit mapping (or none). It can be done only via the CLI; agents neither use the environment variable nor have the flag.

pbm status and pbm list use the flag/envvar to remap the replset names in the backup/oplog metadata and apply this mapping to the current deployment to show them properly. If the backup and present replset names do not match, pbm list will not show these backups, and pbm status prints an error with the missing replset names.

Restoring with remapping works with logical backups only.

How does PBM do this?

During restore, PBM reviews the current topology and assigns members' snapshots and oplog chunks to each shard/replset by name. The remapping changes this default assignment.

After the restore is done, PBM agents sync the router config to make the restored data “native” to this cluster.

Behind the scenes

The config.shards collection describes the current topology. PBM uses it to know where and what to restore. PBM does not modify this collection, but the restored data contains other router configuration that still refers to the initial topology.

PBM updates two collections to replace the old shard names with the new ones in the restored data:

  • config.databases – primary shard for non-sharded databases
  • config.chunks – shards where chunks are

After this, MongoDB knows where databases, collections, and chunks are in the new cluster.
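If you want to double-check the result, a quick look at these collections through mongos should show only the new shard names. This is a sketch using the host and database names from this post:

$> mongosh "mongo:27017" --quiet --eval '
     db.getSiblingDB("config").databases.find().forEach(printjson);
     print(db.getSiblingDB("config").chunks.distinct("shard"));'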

Conclusion

Migrating a cluster requires a lot of attention, knowledge, and calm. The replset-remapping functionality in Percona Backup for MongoDB reduces the complexity of migration between two different environments. I would say it is close to a routine job now.

Have a nice day!

Jun 07, 2022

Migration of MongoDB Enterprise/Community Edition to Percona Server for MongoDB

In this blog post, we will discuss how we can migrate from the enterprise/community edition of MongoDB to Percona Server for MongoDB. But before we begin, let's take a second to explain why you should migrate to Percona Server for MongoDB.

Percona Distribution for MongoDB is a single solution that combines the best and most important enterprise components from the open source community, designed and tested to work together. Percona customers benefit from no lock-in and lower total cost of ownership, along with the freedom to run their MongoDB environment wherever they want to – in a public or private cloud, on-premises, or hybrid environment.

Percona Server for MongoDB offers the same, or equivalent, security features as MongoDB Enterprise without the price tag, and Percona experts are always available to help, bringing in-depth operational knowledge of MongoDB and open source tools so you can optimize database performance. If you'd like to learn more, please click here.

Anyway, let’s get back to the purpose of the blog: migrating from the enterprise/community edition of MongoDB to Percona Server for MongoDB. 

Before starting the migration process it’s recommended that you perform a full backup (if you don’t have one already). See this post for MongoDB backup best practices.

The migration procedure:

  1. Backup the config files of the Mongo process.
  2. Stop the Mongo process. If it’s a replica set, then do it in a rolling fashion.
  3. Remove the package of MongoDB community/enterprise edition. For the replica set, do it in a rolling fashion.
  4. Install the Percona Server for MongoDB (PSMDB). It can be downloaded from here. Do it in a rolling fashion for a replica set and start the Mongo service.
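Before and after swapping packages on a node, it can help to confirm exactly which server package and version is running. A quick check might look like the sketch below (the package name patterns are assumptions for typical Debian/RHEL installs):

mongod --version                                          # server binary version
dpkg -l | grep -E 'mongodb-org|percona-server-mongodb'    # Debian/Ubuntu packages
rpm -qa | grep -E 'mongodb-org|percona-server-mongodb'    # RHEL/CentOS packages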

Detailed migration plan:

Migrate standalone MongoDB community/enterprise edition to Percona Server for MongoDB (Debian/RHEL):

   1. Backup the mongo config and service file:

sudo cp /etc/mongod.conf /etc/mongod.conf_bkp

Debian:

sudo cp /lib/systemd/system/mongod.service /lib/systemd/system/mongod.service_bkp

RHEL/CentOS:

sudo cp /usr/lib/systemd/system/mongod.service /usr/lib/systemd/system/mongod.service_bkp

   2. Stop mongo service first and then remove mongodb-community/enterprise packages and repo:

To stop the mongo service, connect to the admin database and shut down as below:

>use admin

>db.shutdownServer()
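If you prefer a one-liner over an interactive shell, the same shutdown can be issued non-interactively. This is a sketch assuming mongosh (or the legacy mongo shell) is on the PATH and that any required authentication options are added:

mongosh --port 27017 admin --eval "db.shutdownServer()"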

Remove the package:

Debian:

sudo apt-get remove mongodb-org mongodb-org-mongos mongodb-org-server mongodb-org-shell mongodb-org-tools
sudo rm /etc/apt/sources.list.d/mongodb-org-4.0.list

RHEL/CentOS:

sudo yum erase $(rpm -qa | grep mongodb-org)

If it’s OpsManager then:

   a. Unmanage the project in OpsManager GUI.

   b. Make sure to uncheck enforce users in Opsmanager GUI. 

   c. Stop and disable the automation agent service:

sudo systemctl stop mongodb-mms-automation-agent
sudo systemctl disable mongodb-mms-automation-agent

   d. Remove the automation agent package (apt on Debian/Ubuntu, yum on RHEL/CentOS):

sudo apt remove mongodb-mms-automation-agent
sudo yum erase mongodb-mms-automation-agent

   3. Configure percona repo and install Percona Server for MongoDB (PSMDB):

Debian:

sudo wget https://repo.percona.com/apt/percona-release_latest.$(lsb_release -sc)_all.deb

sudo dpkg -i percona-release_latest.$(lsb_release -sc)_all.deb

Enable the repo:

sudo percona-release enable psmdb-44 release

sudo apt-get update

Install the package:

sudo apt-get install percona-server-mongodb

RHEL/CentOS:

sudo yum install https://repo.percona.com/yum/percona-release-latest.noarch.rpm

Enable the repo:

sudo percona-release setup pdmdb-44

Install the package:

sudo yum install percona-server-mongodb

   4. Copy back the mongod config and service file:

sudo cp /etc/mongod.conf_bkp /etc/mongod.conf

Debian:

sudo cp /lib/systemd/system/mongod.service_bkp /lib/systemd/system/mongod.service

RHEL/CentOS:

sudo cp /usr/lib/systemd/system/mongod.service_bkp /usr/lib/systemd/system/mongod.service

NOTE: Kindly check that the permissions and ownership of the data directory, keyfile, and log directory are properly updated for the mongod user. 

Also, if the SELinux policy is enabled, then set the necessary SELinux policy for dbPath, keyFile, and logs as below:

sudo semanage fcontext -a -t mongod_var_lib_t '/dbPath/mongod.*'
sudo chcon -Rv -u system_u -t mongod_var_lib_t '/dbPath/mongod'
sudo restorecon -R -v '/dbPath/mongod'
sudo semanage fcontext -a -t mongod_log_t '/logPath/log.*'
sudo chcon -Rv -u system_u -t mongod_log_t '/logPath/log'
sudo restorecon -R -v '/logPath/log'

   5. Enable and start mongod service:

sudo systemctl daemon-reload
sudo systemctl enable mongod
sudo systemctl start mongod
sudo systemctl status mongod

Migrate Replica set MongoDB Enterprise/Community edition to Percona Server for MongoDB (Debian/RHEL):

This migration process involves stopping the Mongo process in the hidden/secondary node first, removing the MongoDB community/enterprise edition packages, installing Percona Server for MongoDB, and starting it with the same data files. Then, step down the current primary node and repeat the same process.

   a. Make sure to check the current Primary and Secondary/hidden nodes.

db.isMaster().primary

   b. Start with the hidden node (if there is no hidden node then start with one of the secondary nodes with the least priority) first.

   c. Repeat steps from 1 to 5 from the section Migrate standalone MongoDB community/enterprise edition to Percona Server for MongoDB (Debian/RHEL).

   d. Wait for each node to be synced with Primary. Verify it with 

rs.printSecondaryReplicationInfo()

   e. Once completed for all secondary nodes, step down the current primary with

rs.stepDown()

   f. Wait for the new node to be elected as a Primary node and repeat steps 1 to 5 from the section Migrate standalone MongoDB community/enterprise edition to Percona Server for MongoDB (Debian/RHEL) and wait for the former primary node to be synced with the newly elected Primary.

Migrate Sharded cluster MongoDB community/enterprise edition to Percona Server for MongoDB (Debian/RHEL):

   1. Stop the balancer first:

sh.stopBalancer()

   2. Back up the configuration and service files for shards, CSRS, and Query router.

Backup the mongo config and service file for shards and CSRS:

sudo cp /etc/mongod.conf /etc/mongod.conf_bkp

Debian:

sudo cp /lib/systemd/system/mongod.service /lib/systemd/system/mongod.service_bkp

For router:

sudo cp /etc/mongos.conf /etc/mongos.conf_bkp

sudo cp /lib/systemd/system/mongos.service /lib/systemd/system/mongos.service_bkp

RHEL/CentOS:

sudo cp /usr/lib/systemd/system/mongod.service /usr/lib/systemd/system/mongod.service_bkp

For router:

sudo cp /etc/mongos.conf /etc/mongos.conf_bkp
sudo cp /usr/lib/systemd/system/mongos.service /usr/lib/systemd/system/mongos.service_bkp

   3. Start with the hidden node of the CSRS first (if there is no hidden node then start with one of the secondary nodes).

Repeat steps a to f from the section Migrate Replica set MongoDB community/enterprise edition to Percona Server for MongoDB (Debian/RHEL)

Once the CSRS is migrated to Percona Server for MongoDB, move on to the shards. Repeat steps a to f from the section Migrate Replica set MongoDB community/enterprise edition to Percona Server for MongoDB (Debian/RHEL) for each shard.

   4. After the migration of the CSRS and shards, start migrating the MongoS. Connect to one router at a time, execute the steps below, and then repeat for the remaining routers.

   5. Stop mongo service and then remove mongodb-community/enterprise packages and repo:

sudo systemctl stop mongos

Debian:

sudo apt-get remove mongodb-org mongodb-org-mongos mongodb-org-server mongodb-org-shell mongodb-org-tools
sudo rm /etc/apt/sources.list.d/mongodb-org-4.0.list

RHEL/CentOS:

sudo yum erase $(rpm -qa | grep mongodb-org)

   6. Configure percona repos and install Percona Server for MongoDB (PSMDB):

Debian:

sudo wget https://repo.percona.com/apt/percona-release_latest.$(lsb_release -sc)_all.deb
sudo dpkg -i percona-release_latest.$(lsb_release -sc)_all.deb

Enable the repo:

sudo percona-release enable psmdb-44 release
sudo apt-get update

Install the package:

sudo apt-get install percona-server-mongodb

RHEL/CentOS:

sudo yum install https://repo.percona.com/yum/percona-release-latest.noarch.rpm

Enable the repo:

sudo percona-release setup pdmdb-44

Install the package:

sudo yum install percona-server-mongodb

Copy back the config and service file:

sudo cp /etc/mongos.conf_bkp /etc/mongos.conf

Debian:

sudo cp /lib/systemd/system/mongos.service_bkp /lib/systemd/system/mongos.service

RHEL/CentOS:

sudo cp /usr/lib/systemd/system/mongos.service_bkp /usr/lib/systemd/system/mongos.service

NOTE: Kindly check that the permissions and ownership of keyfile and log directory are properly updated for the mongod user.

   7. Enable and start mongos service:

sudo systemctl daemon-reload
sudo systemctl enable mongos
sudo systemctl start mongos
sudo systemctl status mongos

   8. Re-enable the balancer:

sh.startBalancer()

Conclusion:

To learn more about the enterprise-grade features available in the license-free Percona Server for MongoDB, we recommend going through our blog MongoDB: Why Pay for Enterprise When Open Source Has You Covered? 

We also encourage you to try our products for MongoDB like Percona Server for MongoDB, Percona Backup for MongoDB, or Percona Operator for MongoDB.

Jun 03, 2022

Migration of a MongoDB Replica Set to a Sharded Cluster

In this blog post, we will discuss how we can migrate from a replica set to a sharded cluster.

Before moving to the migration, let me briefly explain replication and sharding, and why we might need to shard a replica set.

Replication: It creates additional copies of data and allows for automatic failover to another node in case the primary goes down. It also helps to scale our reads if the application is fine with reading data that may not be the latest.

Sharding: It allows horizontal scaling of data writes by partitioning data across multiple servers using a shard key. Here, we should understand that the shard key is very important for distributing the data evenly across multiple servers.

Why Do We Need a Sharded Cluster?

We need sharding due to the below reasons:

  1. By adding shards, we can reduce the number of operations each shard manages. 
  2. It increases the Read/Write capacity by distributing the Reads/Writes across multiple servers. 
  3. It also gives high availability as we deploy the replicas for the shards, config servers, and multiple MongoS.

A sharded cluster includes two more components: config servers and query routers (MongoS).

Config Servers: They keep the metadata for the sharded cluster. The metadata comprises a list of chunks on each shard and the ranges that define the chunks. The metadata indicates the state of all the data and its components within the cluster.

Query Routers (MongoS): They cache the metadata and use it to route read or write operations to the respective shards. They also update the cache when there are metadata changes for the sharded cluster, like splitting of chunks or shard addition.

Note: Before starting the migration process it’s recommended that you perform a full backup (if you don’t have one already).

The Procedure of Migration:

  1. Initiate at least a three-member replica set for the Config Server ( another member can be included as a hidden node for the backup purpose).
  2. Perform necessary OS, H/W, and disk-level tuning as per the existing Replica set.
  3. Set up the appropriate clusterRole for the config servers in the mongod config file.
  4. Create at least two more nodes for the query routers (MongoS).
  5. Set appropriate configDB parameters in the mongos config file.
  6. Repeat step 2 from above to tune as per the existing replica set.
  7. Apply proper SELinux policies on all the newly configured nodes of Config server and MongoS.
  8. Add clusterRole parameter into existing replica set nodes in a rolling fashion.
  9. Copy all the users from the replica set to any MongoS.
  10. Connect to any MongoS and add the existing replica set as Shard. 

Note: Do not enable sharding on any database until the shard key is finalized. If it is finalized, then we can enable sharding.

Detailed Migration Plan:

Here, we are assuming that a Replica set has three nodes (1 primary, and 2 secondaries)

  1. Create three servers to initiate a 3-member replica set for the Config Servers.

Perform necessary OS, H/W, and disk-level tuning. To know more about it, please visit our blog on Tuning Linux for MongoDB.

  2. Install the same version of Percona Server for MongoDB as the existing replica set from here.
  3. In the config file of the config server mongod, add the parameters clusterRole: configsvr and port: 27019 to start it as a config server on port 27019.
  4. If the SELinux policy is enabled, then set the necessary SELinux policy for dbPath, keyFile, and logs as below.
sudo semanage fcontext -a -t mongod_var_lib_t '/dbPath/mongod.*'

sudo chcon -Rv -u system_u -t mongod_var_lib_t '/dbPath/mongod'

sudo restorecon -R -v '/dbPath/mongod'

sudo semanage fcontext -a -t mongod_log_t '/logPath/log.*'

sudo chcon -Rv -u system_u -t mongod_log_t '/logPath/log'

sudo restorecon -R -v '/logPath/log'

sudo semanage port -a -t mongod_port_t -p tcp 27019

Start all the Config server mongod instances and connect to any one of them. Create a temporary user on it and initiate the replica set.

> use admin

> rs.initiate()

> db.createUser( { user: "tempUser", pwd: "<password>", roles:[{role: "root" , db:"admin"}]})

Create a role with the anyResource resource and the anyAction action as well, and assign it to "tempUser".

> db.getSiblingDB("admin").createRole({
    "role": "pbmAnyAction",
    "privileges": [
      { "resource": { "anyResource": true },
        "actions": [ "anyAction" ]
      }
    ],
    "roles": []
  });

> db.grantRolesToUser( "tempUser", [{role: "pbmAnyAction", db: "admin"}] )

> rs.add("config_host[2-3]:27019")

Now our config server replica set is ready. Let's move on to deploying the query routers (MongoS).

  1. Create two instances for the MongoS and tune the OS, H/W, and disk. To do it, follow our blog Tuning Linux for MongoDB or point 1 from the above detailed migration plan.
  2. In the mongos config file, adjust the configDB parameter and include only the non-hidden nodes of the config servers (in this blog post, we have not mentioned starting hidden config servers); a minimal sketch follows after this list.
  3. Apply SELinux policies if enabled (follow step 4 above), keep the same keyFile, and start the MongoS on port 27017.
  4. Add the below parameter to mongod.conf on the replica set nodes. Make sure the services are restarted in a rolling fashion, i.e., start with the secondaries, then step down the existing primary and restart it with port 27018.
clusterRole: shardsvr
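As a reference for step 2 above, a minimal mongos config file with the configDB setting might look like the sketch below; the replica set name, hostnames, ports, and keyFile path are placeholders, not values from this post:

# /etc/mongos.conf -- minimal sketch with hypothetical values
net:
  port: 27017
  bindIp: 0.0.0.0
security:
  keyFile: /etc/mongodb/keyfile        # same keyFile as the replica set
sharding:
  configDB: configReplSet/cfg1.example.com:27019,cfg2.example.com:27019,cfg3.example.com:27019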

Log in to any MongoS, authenticate using "tempUser", and add the existing replica set as a shard.

> sh.addShard( "replicaSetName/<URI of the replica set>") //Provide URI of the replica set

Verify it with:

> sh.status() or db.getSiblingDB("config")['shards'].find()

Connect to the primary of the replica set and copy all the users and roles. To authenticate/authorize, use the replica set user.

> var mongos = new Mongo("mongodb://put MongoS URI string here/admin?authSource=admin") //Provide the URI of the MongoS with tempUser for authentication/authorization.

>db.getSiblingDB("admin").system.roles.find().forEach(function(d) {

mongos.getDB('admin').getCollection('system.roles').insert(d)});

>db.getSiblingDB("admin").system.users.find().forEach(function(d) { mongos.getDB('admin').getCollection('system.users').insert(d)});

  1. Connect to any MongoS and verify the copied users on it.
  2. Shard the database if the shard key is finalized (we are not covering shard key selection in this post, as it is about the migration of a replica set to a sharded cluster only).

Shard the database:

>sh.enableSharding("<db>")

Shard the collection with hash-based shard key:

>sh.shardCollection("<db>.<coll1>", { <shard key field> : "hashed" } )

Shard the collection with range based shard key:

sh.shardCollection("<db>.<coll1>", { <shard key field> : 1, ... } )

Conclusion

Migrating a MongoDB replica set to a sharded cluster lets you scale horizontally, increase read/write throughput, and reduce the number of operations each shard manages.

We encourage you to try our products like Percona Server for MongoDB, Percona Backup for MongoDB, or Percona Operator for MongoDB. You can also visit our site to know “Why MongoDB Runs Better with Percona”.

May 27, 2022

Physical Backup Support in Percona Backup for MongoDB

Percona Backup for MongoDB (PBM) is a backup utility custom-built by Percona to help solve the needs of customers and users who don’t want to pay for proprietary software like MongoDB Enterprise and Ops Manager but want a fully-supported open-source backup tool that can perform cluster-wide consistent backups in MongoDB.

Version 1.7.0 of PBM was released in April 2022 and comes with a technical preview of physical backup functionality. This functionality enables users to benefit from the reduced recovery time. 

With logical backups, you extract the data from the database and store it in some binary or text format, and for recovery, you write all this data back to the database. For huge data sets, this is a time-consuming operation and might take hours or days. Physical backups take all the files that belong to the database from the disk itself, and recovery is just putting these files back. Such a recovery is much faster as it does not depend on database performance at all.

In this blog post, you will learn about the architecture of the physical backups feature in PBM, see how fast it is compared to logical backups, and try it out yourself.

Tech Peek

Architecture Review

In general, physical backup means a copy of a database's physical files. In the case of Percona Server for MongoDB (PSMDB), these are the WiredTiger *.wt Btree, config, metadata, and journal files you can usually find in /data/db. The trick is to copy all those files without stopping the cluster and interrupting running operations, and to be sure that the data in the files is consistent and no data will be changed during copying. Another challenge is to achieve consistency in a sharded cluster; in other words, how to be sure that we are going to be able to restore data to the same cluster time across all shards.

PBM's physical backups are based on the backupCursors feature of PSMDB. This implies that to use this feature, you should use Percona Server for MongoDB.

Backup

On each replica set, PBM uses $backupCursor to retrieve the list of files that need to be copied for the backup. With that list, the next step is to ensure cluster-wide consistency. For that, each replica set posts the cluster time of the latest observed operation, and the backup leader picks the most recent one. This becomes the common backup timestamp (recovery timestamp), saved as last_write_ts in the backup metadata. After agreeing on the backup time, the pbm-agent on each replica set opens a $backupCursorExtend. The cursor returns its result only after the node reaches the given timestamp, so the returned list of logs (journals) will contain the "common backup timestamp". At that point, we have a list of all files that have to be in the backup. Each node copies them to the storage, saves the metadata, closes the cursors, and calls it a backup. Here is a blog post explaining Backup Cursors in great detail.

Of course, PBM does a lot more behind the scenes starting from electing appropriate nodes for the backup, coordinating operations across the cluster, logging, error handling, and many more. But all these subjects are for other posts.

Backup’s Recovery Timestamp

Restoring any backup, PBM returns the cluster to some particular point in time. Here we are talking about time not in terms of wall clock time but of MongoDB's cluster-wide logical clock, so that point in time is consistent across all nodes and replica sets in a cluster. In the case of a logical or physical backup, that time is reflected in the complete section of the pbm list or pbm status outputs. E.g.:

    2022-04-19T15:36:14Z 22.29GB <physical> [complete: 2022-04-19T15:36:16]
    2022-04-19T14:48:40Z 10.03GB <logical> [complete: 2022-04-19T14:58:38]

This time is not the time when the backup finished, but the time at which the cluster state was captured (hence the time the cluster will be returned to after the restore). In PBM's logical backups, the recovery timestamp tends to be closer to the backup finish: to define it, PBM has to wait until the snapshot on all replica sets finishes, and then it starts oplog capturing from the backup start up to that time. With physical backups, PBM picks a recovery timestamp right after the backup starts. Holding the backup cursor open guarantees the checkpoint data won't change during the backup, so PBM can define the complete-time right away.

Restore

There are a few considerations for restoration.

First of all, files in the backup may contain operations beyond the target time (commonBackupTimestamp). To deal with that, PBM uses a special function of the replication subsystem's startup process to set the limit of the oplog being restored. It is done by setting the oplogTruncateAfterPoint value in the local DB's replset.oplogTruncateAfterPoint collection.

Along with the oplogTruncateAfterPoint, the database needs some other changes and clean-up before the start. This requires a series of restarts of PSMDB in standalone mode, which in turn brings some hassle to the PBM operation. To communicate and coordinate its work across all agents, PBM relies on PSMDB itself. But once the cluster is taken down, PBM has to switch to communication via the storage. Also, during standalone runs, PBM is unable to store its logs in the database. Hence, at some point during restore, pbm-agent logs are available only in the agents' stderr, and pbm logs won't have access to them. We're planning to solve this problem by the physical backups GA.

Also, we had to decide on the restore strategy within a replica set. One way is to restore one node, then delete all data on the rest and let PSMDB replication do the job. Although it's a bit easier, it means that until the InitialSync finishes, the cluster will be of little use. Besides, logical replication at this stage almost negates all the speed benefits (more on that later) that the physical restore brings to the table. So we went with restoring each node in a replica set, and making sure that after the cluster starts, no node will spot any difference and start a ReSync.

As with PBM's logical backups, the physical ones currently can be restored only to a cluster with the same topology, meaning the replica set names in the backup and the target cluster should match. This won't be an issue for logical backups starting from the next PBM version, and later this feature will be extended to physical backups as well. Along with that, the number of replica sets in the cluster can be greater than in the backup, but not vice versa: all data in the backup must be restored.

Performance Review

We used the following setup:

  • Cluster: 3-node replica set. Each mongod+pbm-agent on Digital Ocean droplet: 16GB, 8vCPU (CPU optimized).
  • Storage: nyc3.digitaloceanspaces.com
  • Data: randomly generated, ~1MB documents

Figure: physical backup MongoDB

In general, a logical backup is more beneficial on small databases (a few hundred megabytes), since at that scale the extra overhead of physical files still makes a difference. Reading/writing only user data during a logical backup means less data needs to be transferred over the network. But as the database grows, the overhead of logical reads (selects) and especially writes (inserts) becomes a bottleneck for logical backups. As for physical backups, the speed is almost always bounded only by the network bandwidth to/from the remote storage. In our tests, restoration time from physical backups has a linear dependency on the dataset size, whereas logical restoration time grows non-linearly: the more data you have, the longer it takes to replay all the data and rebuild indexes. For example, for a 600GB dataset, physical restore took 5x less time compared to logical.

But at a small DB size, the difference is negligible, a couple of minutes, so the main benefit of logical backups lies beyond performance: it is flexibility. Logical backups allow partial backup/restore of the database (on the roadmap for PBM); you can choose particular databases and/or collections to work with. As physical backups work directly with the database storage-engine files, they operate in an all-or-nothing frame.

Hands-on

PBM Configuration

In order to start using PBM with PSMDB or MongoDB, install all the necessary packages according to the installation instructions. Please note that starting from version 1.7.0, the user running the pbm-agent process should also have read/write access to the PSMDB data directory for the purpose of performing operations with datafiles during physical backup or restore.

Considering the design, starting from 1.7.0 the default user for pbm-agent is changed from pbm to mongod. So unless PSMDB runs under a different user than mongod, no extra actions are required. Otherwise, please carefully re-check your configuration and provide the necessary permissions to ensure proper PBM functioning.
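A quick way to verify this on a node is to check which users the processes run under and whether that user can reach the dbPath. This is a sketch assuming the default mongod user and a dbPath of /var/lib/mongodb; adjust both to your setup:

ps -o user=,comm= -C mongod,pbm-agent        # which users run mongod and pbm-agent
sudo -u mongod ls -ld /var/lib/mongodb       # can the agent's user access the dbPath?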

In addition, keep in mind that to use PBM physical backups, you should run Percona Server for MongoDB versions 4.2.15-16 or 4.4.6-8 and higher, which is where hotBackups and backup cursors were introduced.

Creating a Backup

With the new PBM version, you can specify what type of backup you wish to make: physical or logical. By default when no type is selected, PBM makes a logical backup.

> pbm backup
Starting backup '2022-04-20T11:12:53Z'....
Backup '2022-04-20T11:12:53Z' to remote store 's3://https://storage.googleapis.com/pbm-bucket' has started

> pbm backup -t physical
Starting backup '2022-04-20T12:34:06Z'....
Backup '2022-04-20T12:34:06Z' to remote store 's3://https://storage.googleapis.com/pbm-bucket' has started

> pbm status -s cluster -s backups
Cluster:
========
rs0:
  - rs0/mongo1.perconatest.com:27017: pbm-agent v1.7.0 OK
  - rs0/mongo2.perconatest.com:27017: pbm-agent v1.7.0 OK
  - rs0/mongo3.perconatest.com:27017: pbm-agent v1.7.0 OK
Backups:
========
S3 us-east-1 s3://https://storage.googleapis.com/pbm-bucket
  Snapshots:
    2022-04-20T12:34:06Z 797.38KB <physical> [complete: 2022-04-20T12:34:09]
    2022-04-20T11:12:53Z 13.66KB <logical> [complete: 2022-04-20T11:12:58]

Point-in-Time Recovery

Point-in-Time Recovery is currently supported only for logical backups. It means that a logical backup snapshot is required for pbm-agent to start periodically saving consecutive slices of the oplog. You can still make a physical backup while PITR is enabled; it won't break or change the oplog saving process.

The restoration process to the specific point in time will also use a respective logical backup snapshot and oplog slices which will be replayed on top of the backup.

Checking the Logs

During a physical backup, PBM logs are available via the pbm logs command, as with all other operations.

> pbm logs -e backup/2022-04-20T12:34:06Z
2022-04-20T12:34:07Z I [rs0/mongo2.perconatest.com:27017] [backup/2022-04-20T12:34:06Z] backup started
2022-04-20T12:34:12Z I [rs0/mongo2.perconatest.com:27017] [backup/2022-04-20T12:34:06Z] uploading files
2022-04-20T12:34:54Z I [rs0/mongo2.perconatest.com:27017] [backup/2022-04-20T12:34:06Z] uploading done
2022-04-20T12:34:56Z I [rs0/mongo2.perconatest.com:27017] [backup/2022-04-20T12:34:06Z] backup finished

As for restore, the pbm logs command doesn't provide information about restores from a physical backup. This is caused by peculiarities of the restore procedure and will be improved in upcoming PBM versions. However, pbm-agent still saves the log locally, so it's possible to check information about the restore process on each node:

> sudo journalctl -u pbm-agent.service | grep restore
pbm-agent[12560]: 2022-04-20T19:37:56.000+0000 I [restore/2022-04-20T12:34:06Z] restore started
.......
pbm-agent[12560]: 2022-04-20T19:38:22.000+0000 I [restore/2022-04-20T12:34:06Z] copying backup data
.......
pbm-agent[12560]: 2022-04-20T19:38:39.000+0000 I [restore/2022-04-20T12:34:06Z] preparing data
.......
pbm-agent[12560]: 2022-04-20T19:39:12.000+0000 I [restore/2022-04-20T12:34:06Z] restore finished <nil>
pbm-agent[12560]: 2022-04-20T19:39:12.000+0000 I [restore/2022-04-20T12:34:06Z] restore finished successfully

Restoring from a Backup

The restore process from a physical backup is similar to a logical one but requires several extra steps after the restore is finished by PBM.

> pbm restore 2022-04-20T12:34:06Z
Starting restore from '2022-04-20T12:34:06Z'.....Restore of the snapshot from '2022-04-20T12:34:06Z' has started. Leader: mongo1.perconatest.com:27017/rs0

After starting the restore process, pbm CLI returns the leader node ID, so it's possible to track the restore progress by checking the logs of the pbm-agent leader. In addition, the status is written to a metadata file created on the remote storage. The status file is created in the root of the storage path and has the format .pbm.restore/<restore_timestamp>.json. As an option, it's also possible to pass the -w flag during restore, which will block the current shell session and wait for the restore to finish:

> pbm restore 2022-04-20T12:34:06Z -w
Starting restore from '2022-04-20T12:34:06Z'....Started physical restore. Leader: mongo2.perconatest.com:27017/rs0
Waiting to finish...........................Restore successfully finished!

After the restore is complete, it’s required to perform the following steps:

  • Restart all mongod (and mongos, if present) nodes
  • Restart all pbm-agents
  • Run the following command to resync the backup list with the storage:

    $ pbm config --force-resync
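A minimal sketch of these steps, assuming systemd units named mongod and pbm-agent (adjust the unit names and node list to your own deployment):

sudo systemctl restart mongod        # on every mongod node (and mongos, if present)
sudo systemctl restart pbm-agent     # on every node running a pbm-agent
pbm config --force-resync            # then resync the backup list with the storage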

Conclusion

MongoDB allows users to store enormous amounts of data. Especially if we talk about sharded clusters, where users are not limited by a single storage volume size limit. Database administrators often have to implement various home-grown solutions to ensure timely backups and restores of such big clusters. The usual approach is a storage-level snapshot. Such solutions do not guarantee data consistency and provide false confidence that data is safe.

Percona Backup for MongoDB with physical backup and restore capabilities enables users to back up and restore data fast and, at the same time, comes with data-consistency guarantees.

Physical Backup functionality is in the Technical Preview stage. We encourage you to read more about it in our documentation and try it out. In case you face any issues feel free to contact us on the forum or raise the JIRA issue.

May 24, 2022

Store and Manage Logs of Percona Operator Pods with PMM and Grafana Loki

While it is convenient to view the log of MySQL or MongoDB pods with kubectl logs, sometimes the log is purged when the pod is deleted, which makes searching historical logs a bit difficult. Grafana Loki, a log aggregation tool from Grafana, can be installed in an existing Kubernetes environment to help store historical logs and build a comprehensive logging stack.

What Is Grafana Loki

Grafana Loki is a set of tools that can be integrated to provide a comprehensive logging stack solution. Grafana-Loki-Promtail is the major component of this solution: the Promtail agent collects and ships logs to the Loki datastore, and they are then visualized with Grafana. While collecting the logs, Promtail can label, convert, and filter them before sending. Next, Loki receives the logs and indexes their metadata. At this point, the logs are ready to be visualized in Grafana, and the administrator can use Grafana and Loki's query language, LogQL, to explore them.

 

     Grafana Loki

Installing Grafana Loki in Kubernetes Environment

We will use the official helm chart to install Loki. The Loki stack helm chart supports the installation of various components like promtail, fluentd, Prometheus, and Grafana.

$ helm repo add grafana https://grafana.github.io/helm-charts
$ helm repo update
$ helm search repo grafana

NAME                                CHART VERSION APP VERSION DESCRIPTION
grafana/grafana                     6.29.2       8.5.0      The leading tool for querying and visualizing t...
grafana/grafana-agent-operator      0.1.11       0.24.1     A Helm chart for Grafana Agent Operator
grafana/fluent-bit                  2.3.1        v2.1.0     Uses fluent-bit Loki go plugin for gathering lo...
grafana/loki                        2.11.1       v2.5.0     Loki: like Prometheus, but for logs.
grafana/loki-canary                 0.8.0        2.5.0      Helm chart for Grafana Loki Canary
grafana/loki-distributed            0.48.3       2.5.0      Helm chart for Grafana Loki in microservices mode
grafana/loki-simple-scalable        1.0.0        2.5.0      Helm chart for Grafana Loki in simple, scalable...
grafana/loki-stack                  2.6.4        v2.4.2     Loki: like Prometheus, but for logs.

For a quick introduction, we will install only loki-stack and promtail. We will use the Grafana pod that is deployed by Percona Monitoring and Management to visualize the logs.

$ helm install loki-stack grafana/loki-stack --create-namespace --namespace loki-stack --set promtail.enabled=true,loki.persistence.enabled=true,loki.persistence.size=5Gi

Let’s see what has been installed:

$ kubectl get all -n loki-stack
NAME                            READY   STATUS    RESTARTS   AGE
pod/loki-stack-promtail-xqsnl   1/1     Running   0          85s
pod/loki-stack-promtail-lt7pd   1/1     Running   0          85s
pod/loki-stack-promtail-fch2x   1/1     Running   0          85s
pod/loki-stack-promtail-94rcp   1/1     Running   0          85s
pod/loki-stack-0                1/1     Running   0          85s

NAME                          TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
service/loki-stack-headless   ClusterIP   None           <none>        3100/TCP   85s
service/loki-stack            ClusterIP   10.43.24.113   <none>        3100/TCP   85s

NAME                                 DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/loki-stack-promtail   4         4         4       4            4           <none>          85s

NAME                          READY   AGE
statefulset.apps/loki-stack   1/1     85s

The Promtail and loki-stack pods have been created, together with a service that loki-stack will use to publish the logs to Grafana for visualization.

You can see that the promtail pods are deployed by a daemonset, which spawns a promtail pod on every node. This makes sure the logs from every pod on all the nodes are collected and shipped to the loki-stack pod for centralized storage and management.

$ kubectl get pods -n loki-stack -o wide
NAME                        READY   STATUS    RESTARTS   AGE    IP           NODE                 NOMINATED NODE   READINESS GATES
loki-stack-promtail-xqsnl   1/1     Running   0          2m7s   10.42.0.17   phong-dinh-default   <none>           <none>
loki-stack-promtail-lt7pd   1/1     Running   0          2m7s   10.42.1.12   phong-dinh-node1     <none>           <none>
loki-stack-promtail-fch2x   1/1     Running   0          2m7s   10.42.2.13   phong-dinh-node2     <none>           <none>
loki-stack-promtail-94rcp   1/1     Running   0          2m7s   10.42.3.11   phong-dinh-node3     <none>           <none>
loki-stack-0                1/1     Running   0          2m7s   10.42.0.19   phong-dinh-default   <none>           <none>

Integrating Loki With PMM Grafana

Next, we will add Loki as a data source of PMM Grafana, so we can use PMM Grafana to visualize the logs. You can do it from the GUI or with the kubectl CLI.

Below are the steps to add the data source from the PMM GUI:

Navigate to https://<PMM-IP-address>:9443/graph/datasources, then select Add data source.

Then select Loki.

Next, in the Settings, in the HTTP URL box, enter the DNS record of the loki-stack service.

You can also use the below command to add the data source. Make sure to specify the name of the PMM pod; in this command, monitoring-0 is the PMM pod.

$ kubectl -n default exec -it monitoring-0 -- bash -c "curl 'http://admin:verysecretpassword@127.0.0.1:3000/api/datasources' -X POST -H 'Content-Type: application/json;charset=UTF-8' --data-binary '{ \"orgId\": 1, \"name\": \"Loki\", \"type\": \"loki\", \"typeLogoUrl\": \"\", \"access\": \"proxy\", \"url\": \"http://loki-stack.loki-stack.svc.cluster.local:3100\", \"password\": \"\", \"user\": \"\", \"database\": \"\", \"basicAuth\": false, \"basicAuthUser\": \"\", \"basicAuthPassword\": \"\", \"withCredentials\": false, \"isDefault\": false, \"jsonData\": {}, \"secureJsonFields\": {}, \"version\": 1, \"readOnly\": false }'"

Exploring the Logs in PMM Grafana

Now, you can explore the pod logs in the Grafana UI: navigate to Explore and select Loki in the dropdown list as the data source:


In the Log Browser, you can select the appropriate labels to form your first LogQL query. For example, I select the following attributes:

{app="percona-xtradb-cluster", component="pxc", container="logs", pod="cluster1-pxc-1"}

Click Show logs and you can see all the logs of the cluster1-pxc-1 pod.

You can perform a simple filter with |= "message-content". For example, filtering all the messages related to State transfer:

{app="percona-xtradb-cluster", component="pxc", container="logs", pod="cluster1-pxc-1"} |= "State transfer"

Conclusion

Deploying Grafana Loki in the Kubernetes environment is feasible and straightforward. Grafana Loki can be integrated easily with Percona Monitoring and Management to provide both centralized logging and comprehensive monitoring when running Percona XtraDB Cluster and Percona Server for MongoDB in Kubernetes.

It would be interesting to know if you have any thoughts while reading this; please share your comments.

May 16, 2022

Running Rocket.Chat with Percona Server for MongoDB on Kubernetes

Our goal is to have a Rocket.Chat deployment which uses a highly available Percona Server for MongoDB cluster as the backend database, with everything running on Kubernetes. To get there, we will do the following:

  • Start a Google Kubernetes Engine (GKE) cluster across multiple availability zones. It can be any other Kubernetes flavor or service, but I rely on multi-AZ capability in this blog post.
  • Deploy Percona Operator for MongoDB and database cluster with it
  • Deploy Rocket.Chat with specific affinity rules
    • Rocket.Chat will be exposed via a load balancer


Percona Operator for MongoDB, compared to other solutions, is not only the most feature-rich but also comes with various management capabilities for your MongoDB clusters – backups, scaling (including sharding), zero-downtime upgrades, and many more. There are no hidden costs and it is truly open source.

This blog post is a walkthrough of running a production-grade deployment of Rocket.Chat with Percona Operator for MongoDB.

Rock’n’Roll

All YAML manifests that I use in this blog post can be found in this repository.

Deploy Kubernetes Cluster

The following command deploys a GKE cluster named percona-rocket in 3 availability zones:

gcloud container clusters create --zone us-central1-a --node-locations us-central1-a,us-central1-b,us-central1-c percona-rocket --cluster-version 1.21 --machine-type n1-standard-4 --preemptible --num-nodes=3

Read more about this in the documentation.

Deploy MongoDB

I’m going to use helm to deploy the Operator and the cluster.

Add helm repository:

helm repo add percona https://percona.github.io/percona-helm-charts/

Install the Operator into the percona namespace:

helm install psmdb-operator percona/psmdb-operator --create-namespace --namespace percona

Deploy the cluster of Percona Server for MongoDB nodes:

helm install my-db percona/psmdb-db -f psmdb-values.yaml -n percona

Replica set nodes are going to be distributed across availability zones. To get there, I altered the affinity keys in the corresponding sections of psmdb-values.yaml:

antiAffinityTopologyKey: "topology.kubernetes.io/zone"

Prepare MongoDB

For Rocket.Chat to connect to our database cluster, we need to create the users. By default, clusters provisioned with our Operator have a userAdmin user; its password is set in psmdb-values.yaml:

MONGODB_USER_ADMIN_PASSWORD: userAdmin123456

For production-grade systems, do not forget to change this password or create dedicated secrets to provision those. Read more about user management in our documentation.
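A minimal sketch of rolling out such a change, assuming you keep the passwords in psmdb-values.yaml and reuse the release name and namespace from the install commands above:

# After editing MONGODB_USER_ADMIN_PASSWORD (and the other user passwords) in psmdb-values.yaml:
helm upgrade my-db percona/psmdb-db -f psmdb-values.yaml -n percona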

Spin up a client Pod to connect to the database:

kubectl run -i --rm --tty percona-client1 --image=percona/percona-server-mongodb:4.4.10-11 --restart=Never -- bash -il

Connect to the database with userAdmin:

[mongodb@percona-client1 /]$ mongo "mongodb://userAdmin:userAdmin123456@my-db-psmdb-db-rs0-0.percona/admin?replicaSet=rs0"

We are going to create the following:

  • rocketchat database
  • rocketChat user to store data and connect to the database
  • oplogger user to provide access to the oplog for Rocket.Chat
    • Rocket.Chat uses Meteor oplog tailing to improve performance. It is optional.

use rocketchat
db.createUser({
  user: "rocketChat",
  pwd: passwordPrompt(),
  roles: [
    { role: "readWrite", db: "rocketchat" }
  ]
})

use admin
db.createUser({
  user: "oplogger",
  pwd: passwordPrompt(),
  roles: [
    { role: "read", db: "local" }
  ]
})
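Rocket.Chat will consume these users through two connection strings: one for the rocketchat database and one for the oplog. Below is a hedged sketch of what they might look like, assuming the replica set hostnames follow the pattern used in the connection step above; the exact keys to set in rocket-values.yaml depend on the chart version:

# Hypothetical connection strings for the users created above (passwords are placeholders):
MONGO_URL="mongodb://rocketChat:<password>@my-db-psmdb-db-rs0-0.percona,my-db-psmdb-db-rs0-1.percona,my-db-psmdb-db-rs0-2.percona/rocketchat?replicaSet=rs0"
MONGO_OPLOG_URL="mongodb://oplogger:<password>@my-db-psmdb-db-rs0-0.percona,my-db-psmdb-db-rs0-1.percona,my-db-psmdb-db-rs0-2.percona/local?replicaSet=rs0&authSource=admin"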

Deploy Rocket.Chat

I will use helm here to maintain the same approach. 

helm install -f rocket-values.yaml my-rocketchat rocketchat/rocketchat --version 3.0.0

You can find rocket-values.yaml in the same repository. Please make sure you set the correct passwords in the corresponding YAML fields.

As you can see, I also do the following:

  • Line 11: expose Rocket.Chat through a LoadBalancer service type
  • Lines 13-14: set the number of replicas of Rocket.Chat Pods. We want three – one per availability zone.
  • Lines 16-23: set affinity to distribute Pods across availability zones

Load Balancer will be created with a public IP address:

$ kubectl get service my-rocketchat-rocketchat
NAME                       TYPE           CLUSTER-IP    EXTERNAL-IP    PORT(S)        AGE
my-rocketchat-rocketchat   LoadBalancer   10.32.17.26   34.68.238.56   80:32548/TCP   12m

You should now be able to connect to 34.68.238.56 and enjoy your highly available Rocket.Chat installation.

Rocket.Chat installation

Clean Up

Uninstall all helm charts to remove MongoDB cluster, the Operator, and Rocket.Chat:

helm uninstall my-rocketchat
helm uninstall my-db -n percona
helm uninstall psmdb-operator -n percona

Things to Consider

Ingress

Instead of exposing Rocket.Chat through a load balancer, you may also try ingress. By doing so, you can integrate it with cert-manager and have a valid TLS certificate for your chat server.
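A rough sketch of such an Ingress is below. It assumes an ingress controller is already installed, a cert-manager ClusterIssuer named letsencrypt exists, and chat.example.com is your DNS name; the Service name and port match the LoadBalancer example above:

kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rocketchat
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt
spec:
  tls:
  - hosts:
    - chat.example.com
    secretName: rocketchat-tls
  rules:
  - host: chat.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-rocketchat-rocketchat
            port:
              number: 80
EOF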

Mongos

It is also possible to run a sharded MongoDB cluster with Percona Operator. If you do so, Rocket.Chat will connect to the mongos Service instead of the replica set nodes. But you will still need to connect to the replica set directly to get the oplog.
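As a sketch, you can find the mongos Service created by the Operator and point the Rocket.Chat connection string at it (the Service name below follows the usual <cluster-name>-mongos pattern and is an assumption – verify it in your namespace):

# List the mongos Service in the namespace used in this post
kubectl get service -n percona | grep mongos
# The data connection string would then target mongos instead of the replica set members, e.g.:
# mongodb://rocketChat:<password>@my-db-psmdb-db-mongos.percona.svc.cluster.local/rocketchat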

Conclusion

We encourage you to try out Percona Operator for MongoDB with Rocket.Chat and let us know on our community forum your results.

There is always room for improvement and a time to find a better way. Please let us know if you face any issues or want to contribute your ideas to Percona products. You can do that on the Community Forum or JIRA. Read more about contribution guidelines for Percona Operator for MongoDB in CONTRIBUTING.md.

Percona Operator for MongoDB contains everything you need to quickly and consistently deploy and scale Percona Server for MongoDB instances into a Kubernetes cluster on-premises or in the cloud. The Operator enables you to improve time to market with the ability to quickly deploy standardized and repeatable database environments. Deploy your database with a consistent and idempotent result no matter where they are used.

May
12
2022
--

Percona Operator for MongoDB and Kubernetes MCS: The Story of One Improvement

Percona Operator for MongoDB and Kubernetes MCS

Percona Operator for MongoDB supports multi-cluster or cross-site replication deployments since version 1.10. This functionality is extremely useful if you want to have a disaster recovery deployment or perform a migration from or to a MongoDB cluster running in Kubernetes. In a nutshell, it allows you to use Operators deployed in different Kubernetes clusters to manage and expand replica sets.

Percona Kubernetes Operator

For example, you have two Kubernetes clusters: one in Region A, another in Region B.

  • In Region A you deploy your MongoDB cluster with Percona Operator. 
  • In Region B you deploy unmanaged MongoDB nodes with another installation of Percona Operator.
  • You configure both Operators, so that nodes in Region B are added to the replica set in Region A.

In case of failure of Region A, you can switch your traffic to Region B.

Migrating MongoDB to Kubernetes describes the migration process using this functionality of the Operator.

This feature was released as a tech preview, and we received lots of positive feedback from our users. But one of our customers raised an internal ticket pointing out that the cross-site replication functionality does not work with Multi-Cluster Services. This started the investigation and the creation of this ticket – K8SPSMDB-625.

This blog post will go into the deep roots of this story and how it is solved in the latest release of Percona Operator for MongoDB version 1.12.

The Problem

Multi-Cluster Services, or MCS, allows you to expand network boundaries for the Kubernetes cluster and share Service objects across these boundaries. Some call it another take on Kubernetes Federation. This feature is already available on some managed Kubernetes offerings, such as Google Kubernetes Engine (GKE) and AWS Elastic Kubernetes Service (EKS). Submariner uses the same logic and primitives under the hood.

MCS Basics

To understand the problem, we need to understand how multi-cluster services work. Let’s take a look at the picture below:

multi-cluster services

  • We have two Pods in different Kubernetes clusters
  • We add these two clusters into our MCS domain
  • Each Pod has a service and IP-address which is unique to the Kubernetes cluster
  • MCS introduces new Custom Resources – ServiceImport and ServiceExport.
    • Once you create a ServiceExport object in one cluster, a ServiceImport object appears in all clusters in your MCS domain.
    • This ServiceImport object is in the svc.clusterset.local domain and, with the network magic introduced by MCS, can be accessed from any cluster in the MCS domain.

This means that if I have an application in the Kubernetes cluster in Region A, I can connect to a Pod in the Kubernetes cluster in Region B through a domain name like my-pod.<namespace>.svc.clusterset.local. And it works from the other cluster as well.

MCS and Replica Set

Here is how cross-site replication works with Percona Operator if you use load balancer:

MCS and Replica Set

All replica set nodes have a dedicated service and a load balancer. A replica set in the MongoDB cluster is formed using these public IP addresses. An external node is added using a public IP address as well:

  replsets:
  - name: rs0
    size: 3
    externalNodes:
    - host: 123.45.67.89

All nodes can reach each other, which is required to form a healthy replica set.

Here is how it looks when you have clusters connected through multi-cluster service:

Instead of load balancers, replica set nodes are exposed through Cluster IPs. We have ServiceExport and ServiceImport resources. Everything looks good at the networking level and it should work, but it does not.

The problem is in the way the Operator builds the MongoDB replica set in Region A. To register an external node from Region B in the replica set, we will use the MCS domain name in the corresponding section:

  replsets:
  - name: rs0
    size: 3
    externalNodes:
    - host: rs0-4.mongo.svc.clusterset.local

Now our rs.status() will look like this:

"name" : "my-cluster-rs0-0.mongo.svc.cluster.local:27017"
"role" : "PRIMARY"
...
"name" : "my-cluster-rs0-1.mongo.svc.cluster.local:27017"
"role" : "SECONDARY"
...
"name" : "my-cluster-rs0-2.mongo.svc.cluster.local:27017"
"role" : "SECONDARY"
...
"name" : "rs0-4.mongo.svc.clusterset.local:27017"
"role" : "UNKNOWN"

As you can see, the Operator formed a replica set out of three nodes using the svc.cluster.local domain, as that is how it should be done when you expose nodes with the ClusterIP Service type. In this case, a node in Region B cannot reach any node in Region A, as it tries to connect to a domain that is local to the cluster in Region A.

In the picture below, you can easily see where the problem is:

The Solution

Luckily, we have a Special Interest Group (SIG), a Kubernetes Enhancement Proposal (KEP), and multiple implementations for enabling Multi-Cluster Services. Having a KEP is great since we can be sure the implementations from different providers (e.g., GCP, AWS) will follow the same standard, more or less.

There are two fields in the Custom Resource that control MCS in the Operator:

spec:
  multiCluster:
    enabled: true
    DNSSuffix: svc.clusterset.local
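For a cluster that is already running, the same fields can be set with a patch; a sketch, using cluster1 as the custom resource name (it matches the examples later in this post):

kubectl patch psmdb cluster1 --type=merge -p '{"spec":{"multiCluster":{"enabled":true,"DNSSuffix":"svc.clusterset.local"}}}'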

Let’s see what is happening in the background with these flags set.

ServiceImport and ServiceExport Objects

Once you enable MCS by patching the CR with spec.multiCluster.enabled: true, the Operator creates a ServiceExport object for each service. These ServiceExports will be detected by the MCS controller in the cluster, and eventually a ServiceImport for each ServiceExport will be created in the same namespace in each cluster that has MCS enabled.
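You can check the result with the MCS resources themselves; for example, assuming the cluster runs in the psmdb namespace used in the log samples below:

kubectl get serviceexports,serviceimports -n psmdb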

As you can see, we made a decision to empower the Operator to create ServiceExport objects. There are two main reasons for doing that:

  • If any infrastructure-as-code tool is used, it would require additional logic and an extra level of complexity to automate the creation of the required MCS objects. If the Operator takes care of it, no additional work is needed.
  • Our Operators take care of the infrastructure for the database, including Service objects. It just felt logical to expand the reach of this functionality to MCS.

Replica Set and Transport Encryption

The root cause of the problem that we are trying to solve here lies in networking: external replica set nodes try to connect to the wrong domain names. Now, when you enable multi-cluster and set DNSSuffix (it defaults to svc.clusterset.local), the Operator does the following:

  • The replica set is formed using the MCS domain set in DNSSuffix
  • The Operator generates TLS certificates as usual, but adds the DNSSuffix domains into the picture

With this approach, the traffic between nodes flows as expected and is encrypted by default.

Replica Set and Transport Encryption

Things to Consider

MCS APIs

Please note that the operator won’t install MCS APIs and controllers to your Kubernetes cluster. You need to install them by following your provider’s instructions prior to enabling MCS for your PSMDB clusters. See our docs for links to different providers.

The Operator detects whether MCS is installed in the cluster by checking the available API resources. The detection happens before the controllers are started in the Operator. If you installed the MCS APIs while the Operator was running, you need to restart the Operator. Otherwise, you’ll see an error like this:

{
  "level": "error",
  "ts": 1652083068.5910048,
  "logger": "controller.psmdb-controller",
  "msg": "Reconciler error",
  "name": "cluster1",
  "namespace": "psmdb",
  "error": "wrong psmdb options: MCS is not available on this cluster",
  "errorVerbose": "...",
  "stacktrace": "..."
}
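A restart can be as simple as rolling the operator Deployment; a sketch, assuming the default Deployment name from the bundle (adjust to your installation and namespace):

kubectl rollout restart deployment percona-server-mongodb-operator -n psmdb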

ServiceImport Provisioning Time

It might take some time for ServiceImport objects to be created in the Kubernetes cluster. You can see the following messages in the logs while creation is in progress:

{
  "level": "info",
  "ts": 1652083323.483056,
  "logger": "controller_psmdb",
  "msg": "waiting for service import",
  "replset": "rs0",
  "serviceExport": "cluster1-rs0"
}

During testing, we saw wait times of up to 10-15 minutes. If you see that your cluster is stuck in the initializing state waiting for service imports, it’s a good idea to check the usage and quotas for your environment.

DNSSuffix

We also made a decision to automatically generate TLS certificates for the Percona Server for MongoDB cluster with the *.clusterset.local domain, even if MCS is not enabled. This approach simplifies the process of enabling MCS for a running MongoDB cluster. It does not make much sense to change the DNSSuffix field unless you have hard requirements from your service provider, but we still allow such a change.

If you want to enable MCS with a cluster deployed with an operator version below 1.12, you need to update your TLS certificates to include *.clusterset.local SANs. See the docs for instructions.

Conclusion

Business relies on applications and infrastructure that serves them more than ever nowadays. Disaster Recovery protocols and various failsafe mechanisms are routine for reliability engineers, not an annoying task in the backlog. 

With multi-cluster deployment functionality in Percona Operator for MongoDB, we want to equip users to build highly available and secured database clusters with minimal effort.

Percona Operator for MongoDB is truly open source and provides users with a way to deploy and manage their enterprise-grade MongoDB clusters on Kubernetes. We encourage you to try this new Multi-Cluster Services integration and let us know your results on our community forum. You can find some scripts that would help you provision your first MCS clusters on GKE or EKS here.

There is always room for improvement and a time to find a better way. Please let us know if you face any issues or want to contribute your ideas to Percona products. You can do that on the Community Forum or JIRA. Read more about contribution guidelines for Percona Operator for MongoDB in CONTRIBUTING.md.

Mar
07
2022
--

Manage Your Data with Mingo.io and Percona Distribution for MongoDB Operator

Mingo.io and Percona Distribution for MongoDB Operator

Deploying MongoDB on Kubernetes has never been simpler with Percona Distribution for MongoDB Operator. It provides you with an enterprise-ready MongoDB cluster with no manual burden. In addition to that, you also get automated day-to-day operations – scaling, backups, upgrades.

But once you have your cluster up and running and have a good grasp on managing it, you still need to manage the data itself – collections, indexes, shards. We at Percona focus on infrastructure, data consistency, and the health of the cluster, but data is an integral part of any database and should be managed too. In this blog post, we will see how you can manage, browse, and query the data of a MongoDB cluster with mingo.io. Mingo.io is a tool that I discovered recently; it reminded me of PHPMyAdmin, but with a great user interface and feature set – and it is for MongoDB, not MySQL.

This post provides a step-by-step guide on how to deploy MongoDB locally, connect it with Mingo.io, and manage the data. We want to have a local setup so that anyone would be able to try it at home.

Deploy the Operator and the Database

For local installation, I will use microk8s and minimal cr.yaml for MongoDB, which deploys a one-node replica set with sharding. It is enough for the demo.

Start Kubernetes Cluster

As I use Ubuntu, I will install microk8s from snap:

# snap install microk8s --classic

We need to start microk8s and enable Domain Name Service (DNS) and storage support:

# microk8s start
# microk8s enable dns storage

The Kubernetes cluster is ready; let’s fetch the kubeconfig from it to use regular kubectl:

# microk8s config > /home/percona/.kube/config

MongoDB Up and Running

Install the Operator:

kubectl apply -f https://raw.githubusercontent.com/spron-in/blog-data/master/mingo/bundle.yaml

Deploy MongoDB itself:

kubectl apply -f https://raw.githubusercontent.com/spron-in/blog-data/master/mingo/cr-minimal.yaml

Connect to MongoDB

To connect to MongoDB we need a connection string with the login and password. In this example, we expose mongos with a NodePort service type. Find the Port to connect to:

$ kubectl get services
NAME                     TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)           AGE
…
minimal-cluster-mongos   NodePort    10.152.183.46   <none>        27017:32609/TCP   32m

You can connect to the MongoDB cluster using the IP address of your machine and port 32609. 
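If you are not sure which IP address to use, the node IP reported by kubectl is typically the right one for a NodePort service on a local microk8s setup:

kubectl get nodes -o wide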

To demonstrate the full power of Mingo, we will need to create a user first to manage the data:

Get userAdmin password

$ kubectl get secrets minimal-cluster -o yaml | awk '$1~/MONGODB_USER_ADMIN_PASSWORD/ {print $2}' | base64 --decode
4OnZ2RZ3SpRVtKbUxy

Run MongoDB client Pod

kubectl run -i --rm --tty percona-client --image=percona/percona-server-mongodb:4.4.10-11 --restart=Never -- bash -il

Connect to Mongo

mongo "mongodb://userAdmin:4OnZ2RZ3SpRVtKbUxy@minimal-cluster-mongos.default.svc.cluster.local/admin?ssl=false"

Create the user in mongo shell:

db.createUser( { user: "test", pwd: "mySuperpass", roles: [ "userAdminAnyDatabase", "readWriteAnyDatabase", "dbAdminAnyDatabase" ] } )

The connection string that I would use in Mingo would look like this:

mongodb://test:mySuperpass@10.10.3.4:32609

IP address and port most likely would be different for you.

Mingo at Work

MongoDB cluster is up and running. Let’s have Mingo installed and configured.

First, download the latest version of Mingo from https://mingo.io/. On the first opening after installation, you will be welcomed with a message about Mingo’s security.

Once you are prompted for your first MongoDB connection, paste your mongo URL into the field and submit. Now you are connected and ready to manage your data.

Mingo.io

Now, let’s try two examples of how Mingo works.

Example 1

Let’s assume you have a collection called “Blog” and you want to rename it to “Articles”. First, locate the collection in the sidebar, right-click it and select “Rename collection…” in the context menu. Type the new collection name and hit Enter. That’s it.

 

Example 2

Let’s assume you have a collection called “orders” with the field “shippedDate”, and you want to rename this field to shipmentDate in every document that was created during 2020. How would you do this in the old way? Would it be easy? Let’s see how you can do this in Mingo.

Instead of complicated queries with dates, we can use a simple shorthand { “OrderDate”: #2020 } and press Submit to see the results. Now just expand any document and right-click on shippedDate. From the context menu select “Rename Fields → Filtered documents…”, insert the correct name, and hit Rename.

Example 3

Let’s customize the way we see the “Orders” collection, for example.

First, click CMD+T (or CTRL+T on Windows), start typing the name of the collection in the Finder until you see it. Then, using the up and down arrow keys, select the collection and press Enter. Alternatively, you can click on the collection in the sidebar.

When the collection is shown, Mingo picks a couple of columns to show. To add another column, click on the + icon and select the field to add. You can rearrange the columns by dragging their headers. Click on the column header to see all the actions for a column and explore them.

Projects

On top of regular MongoDB connections, Mingo offers “Projects”. A project is like a wrapper on several connections to different databases, such as development, production, testing. Mingo treats these databases as siblings and simplifies routine tasks, such as synchronization, backups, shared settings for sibling collections and many more.

Have a Quick Glance at Mingo in Action

 

Conclusion

Deploying and managing MongoDB clusters on Kubernetes is still in its early phase. Percona is committed to delivering enterprise-grade solutions to deploy and manage databases on Kubernetes.

We are focused on infrastructure and data consistency, whereas Mingo.io empowers users to perform data management on any MongoDB cluster.

Percona Distribution for MongoDB Operator contains everything you need to quickly and consistently deploy and scale Percona Server for MongoDB instances into a Kubernetes cluster on-premises or in the cloud. The Operator enables you to improve time to market with the ability to quickly deploy standardized and repeatable database environments. Deploy your database with a consistent and idempotent result no matter where they are used.

Mingo

As NodeJS developers, heavily using MongoDB for our projects, we have long dreamed of a better MongoDB admin tool.

Existing solutions lacked usability and features to make browsing and querying documents enjoyable. They would easily analyze data, list documents, and build aggregations, but they felt awkward with simple everyday tasks, such as finding a document quickly, viewing its content in a nice layout, or doing common actions with one click. Since dreaming only gets you so far, we started working on a new MongoDB admin.

Our goal at Mingo is to create a MongoDB GUI with superb user experience, modern design, and productive features to speed up your work and make you fall in love with your data.

Feb
16
2022
--

Monitoring MongoDB Collection Stats with Percona Monitoring and Management

Monitoring MongoDB Collection Stats

One approach to get to know a MongoDB system we are not familiar with is to start by checking the busiest collections. MongoDB provides the top administrative command for this purpose.

From the mongo shell, we can run db.adminCommand(“top”) to get a snapshot of all the collections at a specific point in time:

...
		"test.testcol" : {
			"total" : {
				"time" : 17432,
				"count" : 58
			},
			"readLock" : {
				"time" : 358,
				"count" : 57
			},
			"writeLock" : {
				"time" : 17074,
				"count" : 1
			},
			"queries" : {
				"time" : 100,
				"count" : 1
			},
			"getmore" : {
				"time" : 0,
				"count" : 0
			},
			"insert" : {
				"time" : 17074,
				"count" : 1
			},
			"update" : {
				"time" : 0,
				"count" : 0
			},
			"remove" : {
				"time" : 0,
				"count" : 0
			},
			"commands" : {
				"time" : 0,
				"count" : 0
			}
		}
		...

In the extract above we can see some details about the testcol collection. For each operation type, we have the amount of server time spent (measured in microseconds), and a counter for the number of operations issued. The total section is simply the sum for all operation types against the collection.

In this case, I executed a single insert and then a find command to check the results, hence the counters are one for each of those operations.

So by storing samples of the above values in a set of metrics, we can easily see the trends and historical usage of a MongoDB instance.

Top Metrics in Percona Monitoring and Management (PMM)

PMM 2.26 includes updates to mongodb_exporter so we now have the ability to get metrics using the output of the MongoDB top command.

For example:

MongoDB Percona Monitoring and Management

As you can see, the metric names correlate with the output of the top command we saw. In this release, the db name and collection name are part of the metric name (this is subject to change in the future based on community feedback).

Collection and Index Stats

In addition to this, PMM 2.26 also includes the ability to gather database, collection, and index statistics. Using these metrics we can monitor collection counts, data growth, and index usage over time.

For example, we can collect the following information about a database:

PMM Collection and Index Stats

And here are the colstats metrics for the test.testcol collection:

PMM metrics

Note: By default, PMM won’t collect this information if you have more than 200 collections (this number is subject to change). The reason is to avoid too much overhead in metrics collection, which may adversely affect performance. You can override this behavior by using the --max-collections-limit option.

Enabling the Additional Collectors

The PMM client starts by default with only the diagnosticdata and replicasetstatus collectors enabled. We can run the PMM client with the --enable-all-collectors argument to make all the new metrics available. For example:

pmm-admin add mongodb --username=mongodb_exporter --password=percona --host=127.0.0.1 --port=27017 --enable-all-collectors

If we want to override the limit mentioned above, use something like this:

pmm-admin add mongodb --username=mongodb_exporter --password=percona --host=127.0.0.1 --port=27017 --enable-all-collectors --max-collections-limit=500

We also have the ability to enable only some of the extra collectors. For example, if you want all collectors except topmetrics, specify:

pmm-admin add mongodb --username=mongodb_exporter --password=percona --host=127.0.0.1 --port=27017 --enable-all-collectors --disable-collectors=topmetrics

We can also filter which databases and collections we are interested in getting metrics about. The optional argument --stats-collections can be set with a namespace.

For example:

--stats-collections=test (get data for all collections in the test db)
--stats-collections=test.testcol (get data only from the testcol collection of the test db)
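For instance, a hedged example that combines this with the earlier command and limits statistics collection to the test database might look like:

pmm-admin add mongodb --username=mongodb_exporter --password=percona --host=127.0.0.1 --port=27017 --enable-all-collectors --stats-collections=test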

Also, check the documentation page for more information.

Creating Dashboards

We can easily create dashboards using the newly available metrics. For example, to see the index usage, we can use the following PromQL expression:

{__name__ =~ "mongodb_.*_accesses_ops"}

This is what the dashboard would look like:

MongoDB dashboard

Note: If the image above is too small, you can open it in a new browser tab to see it better.

Conclusion

We can use these new metrics to create dashboards with the collections most written to (or most read from). We can also see what the characteristics of the workload are at a glance to determine if we are dealing with a read-intensive, mixed, or write-intensive system.

One of the most important uses for these new collectors and metrics is performance tuning. By knowing more about your top or “hottest” collections, you will likely be able to better tune your queries and indexing to improve overall performance. These new metrics on collections will be very helpful for performance tuning.

Keep in mind these features are currently a technical preview, so they are disabled by default.

Percona Monitoring and Management is a best-of-breed open source database monitoring solution. It helps you reduce complexity, optimize performance, and improve the security of your business-critical database environments, no matter where they are located or deployed.

Download Percona Monitoring and Management Today

Feb
07
2022
--

Merging Empty Chunks in MongoDB

Empty Chunks in MongoDB

I recently wrote about one of the problems we can encounter while working with sharded clusters, which is Finding Undetected Jumbo Chunks in MongoDB. Another issue that we might run into is empty chunk management.

Chunk Maintenance

As we know, there is an autoSplitter process that splits chunks when they become too big, and a balancer process that takes care of moving chunks to ensure an even distribution between all shards. So as data grows, chunks are split and perhaps moved over to other shards, and all is well.

But what happens when we delete data? It can be the case that some chunks are now empty. If we delete a lot of data, perhaps a significant number of the chunks will be empty. This can be a significant issue for sharded collections with a TTL index.

Potential Issues

One of the potential problems when dealing with a high percentage of empty chunks is uneven data distribution. The balancer will make sure the number of chunks on each shard is roughly the same, but it does not take into account whether the chunks are empty or not. So you might end up with a cluster that looks balanced, but in reality, a few shards have way more data than the rest.

To deal with this problem, the first step is to identify empty chunks.

Identifying Empty Chunks

To illustrate this, let’s consider a client’s collection that is sharded by the “org_id” field. Let’s assume the collection currently has the following chunks ranges:

minKey --> 1
1 --> 5
5 --> 10
10 --> 15
15 --> 20
...

We can use the dataSize command to determine the size of a chunk. This command receives the chunk range as part of the arguments. For example, to check how many documents we have on the third chunk, we would run:

db.runCommand({ dataSize: "mydatabase.clients", keyPattern: { org_id: 1 }, min: { org_id: 5 }, max: { org_id: 10 } })

This returns a document like the following:

{
    "size" : 0,
    "numObjects" : 0,
    "millis" : 30,
    "ok" : 1,
    "operationTime" : Timestamp(1641829163, 2),
    "$clusterTime" : {
        "clusterTime" : Timestamp(1641829163, 3),
        "signature" : {
            "hash" : BinData(0,"LbBPsTEahzG/v7I6oe7iyvLr/pU="),
            "keyId" : NumberLong("7016744225173049401")
        }
    }
}

If the size is 0, we know we have an empty chunk, and we can consider merging it with either the chunk that comes right after it (with the range 10 --> 15) or the one just before it (with the range 1 --> 5).

Merging Chunks

Assuming we take the first option, here is the mergeChunks command that helps us get this done:

db.adminCommand( {
   mergeChunks: "database.collection",
   bounds: [ { "field" : "5" },
             { "field" : "15" } ]
} )

The new chunk ranges now would be as follows:

minKey --> 1
1 --> 5
5 --> 15
15 --> 20
...

One caveat is that the chunks we want to merge might not be on the same shard. If that is the case, we need to move them together first using the moveChunk command.
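For illustration, here is a sketch of such a move using the shard key and ranges from the example above; run it against a mongos, and note that the target shard name is a placeholder:

mongosh "mongodb://mongos-host:27017/admin" --quiet --eval '
  db.adminCommand({
    moveChunk: "mydatabase.clients",
    bounds: [ { org_id: 5 }, { org_id: 10 } ],
    to: "shard0001"
  })'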

Putting it All Together

Following the above logic, we can iterate through all the chunks in shard key order and check their size.  If we find an empty chunk, we merge it with the chunk just before it. If the chunks are not on the same shard, we move them together. The following script can be used to print all the commands required:

var mergeChunkInfo = function(ns){
    var chunks = db.getSiblingDB("config").chunks.find({"ns" : ns}).sort({min:1}).noCursorTimeout(); 
    //some counters for overall stats at the end
    var totalChunks = 0;
    var totalMerges = 0;
    var totalMoves = 0;
    var previousChunk = {};
    var previousChunkInfo = {};
    var ChunkJustChanged = false;
 
    chunks.forEach( 
        function printChunkInfo(currentChunk) { 

        var db1 = db.getSiblingDB(currentChunk.ns.split(".")[0]) 
        var key = db.getSiblingDB("config").collections.findOne({_id:currentChunk.ns}).key; 
        db1.getMongo().setReadPref("secondary");
        var currentChunkInfo = db1.runCommand({datasize:currentChunk.ns, keyPattern:key, min:currentChunk.min, max:currentChunk.max, estimate:true });
        totalChunks++;
    
        // if the current chunk is empty and the chunk before it was not merged in the previous iteration (or was the first chunk) we have candidates for merging
        if(currentChunkInfo.size == 0 && !ChunkJustChanged) {     
          // if the chunks are contiguous
          if(JSON.stringify(previousChunk.max) == JSON.stringify(currentChunk.min) ) {
            // if they belong to the same shard, merge with the previous chunk
            if(previousChunk.shard.toString() == currentChunk.shard.toString() ) {
              print('db.runCommand( { mergeChunks: "' + currentChunk.ns.toString() + '",' + ' bounds: [ ' + JSON.stringify(previousChunk.min) + ',' + JSON.stringify(currentChunk.max) + ' ] })');
              // after a merge or move, we don't consider the current chunk for the next iteration. We skip to the next chunk. 
              ChunkJustChanged=true;
              totalMerges++;
            } 
            // if they contiguous but are on different shards, we need to have both chunks to the same shard before merging, so move the current one and don't merge for now
            else {              
              print('db.runCommand( { moveChunk: "' + currentChunk.ns.toString() + '",' + ' bounds: [ ' + JSON.stringify(currentChunk.min) + ',' + JSON.stringify(currentChunk.max) + ' ], to: "' + previousChunk.shard.toString() + '" });');
              // after a merge or move, we don't consider the current chunk for the next iteration. We skip to the next chunk. 
              ChunkJustChanged=true;
              totalMoves++;            
            }
          }
          else {
            // chunks are not contiguous (this shouldn't happen unless this is the first iteration)
            previousChunk=currentChunk;
            previousChunkInfo=currentChunkInfo;
            ChunkJustChanged=false; 
          }          
        }
        else {
          // if the current chunk is not empty or we already operated with the previous chunk let's continue with the next chunk pair
          previousChunk=currentChunk;
          previousChunkInfo=currentChunkInfo;
          ChunkJustChanged=false; 
        }
      }
    )

    print("***********Summary Chunk Information***********");
    print("Total Chunks: "+totalChunks);
    print("Total Move Commands to Run: "+totalMoves);
    print("Total Merge Commands to Run: "+totalMerges);
}

We can invoke it from the Mongo shell as follows:

mergeChunkInfo("mydb.mycollection")

The script will generate all the commands needed to merge pairs of chunks where at least one is empty. After running the generated commands, this should cut the number of empty chunks in half. Running the script multiple times will eventually get rid of all the empty chunks.

Final Notes

Most people are aware of the problems with jumbo chunks; now we have seen how empty chunks can also be problematic in certain scenarios.

It is a good idea to stop the balancer before attempting any operation that modifies chunks (like merging the empty chunks). This ensures that no conflicting operations happen at the same time. Don’t forget to re-enable the balancer afterward.
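A minimal sketch of that workflow from the shell (mongos-host is a placeholder for your mongos address):

# Stop the balancer before running the generated moveChunk/mergeChunks commands
mongosh "mongodb://mongos-host:27017/admin" --quiet --eval 'sh.stopBalancer()'
# ...run the generated commands...
# Re-enable the balancer afterward
mongosh "mongodb://mongos-host:27017/admin" --quiet --eval 'sh.startBalancer()'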
