May 31, 2018

MongoDB: deploy a replica set with transport encryption (part 3/3)


In this third and final post of the series, we look at how to configure transport encryption on a deployed MongoDB replica set. Security vulnerabilities can arise when internal personnel have legitimate access to the private network, but should not have access to the data. Encrypting intra-node traffic ensures that no one can “sniff” sensitive data on the network.

In part 1 we described MongoDB replica sets and how they work.
In part 2 we provided a step-by-step guide to deploy a simple 3-node replica set, including information on replica set configuration.

Enable Role-Based Access Control

Before we can use encryption in our replica set, we first need to activate Role-Based Access Control (RBAC). By default, a MongoDB installation permits anyone to connect and see the data, as in the sample deployment we created in part 2. Having RBAC enabled is mandatory for encryption.

RBAC governs access to a MongoDB system. Users are created and assigned privileges to access specific resources, such as databases and collections. Likewise, users carrying out administrative tasks need to be created with specific grants. Once RBAC is activated, every user must authenticate in order to access MongoDB.

Prior to activating RBAC, let’s create an administrative user. We’ll connect to the PRIMARY member and do the following:

rs-test:PRIMARY> use admin
switched to db admin
rs-test:PRIMARY> db.createUser({user: 'admin', pwd: 'secret', roles:['root']})
Successfully added user: { "user" : "admin", "roles" : [ "root" ] }
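RBAC is not only for administrators: we can also create application users restricted to specific databases. As a sketch (the user, password, and database names below are just placeholders), a read/write user scoped to a single application database would be created the same way:

rs-test:PRIMARY> db.getSiblingDB('appdb').createUser({user: 'appuser', pwd: 'apppassword', roles: [{role: 'readWrite', db: 'appdb'}]})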

Let’s activate RBAC in the configuration file /etc/mongod.conf on each node

security:
      authorization: enabled

and restart the daemon

sudo service mongod restart

Now to connect to MongoDB we issue the following command:

mongo -u admin -p secret --authenticationDatabase "admin"
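Once connected, a quick sanity check that the session is authenticated as the intended user is the connectionStatus command, which reports the authenticated users and their roles:

rs-test:PRIMARY> db.runCommand({connectionStatus: 1})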

Certificates

MongoDB supports X.509 certificate authentication for use with a secure TLS/SSL connection. The members can use X.509 certificates to verify their membership of the replica set.

In order to use encryption, we need to create certificates on all the nodes and have a certification authority (CA) sign them. Since using a commercial certification authority can be quite costly, we will use self-signed certificates. For our purposes, this solution provides encryption at no cost, and a public CA is not necessary inside a private infrastructure.

To proceed with certificate generation, we need openssl installed on our system, and the certificates need to satisfy these requirements:

  • every certificate must be signed by the same CA
  • the common name (CN) required during certificate creation must correspond to the hostname of the host
  • every other field requested during certificate creation should have a non-empty value and, ideally, should reflect our organization's details
  • all the fields, except the CN, must match those in the certificates of the other cluster members

The following guide describes all the steps to configure internal X.509 certificate-based encryption.

1 – Connect to one of the hosts and generate a new private key using openssl

openssl genrsa -out mongoCA.key -aes256 8192

We have created a new 8192-bit private key and saved it in the file mongoCA.key.
Remember to enter a strong passphrase when requested.

2 – Sign a new CA certificate

Now we are going to create our “fake” local certification authority that we’ll use later to sign each node certificate.

During certificate creation, some fields must be filled out. The values can be chosen freely, but ideally they should correspond to our organization's details.

root@psmdb1:~# openssl req -x509 -new -extensions v3_ca -key mongoCA.key -days 365 -out mongoCA.crt
    Enter pass phrase for mongoCA.key:
    You are about to be asked to enter information that will be incorporated
    into your certificate request.
    What you are about to enter is what is called a Distinguished Name or a DN. There are quite a few fields but you can leave some blank
    For some fields there will be a default value,
    If you enter '.', the field will be left blank.
    -----
    Country Name (2 letter code) [AU]:US
    State or Province Name (full name) [Some-State]:California
    Locality Name (eg, city) []:San Francisco
    Organization Name (eg, company) [Internet Widgits Pty Ltd]:My Company Ltd
    Organizational Unit Name (eg, section) []:DBA
    Common Name (e.g. server FQDN or YOUR name) []:psmdb
    Email Address []:corrado@mycompany.com
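Optionally, we can double-check the CA certificate we just created before using it to sign anything (standard openssl inspection; this step is not required by MongoDB):

openssl x509 -in mongoCA.crt -noout -subject -issuer -dates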

3 – Issue self-signed certificates for all the nodes

For each node, we need to generate a certificate request and sign it using the CA certificate we created in the previous step.

Remember: fill out all the requested fields the same way for each host, except the common name (CN), which must correspond to that host's hostname.

For the first node issue the following commands.

openssl req -new -nodes -newkey rsa:4096 -keyout psmdb1.key -out psmdb1.csr
openssl x509 -CA mongoCA.crt -CAkey mongoCA.key -CAcreateserial -req -days 365 -in psmdb1.csr -out psmdb1.crt
cat psmdb1.key psmdb1.crt > psmdb1.pem

for the second node

openssl req -new -nodes -newkey rsa:4096 -keyout psmdb2.key -out psmdb2.csr
openssl x509 -CA mongoCA.crt -CAkey mongoCA.key -CAcreateserial -req -days 365 -in psmdb2.csr -out psmdb2.crt
cat psmdb2.key psmdb2.crt > psmdb2.pem

and for the third node

openssl req -new -nodes -newkey rsa:4096 -keyout psmdb3.key -out psmdb3.csr
openssl x509 -CA mongoCA.crt -CAkey mongoCA.key -CAcreateserial -req -days 365 -in psmdb3.csr -out psmdb3.crt
cat psmdb3.key psmdb3.crt > psmdb3.pem
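Optionally, before distributing the files, each signed certificate can be verified against our CA, and its subject inspected to confirm the CN matches the intended hostname (again, plain openssl checks):

openssl verify -CAfile mongoCA.crt psmdb1.crt psmdb2.crt psmdb3.crt
openssl x509 -in psmdb1.crt -noout -subject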

4 – Place the files

We could execute all of the commands in the previous step on the same host, but now we need to copy the generated files to the proper nodes:

  • Copy the CA certificate file mongoCA.crt to each node
  • Copy each self-signed certificate <hostname>.pem to the corresponding member
  • On each member, create a directory that only the MongoDB user can read, and copy both files there
sudo mkdir -p /etc/mongodb/ssl
sudo chmod 700 /etc/mongodb/ssl
sudo chown -R mongod:mongod /etc/mongodb
sudo cp psmdb1.pem /etc/mongodb/ssl
sudo cp mongoCA.crt /etc/mongodb/ssl

Do the same on each host.
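If the hosts can reach each other over SSH, the copy itself could be done with scp, for example (assuming everything was generated on psmdb1; the destination directory is just an example):

scp mongoCA.crt psmdb2.pem psmdb2:/tmp/
scp mongoCA.crt psmdb3.pem psmdb3:/tmp/

Then repeat the mkdir/chmod/chown/cp steps above on psmdb2 and psmdb3 with their respective .pem files.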

5 – Configure mongod

Finally, we need to point mongod at the certificates in order to enable encryption.

Change the configuration file /etc/mongod.conf on each host, adding the following lines:

net:
   port: 27017
   ssl:
      mode: requireSSL
      PEMKeyFile: /etc/mongodb/ssl/psmdb1.pem
      CAFile: /etc/mongodb/ssl/mongoCA.crt
      clusterFile: /etc/mongodb/ssl/psmdb1.pem
security:
   authorization: enabled
   clusterAuthMode: x509

Restart the daemon

sudo service mongod restart

Make sure to use the proper file names on each host (psmdb2.pem on the psmdb2 host, and so on).

Now, as long as we have made no mistakes, we have a properly configured replica set that is using encrypted connections.

Issue the following command to connect on node psmdb1:

mongo admin --ssl --sslCAFile /etc/mongodb/ssl/mongoCA.crt \
    --sslPEMKeyFile /etc/mongodb/ssl/psmdb1.pem \
    -u admin -p secret --host psmdb1
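As an optional, MongoDB-agnostic check that the server really is speaking TLS with a certificate signed by our CA, openssl's s_client can be pointed at the MongoDB port; the output should show the server certificate and end with a successful verify return code:

openssl s_client -connect psmdb1:27017 -CAfile mongoCA.crt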

Access the first two articles in this series

  • Part 1: Introduces basic replica set concepts, how they work and what their main features are
  • Part 2: Provides a step-by-step guide to configure a three-node replica set


Mar 12, 2018

Mass Upgrade MongoDB Versions: from 2.6 to 3.6


In this blog post, we’re going to look at how to upgrade MongoDB when leaping versions.

Here at Percona, we see every type of upgrade you could imagine. Lately, however, I see an increase in very old version upgrades (OVE). This is when you upgrade MongoDB from a version more than one step behind the version you are upgrading to. Some examples are:

  • 2.6 -> 3.2
  • 2.6 -> 3.4
  • 2.6 -> 3.6
  • 3.0 -> 3.4
  • 3.0 -> 3.6
  • 3.2 -> 3.6

Luckily, they all have the same basic path to upgrade. Unluckily, it’s not an online upgrade. You need to dump/import again on the new version. For a relatively small system with few indexes and less than 100GB of data, this isn’t a big deal.

Many people might ask: “Why can’t I just upgrade 2.6->3.0->3.2->3.4->3.6?” You can do this and it could be fine, but it is hazardous. How many times have you upgraded five major versions in a row with no issues? How much work is involved in testing one driver change, let alone five? What about the changes in the optimizer, storage engine and driver versions, the differences in bulk features, and moving from stand-alone config servers to the replica set configuration they use now? Or upgrading to the new election protocol, which implies more overhead?

Even if you navigate all of that, you still have to worry about what you didn’t think about. In a perfect world, you would have an ability to build a fresh duplicate cluster on 3.6 and then run production traffic on it to make sure things still work.

My advice is to only plan an in-place upgrade for a single version, and even then you should talk to an in-house expert or consulting firm to make sure you are making the right changes for future support.

As such, I am going to break things down into two areas:

  • Upgrade from the previous version (3.4.12 -> 3.6.3, for example)
  • Upgrading using dump/import

Upgrading in place from the previous version when using a replica set

Generally speaking, if you are taking this path the manual is a great help (https://docs.mongodb.com/manual/release-notes/3.6-upgrade-replica-set/#upgrade-process). However, this is specific to 3.6, and my goal is to make this a bit more generic. As such, let’s break it down into steps acceptable in all systems.

Read the upgrade page for your version. At the end of the process below, you might have extra work to do.

  1. Set the setFeatureCompatibilityVersion to the previous version: 'db.adminCommand( { setFeatureCompatibilityVersion: "3.4" } )'
  2. Make your current primary prefer to be primary using something like below, where I assume the primary you want is the first node in the list
    >x=rs.config()
    >x.members[0].priority=1000
    >rs.reconfig(x)
  3. Now in reverse order from rs.config().members, take the highest member ID and stop one node at a time
    1. Stop the mongod node
    2. Run yum/apt upgrade, or replace the binary files with new ones
    3. Try to start the process manually; this might fail if you did not note and fix configuration file changes
      1. A good example is the requirement to explicitly set the engine to MMAPv1 when moving from 3.0 -> 3.2, or the removal of the "smallfiles" option, which could cause the host not to start.
    4. Once started on the new version, make sure replication can keep up with ‘rs.printSlaveReplicationInfo()’
    5. Repeat this process one node at a time until only node "0" (your primary) remains on the old version.
  4. Reverse your work from step 2 and remove the priority setting from the primary node. This might cause an election, but it rarely changes the primary.
  5. If the primary has not changed, run 'rs.stepDown(300, 30)'. This tells the node to let someone else become primary, gives the secondaries 30 seconds to catch up, and prevents it from becoming primary again for another 270 seconds.
  6. Inside those 270 seconds, you must shut down the node and repeat the upgrade steps from step 3 (but only for this one node).
  7. You are done with the replica set; however, check whether there is anything else you need to do on the mongos layer.
    1. MongoDB 3.6 requires config servers to be a replica set. You can tell which layout you have from the configdb line on a mongos: "xxx/host1:port,host2:port,host3:port" means a replica set, while "host1:port,host2:port,host3:port" means the old non-replica-set layout. If you do not convert the config servers BEFORE upgrading mongos, it will fail to start. If they are already a replica set, treat them like any other replica set upgrade.
  8. You can do one shard/replica set at a time, but if you do, the balancer MUST be off during this time to prevent odd confusion and rollbacks.
  9. You’re done!

As you can see, this is pretty generic. But it is a good set of rules to follow since each version might have a deprecated feature, removed configuration options or other changes. By reading the documentation, you should be OK. However, having someone who has done tens to hundreds of these already is very helpful.
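As a concrete example of the "extra work at the end" mentioned above: for a 3.4 -> 3.6 upgrade, the compatibility version is typically raised only once the whole replica set has been stable on the new binaries (these are standard admin commands, run on the primary):

// check replication lag and the current compatibility version
rs.printSlaveReplicationInfo()
db.adminCommand({ getParameter: 1, featureCompatibilityVersion: 1 })
// once everything is healthy on the 3.6 binaries:
db.adminCommand({ setFeatureCompatibilityVersion: "3.6" })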

Back to the challenge at hand: how would you like to follow this process five times in a row per replica set/shard if you were moving from 2.6 to 3.6? What would be the risk of human error in all of that? Hopefully you're starting to see, just from an operational standpoint, why we advise against OVEs. But that's only one side. During each of these iterations, you also need to redeploy the application and test to ensure it still works by running some type of load or UAT suite, including upgrading the driver for each version and applying builds for that driver (as some functions may change). I don't know about you, but as a DBA, architect, product owner and support manager, this is just too much risk.

What are our options for doing this in a much more straightforward fashion, without causing engineer fatigue, splitting risk trees and other such concerns?

Upgrading using the dump/import method

Before we get into this one, we should talk about a couple of points. You do have a choice between online and offline modes for this; I will only cover the offline mode. The online mode requires collecting and applying the operations that occur during the process, and I do not recommend it for our support customers. It is something I have helped do for our consulting customers, because that way we can make sure the process works for your environment; at the end of the day, my job is to make sure data is available and safe above anything else.

If you’re sharded, this must be done in parallel. You should use MCB (https://github.com/Percona-Lab/mongodb_consistent_backup). This is a good idea even if you’re not sharded, as it works with sharded and plain replica sets to ensure all the backups and config servers (if applicable) are “dumped” to the same point in time.

Next, if you are not using virtualization or the cloud, you'll need to order in 2x the hardware and have a place for the old equipment. If you don't have the budget for anything else, you might, despite the risk, consider the in-place approach above for just the final version, though this is not optimal. With virtualization or the cloud, people can typically use more hardware for a short time, and the cost is only the use of the equipment for that time. This is easily budgeted as part of the upgrade cost against the risks of not upgrading.

  1. Use MCB to take a backup and save it. An example config is:
    production:
         host: localhost
         port: 27017
         log_dir: /var/log/mongodb-consistent-backup
         backup:
             method: mongodump
             name: upgrade
             location: /mongo_backups/upgrade_XX_to_3.6
         replication:
             max_lag_secs: 10
         sharding:
             balancer:
                 wait_secs: [1+] (default: 300)
                 ping_secs: [1+] (default: 3)
         archive:
             method: tar
         tar:
             compression: none
  2. It figures out if it’s sharded or not. Additionally, it reaches out and maybe even backs up from another secondary as needed. When done, you will have a structure like:
    >find /mongo_backups/upgrade_XX_to_3.6
    >/mongo_backups/upgrade_XX_to_3.6/upgrade
    >/mongo_backups/upgrade_XX_to_3.6/upgrade_<datetime>
    >/mongo_backups/upgrade_XX_to_3.6/upgrade_<datetime>/rs1
    >/mongo_backups/upgrade_XX_to_3.6/upgrade_<datetime>/rs1/rs1.tar
    >/mongo_backups/upgrade_XX_to_3.6/upgrade_<datetime>/rs2
    >/mongo_backups/upgrade_XX_to_3.6/upgrade_<datetime>/rs2/rs2.tar
    >/mongo_backups/upgrade_XX_to_3.6/upgrade_<datetime>/config
    >/mongo_backups/upgrade_XX_to_3.6/upgrade_<datetime>/config/config.tar
    and so on...
  3. Now that we have a backup, we can build new hardware for rs1. As I said, we will focus only on this one replica set in this example (a backup would just have that folder in a single replica set back up):
    • Setup all nodes with a 3.6 compatible configuration file. You do not need to keep the same engine, use WiredTiger (default) if you’re not sure.
    • Disable authentication for now.
    • Start the nodes and ensure the replica sets are working and healthy (rs.initiate, rs.add, rs.status).
    • Run the import using mongorestore with --oplogReplay on the extracted dump (see the sketch after this list).
    • Drop admin.system.users. If the salt has changed, you'll need to recreate all users (2.6 -> 3.0+).
    • Once the restore is complete, use rs.printReplicationInfo() or PMM to verify that replication has caught up after the import.
    • Start up your application pointing to the new location using the new driver you’ve already tested on this version, grab a beer and you’re done!
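As a rough sketch of the restore step (host names are placeholders, and the exact directory layout inside the tar depends on how MCB packaged the dump, so check it after extraction):

mkdir -p /tmp/rs1_restore
tar -xf /mongo_backups/upgrade_XX_to_3.6/upgrade_<datetime>/rs1/rs1.tar -C /tmp/rs1_restore
# replay the dump, including the captured oplog, into the new 3.6 replica set
# (adjust the dump path below to match the extracted structure)
mongorestore --host "rs1/newnode1:27017,newnode2:27017,newnode3:27017" \
    --oplogReplay /tmp/rs1_restore/dump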

Hopefully, you can see how much more comfortable this route is. You know all the new features are working, and you do not need to take any extra steps (unlike with the in-place path) to make sure you have switched to replica-set config servers and the like.

In this process to upgrade MongoDB, if you used MCB you can do the same for sharding. You will keep all of your existing sharding metadata, which the default sharded dump/restore does for you. It should be noted that a future version could change the layout of the config servers, and this process might need adaptation. If you think this is the case, drop a question in the Percona Forums, on Twitter, or even via the contact-us page, and we will be glad to help.

I want to thank you for reading this blog on how to upgrade MongoDB, and hope it helps. This is just a base guideline, and there are many version-specific things to consider that are outside the scope of this blog. Percona has support and experts to help guide you if you have any questions.

Jun 15, 2016

Installing MongoDB 3.2 and upgrading MongoDB replica set


In this post, we’ll examine a couple of ways of upgrading a MongoDB replica set.

With the release of MongoDB 3.2 comes a rash of new features and improvements. One of these enhancements is improved replica sets. From MongoDB: “A replica set in MongoDB is a group of mongod processes that maintain the same data set. Replica sets provide redundancy and high availability, and are the basis for all production deployments.”

Config servers are replica sets!

This is HUGE. It signals a significant advancement in backups, metadata stability and overall maturity. It is a very long-awaited feature that shows MongoDB is maturing. It means:

  • Mongos can retry connections instead of erroring out
  • Unified and consistent backups!
  • Up to 50 secondaries
    • Removes bugs with mongos instances that are not near the config servers!

How do we activate all these new awesome features? Let’s do it!

Upgrading to 3.2

  • Replace binaries and restart one secondary at a time
  • Then primaries as well
  • Restart configs in reverse order
    • If configdb=con1, con2, con3
      • Restart con3, con2, and then finally con1 with 3.2
      • Do con1 as FAST as possible, while the balancer is also disabled
    • You no longer need to restart a mongos with --upgrade (as of 3.2)
    • Restart all mongos processes; this will reset ALL connections at some point (whether you do it at once or space it out).

Upgrading the replset to the new protocol

This is by far the easiest part of the upgrade, but DON’T do it until you know you’re stable on 3.2. Log into each primary and run:

>cfg = rs.conf();
 {
     "_id" : "r1",
     "version" : 2,
     "members" : [
         {
             "_id" : 0,
             "host" : "localhost:17001"
         },
         {
             "_id" : 1,
             "host" : "localhost:17002"
         },
         {
            "_id" : 2,
            "host" : "localhost:17003",
         }
     ]
}
>cfg.protocolVersion=1;
>rs.reconfig(cfg);
{
    "ok" : 1,
    "$gleStats" : {
        "lastOpTime" : Timestamp(1464947003, 1),
        "electionId" : ObjectId("7fffffff0000000000000018")
    }
}

Or:

>db.getSiblingDB('config').shards.forEach(function(shard){
    x = new Mongo(shard.host); /* Assumes no auth needed */
    conf = x.getDB("local").system.replset.findOne();
    conf.protocolVersion = 1; conf.version++;
    x.getDB('admin').runCommand({ replSetReconfig: conf });
});

The quick upgrade scripts

  • Upgrade_all_to_3.2.4.sh

Does what it says: kills every process and launches them on 3.2 binaries with no other changes.

  • Upgrade_replica_proto_version.sh

Simply runs the quick rs.reconfig() on each primary, adding the new setting to enable the new replication features.

Let’s upgrade the configs the right way!

This is not included as part of a normal upgrade, so only do it AFTER you’re stable on 3.2, and don’t do it before upgrading the protocolVersion we just talked about. (I mean it! Disregard this advice and your life will seriously not be awesome!)

Upgrading to a Config ReplicaSet (the official way)

  1. Run rs.initiate on the first config in the list (must be 3.2.4+)
    • Must be a fully configured document with configsvr: true defined.
  2. Restart same config server adding
    • configsvrMode = sccc
    • replSet = <name used in rs.initiate()>
    • storageEngine = wiredTiger
  3. Start the new config servers for the other two nodes (should be a new dbpath and port)
  4. Add those nodes to the replSet and check their status
  5. Remove the second original config server from the running
  6. Restart the 1st node you set “sccc” on to not have that setting
  7. At this point, the 1st node will transition to removed if using MMAP.
  8. Restart a mongos with a new configdb line
    • --configdb <replSetName>/node1:port,node2:port,…
    • Only replset members should be listed
  9. Verify you can work and query through mongos
  10. Repeat on all mongos
  11. Remove the 1st node with rs.remove
  12. Shutdown final original config and enable balancer
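For reference, the options mentioned in step 2 map to the following YAML keys in the mongod configuration file (a sketch only; the replica set name csReplSet is just an example, and you should confirm the exact procedure against the docs for your version):

sharding:
   configsvrMode: sccc
replication:
   replSetName: csReplSet
storage:
   engine: wiredTiger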

There is also an easy way.

The easy way, with a small maintenance window, which lets you just restore a good backup and have a nice and simple rollback plan:

  1. Stop all mongos after backing up the config directory
  2. Run rs.initiate on first config server
  3. Stop the 2nd, then the 3rd, restarting them with an empty dbpath directory
  4. Check the rs.status now
  5. Stop the 1st config server and restart with an empty dbpath directory
  6. Check Status
  7. Restart all mongos, adding <replSetName>/ to the front of the configdb line.
  8. Done!
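To make step 7 concrete, the only change on each mongos is a prefix on the configdb line (host names and the replica set name below are examples):

# before (SCCC-style list of config servers)
configdb = con1:27019,con2:27019,con3:27019
# after (config servers running as the replica set csReplSet)
configdb = csReplSet/con1:27019,con2:27019,con3:27019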

Oh look, there is a quick script we have made for you:

  • Upgrade_config_to_repliaset.sh
    • Kill all config and mongos processes
    • Restart the first config server on non-standard port
    • Mongodump config database
    • Restart c1 as WiredTiger, clearing that data path
    • Import dump back into first config server
    • Restart on normal port
    • Initialize Replica Set
    • Restart second and third config server after clearing dbpath folder
    • After the initial sync, start all the mongos.
    • Done and script exits!

 
