Mass Upgrade MongoDB Versions: from 2.6 to 3.6

In this blog post, we’re going to look at how to upgrade MongoDB when leaping versions.

Here at Percona, we see every type of upgrade you could imagine. Lately, however, I have seen an increase in very old version upgrades (OVEs). This is when you upgrade MongoDB from a version more than one major release behind the version you are upgrading to. Some examples are:

  • 2.6 -> 3.2
  • 2.6 -> 3.4
  • 2.6 -> 3.6
  • 3.0 -> 3.4
  • 3.0 -> 3.6
  • 3.2 -> 3.6

Luckily, they all have the same basic upgrade path. Unluckily, it’s not an online upgrade: you need to dump and import again on the new version. For a relatively small system with few indexes and less than 100GB of data, this isn’t a big deal.

Many people might ask: “Why can’t I just upgrade 2.6->3.0->3.2->3.4->3.6?” You can do this, and it could be fine, but it is hazardous. How many times have you upgraded five major versions in a row with no issues? How much work is involved in testing one driver change, let alone five? What about the changes in the optimizer, storage engine, and driver versions, the differences in bulk-operation features, and moving from stand-alone config servers to the replica-set arrangement they use now? Or upgrading to the new election protocol, which implies more overhead?

Even if you navigate all of that, you still have to worry about what you didn’t think about. In a perfect world, you would be able to build a fresh duplicate cluster on 3.6 and then run production traffic against it to make sure everything still works.

My advice is to only plan an in-place upgrade for a single version, and even then you should talk to an in-house expert or consulting firm to make sure you are making the right changes for future support.

As such, I am going to break things down into two areas:

  • Upgrade from the previous version (3.4.12 -> 3.6.3, for example)
  • Upgrading using dump/import

Upgrading in place from the previous version when using a replica set

Generally speaking, if you are taking this path the manual is a great help. However, the manual is specific to 3.6, and my goal is to make this a bit more generic. As such, let’s break it down into steps applicable to all versions.

Read the upgrade page for your version. At the end of the process below, you might have extra work to do.

  1. Set setFeatureCompatibilityVersion to the previous version: 'db.adminCommand( { setFeatureCompatibilityVersion: "3.4" } )'
  2. Make your current primary prefer to be primary by giving it a higher priority in the replica set configuration; here I assume the primary you want is the first node in the list
  3. Now, working in reverse order through rs.conf().members, start with the highest member ID and upgrade one node at a time
    1. Stop the mongod node
    2. Run yum/apt upgrade, or replace the binary files with new ones
    3. Try to start the process manually; this might fail if you did not note and fix configuration file changes
      1. A good example of this is the requirement to explicitly set the engine to MMAPv1 when moving from 3.0 -> 3.2 (where WiredTiger became the default), or how “smallfiles” was removed as an option and could prevent the host from starting.
    4. Once started on the new version, make sure replication can keep up with ‘rs.printSlaveReplicationInfo()’
    5. Repeat this process one node at a time until only node “0” (your primary) is left.
  4. Reverse your work from step two and remove the priority preference on the primary node. This might cause an election, but it rarely changes the primary.
  5. If the primary has not changed, run ‘rs.stepDown(300, 30)’. This tells it to let someone else be primary, gives the secondaries 30 seconds to catch up, and doesn’t allow it to become primary again for 270 more seconds.
  6. Inside those 270 seconds, you must shut down the node and repeat the upgrade sub-steps from step three (but only for this one node).
  7. You are done with the replica set; however, check the notes on anything you needed to do at the mongos layer.
    1. As of MongoDB 3.4, config servers are required to be a replica set. This is easy to check: the configdb configuration line on a mongos is “xxx/host1:port,host2:port,host3:port” (replica set) or “host1:port,host2:port,host3:port” (non-replica set). If you do not convert them BEFORE upgrading the mongos, it will fail to start. Treat the config servers as a replica-set upgrade if they are already in one.
  8. You can do one shard/replica set at a time, but if you do, the balancer MUST be off during this time to prevent odd confusion and rollbacks.
  9. You’re done!
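Steps 2 and 3 can be sketched in mongo shell JavaScript. This is a minimal sketch, assuming a three-member replica set; the helper names and the priority value of 10 are my own illustrative choices, not anything built into MongoDB:

```javascript
// Raise one member's priority so elections prefer it (step 2). In the mongo
// shell you would run: cfg = rs.conf(); preferMember(cfg, 0); rs.reconfig(cfg);
function preferMember(cfg, idx) {
  cfg.members.forEach(function (m, i) {
    m.priority = (i === idx) ? 10 : 1; // 10 = strongly preferred primary
  });
  return cfg;
}

// List the remaining members' hosts in reverse _id order (step 3), so you
// upgrade the highest member ID first and finish next to the preferred primary.
function upgradeOrder(cfg, idx) {
  return cfg.members
    .filter(function (m, i) { return i !== idx; })
    .sort(function (a, b) { return b._id - a._id; })
    .map(function (m) { return m.host; });
}
```

Note that rs.reconfig() can itself trigger an election, so do this inside your maintenance window if your drivers are sensitive to brief primary flips.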

As you can see, this is pretty generic, but it is a good set of rules to follow, since each version might have a deprecated feature, removed configuration options, or other changes. By reading the documentation, you should be OK. However, having someone who has done tens to hundreds of these already is very helpful.
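To make the config-server requirement from step 7 concrete, this is what the replica-set form looks like in a mongos YAML configuration file (the replica set name and hostnames here are illustrative):

```yaml
# mongos config: configDB must use the "replSetName/host:port,..." form
# before the mongos binary is upgraded, or it will refuse to start.
sharding:
  configDB: "configRS/cfg1:27019,cfg2:27019,cfg3:27019"
```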

Back to the challenge at hand: how would you like to follow this process five times in a row per replica set/shard if you were moving from 2.6 to 3.6? What would be the risk of human error in all of that? Hopefully, the operational burden alone shows why we advise against OVEs. But that’s only one side. During each of these iterations, you also need to redeploy the application and test to ensure it still works by running some type of load or UAT system, including upgrading the driver for each version and applying builds for that driver (as some functions may change). I don’t know about you, but as a DBA, architect, product owner, and support manager, this is just too much risk.

What are our options for doing this in a much more straightforward fashion, without causing engineer fatigue, sprawling risk trees, and other such concerns?

Upgrading using the dump/import method

Before we get into this one, we should talk about a couple of points. You have a choice between online and offline modes for this; I will only cover the offline mode. For the online mode, you need to collect and apply the operations that occur during the process, and I do not recommend it for our support customers. It is something I have helped do for our consulting customers, because there we can make sure the process works for your environment; at the end of the day, my job is to make sure data is available and safe above everything else.

If you’re sharded, this must be done in parallel. You should use MCB (Percona-Lab’s mongodb_consistent_backup tool). This is a good idea even if you’re not sharded, as it works with sharded clusters and plain replica sets to ensure all the backups and config servers (if applicable) are “dumped” to the same point in time.

Next, if you are not using virtualization or the cloud, you’ll need to order in 2x the hardware and have a place for the old equipment. If you don’t have the budget for anything else, you might consider the in-place approach above for just the last version, even with its risk. With virtualization or the cloud, people can typically use extra hardware for a short time, and the cost is only the use of the equipment for that time. This is easily budgeted as part of the upgrade cost, weighed against the risks of not upgrading.

  1. Use MCB to take a backup and save it. An example config is:
         host: localhost
         port: 27017
         log_dir: /var/log/mongodb-consistent-backup
         backup:
             method: mongodump
             name: upgrade
             location: /mongo_backups/upgrade_XX_to_3.6
         replication:
             max_lag_secs: 10
         sharding:
             balancer:
                 wait_secs: [1+] (default: 300)
                 ping_secs: [1+] (default: 3)
         archive:
             method: tar
             compression: none
  2. MCB figures out whether the cluster is sharded and, where needed, reaches out to back up from a secondary on each replica set. When done, you will have a structure like:
    >find /mongo_backups/upgrade_XX_to_3.6
    and so on...
  3. Now that we have a backup, we can build new hardware for rs1. As I said, we will focus only on this one replica set in this example (a single-replica-set backup would just have that one folder):
    • Set up all nodes with a 3.6-compatible configuration file. You do not need to keep the same engine; use WiredTiger (the default) if you’re not sure.
    • Disable authentication for now.
    • Start the nodes and ensure the replica set is working and healthy (rs.initiate, rs.add, rs.status).
    • Run the import using mongorestore with --oplogReplay on the extracted tar file.
    • Drop admin.users. If the credential format has changed (as it did moving from 2.6 to 3.0+), you’ll need to recreate all users.
    • Once the restore is complete, use rs.printReplicationInfo() or PMM to verify that replication has caught up from the import.
    • Start up your application pointing to the new location using the new driver you’ve already tested on this version, grab a beer, and you’re done!
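The restore step above can be sketched as a small shell wrapper. This is a minimal sketch; the dump path and hostnames are illustrative, and with DRY_RUN=1 (the default here) it only prints the command so you can review it before running anything against the new cluster:

```shell
#!/bin/sh
# Restore an MCB dump into the new 3.6 replica set (paths are examples).
DUMP_DIR="/mongo_backups/upgrade_XX_to_3.6/rs1/dump"
TARGET_RS="rs1/newhost1:27017,newhost2:27017,newhost3:27017"
# --oplogReplay applies the oplog captured during the dump, giving a
# consistent point-in-time restore.
CMD="mongorestore --host $TARGET_RS --oplogReplay $DUMP_DIR"
if [ "${DRY_RUN:-1}" = "1" ]; then
    echo "$CMD"
else
    $CMD
fi
```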

Hopefully, you can see how much more comfortable this route is. You know all the new features are working, and you do not need to do anything else (as you would in the in-place path) to make sure you have switched to replica-set config servers and the like.

In this process to upgrade MongoDB, if you used MCB, you can do the same for sharding. You keep all of your existing sharding metadata, because the config servers are dumped and restored along with the shards. It should be noted that a future version could change the layout of the config servers, and this process might need adapting. If you think this is the case, drop a question in the Percona Forums, Twitter, or even the contact-us page, and we will be glad to help.

I want to thank you for reading this blog on how to upgrade MongoDB, and hope it helps. This is just a base guideline and there are many specific things per-version to consider that are outside of the scope of this blog. Percona has support and experts to help guide you if you have any questions.
