In this blog post, we will walk through the internals of the election process in MongoDB®, following on from a previous post on the internals of the replica set. You can read Part 1 here.
For this post, I refer to the same configuration we discussed before.
Elections: as the term suggests, in MongoDB there is a freedom to "vote": the individual nodes of the cluster can vote and select a primary member for their replica set.
Why elections? This is how MongoDB maintains high availability: if the current primary becomes unavailable, the remaining members elect a new one so the replica set can keep accepting writes.
When do elections take place?
- When a node does not find a primary within the election timeout. By default this value is 10 seconds, and from MongoDB version 3.2 onwards it can be changed to suit your needs. The parameter that sets this value is `settings.electionTimeoutMillis`, and it can be seen in the logs as:

```
settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, catchUpTimeoutMillis: 60000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 }, replicaSetId: ObjectId('5ba8ed10d4fddccfedeb7492') } }
```
From the mongo shell, the value of `electionTimeoutMillis` can be found in the replica set configuration:
```
rplint:SECONDARY> rs.conf()
{
    "_id" : "rplint",
    "version" : 3,
    "protocolVersion" : NumberLong(1),
    "members" : [
        {
            "_id" : 0,
            "host" : "m103:25001",
            "arbiterOnly" : false,
            "buildIndexes" : true,
            "hidden" : false,
            "priority" : 1,
            "tags" : { },
            "slaveDelay" : NumberLong(0),
            "votes" : 1
        },
        {
            "_id" : 1,
            "host" : "192.168.103.100:25002",
            "arbiterOnly" : false,
            "buildIndexes" : true,
            "hidden" : false,
            "priority" : 1,
            "tags" : { },
            "slaveDelay" : NumberLong(0),
            "votes" : 1
        },
        {
            "_id" : 2,
            "host" : "192.168.103.100:25003",
            "arbiterOnly" : false,
            "buildIndexes" : true,
            "hidden" : false,
            "priority" : 1,
            "tags" : { },
            "slaveDelay" : NumberLong(0),
            "votes" : 1
        }
    ],
    "settings" : {
        "chainingAllowed" : true,
        "heartbeatIntervalMillis" : 2000,
        "heartbeatTimeoutSecs" : 10,
        "electionTimeoutMillis" : 10000,
        "catchUpTimeoutMillis" : 60000,
        "getLastErrorModes" : { },
        "getLastErrorDefaults" : {
            "w" : 1,
            "wtimeout" : 0
        },
        "replicaSetId" : ObjectId("5c20ff87272eff3a5e28573f")
    }
}
```
More precisely, the value of `electionTimeoutMillis` can be found at:
```
rplint:SECONDARY> rs.conf().settings.electionTimeoutMillis
10000
```
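If the default does not suit your deployment, the timeout can be changed with a reconfiguration. A minimal sketch, run from the mongo shell on the primary (the 12-second value is purely illustrative):

```javascript
// Fetch the current configuration, adjust the election timeout,
// and apply it; rs.reconfig() propagates the change to all members.
cfg = rs.conf()
cfg.settings.electionTimeoutMillis = 12000  // illustrative: 12 seconds
rs.reconfig(cfg)
```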
- When a node with a higher priority than the existing primary becomes eligible, it takes over the primary role through an election (a priority takeover), for example during planned maintenance. The priority of a member node can be changed as explained here, and a sketch follows the listing below.

The priority of all three members can be seen in the replica set configuration like this:

```
rplint:SECONDARY> rs.conf().members[0].priority
1
rplint:SECONDARY> rs.conf().members[2].priority
1
rplint:SECONDARY> rs.conf().members[1].priority
1
```
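As a sketch of such a planned takeover (the member index and the new priority of 2 are illustrative assumptions), raising one member's priority above the others will trigger an election in its favor:

```javascript
// Run on the current primary. All members start at priority 1 here,
// so giving member 1 a priority of 2 makes it take over as primary.
cfg = rs.conf()
cfg.members[1].priority = 2  // illustrative member and value
rs.reconfig(cfg)
```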
How do elections work in a MongoDB replica set cluster?
Before the real election, the node runs a dry election. Dry election? Yes, the node first runs a dry run, and only if it wins that does an actual election begin. Here's how:
- The candidate node asks every other node whether it would vote for it, via the `replSetRequestVotes` command, without incrementing its own term.
- The primary node steps down if it finds a candidate whose term is higher than its own. Otherwise the dry election fails, and the replica set continues to run as it did before.
- If the dry election succeeds, then an actual election begins.
- For the real election, the node increments its term and then votes for itself.
- The VoteRequester sends the `replSetRequestVotes` command through the ScatterGatherRunner, and each node responds with its vote.
- The candidate that receives votes from a majority of the voting nodes wins the election.
- Once the candidate wins, it transitions to primary. It notifies all the other nodes through heartbeats.
- Then the new primary checks whether it needs to catch up from the former primary node.
- A node that receives the `replSetRequestVotes` command checks its own term and then votes, but only after the ReplicationCoordinator receives confirmation from the TopologyCoordinator.
- The TopologyCoordinator grants the vote only after the following checks (a sketch of this decision follows the list):
- The config versions must match
- The replica set names must match
- An arbiter voter must not see any healthy primary of greater or equal priority
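As a rough illustration only (the real logic lives in MongoDB's C++ TopologyCoordinator; the field names below are assumptions made for the sketch), a voter's decision looks roughly like this:

```javascript
// Simplified, hypothetical sketch of a voter's decision. Field names
// (configVersion, setName, term, lastVotedTerm) are illustrative, not
// MongoDB internals; the arbiter-specific primary-visibility check
// described above is omitted here.
function shouldGrantVote(request, myState) {
    if (request.configVersion !== myState.configVersion) return false // config versions must match
    if (request.setName !== myState.setName) return false             // replica set names must match
    if (request.term < myState.term) return false                     // reject stale candidate terms
    if (myState.lastVotedTerm === request.term) return false          // at most one vote per term
    return true
}
```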
An example
A primary (port 25002) transitions to secondary after receiving the `rs.stepDown()` command.
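The step-down itself can be issued from the mongo shell on the primary; a minimal sketch, assuming the default step-down period:

```javascript
// Run on the primary (port 25002). With no arguments, the node steps
// down and refuses to be re-elected for 60 seconds (the default).
rs.stepDown()
```

The primary's log then shows the transition: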
```
2019-01-03T03:05:29.972+0000 I COMMAND [conn124] Attempting to step down in response to replSetStepDown command
2019-01-03T03:05:29.976+0000 I REPL [conn124] transition to SECONDARY
driver: { name: "NetworkInterfaceASIO-Replication", version: "3.4.15" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "14.04" } }
2019-01-03T03:05:40.874+0000 I REPL [ReplicationExecutor] Member m103:25001 is now in state PRIMARY
2019-01-03T03:05:41.459+0000 I REPL [rsBackgroundSync] sync source candidate: m103:25001
2019-01-03T03:05:41.459+0000 I ASIO [NetworkInterfaceASIO-RS-0] Connecting to m103:25001
2019-01-03T03:05:41.460+0000 I ASIO [NetworkInterfaceASIO-RS-0] Successfully connected to m103:25001, took 1ms (1 connections now open to m103:25001)
2019-01-03T03:05:41.461+0000 I ASIO [NetworkInterfaceASIO-RS-0] Connecting to m103:25001
2019-01-03T03:05:41.462+0000 I ASIO [NetworkInterfaceASIO-RS-0] Successfully connected to m103:25001, took 1ms (2 connections now open to m103:25001)
```
The candidate node (port 25001) sees no primary within the election timeout, so it starts a dry election, which succeeds:
```
2019-01-03T03:05:31.498+0000 I REPL [rsBackgroundSync] could not find member to sync from
2019-01-03T03:05:36.493+0000 I REPL [SyncSourceFeedback] SyncSourceFeedback error sending update to 192.168.103.100:25002: InvalidSyncSource: Sync source was cleared. Was 192.168.103.100:25002
2019-01-03T03:05:39.390+0000 I REPL [ReplicationExecutor] Starting an election, since we've seen no PRIMARY in the past 10000ms
2019-01-03T03:05:39.390+0000 I REPL [ReplicationExecutor] conducting a dry run election to see if we could be elected. current term: 35
2019-01-03T03:05:39.391+0000 I REPL [ReplicationExecutor] VoteRequester(term 35 dry run) received a yes vote from 192.168.103.100:25002; response message: { term: 35, voteGranted: true, reason: "", ok: 1.0 }
```
The dry run succeeds, so the node increments its term by 1 (here from 35 to 36) and runs the real election. After winning, it transitions to primary and enters catch-up mode:
```
2019-01-03T03:05:39.391+0000 I REPL [ReplicationExecutor] dry election run succeeded, running for election in term 36
2019-01-03T03:05:39.394+0000 I REPL [ReplicationExecutor] VoteRequester(term 36) received a yes vote from 192.168.103.100:25003; response message: { term: 36, voteGranted: true, reason: "", ok: 1.0 }
2019-01-03T03:05:39.395+0000 I REPL [ReplicationExecutor] election succeeded, assuming primary role in term 36
2019-01-03T03:05:39.395+0000 I REPL [ReplicationExecutor] transition to PRIMARY
2019-01-03T03:05:39.395+0000 I REPL [ReplicationExecutor] Entering primary catch-up mode.
```
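Catch-up mode is bounded by `settings.catchUpTimeoutMillis` (60000 ms in the configuration shown earlier); it can be read from the shell the same way as the election timeout:

```javascript
// How long a freshly elected primary may spend catching up on writes
// it has not yet replicated (value from the rs.conf() output above).
rs.conf().settings.catchUpTimeoutMillis  // 60000
```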
The other nodes also receive information about the new primary:
```
2019-01-03T03:05:31.498+0000 I REPL [rsBackgroundSync] could not find member to sync from
2019-01-03T03:05:36.493+0000 I REPL [SyncSourceFeedback] SyncSourceFeedback error sending update to 192.168.103.100:25002: InvalidSyncSource: Sync source was cleared. Was 192.168.103.100:25002
2019-01-03T03:05:41.499+0000 I REPL [ReplicationExecutor] Member m103:25001 is now in state PRIMARY
```
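To confirm the outcome from any member, you can inspect each member's state; a quick sketch using `rs.status()`:

```javascript
// Print each replica set member's name and current state
// (PRIMARY, SECONDARY, ARBITER, ...).
rs.status().members.forEach(function (m) {
    print(m.name + " : " + m.stateStr)
})
```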
This is how MongoDB maintains high availability: when the existing primary fails or steps down, the replica set elects a new primary from the remaining members.
—
Photo by Daria Shevtsova from Pexels