Docker Compose MongoDB Replica Set: Complete Reference

Type: Software Reference | Confidence: 0.93 | Sources: 7 | Verified: 2026-02-27 | Freshness: 2026-02-27

TL;DR

Constraints

Quick Reference

Service Configuration

| Service | Image | Command Flags | Ports | Volumes | Key Env |
| --- | --- | --- | --- | --- | --- |
| mongo1 (primary) | mongo:7.0 | --replSet rs0 --keyFile /etc/mongo-keyfile --bind_ip_all | 27017:27017 | mongo1-data:/data/db, keyfile:ro | MONGO_INITDB_ROOT_USERNAME, MONGO_INITDB_ROOT_PASSWORD |
| mongo2 (secondary) | mongo:7.0 | --replSet rs0 --keyFile /etc/mongo-keyfile --bind_ip_all | 27018:27017 | mongo2-data:/data/db, keyfile:ro | Same as above |
| mongo3 (secondary) | mongo:7.0 | --replSet rs0 --keyFile /etc/mongo-keyfile --bind_ip_all | 27019:27017 | mongo3-data:/data/db, keyfile:ro | Same as above |
| mongo-arbiter (optional) | mongo:7.0 | --replSet rs0 --keyFile /etc/mongo-keyfile --bind_ip_all | 27020:27017 | keyfile:ro (no data volume) | None |
| mongo-init (one-shot) | mongo:7.0 | Runs mongosh to execute rs.initiate() | None | init script:ro | None |

Replica Set Member Roles

| Role | Votes | Priority | Stores Data | Purpose |
| --- | --- | --- | --- | --- |
| Primary | 1 | Highest (e.g., 3) | Yes | Accepts all writes; serves reads by default |
| Secondary | 1 | Lower (e.g., 1) | Yes | Replicates from primary; can serve reads with readPreference |
| Arbiter | 1 | 0 | No | Participates in elections only; breaks ties |
| Hidden | 1 | 0 | Yes | Invisible to clients; used for analytics/backup |
| Delayed | 1 | 0 | Yes | Lags behind primary by configured seconds; disaster recovery |

Connection String Formats

| Context | Connection String |
| --- | --- |
| From host (no auth) | mongodb://localhost:27017,localhost:27018,localhost:27019/?replicaSet=rs0 |
| From host (with auth) | mongodb://admin:password@localhost:27017,localhost:27018,localhost:27019/?replicaSet=rs0&authSource=admin |
| From Docker network | mongodb://admin:password@mongo1:27017,mongo2:27017,mongo3:27017/?replicaSet=rs0&authSource=admin |
| With TLS | Append &tls=true&tlsCAFile=/path/to/ca.pem |
| Node.js driver | mongodb://admin:password@mongo1:27017,mongo2:27017,mongo3:27017/mydb?replicaSet=rs0&authSource=admin&w=majority |

Note: with replicaSet=rs0, drivers rediscover members under the hostnames registered in rs.initiate() (mongo1, mongo2, mongo3). The from-host strings therefore need matching /etc/hosts entries, or directConnection=true to pin a single node.
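Every string above follows one shape: optional credentials, a comma-separated host list, an optional database, then options. A small stdlib-only helper (hypothetical, not part of any driver) makes the pattern explicit, including the percent-encoding credentials require:

```python
# Hypothetical helper -- illustrates how the strings above are assembled;
# real drivers parse these URIs, they do not expose a builder like this.
from urllib.parse import quote

def build_rs_uri(hosts, replica_set="rs0", user=None, password=None,
                 db="", **options):
    """Assemble a mongodb:// URI: credentials, host list, db, options."""
    auth = f"{quote(user, safe='')}:{quote(password, safe='')}@" if user else ""
    opts = {"replicaSet": replica_set, **options}
    query = "&".join(f"{k}={v}" for k, v in opts.items())
    return f"mongodb://{auth}{','.join(hosts)}/{db}?{query}"

uri = build_rs_uri(
    ["mongo1:27017", "mongo2:27017", "mongo3:27017"],
    user="admin", password="p@ss word", authSource="admin",
)
print(uri)  # the password is encoded as p%40ss%20word
```

Reserved characters in passwords (@, :, /, %) must be percent-encoded; an unencoded @ makes the driver misparse the host list, a common source of confusing authentication errors.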

Decision Tree

START: What is your use case?
├── Local development (need transactions/change streams)?
│   ├── YES → Single-node replica set (1 member, simplest setup)
│   └── NO ↓
├── Development/staging with HA testing?
│   ├── YES → 3-node replica set (primary + 2 secondaries)
│   └── NO ↓
├── Production with budget constraints?
│   ├── YES → 2 data nodes + 1 arbiter (saves storage, still has quorum)
│   └── NO ↓
├── Production with full redundancy?
│   ├── YES → 3-node replica set with keyfile auth + named volumes + resource limits
│   └── NO ↓
├── Need read scaling across regions?
│   ├── YES → 5 or 7 members with priorities + readPreference=secondary
│   └── NO ↓
└── DEFAULT → 3-node replica set with keyfile authentication

Step-by-Step Guide

1. Generate the keyfile for internal authentication

The keyfile is used for inter-node authentication within the replica set. All members must share the same keyfile. [src1]

# Generate 756 random bytes and base64-encode them (yields a 1024-character file, the maximum keyfile length mongod accepts)
openssl rand -base64 756 > ./keyfile

# Set restrictive permissions (mongod rejects keyfiles readable by group or others; 400 is the convention)
chmod 400 ./keyfile

# On Linux, change ownership to match the MongoDB container user (UID 999)
sudo chown 999:999 ./keyfile

Verify: ls -la ./keyfile → expected: -r-------- 1 999 999 1024 ... keyfile
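The same generation and permission rules can be reproduced in stdlib Python, which is convenient in test fixtures; this is a sketch, and the path is illustrative:

```python
# Sketch of the keyfile rules mongod enforces: random base64 content,
# no group/other permission bits. Stdlib stand-in for openssl + chmod.
import base64
import os
import secrets
import stat
import tempfile

def write_keyfile(path, nbytes=756):
    """Write nbytes of random data, base64-encoded, then lock to mode 400."""
    if os.path.exists(path):
        os.remove(path)  # a leftover mode-400 file would block re-opening
    with open(path, "wb") as f:
        f.write(base64.encodebytes(secrets.token_bytes(nbytes)))
    os.chmod(path, 0o400)

def keyfile_ok(path):
    """True when no group/other permission bits are set (what mongod checks)."""
    return stat.S_IMODE(os.stat(path).st_mode) & 0o077 == 0

path = os.path.join(tempfile.gettempdir(), "mongo-keyfile")
write_keyfile(path)
print(keyfile_ok(path))  # True
```

Note that mongod's actual check is "not accessible by group or others", so mode 600 also passes; 400 is simply the stricter convention used throughout this guide.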

2. Create the Docker Compose file

Define three MongoDB services on a shared network with the keyfile mounted read-only. [src2] [src4]

Full docker-compose.yml: see markdown source for the complete 3-node configuration with healthchecks, named volumes, and an init container.

Verify: docker compose config → should output valid YAML with no errors.
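The complete file lives in the markdown source. As a sketch only, one of the three data-node services might look like the following; the credentials, paths, and healthcheck are assumptions consistent with the tables above:

```yaml
# docker-compose.yml (illustrative excerpt -- one of three identical data nodes)
services:
  mongo1:
    image: mongo:7.0
    command: ["--replSet", "rs0", "--keyFile", "/etc/mongo-keyfile", "--bind_ip_all"]
    ports:
      - "27017:27017"
    environment:
      MONGO_INITDB_ROOT_USERNAME: admin
      MONGO_INITDB_ROOT_PASSWORD: ${MONGO_PASSWORD:-changeme}
    volumes:
      - mongo1-data:/data/db
      - ./keyfile:/etc/mongo-keyfile:ro
    healthcheck:
      # ping does not require authentication, so it works with keyfile auth on
      test: ["CMD", "mongosh", "--quiet", "--eval", "db.adminCommand('ping').ok"]
      interval: 10s
      timeout: 10s
      retries: 5
volumes:
  mongo1-data:
```

mongo2 and mongo3 repeat this block with their own volume names and host port mappings (27018 and 27019).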

3. Create the replica set initialization script

This script connects to the primary candidate and runs rs.initiate() with the member list. [src1] [src5]

#!/bin/bash
# init-replica.sh
echo "Waiting for MongoDB nodes..."
# Poll for readiness instead of guessing with a fixed sleep
until mongosh --host mongo1:27017 -u admin -p "${MONGO_PASSWORD:-changeme}" \
    --authenticationDatabase admin --quiet --eval "db.adminCommand('ping').ok" >/dev/null 2>&1; do
  sleep 2
done
mongosh --host mongo1:27017 -u admin -p "${MONGO_PASSWORD:-changeme}" \
  --authenticationDatabase admin --eval '
  rs.initiate({
    _id: "rs0",
    members: [
      { _id: 0, host: "mongo1:27017", priority: 3 },
      { _id: 1, host: "mongo2:27017", priority: 2 },
      { _id: 2, host: "mongo3:27017", priority: 1 }
    ]
  });
'

Verify: docker logs mongo-init → expected: { ok: 1 } from rs.initiate(); PRIMARY and SECONDARY member states appear in rs.status() a few seconds later.
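To run that script automatically, the compose file needs a one-shot service. A sketch, assuming the data nodes expose the healthcheck from step 2:

```yaml
# docker-compose.yml (illustrative excerpt -- one-shot initializer service)
services:
  mongo-init:
    image: mongo:7.0
    restart: "no"   # run once; do not resurrect on restart
    depends_on:
      mongo1: { condition: service_healthy }
      mongo2: { condition: service_healthy }
      mongo3: { condition: service_healthy }
    environment:
      MONGO_PASSWORD: ${MONGO_PASSWORD:-changeme}
    volumes:
      - ./init-replica.sh:/init-replica.sh:ro
    entrypoint: ["bash", "/init-replica.sh"]
```

The condition: service_healthy form requires Compose V2 (the docker compose plugin); see the compatibility table below.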

4. Start the cluster and verify

Bring up all services and check the replica set status. [src3]

# Start all services
docker compose up -d

# Check init container logs
docker compose logs -f mongo-init

# Connect to primary and verify
docker exec -it mongo1 mongosh -u admin -p changeme \
  --authenticationDatabase admin --eval "rs.status()"

Verify: rs.status().members should show 3 members -- one PRIMARY and two SECONDARY.

5. Create application database and user

After the replica set is running, create a dedicated database user for your application. [src1]

use admin
db.createUser({
  user: "appuser",
  pwd: "appsecret",
  roles: [
    { role: "readWrite", db: "myapp" },
    { role: "readWrite", db: "myapp_test" }
  ]
});

Verify: mongosh -u appuser -p appsecret --authenticationDatabase admin myapp --eval "db.test.insertOne({ok:1})" → expected: { acknowledged: true }

6. Test write concern and read preference

Verify replication by writing with majority write concern and reading from secondaries. [src7]

db.test.insertOne(
  { msg: "replicated", ts: new Date() },
  { writeConcern: { w: "majority", wtimeout: 5000 } }
);
db.getMongo().setReadPref("secondary");
db.test.find();

Verify: Document appears on secondary reads; rs.printSecondaryReplicationInfo() shows replication lag < 1 second.

Code Examples

Single-Node Replica Set (Minimal Development Setup)

# docker-compose.yml -- Single-node replica set for local dev
services:
  mongodb:
    image: mongo:7.0
    command: ["--replSet", "rs0", "--bind_ip_all"]
    ports:
      - "27017:27017"
    volumes:
      - mongo-data:/data/db
    healthcheck:
      test: >
        mongosh --eval "try{rs.status().ok}catch(e){rs.initiate({_id:'rs0',members:[{_id:0,host:'localhost:27017'}]}).ok}"
      interval: 10s
      timeout: 10s
      retries: 5
volumes:
  mongo-data:

Node.js: Connection with Retry Logic

const { MongoClient } = require('mongodb');  // ^6.0.0

const uri = 'mongodb://admin:changeme@mongo1:27017,mongo2:27017,mongo3:27017/?replicaSet=rs0&authSource=admin';

const client = new MongoClient(uri, {
  readPreference: 'secondaryPreferred',
  w: 'majority',
  retryWrites: true,
  retryReads: true,
  serverSelectionTimeoutMS: 5000,
});

async function main() {
  await client.connect();
  const status = await client.db('admin').command({ replSetGetStatus: 1 });
  console.log('Connected to replica set:', status.set);
}

// CommonJS (require) has no top-level await, so run via an async entry point
main().catch(console.error);

Python: Connection with PyMongo

from pymongo import MongoClient  # pymongo >= 4.6.0

uri = (
    "mongodb://admin:changeme@mongo1:27017,mongo2:27017,mongo3:27017/"
    "?replicaSet=rs0&authSource=admin"
    "&readPreference=secondaryPreferred&w=majority"
)

client = MongoClient(uri, serverSelectionTimeoutMS=5000)
status = client.admin.command("replSetGetStatus")
print(f"Connected to: {status['set']}")

With Arbiter Node (Budget HA)

2 data nodes + 1 arbiter: saves storage costs while maintaining quorum for elections. See markdown source for complete docker-compose.yml with arbiterOnly: true in rs.initiate().
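As an illustration of what that rs.initiate() call might look like (run in mongosh; the fourth service name mongo-arbiter is an assumption from the service table above):

```javascript
// Illustrative only -- the arbiter votes in elections but stores no data
rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "mongo1:27017", priority: 2 },
    { _id: 1, host: "mongo2:27017", priority: 1 },
    { _id: 2, host: "mongo-arbiter:27017", arbiterOnly: true }
  ]
});
```

Caveat: with w: "majority" write concern, a primary-secondary-arbiter topology cannot acknowledge majority writes while one data node is down, because the majority must be data-bearing; MongoDB's documentation discourages arbiters in that scenario.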

Anti-Patterns

Wrong: Using localhost in replica set member hostnames

# BAD -- localhost only resolves within a single container
command: ["--replSet", "rs0", "--bind_ip", "localhost"]
# rs.initiate with localhost: members cannot reach each other

Correct: Use container hostnames on a shared network

# GOOD -- container hostnames resolve via Docker DNS
command: ["--replSet", "rs0", "--bind_ip_all"]
# members: [{host: "mongo1:27017"}, {host: "mongo2:27017"}, ...]

Wrong: Keyfile with permissive permissions

# BAD -- MongoDB refuses to start
chmod 644 ./keyfile
# Error: "permissions on /etc/mongo-keyfile are too open"

Correct: Restrictive keyfile permissions with correct ownership

# GOOD -- only owner can read, owned by mongodb user (UID 999)
chmod 400 ./keyfile
sudo chown 999:999 ./keyfile

Wrong: Mismatched replica set names

# BAD -- replSet name in command does not match rs.initiate _id
command: ["--replSet", "myRS"]
# Then: rs.initiate({ _id: "rs0", ... }) is rejected -- the set name must match --replSet

Correct: Consistent replica set name everywhere

# GOOD -- same name in command flag and rs.initiate
command: ["--replSet", "rs0"]
# rs.initiate({ _id: "rs0", ... })

Wrong: No healthcheck or init container

# BAD -- no mechanism to initialize replica set automatically
services:
  mongo1:
    image: mongo:7.0
    command: ["--replSet", "rs0"]
    # Requires manual mongosh after every restart

Correct: Healthcheck-based or init container initialization

# GOOD -- healthcheck auto-initializes on first run
healthcheck:
  test: >
    mongosh --eval "try{rs.status().ok}catch(e){rs.initiate({_id:'rs0',members:[{_id:0,host:'mongo1:27017'}]}).ok}"
  interval: 10s

Wrong: Even number of voting members

// BAD -- 2 voting members cannot elect if one fails
rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "mongo1:27017" },
    { _id: 1, host: "mongo2:27017" }
  ]
});

Correct: Odd number of voting members

// GOOD -- 3 members: majority is 2, survives 1 failure
rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "mongo1:27017" },
    { _id: 1, host: "mongo2:27017" },
    { _id: 2, host: "mongo3:27017" }
  ]
});
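The arithmetic behind the odd-number rule can be sketched in a few lines (plain quorum math, not driver code):

```python
# Majority needed for an election, and how many voting-member failures a
# replica set tolerates while still being able to elect a primary.
def majority(voting_members: int) -> int:
    """Smallest strict majority of the voting members."""
    return voting_members // 2 + 1

def failures_tolerated(voting_members: int) -> int:
    """Members that can fail while a majority can still vote."""
    return voting_members - majority(voting_members)

for n in (2, 3, 4, 5):
    print(f"{n} voters: majority={majority(n)}, tolerates {failures_tolerated(n)} failure(s)")
# 2 voters tolerate 0 failures; 3 tolerate 1; 4 still tolerate only 1 --
# which is why a 4th voting member buys nothing for elections.
```

This is why members are added in odd steps: 3 → 5 → 7 each raise fault tolerance, while 2, 4, and 6 do not improve on the odd number below them.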

Common Pitfalls

Diagnostic Commands

# Check replica set status
docker exec -it mongo1 mongosh -u admin -p changeme --authenticationDatabase admin --eval "rs.status()"

# Show replica set configuration
docker exec -it mongo1 mongosh -u admin -p changeme --authenticationDatabase admin --eval "rs.conf()"

# Check replication lag on each secondary
docker exec -it mongo1 mongosh -u admin -p changeme --authenticationDatabase admin --eval "rs.printSecondaryReplicationInfo()"

# Identify primary node (db.hello() supersedes the deprecated isMaster)
docker exec -it mongo1 mongosh -u admin -p changeme --authenticationDatabase admin --eval "db.hello().primary"

# Verify all members are healthy
docker exec -it mongo1 mongosh -u admin -p changeme --authenticationDatabase admin --eval "rs.status().members.map(m => ({name: m.name, state: m.stateStr, health: m.health}))"

# Test connection string from host
mongosh "mongodb://admin:changeme@localhost:27017,localhost:27018,localhost:27019/?replicaSet=rs0&authSource=admin" --eval "db.runCommand({ping:1})"

# Check container health
docker compose ps

# Force primary step-down (failover testing)
docker exec -it mongo1 mongosh -u admin -p changeme --authenticationDatabase admin --eval "rs.stepDown(60)"

Version History & Compatibility

| MongoDB Version | Docker Tag | Status | Key Changes | Compose Notes |
| --- | --- | --- | --- | --- |
| 8.0 | mongo:8.0 | Current | New query optimizer, improved sharding | Same compose config as 7.0 |
| 7.0 | mongo:7.0 | Current LTS | Compound wildcard indexes, auto-encryption | Recommended for production Docker |
| 6.0 | mongo:6.0 | EOL Jul 2025 | Queryable encryption, cluster sync | Uses mongosh (not legacy mongo shell) |
| 5.0 | mongo:5.0 | EOL Oct 2024 | Time series, versioned API | Last version with legacy mongo shell |

| Compose Version | Command | Status | Notes |
| --- | --- | --- | --- |
| V2 (plugin) | docker compose | Current | Supports depends_on.condition: service_healthy |
| V1 (standalone) | docker-compose | Deprecated | Does not support service_healthy condition |

When to Use / When Not to Use

| Use When | Don't Use When | Use Instead |
| --- | --- | --- |
| Local development needing transactions or change streams | Simple CRUD dev without transactions | Standalone MongoDB (no replica set) |
| CI/CD pipeline integration testing | Production on cloud infrastructure | MongoDB Atlas or Kubernetes operator |
| Staging environment mirroring production HA | Single-server with no failover needs | Standalone MongoDB with backups |
| Learning MongoDB replication and failover | Need global distribution across regions | MongoDB Atlas global clusters |
| Testing application behavior during failover | Systems with < 4 GB RAM | Single-node replica set (1 member) |

Important Caveats

Related Units