A MongoDB replica set in Docker Compose comes down to four things: matching --replSet names, a shared keyfile for auth, a custom bridge network for DNS resolution, and a one-time rs.initiate() call to bootstrap the cluster:

mongosh --eval "rs.initiate({_id:'rs0', members:[{_id:0,host:'mongo1:27017'},{_id:1,host:'mongo2:27017'},{_id:2,host:'mongo3:27017'}]})"

Prerequisites: Docker Compose V2 (docker compose plugin); Docker Engine 24+.

- The --replSet flag MUST match the _id in rs.initiate() -- a mismatch causes silent failure.
- Enable internal authentication (keyFile or x509) -- any network-accessible mongod without auth is fully open.
- rs.initiate() members MUST be resolvable by all replica set members AND by connecting clients -- using localhost inside containers breaks cross-node communication.

| Service | Image | Command Flags | Ports | Volumes | Key Env |
|---|---|---|---|---|---|
| mongo1 (primary) | mongo:7.0 | --replSet rs0 --keyFile /etc/mongo-keyfile --bind_ip_all | 27017:27017 | mongo1-data:/data/db, keyfile:ro | MONGO_INITDB_ROOT_USERNAME, MONGO_INITDB_ROOT_PASSWORD |
| mongo2 (secondary) | mongo:7.0 | --replSet rs0 --keyFile /etc/mongo-keyfile --bind_ip_all | 27018:27017 | mongo2-data:/data/db, keyfile:ro | Same as above |
| mongo3 (secondary) | mongo:7.0 | --replSet rs0 --keyFile /etc/mongo-keyfile --bind_ip_all | 27019:27017 | mongo3-data:/data/db, keyfile:ro | Same as above |
| mongo-arbiter (optional) | mongo:7.0 | --replSet rs0 --keyFile /etc/mongo-keyfile --bind_ip_all | 27020:27017 | keyfile:ro (no data volume) | None |
| mongo-init (one-shot) | mongo:7.0 | Runs mongosh to execute rs.initiate() | None | init script:ro | None |
| Role | Votes | Priority | Stores Data | Purpose |
|---|---|---|---|---|
| Primary | 1 | Highest (e.g., 3) | Yes | Accepts all writes; serves reads by default |
| Secondary | 1 | Lower (e.g., 1) | Yes | Replicates from primary; can serve reads with readPreference |
| Arbiter | 1 | 0 | No | Participates in elections only; breaks ties |
| Hidden | 1 | 0 | Yes | Invisible to clients; used for analytics/backup |
| Delayed | 1 | 0 | Yes | Lags behind primary by configured seconds; disaster recovery |
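The roles above map directly onto member options in the rs.initiate() config. As a hedged sketch (the mongo4/mongo5 hostnames and the 3600-second delay are illustrative, not part of the compose file in this guide):

```javascript
// Sketch: an rs.initiate() config combining the roles from the table above.
const config = {
  _id: "rs0",
  members: [
    { _id: 0, host: "mongo1:27017", priority: 3 },                 // preferred primary
    { _id: 1, host: "mongo2:27017", priority: 1 },                 // regular secondary
    { _id: 2, host: "mongo3:27017", priority: 0, hidden: true },   // hidden: analytics/backup
    { _id: 3, host: "mongo4:27017", priority: 0, hidden: true,
      secondaryDelaySecs: 3600 },                                  // delayed: disaster recovery
    { _id: 4, host: "mongo5:27017", arbiterOnly: true },           // arbiter: votes only, no data
  ],
};
// In mongosh: rs.initiate(config)
```

Note that delayed members must have priority 0, and hiding them keeps clients from reading stale data.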
| Context | Connection String |
|---|---|
| From host (no auth) | mongodb://localhost:27017,localhost:27018,localhost:27019/?replicaSet=rs0 |
| From host (with auth) | mongodb://admin:password@localhost:27017,localhost:27018,localhost:27019/?replicaSet=rs0&authSource=admin |
| From Docker network | mongodb://admin:password@mongo1:27017,mongo2:27017,mongo3:27017/?replicaSet=rs0&authSource=admin |
| With TLS | Append &tls=true&tlsCAFile=/path/to/ca.pem |
| Node.js driver | mongodb://admin:password@mongo1:27017,mongo2:27017,mongo3:27017/mydb?replicaSet=rs0&authSource=admin&w=majority |
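The strings in this table all follow the same shape (optional credentials, comma-separated host list, query options). A small hypothetical helper, not part of any driver API, makes the structure explicit:

```javascript
// buildUri -- hypothetical helper that assembles the connection strings above.
function buildUri({ user, pass, hosts, options }) {
  // Credentials are percent-encoded so special characters survive the URI.
  const auth = user ? `${encodeURIComponent(user)}:${encodeURIComponent(pass)}@` : "";
  const opts = new URLSearchParams(options).toString();
  return `mongodb://${auth}${hosts.join(",")}/` + (opts ? `?${opts}` : "");
}

const uri = buildUri({
  user: "admin",
  pass: "password",
  hosts: ["mongo1:27017", "mongo2:27017", "mongo3:27017"],
  options: { replicaSet: "rs0", authSource: "admin" },
});
console.log(uri);
// → mongodb://admin:password@mongo1:27017,mongo2:27017,mongo3:27017/?replicaSet=rs0&authSource=admin
```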
START: What is your use case?
├── Local development (need transactions/change streams)?
│ ├── YES → Single-node replica set (1 member, simplest setup)
│ └── NO ↓
├── Development/staging with HA testing?
│ ├── YES → 3-node replica set (primary + 2 secondaries)
│ └── NO ↓
├── Production with budget constraints?
│ ├── YES → 2 data nodes + 1 arbiter (saves storage, still has quorum)
│ └── NO ↓
├── Production with full redundancy?
│ ├── YES → 3-node replica set with keyfile auth + named volumes + resource limits
│ └── NO ↓
├── Need read scaling across regions?
│ ├── YES → 5 or 7 members with priorities + readPreference=secondary
│ └── NO ↓
└── DEFAULT → 3-node replica set with keyfile authentication
The keyfile is used for inter-node authentication within the replica set. All members must share the same keyfile. [src1]
# Generate a random 756-byte base64-encoded key
openssl rand -base64 756 > ./keyfile
# Set restrictive permissions (MongoDB requires 600 or stricter; 400 is safest)
chmod 400 ./keyfile
# On Linux, change ownership to match the MongoDB container user (UID 999)
sudo chown 999:999 ./keyfile
Verify: ls -la ./keyfile → expected: -r-------- 1 999 999 1024 ... keyfile
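The generate-and-chmod steps above can be folded into one verification pass. A sketch assuming GNU coreutils (`stat -c`; on macOS use `stat -f '%Lp'`); the `sudo chown` step still has to run separately:

```shell
# Generate the keyfile, then verify size and mode in one pass (sketch)
openssl rand -base64 756 > ./keyfile
chmod 400 ./keyfile
size=$(wc -c < ./keyfile)        # 756 random bytes -> 1024 bytes of base64 (incl. newlines)
mode=$(stat -c '%a' ./keyfile)   # GNU stat; macOS: stat -f '%Lp' ./keyfile
if [ "$size" -eq 1024 ] && [ "$mode" = "400" ]; then
  echo "keyfile OK: $size bytes, mode $mode"
else
  echo "keyfile BAD: $size bytes, mode $mode"
fi
```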
Define three MongoDB services on a shared network with the keyfile mounted read-only. [src2] [src4]
Full docker-compose.yml: see markdown source for the complete 3-node configuration with healthchecks, named volumes, and an init container.
Verify: docker compose config → should output valid YAML with no errors.
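As a sketch of the shape one service takes (mongo2 and mongo3 are analogous; the network name and healthcheck details here are assumptions, not the complete configuration):

```yaml
# Sketch: one of the three services (mongo2/mongo3 differ only in name, port, volume)
services:
  mongo1:
    image: mongo:7.0
    command: ["--replSet", "rs0", "--keyFile", "/etc/mongo-keyfile", "--bind_ip_all"]
    environment:
      MONGO_INITDB_ROOT_USERNAME: admin
      MONGO_INITDB_ROOT_PASSWORD: changeme
    ports:
      - "27017:27017"
    volumes:
      - mongo1-data:/data/db
      - ./keyfile:/etc/mongo-keyfile:ro
    networks: [mongo-net]
    healthcheck:
      # ping is permitted without authentication, so this works with auth enabled
      test: ["CMD-SHELL", "mongosh --quiet --eval 'db.adminCommand(\"ping\").ok' || exit 1"]
      interval: 10s
      timeout: 10s
      retries: 5
networks:
  mongo-net:
volumes:
  mongo1-data:
```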
This script connects to the primary candidate and runs rs.initiate() with the member list. [src1] [src5]
#!/bin/bash
# init-replica.sh -- one-shot bootstrap for the replica set
echo "Waiting for MongoDB nodes..."
sleep 10
mongosh --host mongo1:27017 -u admin -p "${MONGO_PASSWORD:-changeme}" \
  --authenticationDatabase admin --eval '
rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "mongo1:27017", priority: 3 },
    { _id: 1, host: "mongo2:27017", priority: 2 },
    { _id: 2, host: "mongo3:27017", priority: 1 }
  ]
});
'
Verify: docker logs mongo-init → expected: member list with PRIMARY and SECONDARY states.
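The fixed `sleep 10` in the script is fragile on slow machines. One hedged alternative is a small retry helper (hypothetical, not part of the original script) that polls until the node answers:

```shell
# retry N CMD... -- run CMD up to N times, 2s apart, until it succeeds
retry() {
  n="$1"; shift
  i=1
  while [ "$i" -le "$n" ]; do
    "$@" && return 0
    i=$((i + 1))
    sleep 2
  done
  return 1
}
# Usage in init-replica.sh, replacing the sleep (hostname from the compose file):
#   retry 30 mongosh --host mongo1:27017 --quiet --eval "db.adminCommand('ping').ok"
```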
Bring up all services and check the replica set status. [src3]
# Start all services
docker compose up -d
# Check init container logs
docker compose logs -f mongo-init
# Connect to primary and verify
docker exec -it mongo1 mongosh -u admin -p changeme \
--authenticationDatabase admin --eval "rs.status()"
Verify: rs.status().members should show 3 members -- one PRIMARY and two SECONDARY.
After the replica set is running, create a dedicated database user for your application. [src1]
use admin
db.createUser({
  user: "appuser",
  pwd: "appsecret",
  roles: [
    { role: "readWrite", db: "myapp" },
    { role: "readWrite", db: "myapp_test" }
  ]
});
Verify: mongosh -u appuser -p appsecret --authenticationDatabase admin myapp --eval "db.test.insertOne({ok:1})" → expected: { acknowledged: true }
Verify replication by writing with majority write concern and reading from secondaries. [src7]
db.test.insertOne(
  { msg: "replicated", ts: new Date() },
  { writeConcern: { w: "majority", wtimeout: 5000 } }
);
db.getMongo().setReadPref("secondary");
db.test.find();
Verify: Document appears on secondary reads; rs.printReplicationInfo() shows replication lag < 1 second.
# docker-compose.yml -- Single-node replica set for local dev
services:
  mongodb:
    image: mongo:7.0
    command: ["--replSet", "rs0", "--bind_ip_all"]
    ports:
      - "27017:27017"
    volumes:
      - mongo-data:/data/db
    healthcheck:
      test: >
        mongosh --eval "try{rs.status().ok}catch(e){rs.initiate({_id:'rs0',members:[{_id:0,host:'localhost:27017'}]}).ok}"
      interval: 10s
      timeout: 10s
      retries: 5
volumes:
  mongo-data:
const { MongoClient } = require('mongodb'); // ^6.0.0

const uri = 'mongodb://admin:changeme@mongo1:27017,mongo2:27017,mongo3:27017/?replicaSet=rs0&authSource=admin';
const client = new MongoClient(uri, {
  readPreference: 'secondaryPreferred',
  w: 'majority',
  retryWrites: true,
  retryReads: true,
  serverSelectionTimeoutMS: 5000,
});

// await is only valid inside an async function in CommonJS
async function main() {
  await client.connect();
  const status = await client.db('admin').command({ replSetGetStatus: 1 });
  console.log('Connected to replica set:', status.set);
  await client.close();
}
main().catch(console.error);
from pymongo import MongoClient  # pymongo >= 4.6.0

uri = (
    "mongodb://admin:changeme@mongo1:27017,mongo2:27017,mongo3:27017/"
    "?replicaSet=rs0&authSource=admin"
    "&readPreference=secondaryPreferred&w=majority"
)
client = MongoClient(uri, serverSelectionTimeoutMS=5000)
status = client.admin.command("replSetGetStatus")
print(f"Connected to: {status['set']}")
2 data nodes + 1 arbiter: saves storage costs while maintaining quorum for elections. See markdown source for complete docker-compose.yml with arbiterOnly: true in rs.initiate().
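The arbiter variant differs from the standard rs.initiate() call only in the third member. A sketch, with the mongo-arbiter hostname taken from the services table above:

```javascript
// PSA topology: two data-bearing members plus an arbiter.
const config = {
  _id: "rs0",
  members: [
    { _id: 0, host: "mongo1:27017", priority: 2 },
    { _id: 1, host: "mongo2:27017", priority: 1 },
    { _id: 2, host: "mongo-arbiter:27017", arbiterOnly: true }, // votes, stores no data
  ],
};
// In mongosh: rs.initiate(config)
// Caveat: arbiters cannot acknowledge writes, so with one data node down,
// w:"majority" writes stall even though elections still succeed.
```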
# BAD -- localhost only resolves within a single container
command: ["--replSet", "rs0", "--bind_ip", "localhost"]
# rs.initiate with localhost: members cannot reach each other
# GOOD -- container hostnames resolve via Docker DNS
command: ["--replSet", "rs0", "--bind_ip_all"]
# members: [{host: "mongo1:27017"}, {host: "mongo2:27017"}, ...]
# BAD -- MongoDB refuses to start
chmod 644 ./keyfile
# Error: "permissions on /etc/mongo-keyfile are too open"
# GOOD -- only owner can read, owned by mongodb user (UID 999)
chmod 400 ./keyfile
sudo chown 999:999 ./keyfile
# BAD -- replSet name in command does not match rs.initiate _id
command: ["--replSet", "myRS"]
# Then: rs.initiate({ _id: "rs0", ... }) -- silent failure
# GOOD -- same name in command flag and rs.initiate
command: ["--replSet", "rs0"]
# rs.initiate({ _id: "rs0", ... })
# BAD -- no mechanism to initialize replica set automatically
services:
  mongo1:
    image: mongo:7.0
    command: ["--replSet", "rs0"]
    # Requires manual mongosh after every restart

# GOOD -- healthcheck auto-initializes on first run
healthcheck:
  test: >
    mongosh --eval "try{rs.status().ok}catch(e){rs.initiate({_id:'rs0',members:[{_id:0,host:'mongo1:27017'}]}).ok}"
  interval: 10s
// BAD -- 2 voting members cannot elect if one fails
rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "mongo1:27017" },
    { _id: 1, host: "mongo2:27017" }
  ]
});

// GOOD -- 3 members: majority is 2, survives 1 failure
rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "mongo1:27017" },
    { _id: 1, host: "mongo2:27017" },
    { _id: 2, host: "mongo3:27017" }
  ]
});
Common issues and fixes:

- Keyfile rejected on Linux: ownership must match the container's mongodb user. Fix: chmod 400 ./keyfile and sudo chown 999:999 ./keyfile. [src1]
- MONGO_INITDB_ROOT_USERNAME and MONGO_INITDB_ROOT_PASSWORD are only processed when /data/db is empty. Fix: docker compose down -v to re-initialize. [src2]
- The host cannot resolve container hostnames such as mongo1. Fix: Add entries to /etc/hosts or use localhost:27017,localhost:27018,localhost:27019. [src5]
- The init container runs before the mongod nodes are ready. Fix: Use depends_on with condition: service_healthy or add sleep 10. [src4]
- Writes fail briefly during failover: the default electionTimeoutMillis is 10 seconds, and writes fail during this window. Fix: Implement retry logic in your application. [src7]
- Members cannot reach each other across containers. Fix: Start mongod with --bind_ip_all. [src3]
- Acknowledged writes lost after failover: w:1 acknowledges the primary only. Fix: Use w: "majority" for important writes. [src7]

# Check replica set status
docker exec -it mongo1 mongosh -u admin -p changeme --authenticationDatabase admin --eval "rs.status()"
# Show replica set configuration
docker exec -it mongo1 mongosh -u admin -p changeme --authenticationDatabase admin --eval "rs.conf()"
# Check replication lag
docker exec -it mongo1 mongosh -u admin -p changeme --authenticationDatabase admin --eval "rs.printReplicationInfo()"
# Identify primary node (db.hello() replaces the deprecated rs.isMaster())
docker exec -it mongo1 mongosh -u admin -p changeme --authenticationDatabase admin --eval "db.hello().primary"
# Verify all members are healthy
docker exec -it mongo1 mongosh -u admin -p changeme --authenticationDatabase admin --eval "rs.status().members.map(m => ({name: m.name, state: m.stateStr, health: m.health}))"
# Test connection string from host
mongosh "mongodb://admin:changeme@localhost:27017,localhost:27018,localhost:27019/?replicaSet=rs0&authSource=admin" --eval "db.runCommand({ping:1})"
# Check container health
docker compose ps
# Force primary step-down (failover testing)
docker exec -it mongo1 mongosh -u admin -p changeme --authenticationDatabase admin --eval "rs.stepDown(60)"
| MongoDB Version | Docker Tag | Status | Key Changes | Compose Notes |
|---|---|---|---|---|
| 8.0 | mongo:8.0 | Current | New query optimizer, improved sharding | Same compose config as 7.0 |
| 7.0 | mongo:7.0 | Current LTS | Compound wildcard indexes, auto-encryption | Recommended for production Docker |
| 6.0 | mongo:6.0 | Supported until Aug 2025 | Queryable encryption, cluster sync | Uses mongosh (not legacy mongo shell) |
| 5.0 | mongo:5.0 | EOL Oct 2024 | Time series, versioned API | Last version with legacy mongo shell |
| Compose Version | Command | Status | Notes |
|---|---|---|---|
| V2 (plugin) | docker compose | Current | Supports depends_on.condition: service_healthy |
| V1 (standalone) | docker-compose | Deprecated | Does not support service_healthy condition |
| Use When | Don't Use When | Use Instead |
|---|---|---|
| Local development needing transactions or change streams | Simple CRUD dev without transactions | Standalone MongoDB (no replica set) |
| CI/CD pipeline integration testing | Production on cloud infrastructure | MongoDB Atlas or Kubernetes operator |
| Staging environment mirroring production HA | Single-server with no failover needs | Standalone MongoDB with backups |
| Learning MongoDB replication and failover | Need global distribution across regions | MongoDB Atlas global clusters |
| Testing application behavior during failover | Systems with < 4 GB RAM | Single-node replica set (1 member) |
- docker compose down -v deletes all data -- never use -v in production without backups
- MONGO_INITDB_ROOT_USERNAME/PASSWORD only initialize on first run when /data/db is empty; changing them after has no effect