Docker Compose Redis Cluster: Standalone, Sentinel & Cluster Modes

Type: Software Reference Confidence: 0.93 Sources: 7 Verified: 2026-02-27 Freshness: 2026-02-27

TL;DR

Constraints

Quick Reference

Redis Deployment Modes

| Mode | Nodes | HA | Sharding | Max Data | Write Scaling | Use Case |
|---|---|---|---|---|---|---|
| Standalone | 1 | No | No | Single-node RAM | No | Dev, testing, caching |
| Replication | 1 master + N replicas | Manual failover | No | Single-node RAM | Read-only replicas | Read scaling, backups |
| Sentinel | 1 master + N replicas + 3 sentinels | Auto failover | No | Single-node RAM | No | Production HA, <100GB |
| Cluster | 3+ masters + replicas | Auto failover | Yes (16384 hash slots) | Aggregate RAM | Yes | Production, >100GB, high throughput |

Docker Compose Service Configuration

| Service | Image | Ports | Volumes | Key Config |
|---|---|---|---|---|
| Redis Standalone | redis:7.4-alpine | 6379:6379 | redis-data:/data | appendonly yes |
| Redis Master (Sentinel) | redis:7.4-alpine | 6379:6379 | master-data:/data | appendonly yes |
| Redis Replica (Sentinel) | redis:7.4-alpine | 6380:6379 | replica-data:/data | --replicaof redis-master 6379 |
| Redis Sentinel | redis:7.4-alpine | 26379:26379 | sentinel-data:/data | sentinel monitor mymaster redis-master 6379 2 |
| Redis Cluster Node | redis:7.4-alpine | 6371-6376:6379 | node-N-data:/data | cluster-enabled yes |

Persistence Modes

| Strategy | Config | Data Loss Risk | Performance | Recommended For |
|---|---|---|---|---|
| None | save "", appendonly no | Total on restart | Fastest | Cache-only |
| RDB only | save 60 1000 | Minutes between snapshots | Fast | Backups, cache with recovery |
| AOF everysec | appendonly yes, appendfsync everysec | ~1 second | Good | Most production workloads |
| AOF always | appendonly yes, appendfsync always | At most one write | Slow | Financial/critical data |
| RDB + AOF | Both enabled | ~1 second | Good | PostgreSQL-level durability |

Decision Tree

START: What do you need from Redis?
├── Just a cache for development/testing?
│   ├── YES → Use Standalone mode (1 container, simplest setup)
│   └── NO ↓
├── Need high availability (auto-failover)?
│   ├── YES ↓
│   │   ├── Data fits in single node RAM (<100GB)?
│   │   │   ├── YES → Use Sentinel mode (1 master + 2 replicas + 3 sentinels)
│   │   │   └── NO ↓
│   │   └── Need to shard data or scale writes?
│   │       ├── YES → Use Cluster mode (6+ nodes: 3 masters + 3 replicas)
│   │       └── NO → Use Sentinel mode
│   └── NO ↓
├── Need read scaling only?
│   ├── YES → Use Replication (1 master + N replicas, no Sentinel)
│   └── NO ↓
└── DEFAULT → Start with Standalone, add Sentinel when uptime matters

Step-by-Step Guide

1. Set up Redis Standalone with Docker Compose

The simplest mode: a single Redis instance with optional persistence. Ideal for development and testing. [src4]

# docker-compose.yml -- Redis Standalone
services:
  redis:
    image: redis:7.4-alpine
    container_name: redis-standalone
    ports:
      - "6379:6379"
    volumes:
      - redis-data:/data
    command: >
      redis-server
      --appendonly yes
      --appendfsync everysec
      --save 60 1000
      --maxmemory 256mb
      --maxmemory-policy allkeys-lru
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 5s
      retries: 3

volumes:
  redis-data:

Verify: docker compose up -d && docker exec redis-standalone redis-cli ping → expected: PONG

2. Set up Redis Sentinel for high availability

Sentinel monitors a master and its replicas, automatically promoting a replica if the master fails. Recommended minimum: 1 master + 2 replicas + 3 sentinels; the trailing 2 in sentinel monitor is the quorum of sentinels that must agree the master is down before failover starts. [src3]

# sentinel.conf
sentinel monitor mymaster redis-master 6379 2
sentinel down-after-milliseconds mymaster 5000
sentinel failover-timeout mymaster 10000
sentinel parallel-syncs mymaster 1
sentinel resolve-hostnames yes

Full script: docker-compose-sentinel.yml (53 lines)
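The full compose file is referenced above; a minimal sketch of one sentinel service is shown below. The service names redis-master, redis-sentinel-1, and the network name redis-ha are assumptions chosen to match the config above, not the referenced file.

```yaml
# Sketch only -- one of three sentinel services; mount sentinel.conf writable,
# because Sentinel rewrites its own config file at runtime
services:
  redis-sentinel-1:
    image: redis:7.4-alpine
    ports:
      - "26379:26379"
    volumes:
      - ./sentinel.conf:/etc/redis/sentinel.conf
    command: redis-server /etc/redis/sentinel.conf --sentinel
    depends_on:
      - redis-master
    networks:
      - redis-ha

networks:
  redis-ha:
    driver: bridge
```

Repeat the service block for redis-sentinel-2 and redis-sentinel-3 (with distinct host ports) so the three sentinels can reach quorum.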

Verify: docker exec redis-sentinel-1 redis-cli -p 26379 sentinel masters → shows mymaster with status ok

3. Set up Redis Cluster (6 nodes)

Redis Cluster distributes data across 3+ masters using 16,384 hash slots. Each master has a replica for failover. [src1]

# redis-cluster.conf (shared by all nodes)
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
appendonly yes
appendfsync everysec
save 60 1000
maxmemory 256mb
maxmemory-policy allkeys-lru

Full script: docker-compose-cluster.yml (99 lines)
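Each of the six node services in the referenced file follows the same pattern; a sketch of one node is below. The service, volume, and network names are assumptions consistent with the verify commands in this guide.

```yaml
# Sketch only -- repeat for redis-node-2 .. redis-node-6, changing the
# host port (6372..6376) and the volume name per node
services:
  redis-node-1:
    image: redis:7.4-alpine
    ports:
      - "6371:6379"
    volumes:
      - ./redis-cluster.conf:/etc/redis/redis.conf
      - node-1-data:/data
    command: redis-server /etc/redis/redis.conf
    networks:
      - redis-cluster

networks:
  redis-cluster:
    driver: bridge

volumes:
  node-1-data:
```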

Verify: docker exec redis-node-1 redis-cli cluster info → cluster_state:ok, cluster_slots_assigned:16384

4. Create the Redis Cluster

After all 6 nodes are running, initialize the cluster topology. [src1]

# Create cluster with 3 masters + 3 replicas
docker exec redis-node-1 redis-cli --cluster create \
  redis-node-1:6379 redis-node-2:6379 redis-node-3:6379 \
  redis-node-4:6379 redis-node-5:6379 redis-node-6:6379 \
  --cluster-replicas 1 --cluster-yes

Verify: docker exec redis-node-1 redis-cli --cluster check redis-node-1:6379 → [OK] All 16384 slots covered

5. Configure persistence for production

Choose persistence based on durability requirements. Redis 7.0+ uses multi-part AOF with an appendonlydir directory. [src2]

# Production redis.conf -- RDB + AOF
save 900 1
save 300 10
save 60 10000
appendonly yes
appendfsync everysec
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000

Verify: docker exec redis-node-1 redis-cli info persistence → check aof_enabled:1, rdb_last_save_time is recent

Code Examples

Docker Compose: Redis Standalone with Custom Config

# Input:  redis.conf mounted from host
# Output: Redis instance with persistence and memory limits
services:
  redis:
    image: redis:7.4-alpine
    ports:
      - "6379:6379"
    volumes:
      - ./redis.conf:/usr/local/etc/redis/redis.conf
      - redis-data:/data
    command: redis-server /usr/local/etc/redis/redis.conf
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 5s
      retries: 3

volumes:
  redis-data:

Python: Connecting to Redis Cluster

# Input:  Redis Cluster running on localhost ports 6371-6376
# Output: Distributed key-value operations across cluster
from redis.cluster import RedisCluster, ClusterNode  # redis-py >= 4.1.0

rc = RedisCluster(
    startup_nodes=[
        ClusterNode("127.0.0.1", 6371),
        ClusterNode("127.0.0.1", 6372),
        ClusterNode("127.0.0.1", 6373),
    ],
    decode_responses=True,
)
rc.set("user:{1001}:name", "Alice")    # Hash slot from {1001}
rc.set("user:{1001}:email", "[email protected]") # Same slot
print(rc.get("user:{1001}:name"))      # "Alice"

Node.js: Connecting to Redis Cluster

// Input:  Redis Cluster running on localhost ports 6371-6376
// Output: Distributed key-value operations across cluster
import { createCluster } from 'redis';  // redis >= 4.0.0

const cluster = createCluster({
  rootNodes: [
    { url: 'redis://127.0.0.1:6371' },
    { url: 'redis://127.0.0.1:6372' },
    { url: 'redis://127.0.0.1:6373' },
  ],
});
await cluster.connect();
await cluster.set('user:{1001}:name', 'Alice');
const name = await cluster.get('user:{1001}:name');
console.log(name); // "Alice"
await cluster.quit();

Anti-Patterns

Wrong: Using default bridge network for Redis Cluster

# BAD -- the default bridge network has no DNS resolution by container name,
# so cluster nodes cannot find each other by hostname
services:
  redis-node-1:
    image: redis:7.4-alpine
    ports:
      - "6371:6379"
    # No custom network defined -- uses default bridge

Correct: Using a user-defined bridge network

# GOOD -- user-defined bridge network enables DNS resolution by container name
services:
  redis-node-1:
    image: redis:7.4-alpine
    networks:
      - redis-cluster

networks:
  redis-cluster:
    driver: bridge

Wrong: No volume for cluster-config-file

# BAD -- nodes.conf lost on container restart, cluster state corrupted
services:
  redis-node-1:
    image: redis:7.4-alpine
    command: redis-server --cluster-enabled yes --cluster-config-file nodes.conf
    # No volume for /data -- nodes.conf is written to /data

Correct: Persistent volume for /data directory

# GOOD -- nodes.conf and persistence files survive restarts
services:
  redis-node-1:
    image: redis:7.4-alpine
    command: redis-server --cluster-enabled yes --cluster-config-file nodes.conf
    volumes:
      - node1-data:/data

volumes:
  node1-data:

Wrong: Using MGET across different hash slots

# BAD -- keys hash to three different slots; the multi-key command is rejected
rc.mget("user:1:name", "user:2:name", "user:3:name")
# Fails with a cross-slot error (CROSSSLOT Keys in request don't hash to the same slot)

Correct: Using hash tags to co-locate keys

# GOOD -- {user:1} ensures all keys go to the same hash slot
rc.mget("{user:1}:name", "{user:1}:email", "{user:1}:role")
# All keys share hash slot for {user:1}
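The hash-tag routing above can be reproduced locally: Redis Cluster computes slot = CRC16(key) mod 16384 using the CRC16-CCITT (XMODEM) variant, hashing only the substring inside the first non-empty {...} when one is present. A pure-Python sketch of the published algorithm, needing no server:

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC16-CCITT (XMODEM), the polynomial Redis Cluster uses (0x1021)."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def keyslot(key: str) -> int:
    """Slot for a key, honoring Redis Cluster hash tags."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end > start + 1:  # non-empty tag: hash only its contents
            key = key[start + 1:end]
    return crc16_xmodem(key.encode()) % 16384

# Keys sharing the {user:1} tag land on the same slot, so MGET works on them
assert keyslot("{user:1}:name") == keyslot("{user:1}:email") == keyslot("user:1")
```

Comparing the result against `docker exec redis-node-1 redis-cli cluster keyslot <key>` is a quick sanity check.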

Wrong: Exposing only client port in firewall

# BAD -- cluster bus port (client port + 10000, here 16379) blocked, nodes can't gossip
services:
  redis-node-1:
    image: redis:7.4-alpine
    ports:
      - "6379:6379"
    # Missing: port 16379 for cluster bus

Correct: Both ports accessible on user-defined network

# GOOD -- all ports open between containers on user-defined network
services:
  redis-node-1:
    image: redis:7.4-alpine
    ports:
      - "6371:6379"  # Client access from host only
    networks:
      - redis-cluster  # Port 16379 open within network

Common Pitfalls

Diagnostic Commands

# Check cluster health (from any node)
docker exec redis-node-1 redis-cli cluster info

# Verify all 16384 slots are assigned
docker exec redis-node-1 redis-cli --cluster check redis-node-1:6379

# List all cluster nodes and their roles
docker exec redis-node-1 redis-cli cluster nodes

# Check which slot a key belongs to
docker exec redis-node-1 redis-cli cluster keyslot mykey

# Check Sentinel status (Sentinel mode)
docker exec redis-sentinel-1 redis-cli -p 26379 sentinel masters

# Trigger manual failover on a replica
docker exec redis-node-4 redis-cli cluster failover

# Check persistence status
docker exec redis-node-1 redis-cli info persistence

# Check memory usage
docker exec redis-node-1 redis-cli info memory

# Test cluster write across all masters
docker exec redis-node-1 redis-cli -c set test:key1 "hello"
docker exec redis-node-1 redis-cli -c set test:key2 "world"
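The CLUSTER NODES output used above is line-oriented and straightforward to parse in scripts. A sketch follows; the node ID and addresses in the sample line are made up for illustration (real IDs are 40-character hex strings assigned by Redis):

```python
def parse_cluster_node(line: str) -> dict:
    """Parse one line of CLUSTER NODES output into its fixed fields."""
    parts = line.split()
    return {
        "id": parts[0],
        "addr": parts[1].split("@")[0],  # strip the @cluster-bus-port suffix
        "flags": parts[2].split(","),
        "role": "master" if "master" in parts[2].split(",") else "replica",
        "link_state": parts[7],
        "slots": parts[8:],              # empty for replicas
    }

# Hypothetical sample line from `redis-cli cluster nodes`
sample = ("07c37dfeb235213a872192d90877d0cd55635b91 172.20.0.2:6379@16379 "
          "myself,master - 0 1700000000000 1 connected 0-5460")
node = parse_cluster_node(sample)
print(node["role"], node["addr"], node["slots"])  # master 172.20.0.2:6379 ['0-5460']
```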

Version History & Compatibility

| Version | Status | Key Changes | Docker Image |
|---|---|---|---|
| Redis 8.0 | Current | Unified modules into core, new ACL categories | redis:8.0-alpine |
| Redis 7.4 | Stable | Hash field expiration, performance improvements | redis:7.4-alpine |
| Redis 7.2 | Maintained | Cluster improvements, enhanced ACLs | redis:7.2-alpine |
| Redis 7.0 | Maintained | Multi-part AOF, sharded pub/sub, CLUSTER SHARDS | redis:7.0-alpine |
| Redis 6.2 | EOL | ACL improvements, GETDEL/GETEX commands | redis:6.2-alpine |
| Redis 4.0 | EOL | cluster-announce-ip/port (NAT support), modules API | N/A |

| Docker Compose | Status | Notes |
|---|---|---|
| V2 (docker compose) | Current | Built into Docker CLI, Compose Specification syntax |
| V1 (docker-compose) | Deprecated | Standalone binary, unmaintained since July 2023 |

When to Use / When Not to Use

| Use When | Don't Use When | Use Instead |
|---|---|---|
| Need sharding across multiple nodes (data > single-node RAM) | Data fits in a single Redis instance | Standalone or Sentinel |
| Need automatic failover AND horizontal write scaling | Only need HA without sharding | Sentinel (simpler ops) |
| High-throughput writes exceeding single-node capacity | Latency-sensitive multi-key transactions across hash slots | Single-instance Redis for atomicity |
| Building microservices with independent keyspaces | Running on a single Docker host with limited resources | Standalone with persistence |
| Production system requiring zero-downtime upgrades | Development or testing environment | Standalone is sufficient |

Important Caveats

Related Units