Quick facts: a six-node cluster is created with redis-cli --cluster create node1:6379 node2:6379 node3:6379 node4:6379 node5:6379 node6:6379 --cluster-replicas 1; behind NAT or Docker bridge networks you need cluster-announce-ip or host networking; all examples use the redis:7.4-alpine image.

| Mode | Nodes | HA | Sharding | Max Data | Write Scaling | Use Case |
|---|---|---|---|---|---|---|
| Standalone | 1 | No | No | Single node RAM | No | Dev, testing, caching |
| Replication | 1 master + N replicas | Manual failover | No | Single node RAM | No (replicas are read-only) | Read scaling, backups |
| Sentinel | 1 master + N replicas + 3 sentinels | Auto failover | No | Single node RAM | No | Production HA, <100GB |
| Cluster | 3+ masters + replicas | Auto failover | Yes (16384 hash slots) | Aggregate RAM | Yes | Production, >100GB, high throughput |
| Service | Image | Ports | Volumes | Key Config |
|---|---|---|---|---|
| Redis Standalone | redis:7.4-alpine | 6379:6379 | redis-data:/data | appendonly yes |
| Redis Master (Sentinel) | redis:7.4-alpine | 6379:6379 | master-data:/data | appendonly yes |
| Redis Replica (Sentinel) | redis:7.4-alpine | 6380:6379 | replica-data:/data | --replicaof redis-master 6379 |
| Redis Sentinel | redis:7.4-alpine | 26379:26379 | sentinel-data:/data | sentinel monitor mymaster redis-master 6379 2 |
| Redis Cluster Node | redis:7.4-alpine | 6371-6376:6379 | node-N-data:/data | cluster-enabled yes |
| Strategy | Config | Data Loss Risk | Performance | Recommended For |
|---|---|---|---|---|
| None | Default | Total on restart | Fastest | Cache-only |
| RDB only | save 60 1000 | Minutes between snapshots | Fast | Backups, cache with recovery |
| AOF everysec | appendonly yes, appendfsync everysec | ~1 second | Good | Most production workloads |
| AOF always | appendonly yes, appendfsync always | None | Slow | Financial/critical data |
| RDB + AOF | Both enabled | ~1 second | Good | Maximum safety: fast restarts (RDB) plus durability (AOF) |
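These settings can also be inspected and switched at runtime with CONFIG GET/SET, which is handy when promoting a cache-only node to a durable one. A minimal redis-py sketch (host, port, and the values shown are assumptions matching the examples below):

import redis

r = redis.Redis(host="127.0.0.1", port=6379, decode_responses=True)

# Inspect the active persistence settings.
print(r.config_get("appendonly"))   # e.g. {'appendonly': 'no'}
print(r.config_get("save"))         # e.g. {'save': '60 1000'}

# Switch strategy at runtime -- no restart needed.
r.config_set("appendonly", "yes")         # enable AOF
r.config_set("appendfsync", "everysec")   # ~1s data-loss window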
START: What do you need from Redis?
├── Just a cache for development/testing?
│ ├── YES → Use Standalone mode (1 container, simplest setup)
│ └── NO ↓
├── Need high availability (auto-failover)?
│ ├── YES ↓
│ │ ├── Data fits in single node RAM (<100GB)?
│ │ │ ├── YES → Use Sentinel mode (1 master + 2 replicas + 3 sentinels)
│ │ │ └── NO ↓
│ │ └── Need to shard data or scale writes?
│ │ ├── YES → Use Cluster mode (6+ nodes: 3 masters + 3 replicas)
│ │ └── NO → Use Sentinel mode
│ └── NO ↓
├── Need read scaling only?
│ ├── YES → Use Replication (1 master + N replicas, no Sentinel)
│ └── NO ↓
└── DEFAULT → Start with Standalone, add Sentinel when uptime matters
The simplest mode: a single Redis instance with optional persistence. Ideal for development and testing. [src4]
# docker-compose.yml -- Redis Standalone
services:
  redis:
    image: redis:7.4-alpine
    container_name: redis-standalone
    ports:
      - "6379:6379"
    volumes:
      - redis-data:/data
    command: >
      redis-server
      --appendonly yes
      --appendfsync everysec
      --save 60 1000
      --maxmemory 256mb
      --maxmemory-policy allkeys-lru
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 5s
      retries: 3

volumes:
  redis-data:
Verify: docker compose up -d && docker exec redis-standalone redis-cli ping → expected: PONG
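Beyond PONG, a quick smoke test from application code confirms the port mapping and eviction policy; a sketch assuming redis-py and the compose file above:

import redis

r = redis.Redis(host="127.0.0.1", port=6379, decode_responses=True)

assert r.ping()                          # same probe as the healthcheck
r.set("smoke:key", "ok", ex=60)          # short TTL keeps test keys from lingering
assert r.get("smoke:key") == "ok"
print(r.config_get("maxmemory-policy"))  # {'maxmemory-policy': 'allkeys-lru'}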
Sentinel monitors a master and its replicas, automatically promoting a replica if the master fails. Minimum: 1 master + 2 replicas + 3 sentinels. [src3]
# sentinel.conf
sentinel monitor mymaster redis-master 6379 2
sentinel down-after-milliseconds mymaster 5000
sentinel failover-timeout mymaster 10000
sentinel parallel-syncs mymaster 1
sentinel resolve-hostnames yes
Full script: docker-compose-sentinel.yml (53 lines)
Verify: docker exec redis-sentinel-1 redis-cli -p 26379 sentinel masters → shows mymaster with status ok
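Clients should not hardcode the master's address; they ask Sentinel where the master currently lives, so connections survive a failover. A sketch using redis-py's Sentinel support (addresses assume the port mapping above):

from redis.sentinel import Sentinel

# One sentinel is enough to bootstrap; list all three for redundancy.
sentinel = Sentinel([("127.0.0.1", 26379)], socket_timeout=0.5)

master = sentinel.master_for("mymaster", decode_responses=True)  # routes writes
replica = sentinel.slave_for("mymaster", decode_responses=True)  # routes reads

master.set("hello", "world")
print(replica.get("hello"))  # "world" once replication has caught up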
Redis Cluster distributes data across 3+ masters using 16,384 hash slots. Each master has a replica for failover. [src1]
# redis-cluster.conf (shared by all nodes)
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
appendonly yes
appendfsync everysec
save 60 1000
maxmemory 256mb
maxmemory-policy allkeys-lru
Full script: docker-compose-cluster.yml (99 lines)
Verify: docker exec redis-node-1 redis-cli cluster info → cluster_state:ok, cluster_slots_assigned:16384
After all 6 nodes are running, initialize the cluster topology. [src1]
# Create cluster with 3 masters + 3 replicas
docker exec redis-node-1 redis-cli --cluster create \
redis-node-1:6379 redis-node-2:6379 redis-node-3:6379 \
redis-node-4:6379 redis-node-5:6379 redis-node-6:6379 \
--cluster-replicas 1 --cluster-yes
Verify: docker exec redis-node-1 redis-cli --cluster check redis-node-1:6379 → [OK] All 16384 slots covered
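Cluster creation converges asynchronously, so automation should poll for cluster_state:ok rather than assume the create command returning means the cluster is ready. A sketch with redis-py, which parses CLUSTER INFO into a dict in recent versions:

import time

import redis

r = redis.Redis(host="127.0.0.1", port=6371, decode_responses=True)

for _ in range(30):  # allow up to ~30s for the cluster to converge
    info = r.execute_command("CLUSTER INFO")  # parsed into a dict by redis-py
    if info.get("cluster_state") == "ok":
        print("slots assigned:", info.get("cluster_slots_assigned"))  # 16384
        break
    time.sleep(1)
else:
    raise RuntimeError("cluster did not reach cluster_state:ok")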
Choose persistence based on durability requirements. Redis 7.0+ uses multi-part AOF with an appendonlydir directory. [src2]
# Production redis.conf -- RDB + AOF
save 900 1
save 300 10
save 60 10000
appendonly yes
appendfsync everysec
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
Verify: docker exec redis-node-1 redis-cli info persistence → check aof_enabled:1, rdb_last_save_time is recent
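The same check can run in application code or a monitoring job via the INFO persistence section; a sketch assuming redis-py and node 1's host port:

import time

import redis

r = redis.Redis(host="127.0.0.1", port=6371, decode_responses=True)

p = r.info("persistence")  # same fields as `redis-cli info persistence`
assert p["aof_enabled"] == 1

snapshot_age = time.time() - p["rdb_last_save_time"]
print(f"last RDB snapshot {snapshot_age:.0f}s ago")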
# Input: redis.conf mounted from host
# Output: Redis instance with persistence and memory limits
services:
  redis:
    image: redis:7.4-alpine
    ports:
      - "6379:6379"
    volumes:
      - ./redis.conf:/usr/local/etc/redis/redis.conf
      - redis-data:/data
    command: redis-server /usr/local/etc/redis/redis.conf
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 5s
      retries: 3

volumes:
  redis-data:
# Input: Redis Cluster running on localhost ports 6371-6376
# Output: Distributed key-value operations across cluster
from redis.cluster import RedisCluster, ClusterNode  # redis-py >= 4.1.0

rc = RedisCluster(
    startup_nodes=[
        ClusterNode("127.0.0.1", 6371),
        ClusterNode("127.0.0.1", 6372),
        ClusterNode("127.0.0.1", 6373),
    ],
    decode_responses=True,
)

rc.set("user:{1001}:name", "Alice")              # hash slot derived from {1001}
rc.set("user:{1001}:email", "[email protected]")  # same slot
print(rc.get("user:{1001}:name"))                # "Alice"
// Input: Redis Cluster running on localhost ports 6371-6376
// Output: Distributed key-value operations across cluster
import { createCluster } from 'redis'; // redis >= 4.0.0

const cluster = createCluster({
  rootNodes: [
    { url: 'redis://127.0.0.1:6371' },
    { url: 'redis://127.0.0.1:6372' },
    { url: 'redis://127.0.0.1:6373' },
  ],
});

await cluster.connect();
await cluster.set('user:{1001}:name', 'Alice');
const name = await cluster.get('user:{1001}:name');
console.log(name); // "Alice"
await cluster.quit();
# BAD -- default bridge network has no DNS by container name, and container
# IPs change on restart, invalidating the addresses stored in nodes.conf
services:
  redis-node-1:
    image: redis:7.4-alpine
    ports:
      - "6371:6379"
    # No custom network defined -- falls back to the default bridge
# GOOD -- user-defined bridge network enables DNS resolution by container name
services:
  redis-node-1:
    image: redis:7.4-alpine
    networks:
      - redis-cluster

networks:
  redis-cluster:
    driver: bridge
# BAD -- nodes.conf lost on container restart, cluster state corrupted
services:
  redis-node-1:
    image: redis:7.4-alpine
    command: redis-server --cluster-enabled yes --cluster-config-file nodes.conf
    # No volume for /data -- nodes.conf is written to /data
# GOOD -- nodes.conf and persistence files survive restarts
services:
  redis-node-1:
    image: redis:7.4-alpine
    command: redis-server --cluster-enabled yes --cluster-config-file nodes.conf
    volumes:
      - node1-data:/data

volumes:
  node1-data:
# BAD -- keys hash to different slots, so the server rejects the command
rc.mget("user:1:name", "user:2:name", "user:3:name")
# redis.exceptions.ResponseError: CROSSSLOT Keys in request don't hash to the same slot
# GOOD -- {user:1} ensures all keys go to the same hash slot
rc.mget("{user:1}:name", "{user:1}:email", "{user:1}:role")
# All keys share hash slot for {user:1}
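When the keys legitimately live in different slots, redis-py's cluster client offers a non-atomic fan-out variant instead; a short sketch (key names are illustrative):

# Groups keys by hash slot and sends one MGET per group -- works across
# slots, but the reads are no longer a single atomic operation.
values = rc.mget_nonatomic("user:1:name", "user:2:name", "user:3:name")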
# BAD -- cluster bus port (16379) blocked, nodes can't gossip
services:
  redis-node-1:
    image: redis:7.4-alpine
    ports:
      - "6379:6379"
    # Missing: port 16379 for the cluster bus
# GOOD -- all ports open between containers on user-defined network
services:
  redis-node-1:
    image: redis:7.4-alpine
    ports:
      - "6371:6379"      # Client access from host only
    networks:
      - redis-cluster    # Port 16379 reachable within the network
- Nodes advertise container-internal IPs that clients outside the Docker network cannot reach. Fix: set cluster-announce-ip and cluster-announce-port in redis.conf, or use --net=host. [src1]
- Without a volume at /data, a restarted node loses its nodes.conf and with it its cluster identity. Fix: always mount a named volume to /data for every cluster node. [src1]
- Running redis-cli --cluster create before all 6 nodes are healthy causes partial cluster formation. Fix: add depends_on with healthchecks in docker-compose.yml, or wait with a script. [src4]
- Multiple nodes on one host conflict over ports under host networking. Fix: use network_mode: host with different --port values. [src4]
- Failover detection tuning: keep cluster-node-timeout 5000 for Docker environments. [src1]

# Check cluster health (from any node)
docker exec redis-node-1 redis-cli cluster info
# Verify all 16384 slots are assigned
docker exec redis-node-1 redis-cli --cluster check redis-node-1:6379
# List all cluster nodes and their roles
docker exec redis-node-1 redis-cli cluster nodes
# Check which slot a key belongs to
docker exec redis-node-1 redis-cli cluster keyslot mykey
# Check Sentinel status (Sentinel mode)
docker exec redis-sentinel-1 redis-cli -p 26379 sentinel masters
# Trigger manual failover on a replica
docker exec redis-node-4 redis-cli cluster failover
# Check persistence status
docker exec redis-node-1 redis-cli info persistence
# Check memory usage
docker exec redis-node-1 redis-cli info memory
# Test cluster write across all masters
docker exec redis-node-1 redis-cli -c set test:key1 "hello"
docker exec redis-node-1 redis-cli -c set test:key2 "world"
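The slot that CLUSTER KEYSLOT reports can be reproduced client-side: Redis hashes the key (or only its {...} hash tag, when present and non-empty) with CRC16/XMODEM and takes the result modulo 16384. A self-contained sketch:

def crc16_xmodem(data: bytes) -> int:
    """CRC16-CCITT (XMODEM), polynomial 0x1021 -- the CRC Redis Cluster uses."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def key_slot(key: str) -> int:
    # Hash only the substring inside the first non-empty {...}, if any.
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end > start + 1:
            key = key[start + 1 : end]
    return crc16_xmodem(key.encode()) % 16384

print(key_slot("user:{1001}:name") == key_slot("user:{1001}:email"))  # True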
| Version | Status | Key Changes | Docker Image |
|---|---|---|---|
| Redis 8.0 | Current | Unified modules into core, new ACL categories | redis:8.0-alpine |
| Redis 7.4 | Stable | Hash field expiration, performance improvements | redis:7.4-alpine |
| Redis 7.2 | Maintained | Cluster improvements, enhanced ACLs | redis:7.2-alpine |
| Redis 7.0 | Maintained | Multi-part AOF, sharded pub/sub, CLUSTER SHARDS | redis:7.0-alpine |
| Redis 6.2 | EOL | ACL improvements, GETDEL/GETEX commands | redis:6.2-alpine |
| Redis 4.0 | EOL | cluster-announce-ip/port (NAT support), modules API | N/A |
| Docker Compose | Status | Notes |
|---|---|---|
| V2 (docker compose) | Current | Built into Docker CLI, uses the Compose Specification |
| V1 (docker-compose) | Deprecated | Standalone binary, removed in Docker 25+ |
| Use When | Don't Use When | Use Instead |
|---|---|---|
| Need sharding across multiple nodes (data > single node RAM) | Data fits in a single Redis instance | Standalone or Sentinel |
| Need automatic failover AND horizontal write scaling | Only need HA without sharding | Sentinel (simpler ops) |
| High-throughput writes exceeding single-node capacity | Latency-sensitive multi-key transactions spanning hash slots | Single-instance Redis for atomicity |
| Building microservices with independent keyspaces | Running on a single Docker host with limited resources | Standalone with persistence |
| Production system requiring zero-downtime upgrades | Development or testing environment | Standalone is sufficient |
- Use hash tags {tag} to co-locate related keys in the same hash slot
- Double-check cluster-announce-ip settings when containers sit behind NAT or a bridge network
- Bitnami images use different config paths (/opt/bitnami/redis/etc/) and environment variables -- do not mix documentation between image variants