Set a TTL on every key -- SET key value EX ttl_seconds (Redis). Never cache without expiration.

| Pattern | Read Path | Write Path | Consistency | Complexity | Best For |
|---|---|---|---|---|---|
| Cache-Aside (Lazy Loading) | App checks cache; on miss, reads DB, writes to cache | App writes to DB; invalidates or ignores cache | Eventual (TTL-bounded) | Low | Read-heavy workloads, general purpose |
| Read-Through | Cache itself fetches from DB on miss | App writes to DB; cache auto-populated on next read | Eventual (TTL-bounded) | Medium | Read-heavy with cache-provider support |
| Write-Through | App reads from cache (always populated) | App writes to cache; cache synchronously writes to DB | Strong | Medium | Read-heavy where consistency matters |
| Write-Behind (Write-Back) | App reads from cache (always populated) | App writes to cache; cache asynchronously flushes to DB | Eventual (delay-bounded) | High | Write-heavy workloads, batch writes |
| Refresh-Ahead | Cache proactively refreshes before TTL expires | Same as underlying pattern | Strong (if refresh succeeds) | High | Hot keys with predictable access |
| Distributed Cache | Hash-routed to correct shard; local read | Hash-routed to correct shard; replicated writes | Eventual or strong (configurable) | High | Large-scale, multi-node deployments |
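Write-behind is the only pattern above where the database write happens off the request path. A minimal sketch, using in-memory stand-ins for the cache and database (names and flush policy are illustrative, not a production implementation):

```javascript
// Write-behind sketch: writes land in the cache immediately and are
// flushed to the "database" later in batches.
const cache = new Map();  // stand-in for Redis
const db = new Map();     // stand-in for the database
const dirty = new Set();  // keys written but not yet persisted

function writeBehind(key, value) {
  cache.set(key, value);  // fast path: cache only
  dirty.add(key);         // remember for the next flush
}

function flush() {        // run on a timer or size threshold in a real system
  for (const key of dirty) db.set(key, cache.get(key));
  dirty.clear();
}

writeBehind('user:1', { name: 'Ada' });
console.log(db.has('user:1')); // false -- not yet persisted
flush();
console.log(db.has('user:1')); // true
```

A real deployment must accept that a crash loses the unflushed window -- that window is exactly the "small data loss" trade-off in the decision tree.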
START
|-- Read-heavy workload (>80% reads)?
| |-- YES --> Need strong consistency?
| | |-- YES --> Write-through (cache always in sync with DB)
| | +-- NO --> Cache-aside (simplest, most flexible)
| +-- NO |
|-- Write-heavy workload (>50% writes)?
| |-- YES --> Can tolerate small data loss window?
| | |-- YES --> Write-behind (best write throughput)
| | +-- NO --> Write-through (safe but slower writes)
| +-- NO |
|-- Balanced read/write?
| |-- YES --> Need automatic cache population?
| | |-- YES --> Read-through + write-through combo
| | +-- NO --> Cache-aside (manual control)
| +-- NO |
|-- Hot keys with predictable access?
| |-- YES --> Refresh-ahead (pre-warm before expiry)
| +-- NO |
+-- DEFAULT --> Cache-aside with TTL (safest starting point)
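Refresh-ahead from the tree above can be sketched as a timer that recomputes a hot key shortly before its TTL elapses, so readers never see a miss. The in-memory cache, key names, and timings are illustrative stand-ins for Redis:

```javascript
const cache = new Map(); // stand-in for Redis: value plus logical expiry

function refreshAhead(key, computeFn, ttlMs, leadMs) {
  const populate = () => {
    cache.set(key, { value: computeFn(), expiresAt: Date.now() + ttlMs });
    // schedule the next recompute *before* expiry (ttl minus lead time)
    const t = setTimeout(populate, ttlMs - leadMs);
    t.unref?.(); // let the process exit; a real service keeps this running
  };
  populate();
}

let computes = 0;
refreshAhead('leaderboard:top10', () => ++computes, 60_000, 5_000);
console.log(cache.get('leaderboard:top10').value); // 1
```

Note the cost: the key is recomputed on a schedule whether or not anyone reads it, which is why the pattern only pays off for hot keys with predictable access.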
Select Redis (most common), Memcached (simpler, multi-threaded), or an in-process cache. Redis is the default choice for most applications due to its data structure support and persistence options. [src1]
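For the in-process option, a small LRU shows the idea; this sketch leans on Map's insertion order and is not a substitute for packages like lru-cache or node-cache:

```javascript
// Minimal in-process LRU cache (illustrative).
class LRU {
  constructor(max) { this.max = max; this.map = new Map(); }
  get(key) {
    if (!this.map.has(key)) return undefined;
    const val = this.map.get(key);
    this.map.delete(key); this.map.set(key, val); // move to most-recent position
    return val;
  }
  set(key, val) {
    if (this.map.has(key)) this.map.delete(key);
    else if (this.map.size >= this.max)
      this.map.delete(this.map.keys().next().value); // evict least-recent
    this.map.set(key, val);
  }
}

const lru = new LRU(2);
lru.set('a', 1); lru.set('b', 2);
lru.get('a');       // touch 'a' so 'b' becomes least-recent
lru.set('c', 3);    // evicts 'b'
console.log(lru.get('b')); // undefined
console.log(lru.get('a')); // 1
```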
# Install Redis (Docker)
docker run -d --name redis -p 6379:6379 redis:7-alpine
# Verify connection
redis-cli ping
# Expected: PONG
Verify: redis-cli ping → expected: PONG
The application is responsible for reading from and writing to the cache. On a cache miss, read from the database, then populate the cache. On a write, update the database and invalidate the cache entry. [src2]
// Node.js + ioredis cache-aside
const Redis = require('ioredis');
const redis = new Redis();
async function getUser(userId) {
const cacheKey = `user:${userId}`;
const cached = await redis.get(cacheKey);
if (cached) return JSON.parse(cached); // cache hit
const user = await db.query('SELECT * FROM users WHERE id = $1', [userId]);
if (!user) return null; // don't cache a missing row (stringify(undefined) would fail)
await redis.set(cacheKey, JSON.stringify(user), 'EX', 3600); // TTL 1h
return user;
}
Verify: redis-cli INFO stats | grep keyspace_hits
When data changes, delete the cache key rather than updating it. Deletion is safer -- it avoids race conditions between concurrent reads and writes. [src5]
async function updateUser(userId, data) {
await db.query('UPDATE users SET name = $1 WHERE id = $2', [data.name, userId]);
await redis.del(`user:${userId}`); // invalidate, don't update
}
Verify: redis-cli EXISTS user:123 → expected: (integer) 0
If many keys expire at the same time, all requests hit the database simultaneously (thundering herd). Add random jitter to spread expirations. [src7]
const baseTTL = 3600; // 1 hour
const jitter = Math.floor(Math.random() * baseTTL * 0.1);
await redis.set(key, value, 'EX', baseTTL + jitter);
Verify: redis-cli TTL user:123 → should fall between 3600 and 3960, varying per key
Use a distributed lock to ensure only one process recomputes on a cache miss while others wait or serve stale data. [src7]
async function getWithLock(key, computeFn, ttl = 3600) {
const cached = await redis.get(key);
if (cached) return JSON.parse(cached);
const lockKey = `lock:${key}`;
const acquired = await redis.set(lockKey, '1', 'EX', 30, 'NX');
if (acquired) {
try {
const again = await redis.get(key); // re-check: another worker may have filled the cache while we raced for the lock
if (again) return JSON.parse(again);
const value = await computeFn();
await redis.set(key, JSON.stringify(value), 'EX', ttl);
return value;
} finally {
await redis.del(lockKey); // NB: production code should store a unique token and compare-and-delete
}
}
await new Promise(r => setTimeout(r, 100));
return getWithLock(key, computeFn, ttl);
}
Verify: Under load test, database queries for the same key should be 1, not N.
// Input: Redis connection, database connection
// Output: Cached read with consistent write-through
const Redis = require('ioredis');
const redis = new Redis({ host: '127.0.0.1', port: 6379 });
// Cache-aside read
async function cacheGet(key, fetchFn, ttlSec = 3600) {
const hit = await redis.get(key);
if (hit) return JSON.parse(hit);
const data = await fetchFn();
if (data) {
const jitter = Math.floor(Math.random() * ttlSec * 0.1);
await redis.set(key, JSON.stringify(data), 'EX', ttlSec + jitter);
}
return data;
}
// Write-through: update DB first, then cache (two steps, not atomic --
// a crash between them leaves the cache stale until the TTL expires)
async function cacheSet(key, data, writeFn, ttlSec = 3600) {
await writeFn(data); // DB write first
await redis.set(key, JSON.stringify(data), 'EX', ttlSec);
}
# Input: Redis connection, decorated function
# Output: Transparent caching via decorator
import json, random, functools
import redis # redis-py>=5.0
r = redis.Redis(host="127.0.0.1", port=6379, decode_responses=True)
def cached(ttl=3600, prefix="cache"):
def decorator(fn):
@functools.wraps(fn)
def wrapper(*args, **kwargs):
key = f"{prefix}:{fn.__name__}:{args}:{kwargs}"  # repr-based key; fine for simple, stable-repr args
hit = r.get(key)
if hit:
return json.loads(hit)
result = fn(*args, **kwargs)
jitter = random.randint(0, int(ttl * 0.1))
r.set(key, json.dumps(result), ex=ttl + jitter)
return result
return wrapper
return decorator
@cached(ttl=1800)
def get_product(product_id: int) -> dict:
return db.execute("SELECT * FROM products WHERE id = %s", (product_id,))
// Input: Redis client, database connection
// Output: Cache-aside reads with built-in stampede protection
package main
import (
"context"
"encoding/json"
"math/rand"
"time"
"github.com/redis/go-redis/v9"
"golang.org/x/sync/singleflight"
)
var (
rdb = redis.NewClient(&redis.Options{Addr: "localhost:6379"})
sf singleflight.Group
)
func CacheGet(ctx context.Context, key string, fetchFn func() (any, error), ttl time.Duration) (any, error) {
val, err := rdb.Get(ctx, key).Result()
if err == nil {
var result any
if err := json.Unmarshal([]byte(val), &result); err == nil {
return result, nil
}
// fall through and recompute if the cached payload is corrupt
}
v, err, _ := sf.Do(key, func() (any, error) {
data, err := fetchFn()
if err != nil { return nil, err }
b, _ := json.Marshal(data)
jitter := time.Duration(rand.Int63n(int64(ttl) / 10))
rdb.Set(ctx, key, b, ttl+jitter)
return data, nil
})
return v, err
}
// BAD -- no TTL means cache entries live forever
// Memory grows unbounded; stale data served indefinitely
await redis.set(`user:${id}`, JSON.stringify(user));
// GOOD -- bounded TTL with jitter
const ttl = 3600 + Math.floor(Math.random() * 360);
await redis.set(`user:${id}`, JSON.stringify(user), 'EX', ttl);
// BAD -- race condition between concurrent read and write
async function updateUser(id, data) {
await db.update(id, data);
const updated = await db.get(id); // another write may happen here
await redis.set(`user:${id}`, JSON.stringify(updated), 'EX', 3600);
}
// GOOD -- delete is idempotent and race-free
async function updateUser(id, data) {
await db.update(id, data);
await redis.del(`user:${id}`); // next read will re-populate
}
// BAD -- 1000 concurrent requests all hit DB on cache miss
async function getProduct(id) {
const cached = await redis.get(`product:${id}`);
if (cached) return JSON.parse(cached);
const product = await db.query('SELECT * FROM products WHERE id = $1', [id]);
await redis.set(`product:${id}`, JSON.stringify(product), 'EX', 3600);
return product;
}
// GOOD -- only one request recomputes; others wait
async function getProduct(id) {
const cached = await redis.get(`product:${id}`);
if (cached) return JSON.parse(cached);
const lockKey = `lock:product:${id}`;
const locked = await redis.set(lockKey, '1', 'EX', 10, 'NX');
if (!locked) {
await new Promise(r => setTimeout(r, 50)); // brief backoff before retry
return getProduct(id);
}
try {
const product = await db.query('SELECT * FROM products WHERE id = $1', [id]);
await redis.set(`product:${id}`, JSON.stringify(product), 'EX', 3600);
return product;
} finally {
await redis.del(lockKey);
}
}
Mitigate stampedes with a distributed lock (SET key NX EX), singleflight (Go), or probabilistic early expiration. [src7]
# Check Redis memory usage
redis-cli INFO memory | grep used_memory_human
# Check cache hit ratio (higher is better; aim for >90%)
redis-cli INFO stats | grep keyspace
# keyspace_hits / (keyspace_hits + keyspace_misses) = hit ratio
# Monitor slow commands (>10ms)
redis-cli SLOWLOG GET 10
# Check TTL on a specific key
redis-cli TTL user:123
# Monitor real-time commands (use sparingly in production)
redis-cli MONITOR | head -50
# Check connected clients
redis-cli INFO clients | grep connected_clients
# Check eviction policy
redis-cli CONFIG GET maxmemory-policy
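The hit-ratio arithmetic above can be scripted; the INFO text here is a hard-coded sample standing in for real redis-cli output:

```javascript
// Compute the cache hit ratio from `redis-cli INFO stats` output.
const info = 'keyspace_hits:9500\nkeyspace_misses:500\n'; // sample INFO text

function hitRatio(infoText) {
  const num = name => Number(infoText.match(new RegExp(`${name}:(\\d+)`))[1]);
  const hits = num('keyspace_hits');
  const misses = num('keyspace_misses');
  return hits / (hits + misses);
}

console.log(hitRatio(info)); // 0.95
```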
| Use When | Don't Use When | Use Instead |
|---|---|---|
| Read:write ratio exceeds 10:1 | Data changes on every request | Direct database queries with connection pooling |
| Database query latency > 50ms and same query repeats | Strong consistency is non-negotiable and TTL window is unacceptable | Synchronous read-through with write-through |
| Need to absorb traffic spikes without scaling DB | Working with small datasets that fit in app memory | In-process caching (Caffeine, node-cache, lru-cache) |
| Multiple app instances need shared cache state | Caching would only save <5ms per request | No cache -- overhead not worth the complexity |
| Session storage, API rate limiting, leaderboards | Data has complex relational joins that change frequently | Materialized views in the database |
A full keyspace scan (KEYS *) blocks all other operations. Use SCAN for iteration in production.
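The SCAN contract is cursor-based: call it repeatedly, passing back the returned cursor, until the cursor comes back as '0'. The scan function below is an in-memory stand-in that pages keys the way Redis does; with ioredis the real call would be along the lines of redis.scan(cursor, 'MATCH', pattern, 'COUNT', 100):

```javascript
// Stand-in keyspace and a scan() that pages through it like Redis does.
const keys = Array.from({ length: 25 }, (_, i) => `user:${i}`);

function scan(cursor, count) {
  const start = Number(cursor);
  const page = keys.slice(start, start + count);
  const next = start + count >= keys.length ? '0' : String(start + count);
  return [next, page]; // Redis returns [nextCursor, keysInThisPage]
}

function scanAll(count = 10) {
  const found = [];
  let cursor = '0';
  do {
    const [next, page] = scan(cursor, count);
    found.push(...page);
    cursor = next;
  } while (cursor !== '0'); // '0' means the iteration is complete
  return found;
}

console.log(scanAll().length); // 25
```

Each SCAN call touches only a bounded slice of the keyspace, so other clients are never blocked the way they are during KEYS *.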