Redis memory problems typically trace back to a handful of causes: a reached maxmemory limit, memory fragmentation (high RSS vs low used_memory), large or forgotten keys, client output buffer bloat, and missing or wrong eviction policies. Diagnose with INFO memory and MEMORY DOCTOR; fix by setting an appropriate maxmemory-policy, enabling activedefrag, and auditing keys with --bigkeys/--memkeys. [src1, src2]
redis-cli INFO memory — shows
used_memory, used_memory_rss, mem_fragmentation_ratio,
maxmemory, and evicted_keys in a single command. [src3]
Setting maxmemory without an eviction policy (the default is noeviction) — Redis rejects all writes with OOM command not allowed when used memory > 'maxmemory' instead of evicting old keys. [src1]
INFO memory works on all versions. [src2,
src3]
MEMORY USAGE,
MEMORY DOCTOR, MEMORY STATS, and MEMORY MALLOC-STATS were
introduced in Redis 4.0. On older versions, use INFO memory and
DEBUG OBJECT. [src3]
activedefrag requires Redis built with jemalloc; on any other allocator it silently will not defrag. Verify with INFO memory — mem_allocator should show jemalloc-5.x. [src2, src4]
Avoid running redis-cli --bigkeys on a primary during peak traffic: it performs a full SCAN of the keyspace and can cause latency spikes. Run it on a replica or during maintenance windows. [src2]
Leaving maxmemory at 0 in production: a value of 0 means no limit. Redis will consume all available RAM until the OS OOM killer terminates the process without a clean shutdown, risking data loss. [src1, src4]
CONFIG SET is ephemeral: Runtime config changes via
CONFIG SET maxmemory or CONFIG SET maxmemory-policy are lost on
restart. Always follow with CONFIG REWRITE or update redis.conf
manually. [src1]
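The raw INFO reply that redis-cli prints is plain key:value text, one metric per line, with `#` section headers. Client libraries parse it for you, but a minimal hand-rolled parser, assuming that standard text format, looks like:

```python
def parse_info(raw: str) -> dict:
    """Parse a raw INFO reply into a dict of string values.

    Lines look like 'used_memory:1048576'; lines starting with '#'
    are section headers and are skipped.
    """
    result = {}
    for line in raw.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition(":")
        result[key] = value
    return result

# Illustrative reply fragment, not captured from a live server
sample = ("# Memory\r\n"
          "used_memory:1048576\r\n"
          "used_memory_rss:1572864\r\n"
          "mem_fragmentation_ratio:1.50\r\n"
          "maxmemory:0\r\n")
info = parse_info(sample)
ratio = int(info["used_memory_rss"]) / int(info["used_memory"])  # 1.5
```

In real code, prefer the parsed dict your client library already returns (for example redis-py's `r.info()`); this sketch is only for scripting against raw redis-cli output.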
| # | Cause | Likelihood | Signature | Fix |
|---|---|---|---|---|
| 1 | maxmemory reached with noeviction policy | ~30% | OOM command not allowed when used memory > 'maxmemory' | Set maxmemory-policy allkeys-lru (or appropriate policy) [src1] |
| 2 | Memory fragmentation (high RSS, low dataset) | ~20% | mem_fragmentation_ratio > 1.5 in INFO memory | Enable activedefrag yes; or restart Redis [src2, src4] |
| 3 | Large/forgotten keys consuming disproportionate memory | ~15% | redis-cli --bigkeys shows keys with millions of elements | Delete or restructure oversized keys; set TTLs [src2] |
| 4 | Client output buffer overflow (slow consumers) | ~10% | client_recent_max_output_buffer high in INFO clients | Tune client-output-buffer-limit; fix slow subscribers [src6, src7] |
| 5 | Replica output buffer growth | ~8% | mem_clients_slaves large in MEMORY STATS | Fix replica lag; increase client-output-buffer-limit replica [src6] |
| 6 | No TTL on keys (unbounded growth) | ~7% | db0:keys=N,expires=0 — zero keys have expiration | Audit and set TTLs; use volatile-* eviction policies [src1, src4] |
| 7 | RDB/AOF fork doubling RSS | ~5% | RSS spikes to 2x during BGSAVE or BGREWRITEAOF | Reserve 50% memory headroom; use aof-use-rdb-preamble yes [src2, src5] |
| 8 | Lua scripts holding references | ~3% | used_memory_scripts high in INFO memory | Avoid large data in Lua globals; use redis.call results directly [src3] |
| 9 | Wrong maxmemory value (too low for workload) | ~2% | Frequent evicted_keys but low mem_fragmentation_ratio | Increase maxmemory to match workload [src1] |
START -- Redis memory issue detected
|
+-- Is Redis returning "OOM command not allowed"?
| +-- YES: Check maxmemory and eviction policy
| | +-- maxmemory-policy = noeviction? -> Set allkeys-lru or allkeys-lfu [Cause #1]
| | +-- maxmemory too low? -> Increase maxmemory [Cause #9]
| | +-- Keys have no TTL? -> Set TTLs, switch to volatile-lru [Cause #6]
| +-- NO: Continue below
|
+-- Is RSS much higher than used_memory (fragmentation ratio > 1.5)?
| +-- YES -> Enable activedefrag; consider restart for severe cases [Cause #2]
| +-- NO: Continue below
|
+-- Is used_memory growing steadily without stabilizing?
| +-- YES: Run redis-cli --bigkeys and --memkeys
| | +-- Found oversized keys? -> Delete/restructure/TTL [Cause #3]
| | +-- No big keys but mem_clients_normal is high? -> Client buffer issue [Cause #4]
| | +-- mem_clients_slaves is high? -> Replica buffer issue [Cause #5]
| +-- NO: Continue below
|
+-- Does RSS spike during BGSAVE/BGREWRITEAOF?
| +-- YES -> Reserve memory headroom; tune AOF settings [Cause #7]
| +-- NO -> Check used_memory_scripts and Lua usage [Cause #8]
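The tree above can be encoded as a first-pass triage function. This is a sketch: the input keys (`oom_errors`, `zero_ttl_keys`, `rss_spikes_during_fork`) are illustrative flags you would derive from application logs and INFO output, not Redis field names.

```python
def triage(metrics: dict) -> str:
    """Return the most likely cause, mirroring the decision tree."""
    if metrics.get("oom_errors"):
        if metrics.get("maxmemory_policy") == "noeviction":
            return "cause #1: noeviction policy -- set allkeys-lru/lfu"
        if metrics.get("zero_ttl_keys"):
            return "cause #6: no TTLs -- set TTLs or use volatile-*"
        return "cause #9: maxmemory too low -- raise it"
    if metrics.get("mem_fragmentation_ratio", 1.0) > 1.5:
        return "cause #2: fragmentation -- enable activedefrag"
    if metrics.get("rss_spikes_during_fork"):
        return "cause #7: fork copy-on-write -- reserve headroom"
    return "run --bigkeys/--memkeys and check client buffers (causes #3-#5, #8)"
```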
Get a complete picture of memory usage. [src3]
redis-cli INFO memory
Key fields: used_memory, used_memory_rss, used_memory_peak,
mem_fragmentation_ratio (healthy: 1.0-1.5), maxmemory,
maxmemory_policy, mem_allocator (should be jemalloc).
Verify: redis-cli INFO memory | grep used_memory_human → shows current usage
in human-readable format.
Redis 4.0+ includes a built-in diagnostic advisor. [src3]
redis-cli MEMORY DOCTOR
Reports issues such as high fragmentation, peak memory significantly higher than current, and high RSS to dataset ratio.
Verify: A healthy instance responds with Sam, I can't find any memory issue in your instance.; otherwise the output is a specific diagnostic message.
Find keys using disproportionate amounts of memory. [src2]
# Find largest keys by element count
redis-cli --bigkeys -i 0.1
# Find largest keys by memory usage (Redis 6.0+)
redis-cli --memkeys -i 0.1
# Check specific key memory usage
redis-cli MEMORY USAGE mykey
Verify: Output shows the largest key per data type and summary statistics.
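Once you have (key, bytes) pairs, for example collected via SCAN plus MEMORY USAGE, ranking them is straightforward. A sketch; the 100 MB cutoff is an illustrative alert threshold, not a Redis default:

```python
def top_keys(key_sizes, n=5, threshold_bytes=100 * 1024 * 1024):
    """Return the n largest (key, bytes) pairs, plus those above the threshold."""
    ranked = sorted(key_sizes, key=lambda kv: kv[1], reverse=True)[:n]
    oversized = [key for key, size in ranked if size > threshold_bytes]
    return ranked, oversized
```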
If Redis hits maxmemory with noeviction, all writes fail. [src1]
# Check current policy
redis-cli CONFIG GET maxmemory-policy
# Set appropriate policy for cache workloads
redis-cli CONFIG SET maxmemory-policy allkeys-lru
# Persist the change
redis-cli CONFIG REWRITE
Verify: redis-cli CONFIG GET maxmemory-policy → shows the new policy.
If mem_fragmentation_ratio > 1.5, enable active defrag. [src2,
src4]
# Enable active defragmentation
redis-cli CONFIG SET activedefrag yes
# Configure thresholds (optional)
redis-cli CONFIG SET active-defrag-ignore-bytes 100mb
redis-cli CONFIG SET active-defrag-threshold-lower 10
redis-cli CONFIG SET active-defrag-threshold-upper 100
redis-cli CONFIG SET active-defrag-cycle-min 1
redis-cli CONFIG SET active-defrag-cycle-max 25
# Persist
redis-cli CONFIG REWRITE
Verify: redis-cli INFO memory | grep mem_fragmentation_ratio — should
decrease over minutes to hours.
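The preconditions above can be collected into a small gate before flipping the setting. A sketch; the ratio and byte thresholds are illustrative tuning points, not Redis defaults:

```python
def should_defrag(mem_allocator: str, frag_ratio: float, used_memory: int,
                  min_bytes: int = 100 * 1024 * 1024) -> bool:
    """Decide whether enabling activedefrag is likely to help."""
    if not mem_allocator.startswith("jemalloc"):
        return False  # activedefrag is a no-op on non-jemalloc builds
    if frag_ratio < 1.0:
        return False  # ratio < 1.0 suggests swapping, not fragmentation
    # Only worth it with real fragmentation and enough data to reclaim
    return frag_ratio > 1.5 and used_memory >= min_bytes
```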
Slow consumers (especially pub/sub subscribers) can cause massive buffer growth. [src6, src7]
# Check client buffer usage
redis-cli CLIENT LIST
# Set buffer limits (hard limit, soft limit, soft time)
redis-cli CONFIG SET client-output-buffer-limit "pubsub 32mb 8mb 60"
redis-cli CONFIG SET client-output-buffer-limit "replica 256mb 64mb 60"
Verify: redis-cli INFO clients — check
client_recent_max_output_buffer has decreased.
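CLIENT LIST emits one line of space-separated key=value fields per connection, including omem (output buffer bytes). A sketch that flags slow consumers from that text; the 1 MB cutoff is an illustrative alert threshold:

```python
def slow_consumers(client_list_raw: str, omem_limit: int = 1024 * 1024):
    """Return (addr, omem) for clients whose output buffer exceeds omem_limit."""
    offenders = []
    for line in client_list_raw.splitlines():
        # Each line: "id=7 addr=10.0.0.5:41234 ... omem=33554432 cmd=subscribe"
        fields = dict(f.split("=", 1) for f in line.split() if "=" in f)
        omem = int(fields.get("omem", 0))
        if omem > omem_limit:
            offenders.append((fields.get("addr", "?"), omem))
    return offenders
```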
# Input: Redis connection URL
# Output: Memory health report dict with issues list
import redis

def check_redis_memory(url="redis://localhost:6379"):
    r = redis.Redis.from_url(url)
    info = r.info("memory")
    issues = []
    frag = info.get("mem_fragmentation_ratio", 1.0)
    if frag > 1.5:
        issues.append(f"High fragmentation: {frag:.2f} (>1.5)")
    elif frag < 1.0:
        issues.append(f"Swapping likely: ratio {frag:.2f} (<1.0)")
    used = info.get("used_memory", 0)
    maxmem = info.get("maxmemory", 0)
    if maxmem > 0 and used / maxmem > 0.9:
        issues.append(f"Memory {used/maxmem:.0%} of maxmemory")
    policy = info.get("maxmemory_policy", "noeviction")
    if policy == "noeviction" and maxmem > 0:
        issues.append("noeviction policy with maxmemory set")
    return {
        "used_memory_human": info.get("used_memory_human"),
        "maxmemory_human": info.get("maxmemory_human"),
        "fragmentation_ratio": frag,
        "eviction_policy": policy,
        "evicted_keys": r.info("stats").get("evicted_keys", 0),
        "issues": issues,
    }
// Input: Redis connection URL
// Output: Logs warnings when thresholds exceeded
import Redis from "ioredis"; // ioredis 5.x

async function monitorRedisMemory(redisUrl = "redis://localhost:6379") {
  const redis = new Redis(redisUrl);
  const info = await redis.info("memory");
  const parsed = Object.fromEntries(
    info.split("\r\n")
      .filter((l) => l.includes(":"))
      .map((l) => l.split(":"))
  );
  const fragRatio = parseFloat(parsed.mem_fragmentation_ratio);
  const usedMem = parseInt(parsed.used_memory, 10);
  const maxMem = parseInt(parsed.maxmemory, 10);
  if (fragRatio > 1.5)
    console.warn(`[REDIS] High fragmentation: ${fragRatio}`);
  if (maxMem > 0 && usedMem / maxMem > 0.85)
    console.warn(`[REDIS] Memory at ${((usedMem / maxMem) * 100).toFixed(1)}%`);
  if (parsed.maxmemory_policy === "noeviction" && maxMem > 0)
    console.warn("[REDIS] noeviction with maxmemory -- writes will fail!");
  await redis.quit();
  return { fragRatio, usedMem, maxMem, policy: parsed.maxmemory_policy };
}
// Input: Redis address
// Output: MemoryReport struct with diagnostic fields
package main

import (
	"context"
	"fmt"
	"strconv"

	"github.com/redis/go-redis/v9" // v9.x
)

type MemoryReport struct {
	UsedMemory    int64   `json:"used_memory"`
	MaxMemory     int64   `json:"maxmemory"`
	FragRatio     float64 `json:"fragmentation_ratio"`
	Policy        string  `json:"eviction_policy"`
	EvictedKeys   int64   `json:"evicted_keys"`
	DoctorMessage string  `json:"doctor_message"`
}

func DiagnoseRedis(ctx context.Context, addr string) (*MemoryReport, error) {
	rdb := redis.NewClient(&redis.Options{Addr: addr})
	defer rdb.Close()

	info, err := rdb.InfoMap(ctx, "memory").Result()
	if err != nil {
		return nil, fmt.Errorf("INFO memory: %w", err)
	}
	mem := info["memory"]
	used, _ := strconv.ParseInt(mem["used_memory"], 10, 64)
	maxm, _ := strconv.ParseInt(mem["maxmemory"], 10, 64)
	frag, _ := strconv.ParseFloat(mem["mem_fragmentation_ratio"], 64)

	// MEMORY DOCTOR via the generic Do, which works on any command
	doctor, _ := rdb.Do(ctx, "MEMORY", "DOCTOR").Text()
	stats, _ := rdb.InfoMap(ctx, "stats").Result()
	evicted, _ := strconv.ParseInt(stats["stats"]["evicted_keys"], 10, 64)

	return &MemoryReport{
		UsedMemory: used, MaxMemory: maxm,
		FragRatio: frag, Policy: mem["maxmemory_policy"],
		EvictedKeys: evicted, DoctorMessage: doctor,
	}, nil
}
# BAD -- noeviction is the default; Redis rejects ALL writes when full [src1]
redis-cli CONFIG SET maxmemory 2gb
# maxmemory-policy remains "noeviction"
# Result: "OOM command not allowed when used memory > 'maxmemory'"
# GOOD -- set both together [src1]
redis-cli CONFIG SET maxmemory 2gb
redis-cli CONFIG SET maxmemory-policy allkeys-lru
redis-cli CONFIG REWRITE
# BAD -- KEYS blocks the entire server while scanning all keys [src2]
redis-cli KEYS "*session*"
# On a 10M key database, this blocks Redis for seconds
# GOOD -- SCAN is non-blocking and incremental [src2]
redis-cli --bigkeys -i 0.1
redis-cli --memkeys -i 0.1
# Or use SCAN directly with a cursor
redis-cli SCAN 0 MATCH "*session*" COUNT 100
# BAD -- ratio < 1.0 means Redis is swapping to disk [src4]
# INFO memory shows:
# used_memory: 8000000000
# used_memory_rss: 4000000000
# mem_fragmentation_ratio: 0.50
# Developer thinks: "Great, Redis is using less OS memory"
# Reality: Redis is actively swapping -- catastrophic performance
# GOOD -- check for swapping immediately [src4, src5]
redis-cli INFO memory | grep mem_fragmentation_ratio
# If < 1.0: Redis data is being swapped to disk
# Fix: increase available RAM or reduce maxmemory
# Verify no swap:
cat /proc/$(pidof redis-server)/smaps | grep Swap
# BAD -- activedefrag silently does nothing without jemalloc [src2, src4]
redis-cli CONFIG SET activedefrag yes
# No error, but fragmentation never decreases
# Because mem_allocator is "libc" not "jemalloc"
# GOOD -- check allocator first [src2]
redis-cli INFO memory | grep mem_allocator
# Expected: mem_allocator:jemalloc-5.3.0
# If not jemalloc: rebuild Redis or use official packages
redis-cli CONFIG SET activedefrag yes
With volatile-* eviction policies, keys without TTLs are never evicted, leading to OOM even with eviction enabled. Set TTLs on all cache data. [src1, src4]
Confusing used_memory with used_memory_rss: used_memory is what Redis reports internally; used_memory_rss is what the OS allocates. The gap is fragmentation overhead. Monitoring only used_memory misses fragmentation-related OOM. [src3, src5]
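Concretely, used_memory and used_memory_rss combine into the fragmentation ratio like this (numbers illustrative):

```python
used_memory = 4_000_000_000       # bytes the allocator handed to Redis
used_memory_rss = 6_500_000_000   # bytes resident in the OS process

frag_ratio = used_memory_rss / used_memory          # 1.625, above the 1.5 threshold
external_overhead = used_memory_rss - used_memory   # ~2.5 GB lost to fragmentation
```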
BGSAVE and
BGREWRITEAOF fork the Redis process. Due to copy-on-write, write-heavy workloads
during fork can temporarily double RSS. Reserve at least 50% memory headroom. [src2,
src5]
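That headroom guidance translates into a one-line capacity check (a sketch; the 50% factor is the assumption being applied):

```python
def required_ram(used_memory: int, headroom: float = 0.5) -> int:
    """RAM to provision so a fork's copy-on-write growth cannot hit the ceiling."""
    return int(used_memory * (1 + headroom))

# A 4 GiB dataset should run on a host with at least 6 GiB available to Redis
```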
Set client-output-buffer-limit pubsub 32mb 8mb 60 to disconnect slow consumers. [src6, src7]
FLUSHALL deletes all data but
does not return memory to the OS immediately due to allocator fragmentation. RSS remains high until
defragmentation or restart. [src2]
A rising evicted_keys counter in INFO stats means Redis is actively dropping data. This causes cache misses and increased backend load. Alert on eviction rate, not just memory usage. [src1]
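Because evicted_keys only ever increases, an alert needs the delta between two INFO stats snapshots rather than the absolute value. A minimal sketch:

```python
def eviction_rate(prev_evicted: int, curr_evicted: int, interval_s: float) -> float:
    """Evictions per second between two snapshots of the evicted_keys counter."""
    return max(0, curr_evicted - prev_evicted) / interval_s

# Alert when the rate exceeds a workload-specific budget, e.g. 5 evictions/s
```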
# === Memory overview ===
redis-cli INFO memory
# === Automated memory diagnosis (Redis 4.0+) ===
redis-cli MEMORY DOCTOR
# === Detailed memory statistics breakdown ===
redis-cli MEMORY STATS
# === Find largest keys by element count ===
redis-cli --bigkeys -i 0.1
# === Find largest keys by memory usage (Redis 6.0+) ===
redis-cli --memkeys -i 0.1
# === Check memory usage of a specific key ===
redis-cli MEMORY USAGE mykey SAMPLES 0
# === Check eviction stats ===
redis-cli INFO stats | grep -E "evicted_keys|keyspace"
# === Check current maxmemory and policy ===
redis-cli CONFIG GET maxmemory
redis-cli CONFIG GET maxmemory-policy
# === Check client buffer usage ===
redis-cli CLIENT LIST
redis-cli INFO clients
# === Check if Redis is swapping (Linux) ===
cat /proc/$(pidof redis-server)/smaps 2>/dev/null | grep -c "Swap:"
# === Check active defragmentation status ===
redis-cli INFO memory | grep -E "active_defrag|mem_fragmentation"
# === Allocator statistics (jemalloc internals) ===
redis-cli MEMORY MALLOC-STATS
# === Keyspace overview (key count per database) ===
redis-cli INFO keyspace
| Feature | Available Since | Notes |
|---|---|---|
| INFO memory section | Redis 2.4+ | Core memory reporting — works on all modern versions [src3] |
| maxmemory + eviction policies | Redis 2.0+ | LRU, random, volatile, noeviction [src1] |
| MEMORY USAGE <key> | Redis 4.0 | Per-key memory introspection [src3] |
| MEMORY DOCTOR | Redis 4.0 | Automated memory diagnosis [src3] |
| MEMORY STATS | Redis 4.0 | Detailed memory breakdown by category [src3] |
| Active defragmentation (activedefrag) | Redis 4.0 | Requires jemalloc; improved in 6.0 and 7.0 [src2, src4] |
| LFU eviction (allkeys-lfu, volatile-lfu) | Redis 4.0 | Frequency-based eviction alternative to LRU [src1] |
| redis-cli --memkeys | Redis 6.0 | Memory-based key scanning [src2] |
| Multi-threaded I/O | Redis 6.0 | Reduces client buffer pressure on high-connection workloads [src7] |
| Active defrag improvements | Redis 7.0 | Better large allocation handling, reduced CPU overhead [src2] |
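A small helper can gate which diagnostics to attempt based on redis_version from INFO server, following the table above. A sketch assuming a MAJOR.MINOR.PATCH version string:

```python
def available_diagnostics(version: str):
    """List memory diagnostics supported by a given redis_version."""
    major, minor = (int(x) for x in version.split(".")[:2])
    tools = ["INFO memory"]  # core reporting, all modern versions
    if (major, minor) >= (4, 0):
        tools += ["MEMORY USAGE", "MEMORY DOCTOR", "MEMORY STATS",
                  "activedefrag", "LFU eviction"]
    if (major, minor) >= (6, 0):
        tools.append("--memkeys")
    return tools
```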
| Use When | Don't Use When | Use Instead |
|---|---|---|
| Redis returns OOM command not allowed errors | Redis is slow but not out of memory | SLOWLOG GET for query performance analysis |
| mem_fragmentation_ratio > 1.5 or < 1.0 | Memory usage is stable and within limits | Regular monitoring is sufficient |
| evicted_keys increasing but data should persist | Keys are being evicted as designed (cache use case) | Eviction is working correctly — no action needed |
| RSS grows continuously without stabilizing | RSS spikes only during BGSAVE/BGREWRITEAOF | Normal fork behavior — ensure headroom exists |
| Client output buffers consuming significant memory | Single slow query blocking Redis | CLIENT KILL the offending client or SLOWLOG analysis |
Managed Redis providers typically set maxmemory-policy to volatile-lru and reserve 10-25% of memory for overhead. Check your provider's docs before changing settings, as some parameters may be locked. [src4, src5]
mem_fragmentation_ratio is misleading after restart: Right after
restart, RSS is minimal and the ratio may be very high (>5.0) or very low. Wait until memory
usage stabilizes before diagnosing fragmentation. [src3]
activedefrag
uses 1-25% of CPU (configurable). On CPU-bound workloads, this may increase latency. Monitor with
INFO stats field active_defrag_running. [src2,
src4]
Transparent huge pages inflate copy-on-write memory during forks and add latency; disable them with echo never > /sys/kernel/mm/transparent_hugepage/enabled. [src2, src5]
In Redis Cluster, each node enforces its own maxmemory limit. Uneven key distribution can cause OOM on individual nodes while others have free memory. [src1]