How Do I Diagnose and Fix Redis Memory Issues?

Type: Software Reference Confidence: 0.93 Sources: 7 Verified: 2026-02-23 Freshness: stable

TL;DR

Constraints

Quick Reference

| # | Cause | Likelihood | Signature | Fix |
|---|-------|------------|-----------|-----|
| 1 | maxmemory reached with noeviction policy | ~30% | OOM command not allowed when used memory > 'maxmemory' | Set maxmemory-policy allkeys-lru (or appropriate policy) [src1] |
| 2 | Memory fragmentation (high RSS, low dataset) | ~20% | mem_fragmentation_ratio > 1.5 in INFO memory | Enable activedefrag yes; or restart Redis [src2, src4] |
| 3 | Large/forgotten keys consuming disproportionate memory | ~15% | redis-cli --bigkeys shows keys with millions of elements | Delete or restructure oversized keys; set TTLs [src2] |
| 4 | Client output buffer overflow (slow consumers) | ~10% | client_recent_max_output_buffer high in INFO clients | Tune client-output-buffer-limit; fix slow subscribers [src6, src7] |
| 5 | Replica output buffer growth | ~8% | mem_clients_slaves large in MEMORY STATS | Fix replica lag; increase client-output-buffer-limit replica [src6] |
| 6 | No TTL on keys (unbounded growth) | ~7% | db0:keys=N,expires=0 (zero keys have an expiration) | Audit and set TTLs; use volatile-* eviction policies [src1, src4] |
| 7 | RDB/AOF fork doubling RSS | ~5% | RSS spikes to 2x during BGSAVE or BGREWRITEAOF | Reserve 50% memory headroom; use aof-use-rdb-preamble yes [src2, src5] |
| 8 | Lua scripts holding references | ~3% | used_memory_scripts high in INFO memory | Avoid large data in Lua globals; use redis.call results directly [src3] |
| 9 | Wrong maxmemory value (too low for workload) | ~2% | Frequent evicted_keys but low mem_fragmentation_ratio | Increase maxmemory to match workload [src1] |

Decision Tree

START -- Redis memory issue detected
|
+-- Is Redis returning "OOM command not allowed"?
|   +-- YES: Check maxmemory and eviction policy
|   |   +-- maxmemory-policy = noeviction? -> Set allkeys-lru or allkeys-lfu [Cause #1]
|   |   +-- maxmemory too low? -> Increase maxmemory [Cause #9]
|   |   +-- Keys have no TTL? -> Set TTLs, switch to volatile-lru [Cause #6]
|   +-- NO: Continue below
|
+-- Is RSS much higher than used_memory (fragmentation ratio > 1.5)?
|   +-- YES -> Enable activedefrag; consider restart for severe cases [Cause #2]
|   +-- NO: Continue below
|
+-- Is used_memory growing steadily without stabilizing?
|   +-- YES: Run redis-cli --bigkeys and --memkeys
|   |   +-- Found oversized keys? -> Delete/restructure/TTL [Cause #3]
|   |   +-- No big keys but mem_clients_normal is high? -> Client buffer issue [Cause #4]
|   |   +-- mem_clients_slaves is high? -> Replica buffer issue [Cause #5]
|   +-- NO: Continue below
|
+-- Does RSS spike during BGSAVE/BGREWRITEAOF?
|   +-- YES -> Reserve memory headroom; tune AOF settings [Cause #7]
|   +-- NO -> Check used_memory_scripts and Lua usage [Cause #8]
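The branching above can be sketched as a pure triage function over a dict of fields merged from INFO memory, INFO stats, and MEMORY STATS (the field names are real; it assumes values already converted to numbers, as redis-py's info() returns them). The byte thresholds and the function itself are illustrative assumptions, and causes #3, #6, #7, and #8 still need --bigkeys, TTL audits, or fork/Lua inspection:

```python
# Pure triage over merged INFO memory / INFO stats / MEMORY STATS fields.
# Returns likely cause numbers from the Quick Reference table.
def triage(info):
    causes = []
    if info.get("maxmemory_policy") == "noeviction" and info.get("maxmemory", 0) > 0:
        causes.append(1)  # writes fail at the cap: noeviction with maxmemory set
    if info.get("mem_fragmentation_ratio", 1.0) > 1.5:
        causes.append(2)  # RSS far above dataset: fragmentation
    if info.get("mem_clients_normal", 0) > 64 * 2**20:
        causes.append(4)  # regular client output buffers ballooning
    if info.get("mem_clients_slaves", 0) > 64 * 2**20:
        causes.append(5)  # replica output buffers ballooning
    if info.get("evicted_keys", 0) > 0 and info.get("mem_fragmentation_ratio", 1.0) <= 1.5:
        causes.append(9)  # evicting steadily without fragmentation: cap too low?
    return causes
```

Multiple causes can fire at once, mirroring the tree's top-to-bottom checks.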

Step-by-Step Guide

1. Assess current memory state

Get a complete picture of memory usage. [src3]

redis-cli INFO memory

Key fields: used_memory, used_memory_rss, used_memory_peak, mem_fragmentation_ratio (healthy: 1.0-1.5), maxmemory, maxmemory_policy, mem_allocator (should be jemalloc).

Verify: redis-cli INFO memory | grep used_memory_human → shows current usage in human-readable format.
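If you are scripting against raw redis-cli INFO output rather than a client library, the key fields can be pulled out with a minimal dependency-free parser (a sketch; parse_info is a hypothetical helper name):

```python
def parse_info(raw):
    """Turn raw INFO output ('key:value' lines, '#' section headers,
    CRLF line endings) into a flat dict of strings."""
    fields = {}
    for line in raw.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and section headers like "# Memory"
        key, sep, value = line.partition(":")
        if sep:
            fields[key] = value
    return fields
```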

2. Run MEMORY DOCTOR for automated diagnosis

Redis 4.0+ includes a built-in diagnostic advisor. [src3]

redis-cli MEMORY DOCTOR

Reports issues such as high fragmentation, peak memory significantly higher than current, and high RSS to dataset ratio.

Verify: A healthy instance replies with a message such as Sam, I can't find any memory issue in your instance; otherwise MEMORY DOCTOR prints a specific diagnostic.

3. Identify large keys consuming memory

Find keys using disproportionate amounts of memory. [src2]

# Find largest keys by element count
redis-cli --bigkeys -i 0.1

# Find largest keys by memory usage (Redis 6.0+)
redis-cli --memkeys -i 0.1

# Check specific key memory usage
redis-cli MEMORY USAGE mykey

Verify: Output shows the largest key per data type and summary statistics.
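The same survey can be done programmatically with SCAN plus MEMORY USAGE. A redis-py sketch (scan_iter and memory_usage are real redis-py methods; largest_keys and top are illustrative names, and the count/sample values are assumptions to tune):

```python
def top(pairs, n):
    """Pure helper: (bytes, key) pairs -> the n largest by size."""
    return sorted(pairs, reverse=True)[:n]

def largest_keys(url="redis://localhost:6379", n=10, match="*"):
    import redis  # third-party client: pip install redis
    r = redis.Redis.from_url(url, decode_responses=True)
    pairs = []
    for key in r.scan_iter(match=match, count=100):  # incremental, non-blocking
        nbytes = r.memory_usage(key, samples=0)      # SAMPLES 0 = exact size
        if nbytes is not None:                       # key may expire mid-scan
            pairs.append((nbytes, key))
    return top(pairs, n)
```

Prefer running this against a replica or during low traffic: SAMPLES 0 walks every element of each key, which is exact but not free.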

4. Check and fix eviction policy

If Redis hits maxmemory with noeviction, all writes fail. [src1]

# Check current policy
redis-cli CONFIG GET maxmemory-policy

# Set appropriate policy for cache workloads
redis-cli CONFIG SET maxmemory-policy allkeys-lru

# Persist the change
redis-cli CONFIG REWRITE

Verify: redis-cli CONFIG GET maxmemory-policy → shows the new policy.
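Which policy fits depends on whether keys carry TTLs: a volatile-* policy with zero expirable keys evicts nothing and fails writes exactly like noeviction. A toy heuristic over the keys/expires counts from INFO keyspace (recommend_policy is a hypothetical helper and the rules are deliberately simplistic):

```python
def recommend_policy(total_keys, keys_with_ttl):
    """Toy heuristic from INFO keyspace (db0:keys=<total>,expires=<with_ttl>)."""
    if keys_with_ttl == 0:
        return "allkeys-lru"   # volatile-* would find nothing to evict
    if keys_with_ttl < total_keys:
        return "volatile-lru"  # evict only expirable keys; no-TTL keys persist
    return "allkeys-lru"       # everything expires anyway: pure cache workload
```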

5. Enable active defragmentation for high fragmentation

If mem_fragmentation_ratio > 1.5, enable active defrag. [src2, src4]

# Enable active defragmentation
redis-cli CONFIG SET activedefrag yes

# Configure thresholds (optional)
redis-cli CONFIG SET active-defrag-ignore-bytes 100mb
redis-cli CONFIG SET active-defrag-threshold-lower 10
redis-cli CONFIG SET active-defrag-threshold-upper 100
redis-cli CONFIG SET active-defrag-cycle-min 1
redis-cli CONFIG SET active-defrag-cycle-max 25

# Persist
redis-cli CONFIG REWRITE

Verify: redis-cli INFO memory | grep mem_fragmentation_ratio — should decrease over minutes to hours.

6. Audit and fix client output buffers

Slow consumers (especially pub/sub subscribers) can cause massive buffer growth. [src6, src7]

# Check client buffer usage
redis-cli CLIENT LIST

# Set buffer limits (hard limit, soft limit, soft time)
redis-cli CONFIG SET client-output-buffer-limit "pubsub 32mb 8mb 60"
redis-cli CONFIG SET client-output-buffer-limit "replica 256mb 64mb 60"

Verify: redis-cli INFO clients — check client_recent_max_output_buffer has decreased.
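CLIENT LIST prints one space-separated key=value line per client, with omem reporting that client's output-buffer bytes. A dependency-free parser to flag heavy consumers could look like this (clients_with_large_buffers is a hypothetical helper; the 1 MB default threshold is an assumption):

```python
def clients_with_large_buffers(client_list_text, omem_threshold=1_000_000):
    """Parse CLIENT LIST output and return (addr, omem) pairs for clients
    whose output buffer exceeds omem_threshold bytes, largest first."""
    flagged = []
    for line in client_list_text.splitlines():
        fields = dict(f.split("=", 1) for f in line.split() if "=" in f)
        omem = int(fields.get("omem", 0))
        if omem > omem_threshold:
            flagged.append((fields.get("addr"), omem))
    return sorted(flagged, key=lambda c: -c[1])
```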

Code Examples

Python: Redis memory health check

# Input:  Redis connection URL
# Output: Memory health report dict with issues list
import redis

def check_redis_memory(url="redis://localhost:6379"):
    r = redis.Redis.from_url(url)
    info = r.info("memory")
    issues = []

    frag = info.get("mem_fragmentation_ratio", 1.0)
    if frag > 1.5:
        issues.append(f"High fragmentation: {frag:.2f} (>1.5)")
    elif frag < 1.0:
        issues.append(f"Swapping likely: ratio {frag:.2f} (<1.0)")

    used = info.get("used_memory", 0)
    maxmem = info.get("maxmemory", 0)
    if maxmem > 0 and used / maxmem > 0.9:
        issues.append(f"Memory {used/maxmem:.0%} of maxmemory")

    policy = info.get("maxmemory_policy", "noeviction")
    if policy == "noeviction" and maxmem > 0:
        issues.append("noeviction policy with maxmemory set")

    return {
        "used_memory_human": info.get("used_memory_human"),
        "maxmemory_human": info.get("maxmemory_human"),
        "fragmentation_ratio": frag,
        "eviction_policy": policy,
        "evicted_keys": r.info("stats").get("evicted_keys", 0),
        "issues": issues,
    }

JavaScript (Node.js): Monitor memory with alerts

// Input:  Redis connection URL
// Output: Logs warnings when thresholds exceeded
import Redis from "ioredis";

async function monitorRedisMemory(redisUrl = "redis://localhost:6379") {
  const redis = new Redis(redisUrl);
  const info = await redis.info("memory");
  const parsed = Object.fromEntries(
    info.split("\r\n").filter(l => l.includes(":"))
      .map(l => l.split(":"))
  );
  const fragRatio = parseFloat(parsed.mem_fragmentation_ratio);
  const usedMem = parseInt(parsed.used_memory, 10);
  const maxMem = parseInt(parsed.maxmemory, 10);

  if (fragRatio > 1.5)
    console.warn(`[REDIS] High fragmentation: ${fragRatio}`);
  if (maxMem > 0 && usedMem / maxMem > 0.85)
    console.warn(`[REDIS] Memory at ${((usedMem/maxMem)*100).toFixed(1)}%`);
  if (parsed.maxmemory_policy === "noeviction" && maxMem > 0)
    console.warn("[REDIS] noeviction with maxmemory -- writes will fail!");

  await redis.quit();
  return { fragRatio, usedMem, maxMem, policy: parsed.maxmemory_policy };
}

Go: Programmatic memory diagnostics

// Input:  Redis address
// Output: MemoryReport struct with diagnostic fields
package main

import (
    "context"
    "fmt"
    "strconv"
    "strings"

    "github.com/redis/go-redis/v9" // v9.x
)

type MemoryReport struct {
    UsedMemory    int64   `json:"used_memory"`
    MaxMemory     int64   `json:"maxmemory"`
    FragRatio     float64 `json:"fragmentation_ratio"`
    Policy        string  `json:"eviction_policy"`
    EvictedKeys   int64   `json:"evicted_keys"`
    DoctorMessage string  `json:"doctor_message"`
}

// parseInfo turns raw INFO output ("key:value" CRLF lines) into a flat map.
func parseInfo(raw string) map[string]string {
    m := make(map[string]string)
    for _, line := range strings.Split(raw, "\r\n") {
        if k, v, ok := strings.Cut(line, ":"); ok {
            m[k] = v
        }
    }
    return m
}

func DiagnoseRedis(ctx context.Context, addr string) (*MemoryReport, error) {
    rdb := redis.NewClient(&redis.Options{Addr: addr})
    defer rdb.Close()

    raw, err := rdb.Info(ctx, "memory").Result()
    if err != nil {
        return nil, fmt.Errorf("INFO memory: %w", err)
    }
    mem := parseInfo(raw)
    used, _ := strconv.ParseInt(mem["used_memory"], 10, 64)
    maxm, _ := strconv.ParseInt(mem["maxmemory"], 10, 64)
    frag, _ := strconv.ParseFloat(mem["mem_fragmentation_ratio"], 64)

    // MEMORY DOCTOR has no dedicated helper in go-redis; send it as a raw command.
    doctor, _ := rdb.Do(ctx, "MEMORY", "DOCTOR").Text()

    rawStats, _ := rdb.Info(ctx, "stats").Result()
    evicted, _ := strconv.ParseInt(parseInfo(rawStats)["evicted_keys"], 10, 64)

    return &MemoryReport{
        UsedMemory: used, MaxMemory: maxm,
        FragRatio: frag, Policy: mem["maxmemory_policy"],
        EvictedKeys: evicted, DoctorMessage: doctor,
    }, nil
}

Anti-Patterns

Wrong: Setting maxmemory without an eviction policy

# BAD -- noeviction is the default; Redis rejects ALL writes when full [src1]
redis-cli CONFIG SET maxmemory 2gb
# maxmemory-policy remains "noeviction"
# Result: "OOM command not allowed when used memory > 'maxmemory'"

Correct: Always pair maxmemory with an eviction policy

# GOOD -- set both together [src1]
redis-cli CONFIG SET maxmemory 2gb
redis-cli CONFIG SET maxmemory-policy allkeys-lru
redis-cli CONFIG REWRITE

Wrong: Using KEYS * to find large keys in production

# BAD -- KEYS blocks the entire server while scanning all keys [src2]
redis-cli KEYS "*session*"
# On a 10M key database, this blocks Redis for seconds

Correct: Use SCAN-based tools for production key analysis

# GOOD -- SCAN is non-blocking and incremental [src2]
redis-cli --bigkeys -i 0.1
redis-cli --memkeys -i 0.1
# Or use SCAN directly with a cursor
redis-cli SCAN 0 MATCH "*session*" COUNT 100

Wrong: Ignoring mem_fragmentation_ratio below 1.0

# BAD -- ratio < 1.0 means Redis is swapping to disk [src4]
# INFO memory shows:
# used_memory: 8000000000
# used_memory_rss: 4000000000
# mem_fragmentation_ratio: 0.50
# Developer thinks: "Great, Redis is using less OS memory"
# Reality: Redis is actively swapping -- catastrophic performance

Correct: Treat fragmentation ratio < 1.0 as critical

# GOOD -- check for swapping immediately [src4, src5]
redis-cli INFO memory | grep mem_fragmentation_ratio
# If < 1.0: Redis data is being swapped to disk
# Fix: increase available RAM or reduce maxmemory
# Verify no swap:
grep Swap /proc/$(pidof redis-server)/smaps | awk '{sum += $2} END {print sum " kB swapped"}'

Wrong: Running activedefrag on Redis with system libc malloc

# BAD -- activedefrag silently does nothing without jemalloc [src2, src4]
redis-cli CONFIG SET activedefrag yes
# No error, but fragmentation never decreases
# Because mem_allocator is "libc" not "jemalloc"

Correct: Verify allocator before enabling defrag

# GOOD -- check allocator first [src2]
redis-cli INFO memory | grep mem_allocator
# Expected: mem_allocator:jemalloc-5.3.0
# If not jemalloc: rebuild Redis or use official packages
redis-cli CONFIG SET activedefrag yes

Common Pitfalls

Diagnostic Commands

# === Memory overview ===
redis-cli INFO memory

# === Automated memory diagnosis (Redis 4.0+) ===
redis-cli MEMORY DOCTOR

# === Detailed memory statistics breakdown ===
redis-cli MEMORY STATS

# === Find largest keys by element count ===
redis-cli --bigkeys -i 0.1

# === Find largest keys by memory usage (Redis 6.0+) ===
redis-cli --memkeys -i 0.1

# === Check memory usage of a specific key ===
redis-cli MEMORY USAGE mykey SAMPLES 0

# === Check eviction stats ===
redis-cli INFO stats | grep -E "evicted_keys|keyspace"

# === Check current maxmemory and policy ===
redis-cli CONFIG GET maxmemory
redis-cli CONFIG GET maxmemory-policy

# === Check client buffer usage ===
redis-cli CLIENT LIST
redis-cli INFO clients

# === Check if Redis is swapping (Linux) ===
grep Swap /proc/$(pidof redis-server)/smaps 2>/dev/null | awk '{sum += $2} END {print sum " kB swapped"}'

# === Check active defragmentation status ===
redis-cli INFO memory | grep -E "active_defrag|mem_fragmentation"

# === Allocator statistics (jemalloc internals) ===
redis-cli MEMORY MALLOC-STATS

# === Keyspace overview (key count per database) ===
redis-cli INFO keyspace

Version History & Compatibility

| Feature | Available Since | Notes |
|---------|-----------------|-------|
| INFO memory section | Redis 2.4+ | Core memory reporting; works on all modern versions [src3] |
| maxmemory + eviction policies | Redis 2.0+ | LRU, random, volatile, noeviction [src1] |
| MEMORY USAGE <key> | Redis 4.0 | Per-key memory introspection [src3] |
| MEMORY DOCTOR | Redis 4.0 | Automated memory diagnosis [src3] |
| MEMORY STATS | Redis 4.0 | Detailed memory breakdown by category [src3] |
| Active defragmentation (activedefrag) | Redis 4.0 | Requires jemalloc; improved in 6.0 and 7.0 [src2, src4] |
| LFU eviction (allkeys-lfu, volatile-lfu) | Redis 4.0 | Frequency-based eviction alternative to LRU [src1] |
| redis-cli --memkeys | Redis 6.0 | Memory-based key scanning [src2] |
| Multi-threaded I/O | Redis 6.0 | Reduces client buffer pressure on high-connection workloads [src7] |
| Active defrag improvements | Redis 7.0 | Better large allocation handling, reduced CPU overhead [src2] |

When to Use / When Not to Use

| Use When | Don't Use When | Use Instead |
|----------|----------------|-------------|
| Redis returns OOM command not allowed errors | Redis is slow but not out of memory | SLOWLOG GET for query performance analysis |
| mem_fragmentation_ratio > 1.5 or < 1.0 | Memory usage is stable and within limits | Regular monitoring is sufficient |
| evicted_keys increasing but data should persist | Keys are being evicted as designed (cache use case) | Eviction is working correctly; no action needed |
| RSS grows continuously without stabilizing | RSS spikes only during BGSAVE/BGREWRITEAOF | Normal fork behavior; ensure headroom exists |
| Client output buffers consuming significant memory | Single slow query blocking Redis | CLIENT KILL the offending client or SLOWLOG analysis |

Important Caveats

Related Units