How Do I Diagnose and Fix Java OutOfMemoryError?

Type: Software Reference | Confidence: 0.95 | Sources: 7 | Verified: 2026-02-20 | Freshness: stable

TL;DR

Read the text after "OutOfMemoryError:" to identify which memory region is exhausted, enable -XX:+HeapDumpOnOutOfMemoryError, analyze the resulting dump in Eclipse MAT, then fix the leak or right-size the relevant limit. Increasing -Xmx without analysis only delays the next crash if a leak is present.

Constraints

Quick Reference

| # | OOM Type | Error Message | Likelihood | Primary Cause | Fix |
|---|----------|---------------|------------|---------------|-----|
| 1 | Java heap space | OutOfMemoryError: Java heap space | ~50% | Memory leak or undersized -Xmx | Analyze heap dump; increase -Xmx or fix leak [src1] |
| 2 | GC overhead limit exceeded | OutOfMemoryError: GC overhead limit exceeded | ~20% | GC spending >98% of time, recovering <2% heap | Same as heap space — fix leak or increase heap [src1] |
| 3 | Metaspace | OutOfMemoryError: Metaspace | ~10% | Too many classes loaded (dynamic proxies, reflection) | Increase -XX:MaxMetaspaceSize; fix classloader leak [src1, src2] |
| 4 | Unable to create native thread | OutOfMemoryError: Unable to create new native thread | ~8% | Thread leak or OS thread limit reached | Fix thread leak; reduce -Xss; increase ulimit -u [src2] |
| 5 | Direct buffer memory | OutOfMemoryError: Direct buffer memory | ~5% | NIO direct buffers not released | Increase -XX:MaxDirectMemorySize; fix buffer leak [src2] |
| 6 | Requested array size | OutOfMemoryError: Requested array size exceeds VM limit | ~3% | Array allocation > Integer.MAX_VALUE or heap | Fix array sizing logic; process in chunks [src1] |
| 7 | Compressed class space | OutOfMemoryError: Compressed class space | ~2% | Class metadata exceeds compressed pointer space | Increase -XX:CompressedClassSpaceSize [src1] |
| 8 | Kill process or sacrifice child | OutOfMemoryError: Kill process or sacrifice child | ~2% | Linux OOM Killer terminated JVM | Increase container/host RAM; tune oom_score_adj [src2] |

Decision Tree

START — java.lang.OutOfMemoryError thrown
├── Error message contains "Java heap space" or "GC overhead limit exceeded"?
│   ├── YES → Heap problem
│   │   ├── Heap dump available?
│   │   │   ├── YES → Eclipse MAT → Leak Suspects report
│   │   │   │   ├── Single object dominates heap? → Memory leak → Fix code
│   │   │   │   └── Many objects, heap nearly full → Increase -Xmx
│   │   │   └── NO → Enable -XX:+HeapDumpOnOutOfMemoryError, reproduce
│   │   └── Only under load spikes? → Increase -Xmx + add monitoring
│   └── NO ↓
├── Error message contains "Metaspace"?
│   ├── YES → Class loading problem
│   │   ├── On redeploy? → Classloader leak → Restart; fix leak
│   │   └── Grows slowly? → Increase -XX:MaxMetaspaceSize
│   └── NO ↓
├── Error message contains "Unable to create new native thread"?
│   ├── YES → Thread exhaustion
│   │   ├── Thread count growing? → Thread leak → Fix code
│   │   └── Hit OS limit? → Increase ulimit -u; reduce -Xss
│   └── NO ↓
├── "Direct buffer memory"? → Fix buffer release; increase -XX:MaxDirectMemorySize
├── "Kill process or sacrifice child"? → Increase container/host memory
└── Other (array size, compressed class, native method)
    └── See Quick Reference table for specific fix
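
In code, the first branch of this tree is plain string matching on the error message. A minimal sketch (class and method names are illustrative, not from any library):

```java
// Illustrative sketch: map an OutOfMemoryError message to the exhausted
// memory region, following the decision tree above.
public class OomClassifier {
    public static String classify(String message) {
        if (message.contains("Java heap space")
                || message.contains("GC overhead limit exceeded")) return "heap";
        if (message.contains("Metaspace")) return "metaspace";
        if (message.contains("native thread")) return "threads";
        if (message.contains("Direct buffer memory")) return "direct-memory";
        if (message.contains("Kill process or sacrifice child")) return "os-oom-killer";
        return "other"; // array size, compressed class space — see Quick Reference
    }
}
```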

Step-by-Step Guide

1. Enable heap dump on OOM (do this first)

Configure the JVM to automatically capture a heap dump when any OutOfMemoryError occurs. This is the single most important diagnostic step. [src1, src6]

# Add to JVM startup options
java -XX:+HeapDumpOnOutOfMemoryError \
     -XX:HeapDumpPath=/var/log/java/heap-dump.hprof \
     -XX:+ExitOnOutOfMemoryError \
     -jar myapp.jar

Verify: java -XX:+PrintFlagsFinal -version 2>&1 | grep HeapDumpOnOutOfMemoryError → expected: bool HeapDumpOnOutOfMemoryError = true
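
On HotSpot JVMs the same flag can also be checked — and even enabled — from inside a running process via HotSpotDiagnosticMXBean, since HeapDumpOnOutOfMemoryError is one of the "manageable" flags writable at runtime. A sketch (class name is illustrative):

```java
import com.sun.management.HotSpotDiagnosticMXBean;
import java.lang.management.ManagementFactory;

// Illustrative sketch: query and set the heap-dump-on-OOM flag in-process (HotSpot only).
public class HeapDumpFlag {
    public static boolean heapDumpOnOomEnabled() {
        HotSpotDiagnosticMXBean bean =
            ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
        return Boolean.parseBoolean(
            bean.getVMOption("HeapDumpOnOutOfMemoryError").getValue());
    }

    public static void enableHeapDumpOnOom() {
        // This flag is "manageable", so setVMOption is allowed at runtime.
        ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class)
            .setVMOption("HeapDumpOnOutOfMemoryError", "true");
    }
}
```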

2. Identify the OOM type from the error message

The text after "OutOfMemoryError:" identifies exactly which memory region is exhausted. [src1]

# Search application logs for the specific OOM type
grep -A 5 "OutOfMemoryError" /var/log/myapp/application.log

# Common patterns:
# "Java heap space"                     → Heap (-Xmx)
# "GC overhead limit exceeded"          → Heap (-Xmx)
# "Metaspace"                           → Class metadata (-XX:MaxMetaspaceSize)
# "Unable to create new native thread"  → Thread limit
# "Direct buffer memory"                → NIO buffers (-XX:MaxDirectMemorySize)

3. Capture a heap dump (if not auto-captured)

If HeapDumpOnOutOfMemoryError was not enabled, capture a dump from the running process. [src6]

# Find Java process PID
jps -lv

# Capture heap dump (~2 sec per GB, causes brief pause)
jmap -dump:format=b,file=/tmp/heap.hprof <pid>

# Alternative: jcmd (preferred on modern JDKs)
jcmd <pid> GC.heap_dump /tmp/heap.hprof

# Quick histogram (no full dump, minimal impact)
jmap -histo <pid> | head -30
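
HotSpotDiagnosticMXBean also exposes heap dumps programmatically, which helps when shell access to the host is restricted. A sketch (class name is illustrative; the output path must end in .hprof and must not already exist):

```java
import com.sun.management.HotSpotDiagnosticMXBean;
import java.lang.management.ManagementFactory;

// Illustrative sketch: trigger a heap dump of the current JVM from inside the process.
public class HeapDumper {
    public static void dump(String path) {
        try {
            ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class)
                .dumpHeap(path, true); // true = live objects only (runs a GC first)
        } catch (java.io.IOException e) {
            throw new java.io.UncheckedIOException(e);
        }
    }
}
```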

4. Analyze the heap dump with Eclipse MAT

Eclipse Memory Analyzer Tool (MAT) is the most effective free tool for finding memory leaks. [src6, src4]

# Download Eclipse MAT from https://eclipse.dev/mat/
# Open the .hprof file in MAT

# Key reports to check:
# 1. Leak Suspects Report (automatic) — highlights top memory consumers
# 2. Dominator Tree — shows objects retaining the most memory
# 3. Histogram — sorted by retained heap size
# 4. Path to GC Roots (exclude weak refs) — shows WHY an object is retained

5. Check GC behavior

Review garbage collection logs to understand memory pressure patterns. [src4]

# Enable GC logging (JDK 9+)
java -Xlog:gc*:file=/var/log/java/gc.log:time,uptime,level,tags \
     -jar myapp.jar

# Enable GC logging (JDK 8)
java -verbose:gc -Xloggc:/var/log/java/gc.log \
     -XX:+PrintGCDetails -XX:+PrintGCDateStamps \
     -jar myapp.jar
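
Cumulative GC counts and times are also available in-process via GarbageCollectorMXBean, which is useful for feeding a metrics system when log files are inconvenient. A sketch (class name is illustrative):

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.util.List;

// Illustrative sketch: print cumulative collection count and time per collector.
public class GcStats {
    public static List<GarbageCollectorMXBean> print() {
        List<GarbageCollectorMXBean> beans =
            ManagementFactory.getGarbageCollectorMXBeans();
        for (GarbageCollectorMXBean gc : beans) {
            System.out.printf("%-25s count=%d time=%dms%n",
                gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
        }
        return beans;
    }
}
```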

6. Apply the fix based on OOM type

After identifying the root cause, apply the appropriate fix. [src1, src2, src4]

# Heap space / GC overhead
java -Xms2g -Xmx4g -jar myapp.jar

# Metaspace
java -XX:MetaspaceSize=256m -XX:MaxMetaspaceSize=512m -jar myapp.jar

# Native threads
java -Xss512k -jar myapp.jar
ulimit -u 65536

# Direct buffers
java -XX:MaxDirectMemorySize=512m -jar myapp.jar

# Containers — percentage-based sizing
java -XX:+UseContainerSupport \
     -XX:MaxRAMPercentage=75.0 \
     -XX:InitialRAMPercentage=50.0 \
     -jar myapp.jar
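
Whichever flags you settle on, it is worth confirming the ceiling the JVM actually applied; Runtime.maxMemory() reflects -Xmx or the percentage-based computation. A sketch (class name is illustrative):

```java
// Illustrative sketch: report the effective max heap, after -Xmx or
// -XX:MaxRAMPercentage has been applied.
public class HeapLimits {
    public static long maxHeapBytes() {
        return Runtime.getRuntime().maxMemory();
    }

    public static void main(String[] args) {
        System.out.printf("Max heap: %d MB%n", maxHeapBytes() >> 20);
    }
}
```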

Code Examples

Java: detect and log memory pressure before OOM

// Input:  Running JVM with MemoryMXBean
// Output: Early warning logs when heap usage exceeds threshold

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class MemoryMonitor {
    private static final double WARN = 0.80;
    private static final double CRITICAL = 0.90;

    public static void startMonitoring() {
        MemoryMXBean memBean = ManagementFactory.getMemoryMXBean();
        Executors.newSingleThreadScheduledExecutor(r -> {
            Thread t = new Thread(r, "memory-monitor");
            t.setDaemon(true);
            return t;
        }).scheduleAtFixedRate(() -> {
            MemoryUsage heap = memBean.getHeapMemoryUsage();
            long max = heap.getMax(); // can be -1 when the max is undefined
            if (max <= 0) return;
            double usedPct = (double) heap.getUsed() / max;
            if (usedPct > CRITICAL) {
                System.err.printf("CRITICAL: Heap %.1f%% (%dMB/%dMB)%n",
                    usedPct * 100, heap.getUsed() >> 20, max >> 20);
            } else if (usedPct > WARN) {
                System.err.printf("WARNING: Heap %.1f%% (%dMB/%dMB)%n",
                    usedPct * 100, heap.getUsed() >> 20, max >> 20);
            }
        }, 0, 10, TimeUnit.SECONDS);
    }
}

Bash: automated OOM diagnostic script

#!/bin/bash
# Input:  Java PID (or auto-detect)
# Output: Memory diagnostics: heap, threads, top objects

PID="${1:-$(jps -lv | grep -v Jps | head -1 | awk '{print $1}')}"
[ -z "$PID" ] && { echo "No Java process found"; exit 1; }

echo "=== Java OOM Diagnostic Report — PID: $PID ==="
echo "--- JVM Flags ---"
jcmd "$PID" VM.flags 2>/dev/null || jinfo -flags "$PID"

echo -e "\n--- Heap Usage ---"
jcmd "$PID" GC.heap_info 2>/dev/null || jmap -heap "$PID"

echo -e "\n--- Top 20 Objects ---"
jmap -histo "$PID" | head -25

echo -e "\n--- Thread Count ---"
THREADS=$(jstack "$PID" 2>/dev/null | grep -c "^\"")
[ "$THREADS" -gt 0 ] && echo "$THREADS" || echo "N/A"

echo -e "\n--- Native Memory (if NMT enabled) ---"
jcmd "$PID" VM.native_memory summary 2>/dev/null \
    || echo "NMT not enabled. Start with -XX:NativeMemoryTracking=summary"

Java: common memory leak patterns and fixes

// PATTERN 1: Unbounded cache — use bounded cache
// BAD:  static Map<String, Object> cache = new HashMap<>();
// GOOD:
var cache = com.github.benmanes.caffeine.cache.Caffeine.newBuilder()
    .maximumSize(10_000)
    .expireAfterWrite(java.time.Duration.ofHours(1))
    .build();

// PATTERN 2: Unclosed resources — use try-with-resources
try (var is = new java.io.FileInputStream(file)) {
    // process stream
} // auto-closed

// PATTERN 3: Large collections — process in batches
int page = 0;
List<Record> batch;
do {
    batch = repo.findByPage(page++, 1000);
    processBatch(batch);
} while (!batch.isEmpty());
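
A related pattern worth checking for (assuming the application runs tasks on pooled threads): ThreadLocal values are retained for the lifetime of the thread, so on a thread pool they effectively leak unless removed. An illustrative sketch:

```java
// PATTERN (sketch): ThreadLocal values retained by pooled threads.
// A pool thread outlives each task, so its ThreadLocal values accumulate
// unless remove() is called when the task is done.
public class ThreadLocalCleanup {
    private static final ThreadLocal<byte[]> BUFFER =
        ThreadLocal.withInitial(() -> new byte[8192]);

    public static int useBuffer() {
        try {
            return BUFFER.get().length; // use the per-thread buffer
        } finally {
            BUFFER.remove(); // GOOD: drop the reference so the pool thread does not retain it
        }
    }
}
```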

Anti-Patterns

Wrong: Catching OutOfMemoryError and continuing

// BAD — JVM state may be corrupted after OOM [src1]
try {
    byte[] data = new byte[Integer.MAX_VALUE];
} catch (OutOfMemoryError e) {
    System.out.println("Not enough memory, retrying...");
    byte[] data = new byte[1024 * 1024]; // unreliable
}

Correct: Crash fast and analyze the heap dump

// GOOD — let the JVM crash; capture dump; fix root cause [src1]
// JVM flags: -XX:+HeapDumpOnOutOfMemoryError -XX:+ExitOnOutOfMemoryError
// The heap dump gives you the evidence to fix the actual problem.

Wrong: Blindly increasing -Xmx without analysis

# BAD — just delays the crash if there is a memory leak [src4, src6]
# Monday:    java -Xmx2g -jar app.jar  # OOM after 6 hours
# Tuesday:   java -Xmx4g -jar app.jar  # OOM after 12 hours
# Wednesday: java -Xmx8g -jar app.jar  # OOM after 24 hours — still leaking!

Correct: Analyze heap dump first, then right-size

# GOOD — capture evidence, then decide [src4, src6]
# 1. Enable dump: -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/
# 2. Reproduce OOM
# 3. Open dump in Eclipse MAT → Leak Suspects
# 4. Leak found? → fix code, keep original -Xmx
# 5. No leak? → increase -Xmx to actual usage + 25% headroom

Wrong: Using fixed -Xmx in containers

# BAD — ignores container limits; may get OOM-killed [src4]
docker run -m 2g myapp java -Xmx8g -jar app.jar
# Result: Linux OOM Killer terminates JVM

Correct: Use container-aware memory settings

# GOOD — JVM respects container memory limits [src4]
docker run -m 2g myapp java \
    -XX:+UseContainerSupport \
    -XX:MaxRAMPercentage=75.0 \
    -jar app.jar
# JVM calculates: 2GB * 75% = 1.5GB max heap

Common Pitfalls

Diagnostic Commands

# === Identify the Java process ===
jps -lv                               # list Java processes with JVM flags
jcmd -l                               # alternative (modern JDKs)

# === Heap overview ===
jcmd <pid> GC.heap_info               # current heap usage (JDK 9+)
jmap -heap <pid>                       # heap summary (JDK 8)

# === Heap dump ===
jcmd <pid> GC.heap_dump /tmp/heap.hprof  # recommended (JDK 9+)
jmap -dump:format=b,file=/tmp/heap.hprof <pid>  # JDK 8+

# === Object histogram (quick, no full dump) ===
jmap -histo <pid> | head -30          # top objects by count
jmap -histo:live <pid> | head -30     # forces GC first

# === Thread analysis ===
jstack <pid> > /tmp/threads.txt       # full thread dump
jstack <pid> | grep -c "^\""          # thread count

# === GC stats ===
jstat -gcutil <pid> 1000 10           # GC stats every 1s, 10 times
# Columns (% used): S0/S1 = survivor spaces, E = Eden, O = Old gen, M = Metaspace

# === Native memory (requires -XX:NativeMemoryTracking=summary) ===
jcmd <pid> VM.native_memory summary   # heap, metaspace, threads, code cache

# === JVM flags ===
jcmd <pid> VM.flags                   # all active JVM flags
jinfo -flags <pid>                    # alternative

Version History & Compatibility

| Java Version | Memory Model Change | Key Flags |
|--------------|---------------------|-----------|
| Java 7 and earlier | PermGen for class metadata | -XX:MaxPermSize=256m |
| Java 8 | PermGen replaced by Metaspace (native memory) | -XX:MaxMetaspaceSize=256m |
| Java 9 | Unified GC logging (-Xlog:gc*) | -Xlog:gc*:file=gc.log |
| Java 10 | Container support default on | -XX:MaxRAMPercentage=75.0 |
| Java 11 (LTS) | ZGC experimental; Epsilon GC | -XX:+UseZGC (experimental) |
| Java 15 | ZGC production-ready | -XX:+UseZGC |
| Java 17 (LTS) | Strongly encapsulated JDK internals | --add-opens for reflection |
| Java 21 (LTS) | Virtual threads reduce native thread OOM | -XX:+UseZGC -XX:+ZGenerational |

When to Use / When Not to Use

| Use This Guide When | Don't Use When | Use Instead |
|---------------------|----------------|-------------|
| java.lang.OutOfMemoryError in logs | java.lang.StackOverflowError | Increase -Xss or fix recursion |
| JVM killed by OOM Killer (dmesg \| grep oom) | Slow GC pauses but no OOM | GC tuning guide (G1GC/ZGC) |
| Heap usage grows monotonically over time | High CPU from GC but heap stable | GC algorithm selection guide |
| Application crashes after hours/days | Immediate crash on startup | Check classpath / dependency issues |
| Container restarts with exit code 137 | Container restarts with exit code 1 | Application error logs |

Important Caveats

Related Units