How to Find and Fix Memory Leaks in Node.js

Type: Software Reference | Confidence: 0.92 | Sources: 8 | Verified: 2026-02-23 | Freshness: quarterly

TL;DR

Constraints

Quick Reference

| # | Cause | Likelihood | Signature | Fix |
|---|-------|------------|-----------|-----|
| 1 | Forgotten event listeners | ~25% of cases | MaxListenersExceededWarning or growing listener count | Call removeListener()/off() in cleanup; use once() for one-shot events [src2, src3] |
| 2 | Unbounded cache / Map / object | ~20% of cases | Growing object count in heap snapshot | Use LRU cache (lru-cache) or WeakMap for object keys [src3, src4] |
| 3 | Closures capturing large scope | ~15% of cases | Retained size of closure objects growing in heap | Narrow closure scope; nullify large variables after use [src2, src4] |
| 4 | Uncleared setInterval/setTimeout | ~12% of cases | Timer count grows; callbacks reference stale objects | Call clearInterval()/clearTimeout() on cleanup; use AbortSignal in Node 20+ [src2, src3] |
| 5 | Unhandled/unconsumed streams | ~8% of cases | Internal buffer growing; pause() never triggered | Implement backpressure; call stream.destroy() on error [src4] |
| 6 | Global variable accumulation | ~7% of cases | Objects in global scope never freed | Use let/const in functions; avoid global.xxx patterns [src2, src3] |
| 7 | Unclosed DB/network connections | ~5% of cases | Connection pool exhaustion; socket handles grow | Use connection pooling with limits; close on shutdown [src3] |
| 8 | Circular references (pre-WeakRef) | ~3% of cases | Object pairs mutually referencing | Use WeakRef/WeakMap; break cycles explicitly [src4] |
| 9 | Large Buffer allocation | ~3% of cases | external memory growing in process.memoryUsage() | Use streaming I/O; avoid loading full files into memory [src4] |
| 10 | Promise chains never resolved | ~2% of cases | Pending promise count growing | Always resolve/reject; add .catch(); use AbortController for timeouts [src2] |
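For cause #2, the WeakMap fix applies when cache keys are themselves objects. A minimal sketch (the `metadataFor` helper and session shape are illustrative, not from any library):

```javascript
// A WeakMap holds its keys weakly: once nothing else references a
// session object, its cache entry becomes collectable, so the cache
// can never pin sessions in memory the way a plain Map would.
const metadataCache = new WeakMap();

function metadataFor(session) {
  if (!metadataCache.has(session)) {
    metadataCache.set(session, { createdAt: Date.now(), hits: 0 });
  }
  const meta = metadataCache.get(session);
  meta.hits += 1;
  return meta;
}

const session = { id: 'abc' }; // hypothetical session object
metadataFor(session);
console.log(metadataFor(session).hits); // 2
```

Note that WeakMap keys must be objects; for string or numeric keys (user IDs, URLs), use a bounded LRU cache instead.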

Decision Tree

START
├── Is heapUsed growing steadily over time (monotonically)?
│   ├── YES → Likely a memory leak — continue diagnosis ↓
│   └── NO → Might be normal GC behavior; check if RSS is growing without heapUsed growth
│       └── RSS growing, heapUsed stable → V8 page caching (Node.js 22+), not a leak [src8]
│           └── Confirm: introduce brief idle periods → if RSS drops, it's V8 optimization
├── Do you see "MaxListenersExceededWarning" in logs?
│   ├── YES → Event listener leak. Find where listeners are added without removal [src2, src3]
│   └── NO ↓
├── Running in Docker/K8s and getting OOM-killed?
│   ├── YES → Check --max-old-space-size (set to 75% of container limit) [src1]
│   │   └── Already set? → True leak — continue diagnosis ↓
│   └── NO ↓
├── Does memory growth correlate with specific requests/events?
│   ├── YES → Trace the request handler
│   │   ├── Using in-memory cache? → Add eviction policy (LRU, TTL) [src3, src4]
│   │   ├── Creating timers per request? → Clear them in response/error handler [src2]
│   │   └── Opening connections per request? → Use connection pool [src3]
│   └── NO → Global or startup-time leak ↓
├── Take two heap snapshots 5 minutes apart → compare
│   ├── Growing object type identified → Trace by retainer path in DevTools [src7]
│   └── No clear growing type → Use allocation timeline profiling [src7]
└── Still unclear → Use Clinic.js HeapProfiler for automated analysis [src6]

Step-by-Step Guide

1. Confirm there is actually a memory leak

Not all memory growth is a leak. V8's garbage collector works in cycles, so memory usage fluctuates. A true leak shows monotonically growing heapUsed over extended periods. In Node.js 22+, RSS may grow under sustained CPU load due to V8's page-caching optimization -- this is not a leak. [src1, src5, src8]

// Add this to your app for basic monitoring
setInterval(() => {
  const mem = process.memoryUsage();
  console.log(JSON.stringify({
    rss: Math.round(mem.rss / 1024 / 1024) + 'MB',
    heapTotal: Math.round(mem.heapTotal / 1024 / 1024) + 'MB',
    heapUsed: Math.round(mem.heapUsed / 1024 / 1024) + 'MB',
    external: Math.round(mem.external / 1024 / 1024) + 'MB',
  }));
}, 10000);

Verify: If heapUsed keeps growing over 30+ minutes under steady load → you have a leak.

2. Start with Chrome DevTools heap snapshots

Connect Chrome DevTools to your Node.js process for interactive heap analysis. [src7]

# Start your app with --inspect
node --inspect app.js

# Or attach to a running process
kill -USR1 <PID>  # Linux/macOS — enables inspector on default port 9229

Then open chrome://inspect in Chrome → click "inspect" on your target → go to the Memory tab.

Verify: DevTools connects and shows heap statistics.

3. Take comparison heap snapshots

The most effective technique: take two heap snapshots with time/load in between, then compare them. Force GC before each snapshot for accuracy. [src7]

1. Open Memory tab in Chrome DevTools
2. Force GC (click trash icon) before first snapshot
3. Select "Heap snapshot" → click "Take snapshot" → label it "Baseline"
4. Exercise your app (send requests, run operations for 5 minutes)
5. Force GC again → take another snapshot → label it "After load"
6. Select the second snapshot → change view to "Comparison"
7. Sort by "# Delta" (descending) → objects with positive delta are growing
8. Click on growing objects → inspect "Retainers" to find what holds references

Verify: You should see specific constructor names with positive deltas -- these are your leaking objects.

4. Check for common leak patterns

Search your codebase for the most common leak sources. [src2, src3]

# Event listeners added without removal
grep -rn "\.on(" --include="*.js" --include="*.ts" src/ | head -30

# Timers without cleanup
grep -rn "setInterval\|setTimeout" --include="*.js" --include="*.ts" src/
grep -rn "clearInterval\|clearTimeout" --include="*.js" --include="*.ts" src/

# Global variable assignments
grep -rn "global\.\|globalThis\." --include="*.js" --include="*.ts" src/

Verify: Each setInterval/.on() has a corresponding clearInterval/.off() in cleanup code.

5. Fix event listener leaks

The most common leak: adding listeners in request handlers without removing them. [src2, src3]

// ❌ WRONG — listener added per request, never removed
app.get('/data', (req, res) => {
  database.on('update', (data) => {
    res.write(data);
  });
});

// ✅ CORRECT — remove listener when response ends
app.get('/data', (req, res) => {
  const onUpdate = (data) => res.write(data);
  database.on('update', onUpdate);
  res.on('close', () => {
    database.off('update', onUpdate);
  });
});

Verify: emitter.listenerCount('eventName') stays constant under load.

6. Fix unbounded cache leaks

In-memory caches that grow forever are a classic leak. [src3, src4]

// ❌ WRONG — Map grows forever
const cache = new Map();
function getData(key) {
  if (cache.has(key)) return cache.get(key);
  const data = fetchFromDB(key);
  cache.set(key, data);
  return data;
}

// ✅ CORRECT — LRU cache with max size
import { LRUCache } from 'lru-cache';
const cache = new LRUCache({ max: 1000, ttl: 1000 * 60 * 5 });
function getData(key) {
  if (cache.has(key)) return cache.get(key);
  const data = fetchFromDB(key);
  cache.set(key, data);
  return data;
}

Verify: cache.size stays bounded under sustained load.
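If adding the lru-cache dependency is not an option, a bounded cache can be sketched with a plain Map, which iterates in insertion order. A minimal illustration (the `TinyLRU` class is hypothetical, has no TTL, and is not a substitute for lru-cache in production):

```javascript
// Minimal LRU sketch: since a Map preserves insertion order, the first
// key in iteration order is always the least recently used one.
class TinyLRU {
  constructor(max) {
    this.max = max;
    this.map = new Map();
  }
  get(key) {
    if (!this.map.has(key)) return undefined;
    const value = this.map.get(key);
    this.map.delete(key); // re-insert to mark as most recently used
    this.map.set(key, value);
    return value;
  }
  set(key, value) {
    this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.max) {
      // Evict the least recently used entry (first in iteration order)
      this.map.delete(this.map.keys().next().value);
    }
  }
}

const cache = new TinyLRU(2);
cache.set('a', 1);
cache.set('b', 2);
cache.get('a');    // touch 'a', so 'b' is now least recently used
cache.set('c', 3); // evicts 'b'
console.log([...cache.map.keys()]); // ['a', 'c']
```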

7. Use Clinic.js for automated diagnosis

When manual analysis is difficult, Clinic.js can automatically diagnose memory issues. [src6]

# Install and run heap profiler
npm install -g clinic
clinic heapprofiler -- node app.js

# Run doctor for overall health check
clinic doctor -- node app.js

Verify: Clinic.js report highlights the source file and line where allocations are growing.

Code Examples

Express.js: Fixing middleware memory leak

// Input:  Express middleware leaking memory via event listeners
// Output: Properly cleaned up middleware

const express = require('express');
const EventEmitter = require('events');
const app = express();
const eventBus = new EventEmitter();

// ❌ LEAKY middleware — listeners accumulate
app.use((req, res, next) => {
  eventBus.on('notification', (msg) => {
    console.log(`Request ${req.url}: ${msg}`);
  });
  next();
});

// ✅ FIXED middleware — listeners cleaned up
app.use((req, res, next) => {
  const handler = (msg) => {
    console.log(`Request ${req.url}: ${msg}`);
  };
  eventBus.on('notification', handler);
  res.on('finish', () => eventBus.off('notification', handler));
  res.on('close', () => eventBus.off('notification', handler));
  next();
});

app.listen(3000);

Production monitoring: Memory leak detection script

// Input:  Need to detect memory leaks in production without DevTools
// Output: Automated monitoring with alerts

const LEAK_THRESHOLD_MB = 50;
const CHECK_INTERVAL_MS = 30000;
const WINDOW_SIZE = 20;
const measurements = [];

function checkMemory() {
  const { heapUsed } = process.memoryUsage();
  const heapMB = heapUsed / 1024 / 1024;

  measurements.push({ heapMB, timestamp: Date.now() });
  if (measurements.length > WINDOW_SIZE) measurements.shift();

  if (measurements.length >= WINDOW_SIZE) {
    const oldest = measurements[0].heapMB;
    const newest = measurements[measurements.length - 1].heapMB;
    const growth = newest - oldest;

    if (growth > LEAK_THRESHOLD_MB) {
      console.error(`MEMORY LEAK DETECTED: heap grew ${growth.toFixed(1)}MB`);

      // Note: writeHeapSnapshot is synchronous and can pause the process for seconds
      const filename = `/tmp/heapdump-${Date.now()}.heapsnapshot`;
      require('v8').writeHeapSnapshot(filename);
      console.error(`  Heap snapshot written to: ${filename}`);
    }
  }
}

const monitorTimer = setInterval(checkMemory, CHECK_INTERVAL_MS);
process.on('SIGTERM', () => clearInterval(monitorTimer));
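To reduce false positives from GC sawtooth patterns, the single-delta threshold check above can be replaced with a growth-rate estimate over the whole window. A sketch (`heapGrowthMBPerMinute` is a hypothetical helper, not part of any library):

```javascript
// Hypothetical helper: least-squares slope of heap usage over time,
// in MB per minute. A steady positive slope across a long window is a
// stronger leak signal than one large delta between two samples.
function heapGrowthMBPerMinute(measurements) {
  const n = measurements.length;
  if (n < 2) return 0;
  const t0 = measurements[0].timestamp;
  const xs = measurements.map(m => (m.timestamp - t0) / 60000); // minutes
  const ys = measurements.map(m => m.heapMB);
  const meanX = xs.reduce((a, b) => a + b, 0) / n;
  const meanY = ys.reduce((a, b) => a + b, 0) / n;
  let num = 0, den = 0;
  for (let i = 0; i < n; i++) {
    num += (xs[i] - meanX) * (ys[i] - meanY);
    den += (xs[i] - meanX) ** 2;
  }
  return den === 0 ? 0 : num / den;
}

// Example: heap growing 2 MB per minute across five samples
const samples = [0, 1, 2, 3, 4].map(i => ({
  timestamp: i * 60000,
  heapMB: 100 + i * 2,
}));
console.log(heapGrowthMBPerMinute(samples)); // 2
```

Alerting on a sustained slope (say, above 1 MB/minute for 10 minutes) tolerates the normal rise and fall between GC cycles.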

Graceful shutdown: Preventing connection leaks

// Input:  Server with DB and Redis connections that leak on restart
// Output: Graceful shutdown with comprehensive cleanup

const http = require('http');
const { Pool } = require('pg');
const Redis = require('ioredis');

const pgPool = new Pool({ max: 20 });
const redis = new Redis();
const server = http.createServer((req, res) => res.end('ok')); // placeholder handler
const activeConnections = new Set();

server.on('connection', (conn) => {
  activeConnections.add(conn);
  conn.on('close', () => activeConnections.delete(conn));
});

let shuttingDown = false;
async function gracefulShutdown(signal) {
  if (shuttingDown) return; // ignore repeated signals
  shuttingDown = true;
  console.log(`${signal} received — starting graceful shutdown`);
  server.close(); // stop accepting new connections
  for (const conn of activeConnections) conn.destroy();
  activeConnections.clear();
  await pgPool.end();
  await redis.quit();
  console.log('Graceful shutdown complete');
  process.exit(0);
}

process.on('SIGTERM', () => gracefulShutdown('SIGTERM'));
process.on('SIGINT', () => gracefulShutdown('SIGINT'));
server.listen(3000);

Anti-Patterns

Wrong: Global cache without eviction

// ❌ BAD — grows forever, guaranteed memory leak [src3, src4]
const userCache = {};

app.get('/user/:id', async (req, res) => {
  if (!userCache[req.params.id]) {
    userCache[req.params.id] = await db.getUser(req.params.id);
  }
  res.json(userCache[req.params.id]);
});
// After 100,000 unique users → cache holds all 100,000 in memory

Correct: Bounded cache with TTL

// ✅ GOOD — bounded size + automatic expiration [src3, src4]
import { LRUCache } from 'lru-cache';

const userCache = new LRUCache({
  max: 500,
  ttl: 1000 * 60 * 10,
  updateAgeOnGet: true,
});

app.get('/user/:id', async (req, res) => {
  let user = userCache.get(req.params.id);
  if (!user) {
    user = await db.getUser(req.params.id);
    userCache.set(req.params.id, user);
  }
  res.json(user);
});

Wrong: Event listeners in request handlers

// ❌ BAD — adds a new listener on every request, never removes [src2, src3]
const priceEmitter = new EventEmitter();

app.get('/price', (req, res) => {
  priceEmitter.on('update', (price) => {
    console.log('New price:', price);
  });
  res.json({ price: currentPrice });
});

Correct: Use once() or clean up listeners

// ✅ GOOD — once() auto-removes after first call [src2, src3]
app.get('/price', (req, res) => {
  priceEmitter.once('update', (price) => {
    console.log('New price:', price);
  });
  res.json({ price: currentPrice });
});

// ✅ ALSO GOOD — explicit cleanup
app.get('/price-stream', (req, res) => {
  const handler = (price) => res.write(`data: ${price}\n\n`);
  priceEmitter.on('update', handler);
  req.on('close', () => priceEmitter.off('update', handler));
});

Wrong: Timer per request without cleanup

// ❌ BAD — timer keeps running after response is sent [src2]
app.get('/status', (req, res) => {
  setInterval(() => {
    checkStatus().then(s => console.log(s));
  }, 5000);
  res.json({ status: 'ok' });
});

Correct: Clear timers on cleanup

// ✅ GOOD — timer cleared when request ends [src2]
app.get('/status', (req, res) => {
  const timer = setInterval(() => {
    checkStatus().then(s => console.log(s));
  }, 5000);
  res.on('finish', () => clearInterval(timer));
  res.on('close', () => clearInterval(timer));
  res.json({ status: 'ok' });
});

Common Pitfalls

Diagnostic Commands

# Start app with inspector for Chrome DevTools
node --inspect app.js

# Start with increased heap (stopgap, not a fix)
node --max-old-space-size=4096 app.js

# Expose GC for manual triggering (debug only)
node --expose-gc app.js
# Then in code: global.gc(); console.log(process.memoryUsage());

# Generate heap snapshot programmatically (Node 12+)
node -e "require('v8').writeHeapSnapshot()"

# Auto-capture snapshots before OOM (Node 20+)
node --heapsnapshot-near-heap-limit=3 --diagnostic-dir=/tmp/heapdumps app.js

# Monitor memory from outside the process
watch -n 5 'ps -o pid,rss,vsz,command -p $(pgrep -f "node app.js")'

# Clinic.js automated profiling
npx clinic heapprofiler -- node app.js
npx clinic doctor -- node app.js

# Find potential global variable leaks
grep -rn "global\.\|globalThis\." --include="*.js" --include="*.ts" src/

# Count setInterval vs clearInterval occurrences (totals should roughly match)
echo "setInterval: $(grep -rn 'setInterval' --include='*.js' src/ | wc -l)"
echo "clearInterval: $(grep -rn 'clearInterval' --include='*.js' src/ | wc -l)"

Version History & Compatibility

| Version | Memory Features | Key Changes |
|---------|-----------------|-------------|
| Node.js 23 (Current) | Latest V8 12.x | Continued improvements to GC diagnostics; --heapsnapshot-near-heap-limit stable [src1] |
| Node.js 22 LTS | V8 12.4 | v8.queryObjects() for counting live objects by prototype; V8 caches free pages instead of unmapping (higher RSS under sustained load, not a leak); stream.finished() perf regression fixed in 22.17.1 [src8] |
| Node.js 20 LTS | V8 11.3 | v8.writeHeapSnapshot() stable; --diagnostic-dir flag; AbortSignal.timeout() for timer cleanup; container-aware max heap size via cgroup limits [src1] |
| Node.js 18 LTS (EOL Apr 2025) | V8 10.2 | v8.writeHeapSnapshot() built-in; improved heap snapshot performance [src1] |
| Node.js 16 (EOL) | V8 9.4 | WeakRef and FinalizationRegistry stable [src1] |

When to Use / When Not to Use

| Use When | Don't Use When | Use Instead |
|----------|----------------|-------------|
| heapUsed grows monotonically over time | High CPU but stable memory | CPU profiling (clinic flame, 0x) |
| MaxListenersExceededWarning in logs | OOM kill on big data processing (expected) | Streaming / pagination |
| Process restarts due to FATAL ERROR heap | Slow response times without memory growth | Event loop profiling |
| Memory usage never decreases after load drops | Process uses lots of RSS but heapUsed is normal | V8 page caching (Node.js 22+) -- not a leak [src8] |
| Container OOM-kills despite reasonable heap limits | Memory spikes only during startup (one-time) | Increase container limit or --max-old-space-size |

Important Caveats

Related Units