- Top causes: (1) forgotten event listeners, (2) unbounded caches/Maps, (3) closures capturing large scope, (4) uncleared setInterval/setTimeout, (5) unhandled streams and unclosed connections.
- Fastest diagnosis: run node --inspect app.js, then open the Chrome DevTools Memory tab, take two heap snapshots 5 minutes apart, and compare by "# Delta" to find growing objects.
- Key signal: process.memoryUsage().heapUsed growing monotonically over hours indicates a leak even if GC is running. RSS growth alone can be misleading: Node.js 22 caches free V8 pages under sustained load (not a leak). [src8]
- Other tooling: heapdump, v8-profiler-next.
- Don't raise maxListeners to suppress MaxListenersExceededWarning -- it masks a real leak; find and remove the accumulating listeners instead. [src2, src3]
- v8.writeHeapSnapshot() pauses the event loop for seconds to minutes, proportional to heap size. [src7]
- Set --max-old-space-size to 75% of the cgroup memory limit -- Node.js 20+ is cgroup-aware, but V8 may still be OOM-killed before GC runs. [src1]
- WeakRef and FinalizationRegistry are not substitutes for explicit cleanup -- weak reference finalization timing is non-deterministic. [src1]
- Track heapUsed, not RSS. [src8]
- Avoid --expose-gc in production -- manual GC calls block the event loop and mask real allocation patterns. [src1]

| # | Cause | Likelihood | Signature | Fix |
|---|---|---|---|---|
| 1 | Forgotten event listeners | ~25% of cases | MaxListenersExceededWarning or growing listener count | Call removeListener()/off() in cleanup; use once() for one-shot events [src2, src3] |
| 2 | Unbounded cache / Map / object | ~20% of cases | Growing object count in heap snapshot | Use LRU cache (lru-cache) or WeakMap for object keys [src3, src4] |
| 3 | Closures capturing large scope | ~15% of cases | Retained size of closure objects growing in heap | Narrow closure scope; nullify large variables after use [src2, src4] |
| 4 | Uncleared setInterval/setTimeout | ~12% of cases | Timer count grows; callbacks reference stale objects | Call clearInterval()/clearTimeout() on cleanup; use AbortSignal in Node 20+ [src2, src3] |
| 5 | Unhandled/unconsumed streams | ~8% of cases | Internal buffer growing; pause() never triggered | Implement backpressure; call stream.destroy() on error [src4] |
| 6 | Global variable accumulation | ~7% of cases | Objects in global scope never freed | Use let/const in functions; avoid global.xxx patterns [src2, src3] |
| 7 | Unclosed DB/network connections | ~5% of cases | Connection pool exhaustion; socket handles grow | Use connection pooling with limits; close on shutdown [src3] |
| 8 | Circular references (pre-WeakRef) | ~3% of cases | Object pairs mutually referencing | Use WeakRef/WeakMap; break cycles explicitly [src4] |
| 9 | Large Buffer allocation | ~3% of cases | external memory growing in process.memoryUsage() | Use streaming I/O; avoid loading full files into memory [src4] |
| 10 | Promise chains never resolved | ~2% of cases | Pending promise count growing | Always resolve/reject; add .catch(); use AbortController for timeouts [src2] |
START
├── Is heapUsed growing steadily over time (monotonically)?
│ ├── YES → Likely a memory leak — continue diagnosis ↓
│ └── NO → Might be normal GC behavior; check if RSS is growing without heapUsed growth
│ └── RSS growing, heapUsed stable → V8 page caching (Node.js 22+), not a leak [src8]
│ └── Confirm: introduce brief idle periods → if RSS drops, it's V8 optimization
├── Do you see "MaxListenersExceededWarning" in logs?
│ ├── YES → Event listener leak. Find where listeners are added without removal [src2, src3]
│ └── NO ↓
├── Running in Docker/K8s and getting OOM-killed?
│ ├── YES → Check --max-old-space-size (set to 75% of container limit) [src1]
│ │ └── Already set? → True leak — continue diagnosis ↓
│ └── NO ↓
├── Does memory growth correlate with specific requests/events?
│ ├── YES → Trace the request handler
│ │ ├── Using in-memory cache? → Add eviction policy (LRU, TTL) [src3, src4]
│ │ ├── Creating timers per request? → Clear them in response/error handler [src2]
│ │ └── Opening connections per request? → Use connection pool [src3]
│ └── NO → Global or startup-time leak ↓
├── Take two heap snapshots 5 minutes apart → compare
│ ├── Growing object type identified → Trace by retainer path in DevTools [src7]
│ └── No clear growing type → Use allocation timeline profiling [src7]
└── Still unclear → Use Clinic.js HeapProfiler for automated analysis [src6]
Not all memory growth is a leak. V8's garbage collector works in cycles, so memory usage fluctuates. A true
leak shows monotonically growing heapUsed over extended periods. In Node.js 22+, RSS may grow
under sustained CPU load due to V8's page-caching optimization -- this is not a leak. [src1, src5, src8]
// Add this to your app for basic monitoring
setInterval(() => {
  const mem = process.memoryUsage();
  console.log(JSON.stringify({
    rss: Math.round(mem.rss / 1024 / 1024) + 'MB',
    heapTotal: Math.round(mem.heapTotal / 1024 / 1024) + 'MB',
    heapUsed: Math.round(mem.heapUsed / 1024 / 1024) + 'MB',
    external: Math.round(mem.external / 1024 / 1024) + 'MB',
  }));
}, 10000);
Verify: If heapUsed keeps growing over 30+ minutes under steady load → you have
a leak.
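As a sanity check of what "growing heapUsed" means, this tiny illustrative snippet (the sizes are arbitrary) retains references and shows heapUsed staying elevated; a leak is the same effect happening without you intending it:

```javascript
// Illustration: objects that stay referenced keep heapUsed elevated
// across GC cycles -- this is the signal a leak produces over hours.
const before = process.memoryUsage().heapUsed;
const retained = []; // stand-in for a leaking cache or listener list
for (let i = 0; i < 2000; i++) retained.push(new Array(1000).fill(i));
const grewMB = (process.memoryUsage().heapUsed - before) / 1024 / 1024;
console.log(`heapUsed grew ~${grewMB.toFixed(1)}MB while references are held`);
```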
Connect Chrome DevTools to your Node.js process for interactive heap analysis. [src7]
# Start your app with --inspect
node --inspect app.js
# Or attach to a running process
kill -USR1 <PID> # Linux/macOS — enables inspector on default port 9229
Then open chrome://inspect in Chrome → click "inspect" on your target → go to the
Memory tab.
Verify: DevTools connects and shows heap statistics.
The most effective technique: take two heap snapshots with time/load in between, then compare them. Force GC before each snapshot for accuracy. [src7]
1. Open Memory tab in Chrome DevTools
2. Force GC (click trash icon) before first snapshot
3. Select "Heap snapshot" → click "Take snapshot" → label it "Baseline"
4. Exercise your app (send requests, run operations for 5 minutes)
5. Force GC again → take another snapshot → label it "After load"
6. Select the second snapshot → change view to "Comparison"
7. Sort by "# Delta" (descending) → objects with positive delta are growing
8. Click on growing objects → inspect "Retainers" to find what holds references
Verify: You should see specific constructor names with positive deltas -- these are your leaking objects.
Search your codebase for the most common leak sources. [src2, src3]
# Event listeners added without removal
grep -rn "\.on(" --include="*.js" --include="*.ts" src/ | head -30
# Timers without cleanup
grep -rn "setInterval\|setTimeout" --include="*.js" --include="*.ts" src/
grep -rn "clearInterval\|clearTimeout" --include="*.js" --include="*.ts" src/
# Global variable assignments
grep -rn "global\.\|globalThis\." --include="*.js" --include="*.ts" src/
Verify: Each setInterval/.on() has a corresponding
clearInterval/.off() in cleanup code.
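The grep audit can be complemented at runtime: Node 17.3+ exposes the experimental process.getActiveResourcesInfo(), which lists the types of live handles (including Timeout). A sketch:

```javascript
// Runtime complement to the grep audit (Node 17.3+, experimental API):
// count live timer handles in the process.
const countTimers = () =>
  process.getActiveResourcesInfo().filter((r) => r === 'Timeout').length;

const before = countTimers();
const timer = setInterval(() => {}, 1000); // a would-be leak if never cleared
const during = countTimers();
clearInterval(timer);
console.log({ before, during }); // during is one higher than before
```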
The most common leak: adding listeners in request handlers without removing them. [src2, src3]
// ❌ WRONG — listener added per request, never removed
app.get('/data', (req, res) => {
  database.on('update', (data) => {
    res.write(data);
  });
});

// ✅ CORRECT — remove listener when response ends
app.get('/data', (req, res) => {
  const onUpdate = (data) => res.write(data);
  database.on('update', onUpdate);
  res.on('close', () => {
    database.off('update', onUpdate);
  });
});
Verify: emitter.listenerCount('eventName') stays constant under load.
In-memory caches that grow forever are a classic leak. [src3, src4]
// ❌ WRONG — Map grows forever
const cache = new Map();
function getData(key) {
  if (cache.has(key)) return cache.get(key);
  const data = fetchFromDB(key);
  cache.set(key, data);
  return data;
}

// ✅ CORRECT — LRU cache with max size
import { LRUCache } from 'lru-cache';
const cache = new LRUCache({ max: 1000, ttl: 1000 * 60 * 5 });
function getData(key) {
  if (cache.has(key)) return cache.get(key);
  const data = fetchFromDB(key);
  cache.set(key, data);
  return data;
}
Verify: cache.size stays bounded under sustained load.
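If pulling in lru-cache isn't an option, the bounded-size idea can be sketched with a plain Map, relying on the fact that Maps iterate in insertion order (size-only eviction; no TTL, unlike lru-cache):

```javascript
// Minimal bounded cache without the lru-cache dependency: evict the
// oldest entry (Maps iterate oldest-first) once the size cap is hit.
const MAX_ENTRIES = 1000;
const cache = new Map();

function setBounded(key, value) {
  if (cache.has(key)) cache.delete(key);     // re-insert to refresh recency
  cache.set(key, value);
  if (cache.size > MAX_ENTRIES) {
    cache.delete(cache.keys().next().value); // evict the oldest entry
  }
}

for (let i = 0; i < 5000; i++) setBounded(`k${i}`, i);
console.log(cache.size); // 1000 -- bounded regardless of input volume
```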
When manual analysis is difficult, Clinic.js can automatically diagnose memory issues. [src6]
# Install and run heap profiler
npm install -g clinic
clinic heapprofiler -- node app.js
# Run doctor for overall health check
clinic doctor -- node app.js
Verify: Clinic.js report highlights the source file and line where allocations are growing.
// Input: Express middleware leaking memory via event listeners
// Output: Properly cleaned up middleware
const express = require('express');
const EventEmitter = require('events');
const app = express();
const eventBus = new EventEmitter();
// ❌ LEAKY middleware — listeners accumulate
app.use((req, res, next) => {
  eventBus.on('notification', (msg) => {
    console.log(`Request ${req.url}: ${msg}`);
  });
  next();
});

// ✅ FIXED middleware — listeners cleaned up
app.use((req, res, next) => {
  const handler = (msg) => {
    console.log(`Request ${req.url}: ${msg}`);
  };
  eventBus.on('notification', handler);
  res.on('finish', () => eventBus.off('notification', handler));
  res.on('close', () => eventBus.off('notification', handler));
  next();
});
app.listen(3000);
// Input: Need to detect memory leaks in production without DevTools
// Output: Automated monitoring with alerts
const LEAK_THRESHOLD_MB = 50;
const CHECK_INTERVAL_MS = 30000;
const WINDOW_SIZE = 20;
const measurements = [];
function checkMemory() {
  const { heapUsed } = process.memoryUsage();
  const heapMB = heapUsed / 1024 / 1024;
  measurements.push({ heapMB, timestamp: Date.now() });
  if (measurements.length > WINDOW_SIZE) measurements.shift();

  if (measurements.length >= WINDOW_SIZE) {
    const oldest = measurements[0].heapMB;
    const newest = measurements[measurements.length - 1].heapMB;
    const growth = newest - oldest;
    if (growth > LEAK_THRESHOLD_MB) {
      console.error(`MEMORY LEAK DETECTED: heap grew ${growth.toFixed(1)}MB`);
      // writeHeapSnapshot pauses the event loop -- tolerable once a leak is confirmed
      const filename = `/tmp/heapdump-${Date.now()}.heapsnapshot`;
      require('v8').writeHeapSnapshot(filename);
      console.error(`Heap snapshot written to: ${filename}`);
    }
  }
}
const monitorTimer = setInterval(checkMemory, CHECK_INTERVAL_MS);
process.on('SIGTERM', () => clearInterval(monitorTimer));
// Input: Server with DB and Redis connections that leak on restart
// Output: Graceful shutdown with comprehensive cleanup
const http = require('http');
const { Pool } = require('pg');
const Redis = require('ioredis');
const pgPool = new Pool({ max: 20 });
const redis = new Redis();
const server = http.createServer(app); // app = your request handler (e.g. an Express app)
const activeConnections = new Set();
server.on('connection', (conn) => {
  activeConnections.add(conn);
  conn.on('close', () => activeConnections.delete(conn));
});

async function gracefulShutdown(signal) {
  console.log(`${signal} received — starting graceful shutdown`);
  server.close();     // stop accepting new connections
  for (const conn of activeConnections) conn.destroy();
  activeConnections.clear();
  await pgPool.end(); // drain the Postgres pool
  await redis.quit(); // flush pending Redis commands, then close
  console.log('Graceful shutdown complete');
  process.exit(0);
}
process.on('SIGTERM', () => gracefulShutdown('SIGTERM'));
process.on('SIGINT', () => gracefulShutdown('SIGINT'));
server.listen(3000);
// ❌ BAD — grows forever, guaranteed memory leak [src3, src4]
const userCache = {};
app.get('/user/:id', async (req, res) => {
  if (!userCache[req.params.id]) {
    userCache[req.params.id] = await db.getUser(req.params.id);
  }
  res.json(userCache[req.params.id]);
});
// After 100,000 unique users → cache holds all 100,000 in memory
// ✅ GOOD — bounded size + automatic expiration [src3, src4]
import { LRUCache } from 'lru-cache';
const userCache = new LRUCache({
  max: 500,
  ttl: 1000 * 60 * 10,
  updateAgeOnGet: true,
});
app.get('/user/:id', async (req, res) => {
  let user = userCache.get(req.params.id);
  if (!user) {
    user = await db.getUser(req.params.id);
    userCache.set(req.params.id, user);
  }
  res.json(user);
});
// ❌ BAD — adds a new listener on every request, never removes [src2, src3]
const priceEmitter = new EventEmitter();
app.get('/price', (req, res) => {
  priceEmitter.on('update', (price) => {
    console.log('New price:', price);
  });
  res.json({ price: currentPrice });
});

// ✅ GOOD — once() auto-removes after first call [src2, src3]
app.get('/price', (req, res) => {
  priceEmitter.once('update', (price) => {
    console.log('New price:', price);
  });
  res.json({ price: currentPrice });
});

// ✅ ALSO GOOD — explicit cleanup
app.get('/price-stream', (req, res) => {
  const handler = (price) => res.write(`data: ${price}\n\n`);
  priceEmitter.on('update', handler);
  req.on('close', () => priceEmitter.off('update', handler));
});
// ❌ BAD — timer keeps running after response is sent [src2]
app.get('/status', (req, res) => {
  setInterval(() => {
    checkStatus().then(s => console.log(s));
  }, 5000);
  res.json({ status: 'ok' });
});

// ✅ GOOD — timer cleared when request ends [src2]
app.get('/status', (req, res) => {
  const timer = setInterval(() => {
    checkStatus().then(s => console.log(s));
  }, 5000);
  res.on('finish', () => clearInterval(timer));
  res.on('close', () => clearInterval(timer));
  res.json({ status: 'ok' });
});
rss (Resident Set Size) includes stack, code
segments, and V8's reserved-but-unused heap. heapUsed is the actual live object memory.
Track heapUsed for leak detection, not rss. In Node.js 22+, V8 caches free
pages under sustained load, causing RSS to grow even without a leak. [src1, src5, src8]
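A quick way to see the fields side by side (the names are the real process.memoryUsage() keys; the values vary by process):

```javascript
// All values are in bytes.
const m = process.memoryUsage();
console.log({
  rss: m.rss,                   // whole-process resident memory
  heapTotal: m.heapTotal,       // V8 heap currently reserved
  heapUsed: m.heapUsed,         // live JS objects -- the leak-detection signal
  external: m.external,         // C++ objects bound to JS (Buffers, addons)
  arrayBuffers: m.arrayBuffers, // ArrayBuffer/Buffer portion of external
});
```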
Common pitfalls:

- Don't raise maxListeners to suppress MaxListenersExceededWarning; find and fix the leak. [src2, src3]
- Don't log process.memoryUsage() in production without alerting: memory stats are useless without automated threshold alerts. Set up monitoring that pages you when heapUsed exceeds a baseline. [src1]
- Use v8.writeHeapSnapshot() carefully -- never under high load. Use --heapsnapshot-near-heap-limit=N to auto-capture before OOM instead. [src7, src1]
- Don't treat --max-old-space-size as a "fix": increasing heap size delays the crash but doesn't fix the leak. Use it only as a stopgap while you find the root cause. [src1]
- Force GC (--expose-gc + global.gc()) before each snapshot; otherwise comparison data includes unreachable objects awaiting collection, producing false positives. [src7]

# Start app with inspector for Chrome DevTools
node --inspect app.js
# Start with increased heap (stopgap, not a fix)
node --max-old-space-size=4096 app.js
# Expose GC for manual triggering (debug only)
node --expose-gc app.js
# Then in code: global.gc(); console.log(process.memoryUsage());
# Generate heap snapshot programmatically (Node 12+)
node -e "require('v8').writeHeapSnapshot()"
# Auto-capture snapshots before OOM (Node 20+)
node --heapsnapshot-near-heap-limit=3 --diagnostic-dir=/tmp/heapdumps app.js
# Monitor memory from outside the process
watch -n 5 'ps -o pid,rss,vsz,command -p $(pgrep -f "node app.js")'
# Clinic.js automated profiling
npx clinic heapprofiler -- node app.js
npx clinic doctor -- node app.js
# Find potential global variable leaks
grep -rn "global\.\|globalThis\." --include="*.js" --include="*.ts" src/
# Count setInterval vs clearInterval occurrences
echo "setInterval: $(grep -rn 'setInterval' --include='*.js' src/ | wc -l)"
echo "clearInterval: $(grep -rn 'clearInterval' --include='*.js' src/ | wc -l)"
| Version | Memory Features | Key Changes |
|---|---|---|
| Node.js 23 (Current) | Latest V8 12.x | Continued improvements to GC diagnostics; --heapsnapshot-near-heap-limit stable [src1] |
| Node.js 22 LTS | V8 12.4 | v8.queryObjects() for counting live objects by prototype; V8 caches free pages instead of unmapping (higher RSS under sustained load, not a leak); stream.finished() perf regression fixed in 22.17.1 [src8] |
| Node.js 20 LTS | V8 11.3 | v8.writeHeapSnapshot() stable, --diagnostic-dir flag, AbortSignal.timeout() for timer cleanup, container-aware max heap size via cgroup limits [src1] |
| Node.js 18 LTS (EOL Apr 2025) | V8 10.2 | v8.writeHeapSnapshot() built-in, improved heap snapshot performance [src1] |
| Node.js 16 (EOL) | V8 9.4 | WeakRef and FinalizationRegistry stable [src1] |
| Use When | Don't Use When | Use Instead |
|---|---|---|
| heapUsed grows monotonically over time | High CPU but stable memory | CPU profiling (clinic flame, 0x) |
| MaxListenersExceededWarning in logs | OOM kill on big data processing (expected) | Streaming / pagination |
| Process restarts due to FATAL ERROR heap | Slow response times without memory growth | Event loop profiling |
| Memory usage never decreases after load drops | Process uses lots of RSS but heapUsed is normal | V8 page caching (Node.js 22+) -- not a leak [src8] |
| Container OOM-kills despite reasonable heap limits | Memory spikes only during startup (one-time) | Increase container limit or --max-old-space-size |
process.memoryUsage().external tracks C++ objects bound to JavaScript (Buffers, native
addons). A growing external value points to native memory leaks, not JavaScript heap leaks.
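A small sketch makes the split visible: allocating Buffers moves external and arrayBuffers while heapUsed barely changes (the sizes here are illustrative):

```javascript
// Buffer.alloc() backs its data with native memory, so growth shows up in
// external/arrayBuffers rather than heapUsed.
const before = process.memoryUsage();
const bufs = [];
for (let i = 0; i < 50; i++) bufs.push(Buffer.alloc(1024 * 1024)); // ~50MB native

const after = process.memoryUsage();
const extMB = (after.external - before.external) / 1048576;
const heapMB = (after.heapUsed - before.heapUsed) / 1048576;
console.log(`external grew ~${extMB.toFixed(1)}MB, heapUsed ~${heapMB.toFixed(1)}MB`);
```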
WeakRef and FinalizationRegistry (Node.js 16+) are not replacements for proper cleanup. Weak references help with caches, but finalization timing is non-deterministic.

Set --max-old-space-size to 75% of the container memory limit to give GC headroom. Node.js 20+ reads cgroup limits automatically, but explicit flags provide tighter control. [src1]

Clustering (the cluster module) forks workers -- each has its own heap. A leak in one worker doesn't affect the others, but all workers likely share the same leaky code.
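The WeakRef caveat can be illustrated with a minimal sketch of a WeakRef-keyed cache: deref() may start returning undefined after any GC, so the cache still needs its own eviction logic rather than relying on "automatic" cleanup.

```javascript
// Sketch: WeakRef values can disappear at any GC, so a WeakRef "cache"
// still needs explicit bookkeeping -- it is not automatic cleanup.
const cache = new Map(); // key -> WeakRef(value)

function put(key, value) { cache.set(key, new WeakRef(value)); }
function get(key) {
  const ref = cache.get(key);
  const value = ref && ref.deref();
  if (ref && value === undefined) cache.delete(key); // entry died -- timing was GC's call
  return value;
}

const session = { user: 'alice' };
put('s1', session);
console.log(get('s1') === session); // true while `session` is strongly referenced
```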