ECONNREFUSED means Node.js reached the target IP but nothing was listening on the specified port — the OS actively rejected the TCP connection with a RST packet. The five most common causes: (1) target service not running, (2) wrong host/port in the connection config, (3) Docker networking (localhost inside a container points to the container itself), (4) service bound to 127.0.0.1 instead of 0.0.0.0, (5) firewall blocking the port.

Run curl -v telnet://host:port or nc -zv host port to test raw TCP connectivity outside of Node.js. If this fails too, the problem is not in your code.

Watch out for localhost inside Docker containers — it resolves to the container's own loopback, not the host machine or other containers. Use the Docker service name from docker-compose.yml instead.

This applies to HTTP clients (axios, fetch, got, undici), database drivers (pg, mysql2, mongoose, ioredis), and raw TCP sockets.

Key points:
- localhost inside a container resolves to the container's own loopback — always use the Docker service name (e.g., db, redis) as the hostname. [src3, src4]
- Node.js 19+ prefers IPv6 first (autoSelectFamily) — services listening only on 127.0.0.1 (IPv4) may refuse connections from localhost if it resolves to ::1 (IPv6). Force IPv4 with host: '127.0.0.1'. [src1, src6]
- pool.connect() clients must always be released in a finally block — leaked clients exhaust the pool. [src7]

| # | Cause | Likelihood | Signature | Fix |
|---|---|---|---|---|
| 1 | Target service not running | ~30% of cases | connect ECONNREFUSED 127.0.0.1:PORT | Start the service: systemctl start postgresql, docker start container [src1, src2] |
| 2 | Wrong host or port | ~20% of cases | connect ECONNREFUSED WRONG_IP:PORT | Verify connection string matches actual service host:port [src2] |
| 3 | Docker: using localhost instead of service name | ~15% of cases | connect ECONNREFUSED 127.0.0.1:PORT from container | Use Docker Compose service name (e.g., db) as host [src3, src4] |
| 4 | Service bound to 127.0.0.1 only | ~10% of cases | Works locally, fails from another machine/container | Bind service to 0.0.0.0 or a specific IP [src3, src4] |
| 5 | Firewall blocking the port | ~8% of cases | nc times out or refuses from outside | Open port in firewall: ufw allow PORT or security group rule [src2] |
| 6 | Service still starting up | ~7% of cases | Error on first request, works seconds later | Add retry logic with exponential backoff; use Docker healthchecks [src5] |
| 7 | Connection pool exhausted | ~4% of cases | ECONNREFUSED after sustained load | Increase pool max; fix connection leaks (unreleased clients) [src7] |
| 8 | DNS resolution failure | ~3% of cases | ENOTFOUND or ECONNREFUSED on hostname | Check DNS; use the IP directly to test; verify /etc/hosts [src1] |
| 9 | Port already in use by another process | ~2% of cases | Target service fails to start silently | lsof -i :PORT or netstat -tuln \| grep PORT to find the conflict [src2] |
| 10 | SSL/TLS port mismatch | ~1% of cases | Connecting with HTTP to HTTPS port or vice versa | Match protocol to port (e.g., 443 = HTTPS, 5432 = plain Postgres) [src7] |
| 11 | IPv6/IPv4 mismatch (Node.js 19+) | ~1% of cases | connect ECONNREFUSED ::1:PORT | Force IPv4: host: '127.0.0.1' or set autoSelectFamily: false [src1, src6] |
START
├── Can you reach the service from the SAME machine (curl/nc)?
│ ├── NO → Service is not running or port is wrong
│ │ ├── Check: is the service process running? (ps aux | grep service)
│ │ │ ├── NOT RUNNING → Start it [src1]
│ │ │ └── RUNNING → Check which port it's listening on (netstat -tuln) [src2]
│ │ └── Port conflict? → Another process using the port (lsof -i :PORT)
│ └── YES → Network/config issue between Node.js and the service ↓
├── Is Node.js running inside a Docker container?
│ ├── YES → Are you using "localhost" or "127.0.0.1" as host?
│ │ ├── YES → Change to Docker service name from docker-compose.yml [src3, src4]
│ │ └── NO → Check both containers are on the same Docker network [src3]
│ └── NO ↓
├── Is Node.js running in Kubernetes?
│ ├── YES → Use service-name.namespace.svc.cluster.local as host [src3]
│ └── NO ↓
├── Does the service bind to 0.0.0.0 or 127.0.0.1?
│ ├── 127.0.0.1 → Change to 0.0.0.0 if external access needed [src3]
│ └── 0.0.0.0 ↓
├── Is there a firewall between Node.js and the service?
│ ├── YES → Open the port in firewall/security group [src2]
│ └── NO ↓
├── Does the error happen only on first connection attempt?
│ ├── YES → Service still starting. Add retry with backoff [src5]
│ └── NO ↓
├── Does the error show ::1 (IPv6) but service listens on 127.0.0.1 (IPv4)?
│ ├── YES → Force IPv4: host: '127.0.0.1' or autoSelectFamily: false [src1, src6]
│ └── NO ↓
└── Does it happen under load?
├── YES → Connection pool exhaustion. Increase max, fix leaks [src7]
└── NO → Check environment variables for host/port config
The ECONNREFUSED error always includes the target IP and port. This tells you exactly where Node.js tried to
connect. In Node.js 22+, check error.cause for chained errors. [src1, src8]
Error: connect ECONNREFUSED 127.0.0.1:5432
at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1607:16)
code: 'ECONNREFUSED',
syscall: 'connect',
address: '127.0.0.1',
port: 5432
Verify: Note the address and port — are they what you expect?
Before debugging your Node.js code, verify that the port is reachable at all. [src2]
# Test TCP connection (most reliable)
nc -zv hostname 5432
# Alternative with curl
curl -v telnet://hostname:5432
# Check what's listening on the port
# Linux
ss -tuln | grep 5432
lsof -i :5432
# macOS
lsof -nP -iTCP:5432 | grep LISTEN
# Windows
netstat -an | findstr "5432"
Verify: If nc fails too, the problem is at the OS/network level, not in
Node.js.
Check that the target service is actually started and accepting connections. [src1, src2]
# PostgreSQL
pg_isready -h localhost -p 5432
# MySQL
mysqladmin -h localhost -P 3306 ping
# Redis
redis-cli -h localhost -p 6379 ping
# Docker: check container status
docker ps | grep postgres
docker logs postgres-container
Verify: Service reports "accepting connections" or responds to ping.
The #1 Docker-specific cause: localhost inside a container refers to that container's own
loopback, not other containers or the host. [src3, src4]
# docker-compose.yml
services:
  app:
    build: .
    depends_on:
      db:
        condition: service_healthy
    environment:
      # Use the service name "db" as hostname, NOT localhost
      DATABASE_URL: postgres://user:pass@db:5432/mydb

  db:
    image: postgres:16
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 5s
      retries: 5
For connecting to a service on the host from inside a container, use host.docker.internal (on Linux, add extra_hosts: ["host.docker.internal:host-gateway"] to the service).
Verify: docker exec app-container nc -zv db 5432 succeeds.
Some services bind to 127.0.0.1 by default, making them only accessible from the same machine.
[src3, src4]
# PostgreSQL: check listen_addresses
grep listen_addresses /etc/postgresql/16/main/postgresql.conf
# Should be: listen_addresses = '*'
# Redis: check bind
grep "^bind" /etc/redis/redis.conf
# Node.js HTTP server: bind to 0.0.0.0
server.listen(3000, '0.0.0.0');
Verify: ss -tuln | grep PORT shows 0.0.0.0:PORT not
127.0.0.1:PORT.
Node.js 19+ defaults to IPv6-first. Services listening only on IPv4 127.0.0.1 may refuse
connections from localhost resolving to ::1. [src1, src6]
// Force IPv4 for database connections
const { Pool } = require('pg');
const pool = new Pool({
  host: '127.0.0.1', // Explicit IPv4, not 'localhost'
  port: 5432,
});

// Or disable autoSelectFamily for net.connect
const net = require('net');
const socket = net.connect({
  host: 'localhost',
  port: 5432,
  autoSelectFamily: false,
});
Verify: Change localhost to 127.0.0.1 — if the error disappears,
it was an IPv6/IPv4 mismatch.
For transient connection failures, add retry logic. [src5]
async function connectWithRetry(connectFn, options = {}) {
  const {
    maxRetries = 5, baseDelay = 1000, maxDelay = 30000,
    retryableErrors = ['ECONNREFUSED', 'ECONNRESET', 'ETIMEDOUT', 'ENOTFOUND'],
  } = options;
  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    try {
      return await connectFn();
    } catch (error) {
      if (!retryableErrors.includes(error.code) || attempt === maxRetries) throw error;
      const delay = Math.min(baseDelay * Math.pow(2, attempt - 1) + Math.random() * 1000, maxDelay);
      console.warn(`Attempt ${attempt}/${maxRetries} failed (${error.code}). Retry in ${Math.round(delay)}ms`);
      await new Promise(resolve => setTimeout(resolve, delay));
    }
  }
}
Verify: App logs show retry attempts and eventually connects when service becomes available.
If the service is running and bound correctly but is still unreachable from another machine, check the firewall. [src2]
# Linux: check ufw
sudo ufw status
sudo ufw allow 5432/tcp
# AWS: check Security Group
aws ec2 describe-security-groups --group-ids sg-xxx \
--query 'SecurityGroups[].IpPermissions[?ToPort==`5432`]'
Verify: nc -zv target-ip port succeeds from the Node.js host.
// Input: App crashing on ECONNREFUSED when DB is temporarily unavailable
// Output: Resilient connection with retry, health checks, graceful degradation
const { Pool } = require('pg');

class ResilientDatabase {
  constructor(connectionString, options = {}) {
    this.pool = new Pool({
      connectionString,
      max: options.maxConnections || 20,
      idleTimeoutMillis: options.idleTimeout || 30000,
      connectionTimeoutMillis: options.connectTimeout || 5000,
    });
    this.pool.on('error', (err) => {
      console.error('Unexpected pool error:', err.message);
    });
  }

  async connect(maxRetries = 5) {
    for (let i = 1; i <= maxRetries; i++) {
      try {
        const client = await this.pool.connect();
        try {
          await client.query('SELECT 1');
        } finally {
          client.release(); // always release, even if the probe query fails
        }
        console.log('Database connected');
        return;
      } catch (err) {
        const delay = Math.min(1000 * Math.pow(2, i - 1), 15000);
        console.warn(`DB attempt ${i}/${maxRetries} failed: ${err.code}. Retry in ${delay}ms`);
        if (i === maxRetries) throw err;
        await new Promise(r => setTimeout(r, delay));
      }
    }
  }

  async query(text, params) {
    try {
      return await this.pool.query(text, params);
    } catch (err) {
      if (err.code === 'ECONNREFUSED') {
        console.error('Database unavailable — attempting reconnect');
        await this.connect(3);
        return await this.pool.query(text, params);
      }
      throw err;
    }
  }

  async close() { await this.pool.end(); }
}

const db = new ResilientDatabase(process.env.DATABASE_URL);
await db.connect();
// Input: API calls failing intermittently with ECONNREFUSED/ECONNRESET
// Output: Axios client with configurable retry and backoff
const axios = require('axios');

function createResilientClient(baseURL, options = {}) {
  const client = axios.create({ baseURL, timeout: options.timeout || 10000 });
  const RETRYABLE_CODES = new Set([
    'ECONNREFUSED', 'ECONNRESET', 'ETIMEDOUT', 'ENOTFOUND', 'ENETUNREACH', 'EAI_AGAIN',
  ]);
  const RETRYABLE_STATUS = new Set([429, 502, 503, 504]);

  client.interceptors.response.use(null, async (error) => {
    const config = error.config;
    config.__retryCount = config.__retryCount || 0;
    const maxRetries = options.maxRetries || 3;
    const isRetryable =
      (error.code && RETRYABLE_CODES.has(error.code)) ||
      (error.response && RETRYABLE_STATUS.has(error.response.status));
    if (!isRetryable || config.__retryCount >= maxRetries) return Promise.reject(error);
    config.__retryCount++;
    const delay = Math.min(1000 * Math.pow(2, config.__retryCount - 1) + Math.random() * 500, 10000);
    console.warn(`Retry ${config.__retryCount}/${maxRetries} for ${config.url}`);
    await new Promise(r => setTimeout(r, delay));
    return client(config);
  });

  return client;
}

const api = createResilientClient('https://api.example.com', { maxRetries: 3 });
const { data } = await api.get('/users');
# Input: docker-compose.yml where app gets ECONNREFUSED
# Output: Properly networked services with health checks
version: '3.9'
services:
  app:
    build: .
    ports:
      - "3000:3000"
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy
    environment:
      DATABASE_URL: postgres://appuser:secret@postgres:5432/appdb
      REDIS_URL: redis://redis:6379
    networks:
      - app-network

  postgres:
    image: postgres:16-alpine
    environment:
      POSTGRES_USER: appuser
      POSTGRES_PASSWORD: secret
      POSTGRES_DB: appdb
    volumes:
      - pgdata:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U appuser -d appdb"]
      interval: 5s
      timeout: 3s
      retries: 10
      start_period: 10s
    networks:
      - app-network

  redis:
    image: redis:7-alpine
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
      timeout: 3s
      retries: 5
    networks:
      - app-network

volumes:
  pgdata:

networks:
  app-network:
    driver: bridge
// Input: API endpoint that may be temporarily unavailable
// Output: Resilient fetch wrapper using native fetch + AbortSignal.timeout
async function fetchWithRetry(url, options = {}) {
  const { maxRetries = 3, baseDelay = 1000, timeout = 5000, ...fetchOpts } = options;
  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    try {
      const response = await fetch(url, {
        ...fetchOpts,
        signal: AbortSignal.timeout(timeout),
      });
      return response;
    } catch (error) {
      const isRetryable = error.cause?.code === 'ECONNREFUSED'
        || error.cause?.code === 'ECONNRESET'
        || error.name === 'TimeoutError';
      if (!isRetryable || attempt === maxRetries) throw error;
      const delay = baseDelay * Math.pow(2, attempt - 1) + Math.random() * 500;
      await new Promise(r => setTimeout(r, delay));
    }
  }
}

const response = await fetchWithRetry('http://localhost:3000/api/health', {
  maxRetries: 5,
  timeout: 3000,
});
// BAD — localhost inside a container = the container itself [src3, src4]
const pool = new Pool({
  host: 'localhost', // Points to app container, not DB container
  port: 5432,
  database: 'mydb',
});
// Error: connect ECONNREFUSED 127.0.0.1:5432

// GOOD — use the service name from docker-compose.yml [src3, src4]
const pool = new Pool({
  host: process.env.DB_HOST || 'postgres',
  port: parseInt(process.env.DB_PORT || '5432'),
  database: 'mydb',
});
// BAD — app exits if DB isn't ready yet [src5]
const pool = new Pool({ connectionString: process.env.DATABASE_URL });
const result = await pool.query('SELECT NOW()');
// If DB is still starting -> ECONNREFUSED -> crash

// GOOD — wait for DB to become available [src5]
const pool = new Pool({ connectionString: process.env.DATABASE_URL });

async function waitForDB(maxRetries = 10) {
  for (let i = 1; i <= maxRetries; i++) {
    try {
      const result = await pool.query('SELECT NOW()');
      console.log('Connected at', result.rows[0].now);
      return;
    } catch (err) {
      if (i === maxRetries) throw new Error(`DB unavailable after ${maxRetries} attempts`);
      const delay = Math.min(1000 * Math.pow(2, i - 1), 15000);
      console.warn(`DB not ready (${err.code}), retry ${i}/${maxRetries} in ${delay}ms`);
      await new Promise(r => setTimeout(r, delay));
    }
  }
}

await waitForDB();
// BAD — different values needed per environment [src7]
const pool = new Pool({
  host: '192.168.1.50',
  port: 5432,
  user: 'admin',
  password: 'secret123',
  database: 'production_db',
});

// GOOD — works in dev, Docker, staging, production [src7]
const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
});
# BAD — depends_on only waits for container start, not service readiness [src3]
services:
  app:
    depends_on:
      - db   # db container starts, but PostgreSQL may not be ready yet

# GOOD — waits for PostgreSQL to actually accept connections [src3]
services:
  app:
    depends_on:
      db:
        condition: service_healthy
  db:
    image: postgres:16
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 5s
      retries: 5
      start_period: 10s
- localhost vs 127.0.0.1 vs ::1: on dual-stack systems (especially Node.js 19+), localhost may resolve to IPv6 ::1 first, while the service only listens on IPv4 127.0.0.1. Try 127.0.0.1 explicitly if localhost gives ECONNREFUSED. [src1, src2]
- depends_on doesn't mean "ready": depends_on only waits for the container to start, not for the service inside to be ready. Use condition: service_healthy with proper healthchecks. [src3, src4]
- Mapped vs internal ports: ports: "5433:5432" maps host port 5433 to container port 5432. Container-to-container communication uses the internal port (5432), not the mapped one. [src3]
- Missing env vars: DATABASE_URL is undefined because the .env file isn't loaded, or the variable isn't passed into Docker. Debug with console.log(process.env.DATABASE_URL). [src7]
- Leaked pool clients: if you call pool.connect() without releasing clients (.release()), the pool fills up and new connections hang or get refused. Always release in a finally block. [src7]
- WSL2: localhost forwarding between Windows and WSL can be unreliable. Use the WSL2 IP (hostname -I inside WSL) or host.docker.internal. [src4]
- Kubernetes: use service-name.namespace.svc.cluster.local as the host, not localhost. Kubernetes pods each have their own network namespace. [src3]
- fetch() errors: connection failures from fetch() wrap the original system error in error.cause. Check error.cause.code for ECONNREFUSED, not just error.code. [src1, src8]
# Test TCP connectivity
nc -zv hostname port
curl -v telnet://hostname:port --connect-timeout 5
# Check what's listening on a port
# Linux
ss -tuln | grep PORT
lsof -i :PORT
# macOS
lsof -nP -iTCP:PORT | grep LISTEN
# Windows
netstat -an | findstr "PORT"
# Check if service is running
systemctl status postgresql
docker ps | grep container-name
docker logs container-name --tail 50
# Test database connectivity
pg_isready -h hostname -p 5432
redis-cli -h hostname -p 6379 ping
# Docker: test from inside container
docker exec -it app-container sh -c "nc -zv db-service 5432"
# Docker: inspect network
docker network ls
docker network inspect bridge
docker inspect container-name --format '{{.NetworkSettings.Networks}}'
# Check DNS resolution
nslookup hostname
dig hostname
# Firewall check
sudo ufw status
sudo iptables -L -n | grep PORT
| Version | Status | Behavior | Key Changes |
|---|---|---|---|
| Node.js 23 (Current) | Active | require(esm) enabled by default | Networking behavior same as 22; require() can now load ESM [src8] |
| Node.js 22 LTS | Active LTS | ECONNREFUSED in error.cause chain | Improved error stacks with error.cause for chained errors; fetch() stable [src1] |
| Node.js 20 LTS | Maintenance | Stable | fetch() built-in with connection error codes; AbortSignal.timeout() [src1, src6] |
| Node.js 19 | EOL | IPv6-first default | net.setDefaultAutoSelectFamily(true) — may cause ECONNREFUSED on IPv4-only services [src6] |
| Node.js 18 LTS | EOL (Apr 2025) | Supported | Built-in fetch() (experimental); net.connect() improvements [src6] |
| Use When | Don't Use When | Use Instead |
|---|---|---|
| Error message contains ECONNREFUSED | Error is ECONNRESET during active request | Check keepalive timeouts, server crashes |
| Cannot connect to database/API at all | Error is ETIMEDOUT (connection hangs) | Check network routing, firewall rules |
| Error happens in Docker/container environments | Error is ENOTFOUND (DNS failure) | Check DNS configuration, hostname spelling |
| Error happens on app startup before any requests | Error is authentication failure after connecting | Check credentials, certificates |
| Error shows ::1 or IPv6 address | Error is EPERM or EACCES | Check file permissions, privileged ports (<1024) |
ECONNREFUSED is a TCP-level error — it means the target OS received the SYN packet and responded with RST. This is different from ETIMEDOUT (no response at all, possibly firewalled) and ENOTFOUND (DNS failure).

Kubernetes runs its own DNS (kube-dns or CoreDNS). Use service-name.namespace.svc.cluster.local as the host, not localhost.

Some database drivers (e.g., pg) have their own internal retry/reconnect logic. Check your driver's documentation before adding redundant retry logic.

ECONNREFUSED on ::1 (IPv6 localhost) when the service listens on 127.0.0.1 (IPv4) is a common trap on Node.js 19+. Force IPv4 with host: '127.0.0.1' or disable IPv6 resolution.

TLS failures after the TCP connection succeeds are a separate problem from ECONNREFUSED; they call for ssl: { rejectUnauthorized: false } (development only) or proper certificate configuration.

Connection errors are wrapped by fetch() in error.cause — check error.cause.code for the original system error code, not error.code directly.