How to Debug ECONNREFUSED Errors in Node.js

Type: Software Reference Confidence: 0.93 Sources: 8 Verified: 2026-02-23 Freshness: quarterly

TL;DR

ECONNREFUSED means the TCP connection was actively rejected: nothing is accepting connections at the address and port Node.js targeted. Confirm the service is running and listening on the expected host:port, use the Compose service name instead of localhost inside Docker, and add retry with backoff for services that are still starting.

Constraints

Quick Reference

| # | Cause | Likelihood | Signature | Fix |
|---|-------|------------|-----------|-----|
| 1 | Target service not running | ~30% of cases | connect ECONNREFUSED 127.0.0.1:PORT | Start the service: systemctl start postgresql, docker start container [src1, src2] |
| 2 | Wrong host or port | ~20% of cases | connect ECONNREFUSED WRONG_IP:PORT | Verify connection string matches actual service host:port [src2] |
| 3 | Docker: using localhost instead of service name | ~15% of cases | connect ECONNREFUSED 127.0.0.1:PORT from container | Use Docker Compose service name (e.g., db) as host [src3, src4] |
| 4 | Service bound to 127.0.0.1 only | ~10% of cases | Works locally, fails from another machine/container | Bind service to 0.0.0.0 or a specific IP [src3, src4] |
| 5 | Firewall blocking the port | ~8% of cases | nc times out or refuses from outside | Open port in firewall: ufw allow PORT or security group rule [src2] |
| 6 | Service still starting up | ~7% of cases | Error on first request, works seconds later | Add retry logic with exponential backoff; use Docker healthchecks [src5] |
| 7 | Connection pool exhausted | ~4% of cases | ECONNREFUSED after sustained load | Increase pool max; fix connection leaks (unreleased clients) [src7] |
| 8 | DNS resolution failure | ~3% of cases | ENOTFOUND or ECONNREFUSED on hostname | Check DNS; test with the IP directly; verify /etc/hosts [src1] |
| 9 | Port already in use by another process | ~2% of cases | Target service fails to start silently | lsof -i :PORT or netstat -tuln \| grep PORT to find the conflict [src2] |
| 10 | SSL/TLS port mismatch | ~1% of cases | Connecting with HTTP to HTTPS port or vice versa | Match protocol to port (e.g., 443 = HTTPS, 5432 = plain Postgres) [src7] |
| 11 | IPv6/IPv4 mismatch (Node.js 19+) | ~1% of cases | connect ECONNREFUSED ::1:PORT | Force IPv4: host: '127.0.0.1' or set autoSelectFamily: false [src1, src6] |

Decision Tree

START
├── Can you reach the service from the SAME machine (curl/nc)?
│   ├── NO → Service is not running or port is wrong
│   │   ├── Check: is the service process running? (ps aux | grep service)
│   │   │   ├── NOT RUNNING → Start it [src1]
│   │   │   └── RUNNING → Check which port it's listening on (netstat -tuln) [src2]
│   │   └── Port conflict? → Another process using the port (lsof -i :PORT)
│   └── YES → Network/config issue between Node.js and the service ↓
├── Is Node.js running inside a Docker container?
│   ├── YES → Are you using "localhost" or "127.0.0.1" as host?
│   │   ├── YES → Change to Docker service name from docker-compose.yml [src3, src4]
│   │   └── NO → Check both containers are on the same Docker network [src3]
│   └── NO ↓
├── Is Node.js running in Kubernetes?
│   ├── YES → Use service-name.namespace.svc.cluster.local as host [src3]
│   └── NO ↓
├── Does the service bind to 0.0.0.0 or 127.0.0.1?
│   ├── 127.0.0.1 → Change to 0.0.0.0 if external access needed [src3]
│   └── 0.0.0.0 ↓
├── Is there a firewall between Node.js and the service?
│   ├── YES → Open the port in firewall/security group [src2]
│   └── NO ↓
├── Does the error happen only on first connection attempt?
│   ├── YES → Service still starting. Add retry with backoff [src5]
│   └── NO ↓
├── Does the error show ::1 (IPv6) but service listens on 127.0.0.1 (IPv4)?
│   ├── YES → Force IPv4: host: '127.0.0.1' or autoSelectFamily: false [src1, src6]
│   └── NO ↓
└── Does it happen under load?
    ├── YES → Connection pool exhaustion. Increase max, fix leaks [src7]
    └── NO → Check environment variables for host/port config

Step-by-Step Guide

1. Read the full error message

The ECONNREFUSED error always includes the target IP and port. This tells you exactly where Node.js tried to connect. In Node.js 22+, check error.cause for chained errors. [src1, src8]

Error: connect ECONNREFUSED 127.0.0.1:5432
    at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1607:16)
    code: 'ECONNREFUSED',
    syscall: 'connect',
    address: '127.0.0.1',
    port: 5432

Verify: Note the address and port — are they what you expect?
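
The fields above can also be read programmatically when logging connection failures. A minimal sketch (describeConnError is our own helper name, not a Node.js API) that flattens the code, target, and any error.cause chain into a single log line:

```javascript
// Hypothetical helper: summarize a connection error plus its cause chain.
function describeConnError(err) {
  const parts = [err.code ?? err.name];
  if (err.address) parts.push(`target=${err.address}:${err.port}`);
  // Walk nested causes (populated via the `cause` option, Node.js 16.9+)
  for (let c = err.cause; c; c = c.cause) {
    parts.push(`caused by ${c.code ?? c.message}`);
  }
  return parts.join(' ');
}
```

Typical use: `catch (err) { console.error(describeConnError(err)); }` — this surfaces the real target even when a library wraps the original socket error.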

2. Test raw TCP connectivity

Before debugging your Node.js code, verify that the port is reachable at all. [src2]

# Test TCP connection (most reliable)
nc -zv hostname 5432

# Alternative with curl
curl -v telnet://hostname:5432

# Check what's listening on the port
# Linux
ss -tuln | grep 5432
lsof -i :5432

# macOS
lsof -nP -iTCP:5432 | grep LISTEN

# Windows
netstat -an | findstr "5432"

Verify: If nc fails too, the problem is at the OS/network level, not in Node.js.

3. Verify the service is running and listening

Check that the target service is actually started and accepting connections. [src1, src2]

# PostgreSQL
pg_isready -h localhost -p 5432

# MySQL
mysqladmin -h localhost -P 3306 ping

# Redis
redis-cli -h localhost -p 6379 ping

# Docker: check container status
docker ps | grep postgres
docker logs postgres-container

Verify: Service reports "accepting connections" or responds to ping.

4. Fix Docker networking issues

The #1 Docker-specific cause: localhost inside a container refers to that container's own loopback, not other containers or the host. [src3, src4]

# docker-compose.yml
services:
  app:
    build: .
    depends_on:
      db:
        condition: service_healthy
    environment:
      # Use the service name "db" as hostname, NOT localhost
      DATABASE_URL: postgres://user:pass@db:5432/mydb

  db:
    image: postgres:16
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 5s
      retries: 5

For connecting to a service on the host from inside a container, use host.docker.internal.
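
Note that on Linux, host.docker.internal is not defined by default; Docker Engine 20.10+ can map it with the host-gateway alias. A Compose sketch (the service name app is assumed):

```yaml
services:
  app:
    build: .
    extra_hosts:
      # Makes host.docker.internal resolve to the host's gateway IP on Linux
      - "host.docker.internal:host-gateway"
```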

Verify: docker exec app-container nc -zv db 5432 succeeds.

5. Check service binding address

Some services bind to 127.0.0.1 by default, making them only accessible from the same machine. [src3, src4]

# PostgreSQL: check listen_addresses
grep listen_addresses /etc/postgresql/16/main/postgresql.conf
# For external access it must be: listen_addresses = '*' (or a specific IP)

# Redis: check bind
grep "^bind" /etc/redis/redis.conf

# Node.js HTTP server: bind to 0.0.0.0
server.listen(3000, '0.0.0.0');

Verify: ss -tuln | grep PORT shows 0.0.0.0:PORT not 127.0.0.1:PORT.

6. Handle IPv6/IPv4 mismatch (Node.js 19+)

Node.js 19+ resolves localhost with IPv6 preferred by default, so connections may target ::1. A service listening only on IPv4 (127.0.0.1) refuses those connections, producing connect ECONNREFUSED ::1:PORT. [src1, src6]

// Force IPv4 for database connections
const pool = new Pool({
  host: '127.0.0.1',  // Explicit IPv4, not 'localhost'
  port: 5432,
});

// Or disable autoSelectFamily for net.connect
const socket = net.connect({
  host: 'localhost',
  port: 5432,
  autoSelectFamily: false,
});

Verify: Change localhost to 127.0.0.1 — if the error disappears, it was an IPv6/IPv4 mismatch.

7. Implement retry logic with exponential backoff

For transient connection failures, add retry logic. [src5]

async function connectWithRetry(connectFn, options = {}) {
  const { maxRetries = 5, baseDelay = 1000, maxDelay = 30000,
    retryableErrors = ['ECONNREFUSED', 'ECONNRESET', 'ETIMEDOUT', 'ENOTFOUND']
  } = options;

  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    try {
      return await connectFn();
    } catch (error) {
      if (!retryableErrors.includes(error.code) || attempt === maxRetries) throw error;
      const delay = Math.min(baseDelay * Math.pow(2, attempt - 1) + Math.random() * 1000, maxDelay);
      console.warn(`Attempt ${attempt}/${maxRetries} failed (${error.code}). Retry in ${Math.round(delay)}ms`);
      await new Promise(resolve => setTimeout(resolve, delay));
    }
  }
}

Verify: App logs show retry attempts and eventually connects when service becomes available.

8. Check firewall and security groups

If the service is running and bound correctly but is still unreachable from another machine, the network path is likely blocked by a firewall or cloud security group. [src2]

# Linux: check ufw
sudo ufw status
sudo ufw allow 5432/tcp

# AWS: check Security Group
aws ec2 describe-security-groups --group-ids sg-xxx \
  --query 'SecurityGroups[].IpPermissions[?ToPort==`5432`]'

Verify: nc -zv target-ip port succeeds from the Node.js host.

Code Examples

Database connection with retry and health monitoring

// Input:  App crashing on ECONNREFUSED when DB is temporarily unavailable
// Output: Resilient connection with retry, health checks, graceful degradation

const { Pool } = require('pg');

class ResilientDatabase {
  constructor(connectionString, options = {}) {
    this.pool = new Pool({
      connectionString,
      max: options.maxConnections || 20,
      idleTimeoutMillis: options.idleTimeout || 30000,
      connectionTimeoutMillis: options.connectTimeout || 5000,
    });
    this.pool.on('error', (err) => {
      console.error('Unexpected pool error:', err.message);
    });
  }

  async connect(maxRetries = 5) {
    for (let i = 1; i <= maxRetries; i++) {
      try {
        const client = await this.pool.connect();
        await client.query('SELECT 1');
        client.release();
        console.log('Database connected');
        return;
      } catch (err) {
        const delay = Math.min(1000 * Math.pow(2, i - 1), 15000);
        console.warn(`DB attempt ${i}/${maxRetries} failed: ${err.code}. Retry in ${delay}ms`);
        if (i === maxRetries) throw err;
        await new Promise(r => setTimeout(r, delay));
      }
    }
  }

  async query(text, params) {
    try {
      return await this.pool.query(text, params);
    } catch (err) {
      if (err.code === 'ECONNREFUSED') {
        console.error('Database unavailable — attempting reconnect');
        await this.connect(3);
        return await this.pool.query(text, params);
      }
      throw err;
    }
  }

  async close() { await this.pool.end(); }
}

const db = new ResilientDatabase(process.env.DATABASE_URL);
await db.connect();

HTTP client with smart retry for upstream APIs

// Input:  API calls failing intermittently with ECONNREFUSED/ECONNRESET
// Output: Axios client with configurable retry and backoff

const axios = require('axios');

function createResilientClient(baseURL, options = {}) {
  const client = axios.create({ baseURL, timeout: options.timeout || 10000 });

  const RETRYABLE_CODES = new Set([
    'ECONNREFUSED', 'ECONNRESET', 'ETIMEDOUT', 'ENOTFOUND', 'ENETUNREACH', 'EAI_AGAIN',
  ]);
  const RETRYABLE_STATUS = new Set([429, 502, 503, 504]);

  client.interceptors.response.use(null, async (error) => {
    const config = error.config;
    config.__retryCount = config.__retryCount || 0;
    const maxRetries = options.maxRetries || 3;

    const isRetryable =
      (error.code && RETRYABLE_CODES.has(error.code)) ||
      (error.response && RETRYABLE_STATUS.has(error.response.status));

    if (!isRetryable || config.__retryCount >= maxRetries) return Promise.reject(error);

    config.__retryCount++;
    const delay = Math.min(1000 * Math.pow(2, config.__retryCount - 1) + Math.random() * 500, 10000);
    console.warn(`Retry ${config.__retryCount}/${maxRetries} for ${config.url}`);
    await new Promise(r => setTimeout(r, delay));
    return client(config);
  });

  return client;
}

const api = createResilientClient('https://api.example.com', { maxRetries: 3 });
const { data } = await api.get('/users');

Docker Compose: Full-stack app with proper networking

# Input:  docker-compose.yml where app gets ECONNREFUSED
# Output: Properly networked services with health checks

version: '3.9'

services:
  app:
    build: .
    ports:
      - "3000:3000"
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy
    environment:
      DATABASE_URL: postgres://appuser:secret@postgres:5432/appdb
      REDIS_URL: redis://redis:6379
    networks:
      - app-network

  postgres:
    image: postgres:16-alpine
    environment:
      POSTGRES_USER: appuser
      POSTGRES_PASSWORD: secret
      POSTGRES_DB: appdb
    volumes:
      - pgdata:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U appuser -d appdb"]
      interval: 5s
      timeout: 3s
      retries: 10
      start_period: 10s
    networks:
      - app-network

  redis:
    image: redis:7-alpine
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
      timeout: 3s
      retries: 5
    networks:
      - app-network

volumes:
  pgdata:

networks:
  app-network:
    driver: bridge

Node.js native fetch with retry (Node.js 22+)

// Input:  API endpoint that may be temporarily unavailable
// Output: Resilient fetch wrapper using native fetch + AbortSignal.timeout

async function fetchWithRetry(url, options = {}) {
  const { maxRetries = 3, baseDelay = 1000, timeout = 5000, ...fetchOpts } = options;

  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    try {
      const response = await fetch(url, {
        ...fetchOpts,
        signal: AbortSignal.timeout(timeout),
      });
      return response;
    } catch (error) {
      // fetch wraps network failures: error.cause may be a single error or
      // an AggregateError (e.g. when both ::1 and 127.0.0.1 were tried)
      const causeCodes = error.cause?.errors?.map((e) => e.code) ?? [error.cause?.code];
      const isRetryable = causeCodes.includes('ECONNREFUSED')
        || causeCodes.includes('ECONNRESET')
        || error.name === 'TimeoutError';
      if (!isRetryable || attempt === maxRetries) throw error;
      const delay = baseDelay * Math.pow(2, attempt - 1) + Math.random() * 500;
      await new Promise(r => setTimeout(r, delay));
    }
  }
}

const response = await fetchWithRetry('http://localhost:3000/api/health', {
  maxRetries: 5,
  timeout: 3000,
});

Anti-Patterns

Wrong: Using localhost in Docker containers

// BAD — localhost inside a container = the container itself [src3, src4]
const pool = new Pool({
  host: 'localhost',    // Points to app container, not DB container
  port: 5432,
  database: 'mydb',
});
// Error: connect ECONNREFUSED 127.0.0.1:5432

Correct: Use Docker service name

// GOOD — use the service name from docker-compose.yml [src3, src4]
const pool = new Pool({
  host: process.env.DB_HOST || 'postgres',
  port: parseInt(process.env.DB_PORT || '5432', 10),
  database: 'mydb',
});

Wrong: Crashing on first connection failure

// BAD — app exits if DB isn't ready yet [src5]
const pool = new Pool({ connectionString: process.env.DATABASE_URL });
const result = await pool.query('SELECT NOW()');
// If DB is still starting -> ECONNREFUSED -> crash

Correct: Retry with backoff on startup

// GOOD — wait for DB to become available [src5]
const pool = new Pool({ connectionString: process.env.DATABASE_URL });

async function waitForDB(maxRetries = 10) {
  for (let i = 1; i <= maxRetries; i++) {
    try {
      const result = await pool.query('SELECT NOW()');
      console.log('Connected at', result.rows[0].now);
      return;
    } catch (err) {
      if (i === maxRetries) throw new Error(`DB unavailable after ${maxRetries} attempts`);
      const delay = Math.min(1000 * Math.pow(2, i - 1), 15000);
      console.warn(`DB not ready (${err.code}), retry ${i}/${maxRetries} in ${delay}ms`);
      await new Promise(r => setTimeout(r, delay));
    }
  }
}
await waitForDB();

Wrong: Hardcoding connection details

// BAD — different values needed per environment [src7]
const pool = new Pool({
  host: '192.168.1.50',
  port: 5432,
  user: 'admin',
  password: 'secret123',
  database: 'production_db',
});

Correct: Use environment variables

// GOOD — works in dev, Docker, staging, production [src7]
const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
});

Wrong: Using depends_on without health checks

# BAD — depends_on only waits for container start, not service readiness [src3]
services:
  app:
    depends_on:
      - db   # db container starts, but PostgreSQL may not be ready yet

Correct: Use depends_on with service_healthy condition

# GOOD — waits for PostgreSQL to actually accept connections [src3]
services:
  app:
    depends_on:
      db:
        condition: service_healthy
  db:
    image: postgres:16
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 5s
      retries: 5
      start_period: 10s

Common Pitfalls

Diagnostic Commands

# Test TCP connectivity
nc -zv hostname port
curl -v telnet://hostname:port --connect-timeout 5

# Check what's listening on a port
# Linux
ss -tuln | grep PORT
lsof -i :PORT

# macOS
lsof -nP -iTCP:PORT | grep LISTEN

# Windows
netstat -an | findstr "PORT"

# Check if service is running
systemctl status postgresql
docker ps | grep container-name
docker logs container-name --tail 50

# Test database connectivity
pg_isready -h hostname -p 5432
redis-cli -h hostname -p 6379 ping

# Docker: test from inside container
docker exec -it app-container sh -c "nc -zv db-service 5432"

# Docker: inspect network
docker network ls
docker network inspect bridge
docker inspect container-name --format '{{.NetworkSettings.Networks}}'

# Check DNS resolution
nslookup hostname
dig hostname

# Firewall check
sudo ufw status
sudo iptables -L -n | grep PORT

Version History & Compatibility

| Version | Status | Behavior | Key Changes |
|---------|--------|----------|-------------|
| Node.js 23 (Current) | Active | require(esm) enabled by default | Networking behavior same as 22; require() can now load ESM [src8] |
| Node.js 22 LTS | Active LTS | ECONNREFUSED in error.cause chain | Improved error stacks with error.cause for chained errors; fetch() stable [src1] |
| Node.js 20 LTS | Maintenance | Stable fetch() built-in | fetch() reports connection error codes; AbortSignal.timeout() [src1, src6] |
| Node.js 19 | EOL | IPv6-first default | net.setDefaultAutoSelectFamily(true); may cause ECONNREFUSED on IPv4-only services [src6] |
| Node.js 18 LTS | EOL (Apr 2025) | Experimental fetch() | Built-in fetch() (experimental); net.connect() improvements [src6] |

When to Use / When Not to Use

| Use When | Don't Use When | Use Instead |
|----------|----------------|-------------|
| Error message contains ECONNREFUSED | Error is ECONNRESET during active request | Check keepalive timeouts, server crashes |
| Cannot connect to database/API at all | Error is ETIMEDOUT (connection hangs) | Check network routing, firewall rules |
| Error happens in Docker/container environments | Error is ENOTFOUND (DNS failure) | Check DNS configuration, hostname spelling |
| Error happens on app startup before any requests | Error is authentication failure after connecting | Check credentials, certificates |
| Error shows ::1 or IPv6 address | Error is EPERM or EACCES | Check file permissions, privileged ports (<1024) |
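
The mapping above can serve as a first-pass classifier in application logs. A sketch (diagnoseHint is a hypothetical helper; the hint strings are ours, condensed from the table):

```javascript
// Map common connection error codes to a first diagnostic hint.
function diagnoseHint(code) {
  switch (code) {
    case 'ECONNREFUSED': return 'Nothing listening at target: check service, host, and port';
    case 'ECONNRESET':   return 'Peer closed mid-request: check keepalive timeouts, server crashes';
    case 'ETIMEDOUT':    return 'Connection hangs: check network routing, firewall rules';
    case 'ENOTFOUND':
    case 'EAI_AGAIN':    return 'DNS failure: check DNS configuration, hostname spelling';
    case 'EPERM':
    case 'EACCES':       return 'Permission issue: check file permissions, privileged ports (<1024)';
    default:             return 'Unrecognized code: read the full error and error.cause';
  }
}
```

Wiring this into a catch block turns a bare stack trace into an actionable log line without changing control flow.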

Important Caveats

Related Units