Key facts:
- Sizing formula: pool_size = (core_count * 2) + effective_spindle_count
- PostgreSQL caps total connections at max_connections (default: 100)
- Release connections in a finally block, defer statement, context manager, or try-with-resources -- never rely on garbage collection
- With multiple app instances: per_instance_pool = max_connections / instance_count
- Transaction-mode pooling breaks session features: SET, PREPARE, LISTEN/NOTIFY, or temporary tables outside a single transaction

| Pool Type | Language/Platform | Library | Default Pool Size | Key Config Parameters |
|---|---|---|---|---|
| HikariCP | Java | com.zaxxer.hikari | 10 | maximumPoolSize, minimumIdle, connectionTimeout, idleTimeout, maxLifetime |
| PgBouncer | PostgreSQL (proxy) | pgbouncer | 20 per db | pool_mode, default_pool_size, max_client_conn, server_idle_timeout |
| node-postgres | Node.js | pg.Pool | 10 | max, idleTimeoutMillis, connectionTimeoutMillis, allowExitOnIdle |
| SQLAlchemy | Python | create_engine() | 5 | pool_size, max_overflow, pool_recycle, pool_timeout, pool_pre_ping |
| database/sql | Go | stdlib | unlimited (2 idle) | SetMaxOpenConns, SetMaxIdleConns, SetConnMaxLifetime, SetConnMaxIdleTime |
| pgxpool | Go | github.com/jackc/pgx/v5 | 4 (min) | MaxConns, MinConns, MaxConnLifetime, MaxConnIdleTime, HealthCheckPeriod |
| c3p0 | Java | com.mchange.v2.c3p0 | 3 | maxPoolSize, minPoolSize, acquireIncrement, maxIdleTime |
| Drizzle/Neon | Node.js (serverless) | @neondatabase/serverless | 1 | Serverless per-request; use external pooler |
| DBCP2 | Java | commons-dbcp2 | 8 | maxTotal, maxIdle, minIdle, maxWaitMillis |
pool_size = (core_count * 2) + effective_spindle_count

- core_count: physical cores on the DB server (not hyperthreaded threads)
- effective_spindle_count: 0 if the dataset fits in RAM; 1 for SSD; actual spindle count for HDD
- Example: 4-core server + SSD = (4 * 2) + 1 = 9 (round up to 10)
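As a quick sketch, the formula above can be wrapped in a helper (the function name is illustrative, not from any library):

```python
def recommended_pool_size(core_count: int, effective_spindle_count: int) -> int:
    """HikariCP sizing heuristic: (physical cores * 2) + effective spindles."""
    return core_count * 2 + effective_spindle_count

# 4 physical cores + SSD (counts as 1 spindle)
print(recommended_pool_size(4, 1))  # -> 9; round up to 10 in practice
```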
START: How many app instances hit this database?
├── Single instance?
│ ├── YES → pool_size = (db_cores * 2) + spindle_count
│ └── NO ↓
├── Multiple instances (N)?
│ ├── YES → per_instance_pool = (db_cores * 2 + spindle_count) / N
│ └── Consider external pooler (PgBouncer) ↓
├── Using serverless / many short-lived processes?
│ ├── YES → Use PgBouncer in transaction mode between app and DB
│ └── NO ↓
├── Need prepared statements or LISTEN/NOTIFY?
│ ├── YES → PgBouncer session mode (or direct connection for those features)
│ └── NO → PgBouncer transaction mode (best efficiency)
└── DEFAULT → Start with pool_size=10, monitor pg_stat_activity, adjust
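The multi-instance branch of the tree above divides the total budget by the instance count; a minimal sketch (flooring the division and never going below one connection per instance is an assumption, not from the source):

```python
def per_instance_pool(db_cores: int, spindle_count: int, instance_count: int) -> int:
    """Split the (cores * 2) + spindles budget across N app instances."""
    total = db_cores * 2 + spindle_count
    return max(1, total // instance_count)

print(per_instance_pool(4, 1, 3))  # 9 total across 3 instances -> 3 each
```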
Determine the database server's core count and storage type. Apply the HikariCP formula, then divide by the number of application instances. [src1]
-- PostgreSQL: check current connection limits and usage
SELECT setting FROM pg_settings WHERE name = 'max_connections';
SELECT count(*) FROM pg_stat_activity;
Verify: SELECT count(*) FROM pg_stat_activity; → should show current connections well below max_connections.
Set the pool size, timeouts, and health checks in your application. Every pool library needs at minimum: max size, idle timeout, connection timeout, and max lifetime. [src2]
# Universal pool config checklist:
max_pool_size: 10 # From sizing formula
connection_timeout: 30s # How long to wait for a connection from the pool
idle_timeout: 10min # Remove idle connections after this duration
max_lifetime: 30min # Recycle connections to avoid stale state
validation_query: SELECT 1 # Health check (or pool_pre_ping equivalent)
leak_detection: 60s # Alert if connection held longer than this
Verify: Application logs should show pool initialized with expected size, no timeout warnings.
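Not every library ships leak detection like HikariCP's leak-detection-threshold. A minimal application-level sketch, assuming a pool object whose connect() returns connections that close() back to the pool (all names here are illustrative):

```python
import time
import warnings
from contextlib import contextmanager

@contextmanager
def tracked_connection(pool, leak_threshold_s=60.0):
    """Yield a pooled connection; warn if it was held past the threshold."""
    conn = pool.connect()
    start = time.monotonic()
    try:
        yield conn
    finally:
        conn.close()  # return to pool
        held = time.monotonic() - start
        if held > leak_threshold_s:
            warnings.warn(
                f"connection held {held:.1f}s (threshold {leak_threshold_s}s)"
            )
```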
Wrap every database operation in try/finally (or language equivalent) to guarantee connection release. [src4]
# Python example pattern
from contextlib import contextmanager

@contextmanager
def get_connection(pool):
    conn = pool.connect()
    try:
        yield conn
    finally:
        conn.close()  # Returns to pool, does not destroy
Verify: Under load, SELECT count(*) FROM pg_stat_activity WHERE state = 'idle'; should remain near your pool size.
Track pool metrics: active connections, idle connections, wait time, and timeouts. Set alerts for pool exhaustion. [src6]
-- PostgreSQL: connection breakdown by state
SELECT state, count(*) FROM pg_stat_activity
WHERE datname = 'your_db'
GROUP BY state ORDER BY count DESC;
Verify: Active connections stay within pool_size bounds; wait_count metrics stay near zero.
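On the application side, SQLAlchemy's QueuePool exposes size(), checkedin(), checkedout(), and overflow() counters. A sketch that turns them into a metrics dict plus an exhaustion check (the alert condition here is an assumption, not a library feature):

```python
def pool_snapshot(pool):
    """Summarize a SQLAlchemy QueuePool-like object into a metrics dict."""
    return {
        "size": pool.size(),               # configured permanent connections
        "checked_out": pool.checkedout(),  # currently in use
        "checked_in": pool.checkedin(),    # idle in the pool
        "overflow": pool.overflow(),       # temporary connections beyond size
    }

def is_exhausted(snapshot, max_overflow):
    """Alert condition: every permanent and overflow slot is in use."""
    return snapshot["checked_out"] >= snapshot["size"] + max_overflow
```

For example, pool_snapshot(engine.pool) on the engine from the SQLAlchemy recipe below could feed a periodic metrics exporter.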
// Input: PostgreSQL connection string via DATABASE_URL
// Output: Pooled query results with automatic connection management
const { Pool } = require('pg'); // [email protected]
const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
  max: 10,                        // Max connections in pool
  idleTimeoutMillis: 30000,       // Close idle connections after 30s
  connectionTimeoutMillis: 5000,  // Fail if no connection in 5s
  maxUses: 7500,                  // Recycle after N uses
});
// Single query (auto-acquires and releases)
const result = await pool.query('SELECT * FROM users WHERE id = $1', [userId]);
// Transaction (manual acquire and release)
const client = await pool.connect();
try {
  await client.query('BEGIN');
  await client.query('INSERT INTO orders (user_id) VALUES ($1)', [userId]);
  await client.query('COMMIT');
} catch (e) {
  await client.query('ROLLBACK');
  throw e;
} finally {
  client.release(); // CRITICAL: always release
}
# Input: PostgreSQL connection URL
# Output: Engine with QueuePool and pre-ping validation
from sqlalchemy import create_engine, text
engine = create_engine(
"postgresql+psycopg2://user:pass@host:5432/dbname",
pool_size=10, # Permanent connections
max_overflow=5, # Temporary overflow connections
pool_timeout=30, # Wait for available connection
pool_recycle=1800, # Recycle every 30 minutes
pool_pre_ping=True, # Validate before use
)
with engine.connect() as conn:
result = conn.execute(text("SELECT * FROM users WHERE id = :id"), {"id": 1})
rows = result.fetchall()
// Input: JDBC connection properties via application.yml
// Output: High-performance pool (Spring Boot default since 2.x)
// application.yml configuration:
// spring.datasource.hikari:
//   maximum-pool-size: 10
//   minimum-idle: 5
//   connection-timeout: 30000
//   idle-timeout: 600000
//   max-lifetime: 1800000
//   leak-detection-threshold: 60000
try (Connection conn = dataSource.getConnection();
     PreparedStatement ps = conn.prepareStatement(
         "SELECT * FROM users WHERE id = ?")) {
    ps.setInt(1, userId);
    try (ResultSet rs = ps.executeQuery()) {
        while (rs.next()) { /* process */ }
    }
} // Auto-returned to HikariCP pool
// Input: PostgreSQL DSN
// Output: Bounded connection pool (Go defaults are unbounded)
db, err := sql.Open("postgres", dsn)
if err != nil { log.Fatal(err) }
defer db.Close()
db.SetMaxOpenConns(10)
db.SetMaxIdleConns(5)
db.SetConnMaxLifetime(30 * time.Minute)
db.SetConnMaxIdleTime(5 * time.Minute)
rows, err := db.QueryContext(ctx,
    "SELECT id, name FROM users WHERE active = $1", true)
if err != nil { return err }
defer rows.Close() // Always close to release connection
// BAD -- new TCP + TLS + auth for EVERY query
async function getUser(id) {
  const client = new Client({ connectionString: process.env.DATABASE_URL });
  await client.connect();
  const result = await client.query('SELECT * FROM users WHERE id = $1', [id]);
  await client.end();
  return result.rows[0];
}
// GOOD -- pool created once, connections reused
const pool = new Pool({ connectionString: process.env.DATABASE_URL, max: 10 });
async function getUser(id) {
  const result = await pool.query('SELECT * FROM users WHERE id = $1', [id]);
  return result.rows[0];
}
# BAD -- connection leaked if execute() throws
conn = engine.raw_connection()
cursor = conn.cursor()
cursor.execute("INSERT INTO logs (msg) VALUES (%s)", ("test",))
conn.commit()
conn.close()
# GOOD -- connection always returned, even on exception
with engine.connect() as conn:
    conn.execute(text("INSERT INTO logs (msg) VALUES (:msg)"), {"msg": "test"})
    conn.commit()
# BAD -- 200 x 10 instances = 2000 connections (PG default max: 100)
spring.datasource.hikari.maximum-pool-size: 200
# GOOD -- 10 x 10 instances = 100 total (within max_connections)
spring.datasource.hikari.maximum-pool-size: 10
# BAD -- stale connections cause "server closed the connection"
engine = create_engine("postgresql://...", pool_size=10)
# GOOD -- validates before use, recycles every 30 minutes
engine = create_engine("postgresql://...", pool_size=10,
                       pool_pre_ping=True, pool_recycle=1800)
Common pitfalls and fixes:
- Connections never released. Fix: try/finally, context managers, defer, or try-with-resources. [src6]
- Total pool size across instances exceeding max_connections=100. Fix: per_instance = max_connections / instance_count. [src1]
- Stale connections closed by the server or network. Fix: set maxLifetime / pool_recycle shorter than network timeout. [src4]
- Requests waiting forever on an exhausted pool. Fix: set connectionTimeout to 5-30s. [src2]
- Unbounded Go database/sql defaults. Fix: SetMaxOpenConns(). [src5]
- Session state leaking through PgBouncer transaction mode. Fix: SET LOCAL. [src3]
- Pool exhaustion going unnoticed. Fix: monitor wait_count and p99 acquisition time. [src6]

# PostgreSQL: connection breakdown by state
psql -c "SELECT state, count(*) FROM pg_stat_activity GROUP BY state;"
# PostgreSQL: find long-running queries holding connections
psql -c "SELECT pid, now() - query_start AS duration, query, state
FROM pg_stat_activity WHERE state != 'idle' ORDER BY duration DESC LIMIT 10;"
# PostgreSQL: check max_connections setting
psql -c "SHOW max_connections;"
# PgBouncer: check pool status
psql -p 6432 pgbouncer -c "SHOW POOLS;"
# PgBouncer: check connection stats
psql -p 6432 pgbouncer -c "SHOW STATS;"
# HikariCP: check pool metrics (Spring Actuator)
curl http://localhost:8080/actuator/metrics/hikaricp.connections.active
curl http://localhost:8080/actuator/metrics/hikaricp.connections.idle
curl http://localhost:8080/actuator/metrics/hikaricp.connections.pending
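To act programmatically on the GROUP BY state breakdown above, a sketch that totals the (state, count) rows and flags pressure against max_connections; the 80% warning ratio is an assumption:

```python
def connection_pressure(state_counts, max_connections, warn_ratio=0.8):
    """state_counts: iterable of (state, count) rows, as returned by the
    pg_stat_activity GROUP BY state query. Returns (total, warn_flag)."""
    total = sum(count for _state, count in state_counts)
    return total, total >= max_connections * warn_ratio

total, warn = connection_pressure([("active", 12), ("idle", 70)], max_connections=100)
# total=82, warn=True at the default 80% threshold
```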
| Use When | Don't Use When | Use Instead |
|---|---|---|
| Multi-request application server (web app, API) | Single-shot CLI script that runs once and exits | Direct connection (no pool overhead) |
| Multiple concurrent database queries | Serverless function with 1 query per invocation | External pooler (PgBouncer) or serverless driver |
| Long-running application process | Database has a built-in proxy (Aurora Proxy, PlanetScale) | Use the managed pooler |
| Need to limit total connections to database | Embedded/mobile app with SQLite | SQLite uses file locking, not pooling |
| Multiple app instances sharing one DB | Redis or other non-relational store | Client library's built-in pool (e.g., ioredis) |
Notes:
- (cores * 2) + spindle_count is a starting point for OLTP workloads, not a universal law. OLAP or mixed workloads may need different sizing. Always benchmark.
- PgBouncer transaction mode breaks session-scoped features: PREPARE/EXECUTE, LISTEN/NOTIFY, SET (non-LOCAL), temp tables, advisory locks, and DECLARE CURSOR outside transactions.
- High minimumIdle values cause connection storms across a fleet on restart.