- One-command migration: pgloader mssql://user:pass@host/dbname postgresql://user:pass@host/dbname moves schema and data in a single run.
- After any bulk load, run setval() for every SERIAL/IDENTITY column — skipping this causes duplicate key errors on the next INSERT.
- MERGE: for PostgreSQL 13–14, rewrite SQL Server MERGE as INSERT ... ON CONFLICT. PostgreSQL 15 adds basic MERGE; PostgreSQL 17 adds WHEN NOT MATCHED BY SOURCE and RETURNING for full MERGE parity.
- NVARCHAR (UTF-16) maps to PostgreSQL TEXT (UTF-8). Character length semantics differ — always test LEN() vs LENGTH() comparisons and CHECK constraints after migration.

| SQL Server Syntax | PostgreSQL Equivalent | Example |
|---|---|---|
| IDENTITY(1,1) | GENERATED ALWAYS AS IDENTITY or SERIAL | id INTEGER GENERATED ALWAYS AS IDENTITY |
| NVARCHAR(MAX) | TEXT | description TEXT |
| DATETIME / DATETIME2 | TIMESTAMP / TIMESTAMPTZ | created_at TIMESTAMPTZ DEFAULT NOW() |
| BIT | BOOLEAN | is_active BOOLEAN DEFAULT TRUE |
| UNIQUEIDENTIFIER | UUID | id UUID DEFAULT gen_random_uuid() |
| MONEY | NUMERIC(19,4) | price NUMERIC(19,4) |
| NTEXT / NCHAR | TEXT / CHAR | notes TEXT |
| GETDATE() | NOW() or CURRENT_TIMESTAMP | SELECT NOW() |
| ISNULL(expr, alt) | COALESCE(expr, alt) | COALESCE(name, 'N/A') |
| TOP N | LIMIT N | SELECT * FROM t LIMIT 10 |
| SCOPE_IDENTITY() | RETURNING id or currval() | INSERT INTO t (name) VALUES ('x') RETURNING id |
| CHARINDEX(sub, str) | POSITION(sub IN str) | POSITION('abc' IN column_name) |
| LEN(str) | LENGTH(str) | LENGTH(column_name) |
| CONVERT(type, expr) | CAST(expr AS type) or ::type | created_at::DATE |
| [bracketed identifiers] | "quoted identifiers" | SELECT "Column Name" FROM "My Table" |
| DATEADD(day, N, date) | date + INTERVAL 'N days' | NOW() + INTERVAL '30 days' |
| DATEDIFF(day, d1, d2) | d2 - d1 or EXTRACT(EPOCH FROM ...) | end_date - start_date |
| OPENJSON() | JSON_TABLE() (PG 17+) or jsonb_to_recordset() | SELECT * FROM JSON_TABLE(data, '$.items[*]' ...) |
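For bulk schema conversion, the mapping table above can be captured in a small lookup. A minimal Python sketch (the helper name and the subset of types handled are illustrative, not part of any migration tool):

```python
import re

# Illustrative subset of the mapping table above; extend as needed.
TYPE_MAP = {
    "NVARCHAR(MAX)": "TEXT",
    "DATETIME2": "TIMESTAMP",
    "DATETIME": "TIMESTAMP",
    "BIT": "BOOLEAN",
    "UNIQUEIDENTIFIER": "UUID",
    "MONEY": "NUMERIC(19,4)",
    "NTEXT": "TEXT",
}

def convert_column_type(mssql_type: str) -> str:
    """Map a SQL Server type name to its PostgreSQL equivalent."""
    t = mssql_type.strip().upper()
    if t in TYPE_MAP:
        return TYPE_MAP[t]
    # NVARCHAR(50) -> VARCHAR(50), preserving the declared length
    m = re.fullmatch(r"NVARCHAR\((\d+)\)", t)
    if m:
        return f"VARCHAR({m.group(1)})"
    return t  # pass through types that need no change (INT, etc.)
```

Running the table's own examples through the helper confirms the mapping, e.g. `convert_column_type("NVARCHAR(MAX)")` yields `TEXT`.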
START
├── Database size < 10 GB and simple schema (few stored procedures)?
│ ├── YES → Use pgloader for one-command migration (see Step 3)
│ └── NO ↓
├── Heavy T-SQL stored procedures, triggers, CLR assemblies?
│ ├── YES → Use AWS SCT for schema conversion + manual PL/pgSQL rewrite
│ └── NO ↓
├── Need near-zero downtime (production migration)?
│ ├── YES → Use AWS DMS with CDC (Change Data Capture) for continuous replication
│ └── NO ↓
├── Migrating to AWS (Aurora PostgreSQL, RDS)?
│ ├── YES → Use AWS SCT + DMS combo (schema conversion + data replication)
│ └── NO ↓
├── Schema only (data migrated separately)?
│ ├── YES → Use sqlserver2pgsql to generate DDL scripts, then bulk-load data
│ └── NO ↓
└── DEFAULT → pgloader for schema + data, then manually convert stored procedures
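The decision tree above can be encoded as a plain function, useful when scripting migration planning across many databases. A sketch (the parameter names are our own, not from any tool):

```python
def choose_migration_tool(size_gb: float, simple_schema: bool,
                          heavy_tsql: bool, near_zero_downtime: bool,
                          target_aws: bool, schema_only: bool) -> str:
    """Walk the decision tree top to bottom; first match wins."""
    if size_gb < 10 and simple_schema:
        return "pgloader"
    if heavy_tsql:
        return "AWS SCT + manual PL/pgSQL rewrite"
    if near_zero_downtime:
        return "AWS DMS with CDC"
    if target_aws:
        return "AWS SCT + DMS"
    if schema_only:
        return "sqlserver2pgsql"
    return "pgloader + manual stored procedure conversion"
```

Like the tree, the function falls through to pgloader plus manual procedure conversion when no earlier branch applies.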
Inventory all objects that need migration: tables, views, stored procedures, triggers, functions, indexes, constraints, and CLR assemblies. Quantify T-SQL complexity to estimate manual conversion effort. [src5]
-- Count objects by type in SQL Server
SELECT type_desc, COUNT(*) AS object_count
FROM sys.objects
WHERE is_ms_shipped = 0
GROUP BY type_desc
ORDER BY object_count DESC;
-- List stored procedures with line counts
SELECT s.name AS schema_name, p.name AS proc_name,
LEN(m.definition) - LEN(REPLACE(m.definition, CHAR(10), '')) + 1 AS line_count
FROM sys.procedures p
JOIN sys.schemas s ON p.schema_id = s.schema_id
JOIN sys.sql_modules m ON p.object_id = m.object_id
ORDER BY line_count DESC;
-- Check database size
EXEC sp_spaceused;
-- Identify CLR assemblies (these need full rewrite)
SELECT a.name AS assembly_name, a.permission_set_desc,
m.assembly_class, m.assembly_method
FROM sys.assemblies a
LEFT JOIN sys.assembly_modules m ON a.assembly_id = m.assembly_id
WHERE a.is_user_defined = 1;
Verify: Review the object counts — note the number of stored procedures (these need manual review), CLR assemblies (these need full rewrite), and total database size.
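To turn the line counts into a rough effort estimate, a small triage helper can bucket procedures by size. The thresholds below are illustrative assumptions, not figures from the sources:

```python
def conversion_effort(proc_line_counts: dict) -> dict:
    """Rough triage of stored procedures by line count.

    Thresholds are assumptions: short procedures usually convert
    cleanly with tools; long ones tend to need manual PL/pgSQL work.
    """
    def bucket(lines: int) -> str:
        if lines <= 50:
            return "likely automatic"
        if lines <= 200:
            return "review after tool conversion"
        return "plan manual rewrite"
    return {name: bucket(n) for name, n in proc_line_counts.items()}
```

Feed it the `proc_name`/`line_count` pairs from the inventory query above to get a per-procedure plan.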
Export the SQL Server schema and convert data types, constraints, and indexes to PostgreSQL syntax. Use sqlserver2pgsql or AWS SCT for automated conversion, then review the output. [src2, src6]
# Option A: sqlserver2pgsql (converts a schema-only SQL script to PostgreSQL DDL)
# First export the schema as schema_dump.sql — e.g. in SSMS: Tasks > Generate Scripts (schema only)
perl sqlserver2pgsql.pl -f schema_dump.sql \
-b before_migration.sql \
-a after_migration.sql \
-u unsure_migration.sql
psql -h pghost -d targetdb -f before_migration.sql
# Option B: AWS SCT (GUI tool)
# Download AWS Schema Conversion Tool, connect to both databases,
# review conversion assessment, then apply converted schema
Verify: psql -d targetdb -c "\dt" shows all expected tables. Compare table count with SQL Server.
Load data from SQL Server into PostgreSQL. pgloader handles schema + data in a single command, including data type casting and error handling. pgloader can handle up to 3 TB/hour when properly tuned. [src3]
# Option A: pgloader (recommended for most migrations)
pgloader mssql://sa:password@sqlserver-host/sourcedb \
postgresql://pguser:pgpass@pg-host/targetdb
# pgloader with a load file for more control
pgloader migration.load
# migration.load contents:
# LOAD DATABASE
# FROM mssql://sa:password@sqlserver-host/sourcedb
# INTO postgresql://pguser:pgpass@pg-host/targetdb
# WITH include drop, create tables, create indexes,
# reset sequences, downcase identifiers
# CAST type datetime to timestamptz,
# type nvarchar to text,
# type money to numeric
# Option B: AWS DMS (for large databases or continuous replication)
# Create replication instance, source/target endpoints, and migration task
# Supports CDC for near-zero downtime cutover
Verify: Compare row counts across all tables between SQL Server and PostgreSQL.
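The row-count comparison can be automated: query each database for per-table counts (via pyodbc and psycopg, wiring not shown here), then diff the two maps. A minimal sketch of the pure comparison step:

```python
def diff_row_counts(mssql: dict, pg: dict) -> list:
    """Return human-readable mismatches between two {table: row_count} maps."""
    problems = []
    for table in sorted(set(mssql) | set(pg)):
        src, dst = mssql.get(table), pg.get(table)
        if src != dst:
            problems.append(f"{table}: mssql={src} postgres={dst}")
    return problems
```

An empty result means every table matched; tables present on only one side show up with a `None` count.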
Rewrite T-SQL stored procedures as PL/pgSQL functions. Automated tools handle 60–80% of cases; the rest needs manual conversion. [src4]
-- SQL Server T-SQL stored procedure
CREATE PROCEDURE dbo.GetActiveUsers
    @MinAge INT = 18,
    @Status NVARCHAR(20) = 'active'
AS
BEGIN
    SET NOCOUNT ON;
    SELECT u.user_id, u.name, u.email
    FROM dbo.users u
    WHERE u.age >= @MinAge AND u.status = @Status
    ORDER BY u.name;
END;
-- PostgreSQL PL/pgSQL equivalent
CREATE OR REPLACE FUNCTION get_active_users(
    p_min_age INTEGER DEFAULT 18,
    p_status TEXT DEFAULT 'active'
)
RETURNS TABLE(user_id INTEGER, name TEXT, email TEXT)
LANGUAGE plpgsql
AS $$
BEGIN
    RETURN QUERY
    SELECT u.user_id, u.name, u.email
    FROM users u
    WHERE u.age >= p_min_age AND u.status = p_status
    ORDER BY u.name;
END;
$$;
Verify: Call each converted function and compare output: SELECT * FROM get_active_users()
Update SQL queries embedded in your application code. Replace SQL Server-specific syntax with PostgreSQL equivalents. [src2, src5]
-- SQL Server:
SELECT TOP 10 u.name, ISNULL(u.nickname, u.name) AS display_name,
CONVERT(VARCHAR, u.created_at, 120) AS created_date
FROM [dbo].[users] u WITH (NOLOCK)
WHERE u.status = 'active'
ORDER BY u.created_at DESC;
-- PostgreSQL:
SELECT u.name, COALESCE(u.nickname, u.name) AS display_name,
TO_CHAR(u.created_at, 'YYYY-MM-DD HH24:MI:SS') AS created_date
FROM users u
WHERE u.status = 'active'
ORDER BY u.created_at DESC
LIMIT 10;
Verify: Run your application's test suite against PostgreSQL. Check logs for syntax errors.
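A regex-based first pass can mechanize the simplest substitutions above (TOP, ISNULL, GETDATE, NOLOCK, bracketed identifiers). This is a sketch, not a SQL parser; it will mangle complex queries (subqueries, strings containing keywords), so treat its output only as a starting point for review:

```python
import re

def rewrite_tsql(sql: str) -> str:
    """First-pass mechanical rewrite of common T-SQL idioms.

    Handles only simple single-SELECT statements; review everything.
    """
    out = sql
    # TOP N -> LIMIT N appended at the end (assumes one SELECT, no subqueries)
    m = re.search(r"\bTOP\s+(\d+)\s+", out, flags=re.IGNORECASE)
    if m:
        out = out[:m.start()] + out[m.end():]
        out = out.rstrip().rstrip(";") + f"\nLIMIT {m.group(1)};"
    out = re.sub(r"\bISNULL\s*\(", "COALESCE(", out, flags=re.IGNORECASE)
    out = re.sub(r"\bGETDATE\s*\(\s*\)", "NOW()", out, flags=re.IGNORECASE)
    out = re.sub(r"\bWITH\s*\(NOLOCK\)", "", out, flags=re.IGNORECASE)
    out = re.sub(r"\[([^\]]+)\]", r'"\1"', out)  # brackets -> double quotes
    return out
```

For example, `rewrite_tsql("SELECT TOP 10 * FROM t;")` produces the LIMIT form shown in the mapping table.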
Replace SQL Server drivers with PostgreSQL drivers in your application configuration. [src1]
# Node.js: mssql → pg
npm uninstall mssql && npm install pg
# Python: pyodbc → psycopg
pip uninstall pyodbc && pip install psycopg[binary]
# .NET: System.Data.SqlClient → Npgsql
dotnet remove package System.Data.SqlClient
dotnet add package Npgsql
# Java: mssql-jdbc → postgresql
# JDBC URL: jdbc:postgresql://host:5432/db
Verify: Application connects successfully. Run SELECT version(); through your app to confirm PostgreSQL connectivity.
Create the deferred indexes and constraints after the data load — deferring them during import maximizes bulk-load speed. [src6]
-- Apply indexes
CREATE INDEX idx_users_email ON users (email);
CREATE INDEX idx_users_status ON users (status) WHERE status = 'active';
CREATE UNIQUE INDEX idx_users_username ON users (LOWER(username));
-- Apply foreign key constraints
ALTER TABLE orders ADD CONSTRAINT fk_orders_users
FOREIGN KEY (user_id) REFERENCES users (user_id);
-- Reset sequences after data import (critical!)
SELECT setval('users_user_id_seq', (SELECT MAX(user_id) FROM users));
SELECT setval('orders_order_id_seq', (SELECT MAX(order_id) FROM orders));
Verify: SELECT indexname FROM pg_indexes WHERE schemaname = 'public'; lists all expected indexes.
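Rather than hand-writing one setval() per table, the statements can be generated. This sketch assumes PostgreSQL's default `<table>_<column>_seq` naming for serial/identity sequences; adjust for custom sequence names:

```python
def setval_statements(tables: dict) -> list:
    """Build one setval() per {table: identity_column} entry.

    COALESCE guards empty tables, where MAX() would return NULL.
    """
    return [
        f"SELECT setval('{table}_{col}_seq', "
        f"COALESCE((SELECT MAX({col}) FROM {table}), 1));"
        for table, col in tables.items()
    ]
```

The generated statements match the manual setval() calls shown above and can be piped straight into psql.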
After migration, run both databases in parallel during a validation period. Compare query results and performance before final cutover. [src7]
# Compare row counts between databases
psql -d pgdb -t -A -c "SELECT COUNT(*) FROM users WHERE status='active'" > pg_count.txt
sqlcmd -S sqlhost -d mydb -h -1 -Q "SELECT COUNT(*) FROM users WHERE status='active'" > mssql_count.txt
diff pg_count.txt mssql_count.txt
# Performance baseline comparison
psql -d pgdb -c "EXPLAIN ANALYZE SELECT * FROM orders WHERE created_at > NOW() - INTERVAL '30 days';"
Verify: All critical query results match. PostgreSQL query performance is within acceptable range of SQL Server baseline.
Full script: python-psycopg-migrating-a-sql-server-data-access-.py (47 lines)
# Input: Python app using pyodbc for SQL Server
# Output: Same app using psycopg for PostgreSQL
import psycopg
from contextlib import contextmanager
DATABASE_URL = "postgresql://pguser:pgpass@pghost:5432/mydb"
@contextmanager
def get_connection():
    conn = psycopg.connect(DATABASE_URL)
    try:
        yield conn
        conn.commit()
    except Exception:
        conn.rollback()
        raise
    finally:
        conn.close()
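One subtle driver difference worth handling during this conversion: pyodbc uses `?` positional placeholders while psycopg uses `%s`. A naive helper can convert query strings in bulk (it does not skip `?` inside string literals or comments, an assumption worth checking against your queries):

```python
def qmark_to_pyformat(sql: str) -> str:
    """Convert pyodbc-style '?' placeholders to psycopg's '%s'.

    Naive sketch: every '?' is replaced, including any inside
    string literals, so review queries containing literal '?'.
    """
    return sql.replace("?", "%s")
```

Parameter values themselves are passed unchanged, e.g. `cur.execute(qmark_to_pyformat(old_sql), params)`.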
Full script: node-js-pg-converting-express-routes-from-mssql-to.js (57 lines)
// Input: Express app using mssql package for SQL Server
// Output: Same routes using pg package for PostgreSQL
import pg from 'pg';
const { Pool } = pg;
const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
  max: 20,
  idleTimeoutMillis: 30000,
  connectionTimeoutMillis: 5000,
});
Full script: sql-common-t-sql-to-pl-pgsql-conversions.sql (54 lines)
-- Input: T-SQL patterns commonly found in SQL Server applications
-- Output: PostgreSQL PL/pgSQL equivalents
-- 1. Temporary tables: # prefix → CREATE TEMP TABLE
CREATE TEMP TABLE temp_users AS
SELECT * FROM users WHERE active = TRUE;
-- 2. String concatenation: + → ||
SELECT first_name || ' ' || last_name AS full_name FROM users;
// Input: .NET app using System.Data.SqlClient
// Output: Same app using Npgsql for PostgreSQL
using Npgsql;
// BEFORE: var conn = new SqlConnection("Server=sqlhost;Database=mydb;...");
// AFTER:
await using var conn = new NpgsqlConnection(
"Host=pghost;Database=mydb;Username=pguser;Password=pgpass");
await conn.OpenAsync();
// @param → $1 positional, or use NpgsqlParameter
await using var cmd = new NpgsqlCommand(
"SELECT * FROM users WHERE status = $1 LIMIT $2", conn);
cmd.Parameters.AddWithValue(NpgsqlTypes.NpgsqlDbType.Text, "active");
cmd.Parameters.AddWithValue(NpgsqlTypes.NpgsqlDbType.Integer, 10);
await using var reader = await cmd.ExecuteReaderAsync();
-- ❌ BAD — SQL Server bracket syntax doesn't work in PostgreSQL
SELECT [user_id], [first name], [Order Date]
FROM [dbo].[users]
WHERE [status] = 'active';
-- ✅ GOOD — PostgreSQL uses double quotes; prefer lowercase unquoted
SELECT user_id, "first name", "Order Date"
FROM users
WHERE status = 'active';
-- BEST — rename columns to avoid quoting entirely
ALTER TABLE users RENAME COLUMN "first name" TO first_name;
-- ❌ BAD — TOP is SQL Server syntax, not supported in PostgreSQL
SELECT TOP 10 * FROM orders ORDER BY created_at DESC;
-- ✅ GOOD — PostgreSQL uses LIMIT (or FETCH FIRST per SQL standard)
SELECT * FROM orders ORDER BY created_at DESC LIMIT 10;
-- ❌ BAD — ISNULL is SQL Server-specific
SELECT ISNULL(nickname, name) AS display_name FROM users;
-- ✅ GOOD — COALESCE is ANSI SQL, portable across all databases
SELECT COALESCE(nickname, name) AS display_name FROM users;
-- ❌ BAD — Table hints like NOLOCK are SQL Server-specific
SELECT * FROM orders WITH (NOLOCK) WHERE status = 'pending';
-- ✅ GOOD — PostgreSQL MVCC means readers never block writers
SELECT * FROM orders WHERE status = 'pending';
-- No hint needed — READ COMMITTED prevents dirty reads
-- ❌ BAD — SQL Server string concatenation and date functions
DECLARE @fullname NVARCHAR(200);
SET @fullname = @first + ' ' + @last;
SELECT * FROM events WHERE event_date > GETDATE();
-- ✅ GOOD — PostgreSQL standard syntax
-- In PL/pgSQL:
DECLARE v_fullname TEXT;
BEGIN
v_fullname := v_first || ' ' || v_last;
END;
-- In queries:
SELECT * FROM events WHERE event_date > NOW();
-- ❌ BAD — OPENJSON is SQL Server-specific; JSON_TABLE requires PG 17+
SELECT * FROM OPENJSON(@json_data);
-- ✅ GOOD (PG 13+) — Use jsonb_to_recordset for JSON array to table
SELECT * FROM jsonb_to_recordset('[{"id":1,"name":"Alice"}]'::jsonb)
AS t(id int, name text);
-- ✅ GOOD (PG 17+) — SQL/JSON JSON_TABLE for full OPENJSON parity
SELECT * FROM JSON_TABLE(
'{"users":[{"id":1,"name":"Alice"}]}'::jsonb,
'$.users[*]' COLUMNS (id int, name text)
);
- Case handling differs: SQL Server is case-insensitive by default, PostgreSQL lowercases unquoted identifiers and compares text case-sensitively. Fix: Use downcase identifiers in pgloader, or the CITEXT extension for case-insensitive text columns. [src2]
- SQL Server BIT stores 0/1, PostgreSQL BOOLEAN uses true/false. Inserting 1 into a BOOLEAN column fails. Fix: Cast during migration: CASE WHEN bit_col = 1 THEN TRUE ELSE FALSE END. [src2]
- Sequences are not advanced by bulk data loads. Fix: Run SELECT setval('table_column_seq', (SELECT MAX(column) FROM table)) for every serial/identity column. [src3]
- T-SQL procedures return result sets implicitly; PL/pgSQL functions need explicit RETURNS TABLE or RETURNS SETOF declarations. Fix: Rewrite procedures as functions with RETURN QUERY SELECT .... [src4]
- PostgreSQL has no table hints, so WITH (NOLOCK) hints must be removed. Fix: Remove all table hints; rely on PostgreSQL MVCC for non-blocking reads. [src5]
- In PostgreSQL, 'hello' || NULL returns NULL. Fix: Use CONCAT() or COALESCE() for null-safe concatenation. [src1]
- SQL Server's default schema is dbo.tablename; PostgreSQL uses public.tablename. Fix: Map dbo to public in your migration tool. [src7]
- SQL Server uses DATEADD(unit, N, date) and DATEDIFF(). PostgreSQL uses interval arithmetic: date + INTERVAL 'N days'. Fix: Replace all DATEADD/DATEDIFF calls with PostgreSQL interval syntax. [src2]
- PostgreSQL 17 adds ON_ERROR ignore to COPY, allowing it to skip malformed rows. On PG 13–16, a single bad row aborts the entire COPY. Fix: Pre-validate data or use pgloader, which handles errors natively. [src8]
# Check PostgreSQL version and connection
psql -c "SELECT version();"
# Compare table counts between SQL Server and PostgreSQL
# SQL Server:
sqlcmd -S sqlhost -d mydb -Q "SELECT COUNT(*) FROM information_schema.tables WHERE table_type='BASE TABLE'"
# PostgreSQL:
psql -d mydb -c "SELECT COUNT(*) FROM information_schema.tables WHERE table_schema='public' AND table_type='BASE TABLE'"
# Compare row counts for all tables
psql -d mydb -c "SELECT relname, n_live_tup FROM pg_stat_user_tables ORDER BY relname;"
# Check for sequence drift after migration
psql -d mydb -c "SELECT sequencename, last_value FROM pg_sequences WHERE schemaname = 'public';"
# Verify foreign key constraints
psql -d mydb -c "SELECT conname, conrelid::regclass, confrelid::regclass FROM pg_constraint WHERE contype = 'f';"
# Check for missing indexes (high seq_scan, low idx_scan)
psql -d mydb -c "SELECT relname, seq_scan, idx_scan FROM pg_stat_user_tables WHERE seq_scan > 1000 AND idx_scan < 10 ORDER BY seq_scan DESC;"
# Test application driver connection
python -c "import psycopg; print(psycopg.connect('postgresql://user:pass@host/db').execute('SELECT 1').fetchone())"
| Version | Status | Key Features for Migration | Migration Notes |
|---|---|---|---|
| PostgreSQL 17 (2024) | Current | JSON_TABLE, MERGE RETURNING, WHEN NOT MATCHED BY SOURCE, COPY ON_ERROR ignore, incremental backup | Best migration target — closest SQL Server feature parity |
| PostgreSQL 16 (2023) | Supported | Logical replication from standby, improved COPY perf | Good target; improved bulk load performance |
| PostgreSQL 15 (2022) | Supported | MERGE statement added, SECURITY INVOKER views | SQL Server MERGE can be ported directly |
| PostgreSQL 14 (2021) | Maintenance | Multirange types, RETURNING improvements | RETURNING improvements help IDENTITY migration |
| PostgreSQL 13 (2020) | EOL Nov 2025 | Parallel vacuum, deduplicated B-tree | No longer supported — upgrade to 16+ |
| SQL Server 2022 | Current | Ledger tables, enhanced query store | Latest source — full DMS/pgloader support |
| SQL Server 2019 | Mainstream | UTF-8 collation support | UTF-8 collation simplifies text migration |
| SQL Server 2016–2017 | Extended | JSON support, graph tables | JSON columns should migrate to JSONB |
| Use When | Don't Use When | Use Instead |
|---|---|---|
| Moving to open-source to reduce licensing costs (60–90% savings) | You rely on SQL Server-specific features (SSRS, SSAS, SSIS) | Keep SQL Server or migrate BI tools separately |
| Cloud migration to AWS/GCP (Aurora PostgreSQL, Cloud SQL) | Database is < 100 MB and rarely changes | Export to SQLite or CSV |
| Need advanced features (JSONB, full-text search, PostGIS) | Team has no PostgreSQL experience and tight deadline | Azure SQL Managed Instance (lift-and-shift) |
| Application uses standard SQL with minimal T-SQL | Hundreds of complex stored procedures with CLR assemblies | Hybrid: migrate tables, keep proc logic in app layer |
| Linux/container deployment | Windows-only infrastructure with no plans to change | SQL Server on Linux or Azure SQL |
| Need JSON_TABLE, advanced partitioning, or extensions | Regulatory requirement to use specific vendor DB | Stay on SQL Server or evaluate alternatives |
- IDENTITY columns (recommended over SERIAL) require PostgreSQL 10+. Always reset sequences after bulk data import using setval().
- @@IDENTITY and SCOPE_IDENTITY() have no direct equivalent — use INSERT ... RETURNING id or currval('sequence_name') instead.
- The MERGE statement is fully supported only in PostgreSQL 17+ (with WHEN NOT MATCHED BY SOURCE and RETURNING). PostgreSQL 15–16 support basic MERGE. For earlier versions, use INSERT ... ON CONFLICT.
- PostgreSQL has no cross-database queries (no USE database). Each connection targets one database. Use schemas or foreign data wrappers for cross-database access.
- NVARCHAR (UTF-16) maps to TEXT (UTF-8) in PostgreSQL. Character length semantics differ — test string length validations after migration.
- PostgreSQL 17's COPY ... ON_ERROR ignore option finally matches SQL Server's ability to skip bad rows during bulk import — use PG 17+ for the smoothest data migration experience.