How to Migrate from SQL Server to PostgreSQL

Type: Software Reference | Confidence: 0.92 | Sources: 8 | Verified: 2026-02-23 | Freshness: quarterly

TL;DR

Constraints

Quick Reference

| SQL Server Syntax | PostgreSQL Equivalent | Example |
|---|---|---|
| IDENTITY(1,1) | GENERATED ALWAYS AS IDENTITY or SERIAL | id INTEGER GENERATED ALWAYS AS IDENTITY |
| NVARCHAR(MAX) | TEXT | description TEXT |
| DATETIME / DATETIME2 | TIMESTAMP / TIMESTAMPTZ | created_at TIMESTAMPTZ DEFAULT NOW() |
| BIT | BOOLEAN | is_active BOOLEAN DEFAULT TRUE |
| UNIQUEIDENTIFIER | UUID | id UUID DEFAULT gen_random_uuid() |
| MONEY | NUMERIC(19,4) | price NUMERIC(19,4) |
| NTEXT / NCHAR | TEXT / CHAR | notes TEXT |
| GETDATE() | NOW() or CURRENT_TIMESTAMP | SELECT NOW() |
| ISNULL(expr, alt) | COALESCE(expr, alt) | COALESCE(name, 'N/A') |
| TOP N | LIMIT N | SELECT * FROM t LIMIT 10 |
| SCOPE_IDENTITY() | RETURNING id or currval() | INSERT INTO t (name) VALUES ('x') RETURNING id |
| CHARINDEX(sub, str) | POSITION(sub IN str) | POSITION('abc' IN column_name) |
| LEN(str) | LENGTH(str) | LENGTH(column_name) |
| CONVERT(type, expr) | CAST(expr AS type) or ::type | created_at::DATE |
| [bracketed identifiers] | "quoted identifiers" | SELECT "Column Name" FROM "My Table" |
| DATEADD(day, N, date) | date + INTERVAL 'N days' | NOW() + INTERVAL '30 days' |
| DATEDIFF(day, d1, d2) | d2 - d1 or EXTRACT(EPOCH FROM ...) | end_date - start_date |
| OPENJSON() | JSON_TABLE() (PG 17+) or jsonb_to_recordset() | SELECT * FROM JSON_TABLE(data, '$.items[*]' ...) |

Decision Tree

START
├── Database size < 10 GB and simple schema (few stored procedures)?
│   ├── YES → Use pgloader for one-command migration (see Step 3)
│   └── NO ↓
├── Heavy T-SQL stored procedures, triggers, CLR assemblies?
│   ├── YES → Use AWS SCT for schema conversion + manual PL/pgSQL rewrite
│   └── NO ↓
├── Need near-zero downtime (production migration)?
│   ├── YES → Use AWS DMS with CDC (Change Data Capture) for continuous replication
│   └── NO ↓
├── Migrating to AWS (Aurora PostgreSQL, RDS)?
│   ├── YES → Use AWS SCT + DMS combo (schema conversion + data replication)
│   └── NO ↓
├── Schema only (data migrated separately)?
│   ├── YES → Use sqlserver2pgsql to generate DDL scripts, then bulk-load data
│   └── NO ↓
└── DEFAULT → pgloader for schema + data, then manually convert stored procedures

Step-by-Step Guide

1. Audit the SQL Server database

Inventory all objects that need migration: tables, views, stored procedures, triggers, functions, indexes, constraints, and CLR assemblies. Quantify T-SQL complexity to estimate manual conversion effort. [src5]

-- Count objects by type in SQL Server
SELECT type_desc, COUNT(*) AS object_count
FROM sys.objects
WHERE is_ms_shipped = 0
GROUP BY type_desc
ORDER BY object_count DESC;

-- List stored procedures with line counts
SELECT s.name AS schema_name, p.name AS proc_name,
       LEN(m.definition) - LEN(REPLACE(m.definition, CHAR(10), '')) AS line_count
FROM sys.procedures p
JOIN sys.schemas s ON p.schema_id = s.schema_id
JOIN sys.sql_modules m ON p.object_id = m.object_id
ORDER BY line_count DESC;

-- Check database size
EXEC sp_spaceused;

-- Identify CLR assemblies (these need full rewrite)
SELECT a.name AS assembly_name, a.permission_set_desc,
       m.assembly_class, m.assembly_method
FROM sys.assemblies a
LEFT JOIN sys.assembly_modules m ON a.assembly_id = m.assembly_id
WHERE a.is_user_defined = 1;

Verify: Review the object counts — note the number of stored procedures (these need manual review), CLR assemblies (these need full rewrite), and total database size.
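
After the audit, it helps to bucket the stored-procedure inventory into effort categories before committing to a timeline. A minimal sketch: the line-count thresholds below are illustrative assumptions, not an official sizing model, so tune them to your team's experience.

```python
# Rough effort-estimation heuristic for the audit output.
# Thresholds are assumptions -- adjust for your codebase.

def classify_procedure(line_count: int) -> str:
    """Bucket a stored procedure by expected conversion effort."""
    if line_count <= 50:
        return "low"       # usually converts cleanly with tooling
    if line_count <= 300:
        return "medium"    # expect some manual PL/pgSQL rework
    return "high"          # plan a hand rewrite plus dedicated tests

def summarize(proc_line_counts: dict[str, int]) -> dict[str, int]:
    """Count procedures per effort bucket."""
    summary = {"low": 0, "medium": 0, "high": 0}
    for count in proc_line_counts.values():
        summary[classify_procedure(count)] += 1
    return summary

if __name__ == "__main__":
    # Hypothetical procedures with line counts from the audit query above
    procs = {"GetActiveUsers": 20, "RebuildReports": 450, "SyncOrders": 120}
    print(summarize(procs))  # {'low': 1, 'medium': 1, 'high': 1}
```

Feed it the `line_count` column from the stored-procedure query above to get a quick distribution of conversion effort.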

2. Convert the schema

Export the SQL Server schema and convert data types, constraints, and indexes to PostgreSQL syntax. Use sqlserver2pgsql or AWS SCT for automated conversion, then review the output. [src2, src6]

# Option A: sqlserver2pgsql (converts a SQL Server schema dump to PostgreSQL DDL)
# First generate a schema-only dump, e.g. via SSMS (Tasks > Generate Scripts)
# or Microsoft's mssql-scripter:
mssql-scripter -S localhost -d mydb -f schema_dump.sql

perl sqlserver2pgsql.pl -f schema_dump.sql \
  -b before_migration.sql \
  -a after_migration.sql \
  -u unsure_migration.sql

psql -h pghost -d targetdb -f before_migration.sql

# Option B: AWS SCT (GUI tool)
# Download AWS Schema Conversion Tool, connect to both databases,
# review conversion assessment, then apply converted schema

Verify: psql -d targetdb -c "\dt" shows all expected tables. Compare table count with SQL Server.

3. Migrate the data

Load data from SQL Server into PostgreSQL. pgloader handles schema and data in a single command, including data type casting and error handling, and can sustain up to 3 TB/hour when properly tuned. [src3]

# Option A: pgloader (recommended for most migrations)
pgloader mssql://sa:password@sqlserver-host/sourcedb \
         postgresql://pguser:pgpass@pg-host/targetdb

# pgloader with a load file for more control
pgloader migration.load
# migration.load contents:
# LOAD DATABASE
#   FROM mssql://sa:password@sqlserver-host/sourcedb
#   INTO postgresql://pguser:pgpass@pg-host/targetdb
# WITH include drop, create tables, create indexes,
#      reset sequences, downcase identifiers
# CAST type datetime to timestamptz,
#      type nvarchar to text,
#      type money to numeric;

# Option B: AWS DMS (for large databases or continuous replication)
# Create replication instance, source/target endpoints, and migration task
# Supports CDC for near-zero downtime cutover

Verify: Compare row counts across all tables between SQL Server and PostgreSQL.
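
The row-count comparison can be scripted rather than eyeballed. A minimal sketch: fetch per-table counts from each database (queries omitted here; in practice use pyodbc and psycopg), then diff the two mappings offline. The table names and counts below are illustrative.

```python
# Diff per-table row counts gathered from SQL Server and PostgreSQL.

def diff_row_counts(source: dict[str, int], target: dict[str, int]) -> list[str]:
    """Return human-readable mismatches between source and target row counts."""
    problems = []
    for table in sorted(set(source) | set(target)):
        src, dst = source.get(table), target.get(table)
        if src != dst:
            problems.append(f"{table}: sqlserver={src} postgres={dst}")
    return problems

if __name__ == "__main__":
    mssql = {"users": 1000, "orders": 5000}
    pg = {"users": 1000, "orders": 4998}
    print(diff_row_counts(mssql, pg))  # ['orders: sqlserver=5000 postgres=4998']
```

An empty result means every table's count matched; anything else names the tables to investigate before proceeding.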

4. Convert stored procedures and functions

Rewrite T-SQL stored procedures as PL/pgSQL functions. Automated tools handle 60–80% of cases; the rest needs manual conversion. [src4]

-- SQL Server T-SQL stored procedure
CREATE PROCEDURE dbo.GetActiveUsers
    @MinAge INT = 18,
    @Status NVARCHAR(20) = 'active'
AS
BEGIN
    SET NOCOUNT ON;
    SELECT u.user_id, u.name, u.email
    FROM dbo.users u
    WHERE u.age >= @MinAge AND u.status = @Status
    ORDER BY u.name;
END;

-- PostgreSQL PL/pgSQL equivalent
CREATE OR REPLACE FUNCTION get_active_users(
    p_min_age INTEGER DEFAULT 18,
    p_status TEXT DEFAULT 'active'
)
RETURNS TABLE(user_id INTEGER, name TEXT, email TEXT)
LANGUAGE plpgsql
AS $$
BEGIN
    RETURN QUERY
    SELECT u.user_id, u.name, u.email
    FROM users u
    WHERE u.age >= p_min_age AND u.status = p_status
    ORDER BY u.name;
END;
$$;

Verify: Call each converted function and compare its output with the original procedure: SELECT * FROM get_active_users();
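
For the parity check itself, a small helper that compares result sets order-insensitively avoids false mismatches when the two engines return rows in different orders. A sketch with illustrative row data; in practice fetch the rows via pyodbc (SQL Server) and psycopg (PostgreSQL).

```python
# Compare a T-SQL procedure's result set with its PL/pgSQL replacement,
# ignoring row order (ORDER BY ties can break differently per engine).

def results_match(rows_a, rows_b) -> bool:
    """True if both result sets contain the same rows, order-insensitive."""
    def normalize(rows):
        return sorted(tuple(r) for r in rows)
    return normalize(rows_a) == normalize(rows_b)
```

Run it per converted function, feeding identical parameters to both sides.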

5. Convert T-SQL syntax in application code

Update SQL queries embedded in your application code. Replace SQL Server-specific syntax with PostgreSQL equivalents. [src2, src5]

-- SQL Server:
SELECT TOP 10 u.name, ISNULL(u.nickname, u.name) AS display_name,
       CONVERT(VARCHAR, u.created_at, 120) AS created_date
FROM [dbo].[users] u WITH (NOLOCK)
WHERE u.status = 'active'
ORDER BY u.created_at DESC;

-- PostgreSQL:
SELECT u.name, COALESCE(u.nickname, u.name) AS display_name,
       TO_CHAR(u.created_at, 'YYYY-MM-DD HH24:MI:SS') AS created_date
FROM users u
WHERE u.status = 'active'
ORDER BY u.created_at DESC
LIMIT 10;

Verify: Run your application's test suite against PostgreSQL. Check logs for syntax errors.
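
Finding every embedded T-SQL construct by hand is error-prone, so a grep-style scanner helps triage application code before rewriting it. A hedged sketch: the pattern list below is a starting point drawn from the Quick Reference table, not an exhaustive T-SQL grammar.

```python
import re

# Flag SQL Server-specific syntax in application SQL strings and
# suggest the PostgreSQL replacement. Patterns are illustrative.
TSQL_PATTERNS = {
    r"\bTOP\s+\d+": "use LIMIT N",
    r"\bISNULL\s*\(": "use COALESCE(...)",
    r"\bGETDATE\s*\(\)": "use NOW()",
    r"\bLEN\s*\(": "use LENGTH(...)",
    r"WITH\s*\(\s*NOLOCK\s*\)": "remove hint; rely on MVCC",
    r"\[[A-Za-z_][^\]]*\]": 'use "double quotes" or lowercase identifiers',
}

def flag_tsql(sql: str) -> list[str]:
    """Return conversion hints for SQL Server-specific constructs in a query."""
    hints = []
    for pattern, hint in TSQL_PATTERNS.items():
        if re.search(pattern, sql, re.IGNORECASE):
            hints.append(hint)
    return hints
```

Point it at every SQL string extracted from the codebase; queries that come back with no hints are likely portable as-is.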

6. Update connection strings and drivers

Replace SQL Server drivers with PostgreSQL drivers in your application configuration. [src1]

# Node.js: mssql → pg
npm uninstall mssql && npm install pg

# Python: pyodbc → psycopg
pip uninstall pyodbc && pip install psycopg[binary]

# .NET: System.Data.SqlClient → Npgsql
dotnet remove package System.Data.SqlClient
dotnet add package Npgsql

# Java: mssql-jdbc → postgresql
# JDBC URL: jdbc:postgresql://host:5432/db

Verify: Application connects successfully. Run SELECT version(); through your app to confirm PostgreSQL connectivity.
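
One driver-swap gotcha worth calling out: pyodbc uses `?` (qmark) placeholders while psycopg uses `%s`. A naive first-pass converter is sketched below; it skips `?` inside single-quoted literals but does not handle dollar-quoting or SQL comments, so treat it as a mechanical aid, not a guaranteed transformation.

```python
# Convert pyodbc-style "?" placeholders to psycopg-style "%s",
# leaving question marks inside '...' string literals untouched.

def qmark_to_format(sql: str) -> str:
    """Rewrite ? placeholders as %s, ignoring ? inside single-quoted literals."""
    out, in_string = [], False
    for ch in sql:
        if ch == "'":
            in_string = not in_string
            out.append(ch)
        elif ch == "?" and not in_string:
            out.append("%s")
        else:
            out.append(ch)
    return "".join(out)
```

Example: `qmark_to_format("WHERE status = ? AND note = 'why?'")` leaves the literal `'why?'` intact while rewriting the placeholder.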

7. Apply indexes and constraints

Apply the indexes and constraints you deferred during data load; creating them after the bulk import is far faster than maintaining them row by row. [src6]

-- Apply indexes
CREATE INDEX idx_users_email ON users (email);
CREATE INDEX idx_users_status ON users (status) WHERE status = 'active';
CREATE UNIQUE INDEX idx_users_username ON users (LOWER(username));

-- Apply foreign key constraints
ALTER TABLE orders ADD CONSTRAINT fk_orders_users
    FOREIGN KEY (user_id) REFERENCES users (user_id);

-- Reset sequences after data import (critical!)
-- pg_get_serial_sequence resolves the sequence for both SERIAL and IDENTITY columns
SELECT setval(pg_get_serial_sequence('users', 'user_id'), (SELECT MAX(user_id) FROM users));
SELECT setval(pg_get_serial_sequence('orders', 'order_id'), (SELECT MAX(order_id) FROM orders));

Verify: SELECT indexname FROM pg_indexes WHERE schemaname = 'public'; lists all expected indexes.
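
Writing one setval() per table by hand scales poorly, so the reset statements can be generated. A sketch under the assumption that you already have the (table, identity column) pairs, e.g. from information_schema.columns; the pairs below are illustrative.

```python
# Generate sequence-reset statements for every identity/serial column.

def setval_statements(identity_columns: list[tuple[str, str]]) -> list[str]:
    """Emit one setval() per (table, column), resolving the sequence name."""
    template = (
        "SELECT setval(pg_get_serial_sequence('{t}', '{c}'), "
        "(SELECT COALESCE(MAX({c}), 1) FROM {t}));"
    )
    return [template.format(t=t, c=c) for t, c in identity_columns]

if __name__ == "__main__":
    for stmt in setval_statements([("users", "user_id"), ("orders", "order_id")]):
        print(stmt)
```

COALESCE(..., 1) guards against empty tables, where MAX() returns NULL and setval() would otherwise fail.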

8. Validate and run parallel operations

After migration, run both databases in parallel during a validation period. Compare query results and performance before final cutover. [src7]

# Compare row counts between databases
psql -d pgdb -t -A -c "SELECT COUNT(*) FROM users WHERE status='active'" > pg_count.txt
sqlcmd -S sqlhost -d mydb -h -1 -Q "SELECT COUNT(*) FROM users WHERE status='active'" > mssql_count.txt
diff pg_count.txt mssql_count.txt

# Performance baseline comparison
psql -d pgdb -c "EXPLAIN ANALYZE SELECT * FROM orders WHERE created_at > NOW() - INTERVAL '30 days';"

Verify: All critical query results match. PostgreSQL query performance is within acceptable range of SQL Server baseline.

Code Examples

Python/psycopg: Migrating a SQL Server data access layer to PostgreSQL

Full script: python-psycopg-migrating-a-sql-server-data-access-.py (47 lines)

# Input:  Python app using pyodbc for SQL Server
# Output: Same app using psycopg for PostgreSQL

import psycopg
from contextlib import contextmanager

DATABASE_URL = "postgresql://pguser:pgpass@pghost:5432/mydb"

@contextmanager
def get_connection():
    conn = psycopg.connect(DATABASE_URL)
    try:
        yield conn
        conn.commit()
    except Exception:
        conn.rollback()
        raise
    finally:
        conn.close()

Node.js/pg: Converting Express routes from mssql to pg

Full script: node-js-pg-converting-express-routes-from-mssql-to.js (57 lines)

// Input:  Express app using mssql package for SQL Server
// Output: Same routes using pg package for PostgreSQL

import pg from 'pg';
const { Pool } = pg;

const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
  max: 20,
  idleTimeoutMillis: 30000,
  connectionTimeoutMillis: 5000,
});

SQL: Common T-SQL to PL/pgSQL conversions

Full script: sql-common-t-sql-to-pl-pgsql-conversions.sql (54 lines)

-- Input:  T-SQL patterns commonly found in SQL Server applications
-- Output: PostgreSQL PL/pgSQL equivalents
-- 1. Temporary tables: # prefix → CREATE TEMP TABLE
CREATE TEMP TABLE temp_users AS
SELECT * FROM users WHERE active = TRUE;

-- 2. String concatenation: + → ||
SELECT first_name || ' ' || last_name AS full_name FROM users;

C#/.NET: Migrating from SqlClient to Npgsql

// Input:  .NET app using System.Data.SqlClient
// Output: Same app using Npgsql for PostgreSQL
using Npgsql;

// BEFORE: var conn = new SqlConnection("Server=sqlhost;Database=mydb;...");
// AFTER:
await using var conn = new NpgsqlConnection(
    "Host=pghost;Database=mydb;Username=pguser;Password=pgpass");
await conn.OpenAsync();

// @param → $1 positional, or use NpgsqlParameter
await using var cmd = new NpgsqlCommand(
    "SELECT * FROM users WHERE status = $1 LIMIT $2", conn);
cmd.Parameters.AddWithValue(NpgsqlTypes.NpgsqlDbType.Text, "active");
cmd.Parameters.AddWithValue(NpgsqlTypes.NpgsqlDbType.Integer, 10);
await using var reader = await cmd.ExecuteReaderAsync();

Anti-Patterns

Wrong: Using square brackets for identifiers in PostgreSQL

-- ❌ BAD — SQL Server bracket syntax doesn't work in PostgreSQL
SELECT [user_id], [first name], [Order Date]
FROM [dbo].[users]
WHERE [status] = 'active';

Correct: Use double quotes or lowercase unquoted identifiers

-- ✅ GOOD — PostgreSQL uses double quotes; prefer lowercase unquoted
SELECT user_id, "first name", "Order Date"
FROM users
WHERE status = 'active';

-- BEST — rename columns to avoid quoting entirely
ALTER TABLE users RENAME COLUMN "first name" TO first_name;

Wrong: Using TOP N instead of LIMIT

-- ❌ BAD — TOP is SQL Server syntax, not supported in PostgreSQL
SELECT TOP 10 * FROM orders ORDER BY created_at DESC;

Correct: Use LIMIT for row limiting

-- ✅ GOOD — PostgreSQL uses LIMIT (or FETCH FIRST per SQL standard)
SELECT * FROM orders ORDER BY created_at DESC LIMIT 10;

Wrong: Using ISNULL() for null coalescing

-- ❌ BAD — ISNULL is SQL Server-specific
SELECT ISNULL(nickname, name) AS display_name FROM users;

Correct: Use COALESCE (ANSI SQL standard)

-- ✅ GOOD — COALESCE is ANSI SQL, portable across all databases
SELECT COALESCE(nickname, name) AS display_name FROM users;

Wrong: Using NOLOCK hints in PostgreSQL

-- ❌ BAD — Table hints like NOLOCK are SQL Server-specific
SELECT * FROM orders WITH (NOLOCK) WHERE status = 'pending';

Correct: Rely on PostgreSQL's MVCC (no dirty reads by default)

-- ✅ GOOD — PostgreSQL MVCC means readers never block writers
SELECT * FROM orders WHERE status = 'pending';
-- No hint needed — READ COMMITTED prevents dirty reads

Wrong: Using + for string concatenation or GETDATE()

-- ❌ BAD — SQL Server string concatenation and date functions
DECLARE @fullname NVARCHAR(200);
SET @fullname = @first + ' ' + @last;
SELECT * FROM events WHERE event_date > GETDATE();

Correct: Use || for concatenation and NOW() for current time

-- ✅ GOOD — PostgreSQL standard syntax
-- In PL/pgSQL:
DECLARE v_fullname TEXT;
BEGIN
    v_fullname := v_first || ' ' || v_last;
END;

-- In queries:
SELECT * FROM events WHERE event_date > NOW();

Wrong: Using OPENJSON() in PostgreSQL (pre-17)

-- ❌ BAD — OPENJSON is SQL Server-specific; JSON_TABLE requires PG 17+
SELECT * FROM OPENJSON(@json_data);

Correct: Use jsonb functions or JSON_TABLE (PG 17+)

-- ✅ GOOD (PG 13+) — Use jsonb_to_recordset for JSON array to table
SELECT * FROM jsonb_to_recordset('[{"id":1,"name":"Alice"}]'::jsonb)
    AS t(id int, name text);

-- ✅ GOOD (PG 17+) — SQL/JSON JSON_TABLE for full OPENJSON parity
SELECT * FROM JSON_TABLE(
    '{"users":[{"id":1,"name":"Alice"}]}'::jsonb,
    '$.users[*]' COLUMNS (id int, name text)
);

Common Pitfalls

Diagnostic Commands

# Check PostgreSQL version and connection
psql -c "SELECT version();"

# Compare table counts between SQL Server and PostgreSQL
# SQL Server:
sqlcmd -S sqlhost -d mydb -Q "SELECT COUNT(*) FROM information_schema.tables WHERE table_type='BASE TABLE'"
# PostgreSQL:
psql -d mydb -c "SELECT COUNT(*) FROM information_schema.tables WHERE table_schema='public' AND table_type='BASE TABLE'"

# Compare row counts for all tables
psql -d mydb -c "SELECT relname, n_live_tup FROM pg_stat_user_tables ORDER BY relname;"

# Check for sequence drift after migration
psql -d mydb -c "SELECT sequencename, last_value FROM pg_sequences WHERE schemaname = 'public';"

# Verify foreign key constraints
psql -d mydb -c "SELECT conname, conrelid::regclass, confrelid::regclass FROM pg_constraint WHERE contype = 'f';"

# Check for missing indexes (high seq_scan, low idx_scan)
psql -d mydb -c "SELECT relname, seq_scan, idx_scan FROM pg_stat_user_tables WHERE seq_scan > 1000 AND idx_scan < 10 ORDER BY seq_scan DESC;"

# Test application driver connection
python -c "import psycopg; print(psycopg.connect('postgresql://user:pass@host/db').execute('SELECT 1').fetchone())"

Version History & Compatibility

| Version | Status | Key Features for Migration | Migration Notes |
|---|---|---|---|
| PostgreSQL 17 (2024) | Current | JSON_TABLE, MERGE RETURNING, WHEN NOT MATCHED BY SOURCE, COPY ON_ERROR ignore, incremental backup | Best migration target — closest SQL Server feature parity |
| PostgreSQL 16 (2023) | Supported | Logical replication from standby, improved COPY perf | Good target; improved bulk load performance |
| PostgreSQL 15 (2022) | Supported | MERGE statement added, SECURITY INVOKER views | SQL Server MERGE can be ported directly |
| PostgreSQL 14 (2021) | Maintenance | Multirange types, RETURNING improvements | RETURNING improvements help IDENTITY migration |
| PostgreSQL 13 (2020) | EOL Nov 2025 | Parallel vacuum, deduplicated B-tree | No longer supported — upgrade to 16+ |
| SQL Server 2022 | Current | Ledger tables, enhanced query store | Latest source — full DMS/pgloader support |
| SQL Server 2019 | Mainstream | UTF-8 collation support | UTF-8 collation simplifies text migration |
| SQL Server 2016–2017 | Extended | JSON support, graph tables | JSON columns should migrate to JSONB |

When to Use / When Not to Use

| Use When | Don't Use When | Use Instead |
|---|---|---|
| Moving to open-source to reduce licensing costs (60–90% savings) | You rely on SQL Server-specific features (SSRS, SSAS, SSIS) | Keep SQL Server or migrate BI tools separately |
| Cloud migration to AWS/GCP (Aurora PostgreSQL, Cloud SQL) | Database is < 100 MB and rarely changes | Export to SQLite or CSV |
| Need advanced features (JSONB, full-text search, PostGIS) | Team has no PostgreSQL experience and tight deadline | Azure SQL Managed Instance (lift-and-shift) |
| Application uses standard SQL with minimal T-SQL | Hundreds of complex stored procedures with CLR assemblies | Hybrid: migrate tables, keep proc logic in app layer |
| Linux/container deployment | Windows-only infrastructure with no plans to change | SQL Server on Linux or Azure SQL |
| Need JSON_TABLE, advanced partitioning, or extensions | Regulatory requirement to use specific vendor DB | Stay on SQL Server or evaluate alternatives |

Important Caveats

Related Units