CDC Implementation Patterns: Debezium vs Oracle GoldenGate vs Salesforce CDC vs SAP SLT

Type: ERP Integration | Systems: Debezium 3.4.x, GoldenGate 23ai, Salesforce CDC v66.0, SAP SLT DMIS 2018 SP4+ | Confidence: 0.86 | Sources: 8 | Verified: 2026-03-07 | Freshness: evolving

TL;DR

System Profile

This card compares four CDC implementations that serve fundamentally different architectural niches. Debezium is an open-source, log-based CDC platform built on Kafka Connect. Oracle GoldenGate is a commercial, high-performance replication engine optimized for Oracle environments. Salesforce CDC is a proprietary, platform-native event stream for Salesforce object changes. SAP SLT is a trigger-based replication server purpose-built for SAP HANA data provisioning.

| System | Role | Capture Method | Primary Target |
|---|---|---|---|
| Debezium 3.4.x | Open-source log-based CDC | Transaction log reading (WAL, binlog, redo) | Kafka topics (any downstream consumer) |
| Oracle GoldenGate 23ai | Commercial log-based replication | Oracle redo log mining / LogMiner | Oracle, PostgreSQL, Kafka, Pulsar |
| Salesforce CDC (Spring '26) | Platform-native change events | Internal Salesforce change tracking | Pub/Sub API (gRPC) or CometD subscribers |
| SAP SLT (DMIS 2018 SP4+) | Trigger-based replication server | Database triggers + logging tables | SAP HANA, SAP BW/4HANA |

API Surfaces & Capabilities

| CDC Tool | Protocol | Best For | Capture Granularity | Latency | Schema Evolution | Bulk Capable? |
|---|---|---|---|---|---|---|
| Debezium (Kafka Connect) | HTTP REST (mgmt), Kafka (data) | Heterogeneous DB-to-Kafka CDC | Row-level DML + DDL | Sub-second to seconds | Yes (schema registry) | Snapshots only |
| Debezium Server | HTTP REST (mgmt), configurable sinks | Non-Kafka deployments | Row-level DML + DDL | Sub-second to seconds | Yes | Snapshots only |
| GoldenGate Extract | CLI (GGSCI), REST Admin API | Oracle-to-Oracle, Oracle-to-Kafka | Row-level DML + DDL (committed txns) | 50-200ms | Manual DDL replication | Via initial load |
| GoldenGate OCI (Managed) | Web console, REST API | Cloud-native Oracle replication | Same as Extract | 50-200ms | Manual | Via initial load |
| Salesforce CDC | gRPC (Pub/Sub API), Bayeux (CometD) | Salesforce object change streaming | Field-level change events | Seconds | Automatic | No |
| SAP SLT | ABAP RFC, DB triggers | SAP-to-HANA real-time replication | Row-level via trigger capture | Seconds to minutes | Manual (transformation rules) | Initial load + delta |

Rate Limits & Quotas

Per-Tool Throughput Limits

| Limit Type | Debezium | GoldenGate | Salesforce CDC | SAP SLT |
|---|---|---|---|---|
| Max events/sec (sustained) | 100K+ (Kafka-dependent) | 100K+ (hardware-dependent) | Limited by delivery allocation | Trigger overhead-limited |
| Max concurrent connectors | Unlimited (Kafka Connect cluster) | Per-license (source+target CPUs) | N/A (platform-managed) | Per-configuration (1:N from DMIS 2018 SP4) |
| Max message/event size | Kafka default 1 MB (configurable) | Trail file segment 100 MB default | 1 MB per change event | RFC packet size (~64 KB default) |
| Replay/rewind window | Kafka retention (configurable) | Trail file retention (configurable) | 3 days (72 hours) | No replay |

Salesforce CDC-Specific Allocation

| Limit Type | Value | Window | Edition Differences |
|---|---|---|---|
| Event delivery (CometD + Pub/Sub) | 50,000 baseline | 24h | Enterprise: 50K; with add-on: up to 4.5M/month |
| Event publishing | 250,000 | Per hour | Internal (triggers, flows) |
| Concurrent CometD clients | 1,000 | Per org | Shared across all streaming subscriptions |
| Event replay retention | 3 days | Rolling | Same across all editions |
| Max entities tracked | Admin-selected | N/A | Standard + custom objects; not all standard objects supported |
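The delivery allocation is the limit most often hit in practice, and it can be watched programmatically. A minimal sketch, assuming the REST `/limits` response exposes a `DailyDeliveredPlatformEvents` object with `Max`/`Remaining` keys (the same endpoint shown later under Diagnostic Commands); the function name and warning threshold are illustrative:

```python
def delivery_headroom(limits: dict, warn_pct: float = 0.8) -> dict:
    """Compute consumed fraction of the daily event delivery allocation
    from a parsed /services/data/vXX.X/limits response."""
    alloc = limits["DailyDeliveredPlatformEvents"]
    used = alloc["Max"] - alloc["Remaining"]
    pct_used = used / alloc["Max"] if alloc["Max"] else 0.0
    return {
        "used": used,
        "remaining": alloc["Remaining"],
        "pct_used": round(pct_used, 3),
        "warn": pct_used >= warn_pct,  # alert before the allocation is exhausted
    }

# With the 50K baseline allocation and 7,500 events remaining:
sample = {"DailyDeliveredPlatformEvents": {"Max": 50000, "Remaining": 7500}}
status = delivery_headroom(sample)  # 42,500 used (85%), warn=True
```

Polling this once an hour and alerting at ~80% leaves time to trim subscribed objects before deliveries start being dropped.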

Debezium Scaling Reference

| Target Throughput | Kafka Connect Workers | Memory/Worker | Connector Tasks | Partitions |
|---|---|---|---|---|
| 1K events/sec | 1 | 2 GB | 1 | 3-6 |
| 10K events/sec | 2 | 4 GB | 2-4 | 12-24 |
| 100K events/sec | 4+ | 4-8 GB | 4-8 | 24-48 |
| 1M events/sec | 8+ | 8 GB | 8-16 | 48+ |
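The table reduces to a tier lookup: pick the smallest tier whose ceiling covers the target rate. A sketch (the numbers are the reference values above, not guarantees; benchmark before committing):

```python
# (events/sec ceiling, workers, memory per worker, tasks, partitions)
SIZING_TIERS = [
    (1_000,     "1",  "2 GB",   "1",    "3-6"),
    (10_000,    "2",  "4 GB",   "2-4",  "12-24"),
    (100_000,   "4+", "4-8 GB", "4-8",  "24-48"),
    (1_000_000, "8+", "8 GB",   "8-16", "48+"),
]

def suggest_sizing(target_events_per_sec: int) -> dict:
    """Return the smallest reference tier that covers the target rate."""
    for ceiling, workers, mem, tasks, partitions in SIZING_TIERS:
        if target_events_per_sec <= ceiling:
            return {"workers": workers, "memory_per_worker": mem,
                    "tasks": tasks, "partitions": partitions}
    raise ValueError("beyond reference table; benchmark your own workload")

# suggest_sizing(25_000) lands in the 100K tier: 4+ workers, 24-48 partitions
```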

Authentication

| CDC Tool | Auth Method | Credential Type | Rotation | Notes |
|---|---|---|---|---|
| Debezium | Database native | User/password, SSL certs, Kerberos | Manual | Needs SELECT + REPLICATION privileges |
| GoldenGate | Oracle DB auth + OS auth | DB credentials + trail encryption keys | Manual; OCI uses IAM | Requires DBMS_GOLDENGATE_AUTH grants |
| Salesforce CDC | OAuth 2.0 | JWT bearer or web server flow | Refresh tokens auto-rotate | Connected App required |
| SAP SLT | RFC user auth | ABAP system user + auth objects | SAP user management (SU01) | Requires RFC trust between source and SLT server |
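For Salesforce, the JWT bearer flow is the usual server-to-server choice. A minimal sketch of the claim set only; signing the assertion (RS256 with the Connected App's private key, e.g. via PyJWT) and POSTing it to `{audience}/services/oauth2/token` with `grant_type=urn:ietf:params:oauth:grant-type:jwt-bearer` are left out, and all names here are illustrative:

```python
import time

def build_jwt_claims(consumer_key: str, username: str,
                     audience: str = "https://login.salesforce.com",
                     lifetime_s: int = 180) -> dict:
    """Claims for the Salesforce OAuth 2.0 JWT bearer flow."""
    return {
        "iss": consumer_key,          # Connected App consumer key
        "sub": username,              # Salesforce username being authorized
        "aud": audience,              # login.salesforce.com or test.salesforce.com
        "exp": int(time.time()) + lifetime_s,  # short-lived assertion
    }
```

Keep the assertion lifetime short; the access token it yields is what the Pub/Sub API subscriber in the integration guide below actually passes in its `accesstoken` metadata.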

Authentication Gotchas

Constraints

Integration Pattern Decision Tree

START -- Need CDC for ERP integration
+-- What is the source system?
|   +-- Salesforce
|   |   +-- Salesforce CDC (only option for SF object changes)
|   |       +-- Need >50K events/day? -> Add-on license required
|   |       +-- Need >3-day replay? -> Store events externally
|   |       +-- Pub/Sub API (gRPC) preferred over CometD
|   +-- SAP S/4HANA or ECC -> target is SAP HANA?
|   |   +-- YES -> SAP SLT (vendor-supported path)
|   |   +-- NO (target is Kafka/warehouse) -> Debezium or ODP extraction
|   +-- Oracle Database
|   |   +-- Budget for licensing? (>$17.5K/processor)
|   |   |   +-- YES -> GoldenGate (lowest latency, best Oracle integration)
|   |   |   +-- NO -> Debezium with Oracle connector (LogMiner-based)
|   |   +-- Need bidirectional? -> GoldenGate (native support)
|   +-- PostgreSQL / MySQL / SQL Server / MongoDB
|   |   +-- Debezium (best open-source coverage)
|   +-- Multiple heterogeneous sources
|       +-- All open-source DBs -> Debezium (one platform)
|       +-- Mix Oracle + open-source -> Debezium for all or GoldenGate + Debezium
+-- What latency is required?
|   +-- <100ms -> GoldenGate (tuned) or Debezium (tuned Kafka)
|   +-- <1 second -> Debezium (default config)
|   +-- <10 seconds -> Any tool works
|   +-- Minutes acceptable -> SAP SLT or polling
+-- What is the budget?
    +-- Zero license cost -> Debezium (Apache 2.0)
    +-- Moderate ($10K-100K/yr) -> Debezium + Confluent Cloud or OCI GG BYOL
    +-- Enterprise (>$100K/yr) -> GoldenGate on-prem or OCI managed
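The source-system branch of the tree above reduces to a small dispatch function. A first-pass sketch (the string labels and parameter names are illustrative, not an API; real selection also weighs latency and budget):

```python
def recommend_cdc(source, target=None, oracle_license_budget=False):
    """Map a source system to a first-pass CDC recommendation,
    following the decision tree above."""
    source = source.lower()
    if source == "salesforce":
        return "Salesforce CDC (Pub/Sub API preferred over CometD)"
    if source in ("sap s/4hana", "sap ecc"):
        if target and "hana" in target.lower():
            return "SAP SLT (vendor-supported path)"
        return "Debezium or ODP extraction"  # Kafka/warehouse target
    if source == "oracle":
        if oracle_license_budget:
            return "Oracle GoldenGate"
        return "Debezium Oracle connector (LogMiner-based)"
    if source in ("postgresql", "mysql", "sql server", "mongodb"):
        return "Debezium"
    return "Debezium (broadest open-source coverage)"
```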

Quick Reference

| Capability | Debezium 3.4.x | Oracle GoldenGate 23ai | Salesforce CDC (Spring '26) | SAP SLT (DMIS 2018 SP4+) |
|---|---|---|---|---|
| Capture method | Log-based (WAL, binlog, redo) | Log-based (redo log mining) | Platform-internal tracking | Database triggers + logging tables |
| Latency | Sub-second (Kafka-dependent) | 50-200ms (tuned) | Seconds (near real-time) | Seconds to minutes |
| Delivery semantics | At-least-once (exactly-once in 3.3+) | At-least-once (trail-based) | At-least-once (replay-based) | At-least-once (logging table) |
| Supported sources | PostgreSQL, MySQL, SQL Server, MongoDB, Oracle, DB2, Cassandra, Vitess, Spanner | Oracle, PostgreSQL (23ai), MySQL, SQL Server, DB2 | Salesforce objects only | SAP ECC, S/4HANA, BW (ABAP-based) |
| Supported targets | Kafka, Pulsar, Kinesis, EventHubs, Redis, HTTP | Oracle, PostgreSQL, Kafka, Pulsar (23ai), Big Data targets | Pub/Sub API / CometD subscribers | SAP HANA, SAP BW/4HANA |
| Schema DDL capture | Yes (schema history topic) | Yes (manual DDL replication) | No (data changes only) | No (schema pre-configured) |
| Replay / rewind | Kafka retention (configurable) | Trail file retention | 3 days only | No replay |
| Initial load / snapshot | Yes (incremental snapshots in 3.0+) | Yes (initial load mode) | No (CDC only) | Yes (full load + delta) |
| Bidirectional sync | Not native | Yes (native bidirectional) | No | No |
| Licensing | Apache 2.0 (free) | ~$17.5K/processor or ~$0.32/OCPU-hr | Included (Enterprise+); add-on for high-volume | Included with S/4HANA license |
| Deployment complexity | Moderate (Kafka cluster required) | High (specialized DBA skills) | Low (platform-managed) | Moderate (ABAP + RFC config) |
| Exactly-once support | Yes (Kafka txns, v3.3+) | No | No | No |

Step-by-Step Integration Guide

1. Configure Debezium CDC connector for PostgreSQL

Register a Debezium PostgreSQL connector via Kafka Connect REST API. Requires a running Kafka Connect cluster with the Debezium PostgreSQL plugin installed. [src1]

# Input:  Running Kafka Connect cluster, PostgreSQL with wal_level=logical
# Output: Connector registered, change events flowing to Kafka topic

curl -X POST http://localhost:8083/connectors \
  -H "Content-Type: application/json" \
  -d '{
    "name": "erp-postgres-cdc",
    "config": {
      "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
      "database.hostname": "erp-db.example.com",
      "database.port": "5432",
      "database.user": "debezium_repl",
      "database.password": "${file:/secrets/pg-password.txt:password}",
      "database.dbname": "erp_production",
      "topic.prefix": "erp.cdc",
      "table.include.list": "public.orders,public.invoices,public.customers",
      "plugin.name": "pgoutput",
      "slot.name": "debezium_erp_slot",
      "publication.name": "dbz_erp_publication",
      "snapshot.mode": "initial",
      "max.batch.size": "4096",
      "max.queue.size": "16384",
      "poll.interval.ms": "100",
      "heartbeat.interval.ms": "10000",
      "schema.history.internal.kafka.bootstrap.servers": "kafka:9092",
      "schema.history.internal.kafka.topic": "erp.schema-history"
    }
  }'

Verify: curl http://localhost:8083/connectors/erp-postgres-cdc/status | jq '.connector.state' -> expected: "RUNNING"
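Registration is asynchronous, so deploy scripts usually poll the status endpoint until both the connector and all of its tasks report RUNNING rather than checking once. A sketch against the Kafka Connect REST API (base URL and connector name follow the example above):

```python
import json
import time
import urllib.request

def connector_running(status: dict) -> bool:
    """True only when the connector AND every task report RUNNING."""
    return (status.get("connector", {}).get("state") == "RUNNING"
            and all(t.get("state") == "RUNNING"
                    for t in status.get("tasks", [])))

def wait_for_connector(base_url: str, name: str, timeout_s: int = 60) -> bool:
    """Poll /connectors/<name>/status until healthy or timed out."""
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        with urllib.request.urlopen(f"{base_url}/connectors/{name}/status") as resp:
            if connector_running(json.load(resp)):
                return True
        time.sleep(2)
    return False

# wait_for_connector("http://localhost:8083", "erp-postgres-cdc")
```

Checking tasks as well as the connector matters: a connector can report RUNNING while an individual task has FAILED.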

2. Subscribe to Salesforce CDC via Pub/Sub API (gRPC)

Use the Salesforce Pub/Sub API to subscribe to change events for a specific object. Requires OAuth 2.0 access token. [src3]

# Input:  Salesforce Connected App with OAuth JWT bearer flow configured
# Output: Stream of change events for Account object

import grpc
from salesforce_pubsub_api import pubsub_api_pb2, pubsub_api_pb2_grpc

access_token = get_salesforce_token()  # Your OAuth implementation
instance_url = "https://yourorg.my.salesforce.com"
tenant_id = "00D..."  # Your org ID

channel = grpc.secure_channel(
    "api.pubsub.salesforce.com:7443",
    grpc.ssl_channel_credentials()
)
stub = pubsub_api_pb2_grpc.PubSubStub(channel)
metadata = [
    ("accesstoken", access_token),
    ("instanceurl", instance_url),
    ("tenantid", tenant_id),
]

topic = "/data/AccountChangeEvent"
fetch_request = pubsub_api_pb2.FetchRequest(
    topic_name=topic,
    replay_preset=pubsub_api_pb2.ReplayPreset.LATEST,
    num_requested=100
)
for response in stub.Subscribe(iter([fetch_request]), metadata=metadata):
    for event in response.events:
        # Fetch (and in practice cache) the Avro schema via the GetSchema RPC
        schema = get_schema(stub, event.event.schema_id, metadata)
        decoded = decode_avro(event.event.payload, schema)  # your Avro decode helper
        print(f"Change: {decoded['ChangeEventHeader']}")

Verify: Create/update an Account record -> expected: change event appears in subscriber output within seconds.
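Because the replay window is only 72 hours, production subscribers should persist the replay ID of each processed event and resume with `ReplayPreset.CUSTOM` plus the stored `replay_id` instead of `LATEST` after a restart. A file-backed sketch (the `pubsub_api_pb2` names follow the example above; the class name and storage path are illustrative):

```python
import base64
import pathlib

class ReplayStore:
    """Persist the last processed replay ID so a restarted subscriber
    can resume where it left off instead of from LATEST."""

    def __init__(self, path="account_change.replay"):
        self._path = pathlib.Path(path)

    def save(self, replay_id: bytes) -> None:
        # Replay IDs are opaque bytes; base64 keeps the file text-safe.
        self._path.write_text(base64.b64encode(replay_id).decode("ascii"))

    def load(self):
        if not self._path.exists():
            return None
        return base64.b64decode(self._path.read_text())

# In the subscribe loop: store.save(event.replay_id) after processing.
# On startup:
#   rid = store.load()
#   preset = pubsub_api_pb2.ReplayPreset.CUSTOM if rid else pubsub_api_pb2.ReplayPreset.LATEST
#   fetch_request = pubsub_api_pb2.FetchRequest(
#       topic_name=topic, replay_preset=preset, replay_id=rid or b"", num_requested=100)
```

Persisting the replay ID only bounds loss within the 72-hour window; a subscriber offline longer than that still needs a full re-sync.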

3. Configure Oracle GoldenGate Extract process

Set up an Extract process to capture changes from an Oracle source database. [src2]

-- Enable supplemental logging on Oracle source
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;

-- Create GoldenGate admin user
CREATE USER ggadmin IDENTIFIED BY "SecurePassword123";
GRANT DBA TO ggadmin;
EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE('ggadmin');

-- GoldenGate Extract parameter file (dirprm/ext_erp.prm)
EXTRACT ext_erp
SETENV (NLS_LANG = "AMERICAN_AMERICA.AL32UTF8")
USERID ggadmin, PASSWORD SecurePassword123
-- Prefer USERIDALIAS with a credential store over an inline password in production
EXTTRAIL ./dirdat/et
LOGALLSUPCOLS
UPDATERECORDFORMAT COMPACT

TABLE hr.employees;
TABLE fin.invoices;
TABLE fin.payments;
TABLE sales.orders;

# Register and start Extract via GGSCI
GGSCI> ADD EXTRACT ext_erp, INTEGRATED TRANLOG, BEGIN NOW
GGSCI> ADD EXTTRAIL ./dirdat/et, EXTRACT ext_erp, MEGABYTES 200
GGSCI> START EXTRACT ext_erp

Verify: GGSCI> INFO EXTRACT ext_erp, DETAIL -> expected: Status RUNNING, checkpoint advancing.

4. Configure SAP SLT replication

Set up SAP SLT to replicate tables from an SAP ECC or S/4HANA source to SAP HANA. [src5]

-- Step 1: Open transaction LTRC (SAP LT Replication Server Cockpit) in the SLT system
-- Create a new configuration:
--   Source system: RFC destination to SAP source (SM59 connection)
--   Target system: DB connection to SAP HANA
--   Mass transfer ID: unique identifier
--   Number of data transfer jobs: 3-5 (based on sizing)
--   Number of initial load jobs: 5-10

-- Step 2: Configure RFC destination (SM59)
--   Connection type: 3 (ABAP Connection)
--   Target host: source system hostname
--   Logon user: SLT replication user (S_RFC, S_TABU_DIS authorizations)

-- Step 3: Add tables for replication (LTRC)
-- Select configuration, add tables: VBAK, VBAP, BKPF
-- Choose replication mode: "Replicate" (initial load + real-time delta)

Verify: Transaction LTRC -> Data Transfer Monitor -> expected: tables show status "Green" (active replication).

Code Examples

Python: Debezium CDC consumer with error handling

# Input:  Kafka cluster with Debezium CDC topics
# Output: Processed change events with idempotent handling

from confluent_kafka import Consumer, KafkaError, KafkaException  # confluent-kafka==2.6.1
import json

def create_cdc_consumer(bootstrap_servers, group_id, topics, on_upsert, on_delete):
    """Consume Debezium change events; on_upsert/on_delete are caller-supplied handlers."""
    consumer = Consumer({
        "bootstrap.servers": bootstrap_servers,
        "group.id": group_id,
        "auto.offset.reset": "earliest",
        "enable.auto.commit": False,  # commit only after successful processing
        "max.poll.interval.ms": 300000,
    })
    consumer.subscribe(topics)
    # In-memory dedup only: this set grows without bound across a long run.
    # For real idempotency, persist processed LSNs or make downstream writes
    # idempotent on the primary key.
    processed_lsns = set()

    try:
        while True:
            msg = consumer.poll(timeout=1.0)
            if msg is None:
                continue
            if msg.error():
                if msg.error().code() == KafkaError._PARTITION_EOF:
                    continue
                raise KafkaException(msg.error())

            event = json.loads(msg.value().decode("utf-8"))
            lsn = event.get("source", {}).get("lsn")
            if lsn in processed_lsns:
                consumer.commit(msg)
                continue

            op = event.get("op")  # c=create, u=update, d=delete, r=snapshot read
            if op in ("c", "u", "r"):
                on_upsert(event["source"]["table"], event.get("after", {}))
            elif op == "d":
                on_delete(event["source"]["table"], event.get("before", {}))

            processed_lsns.add(lsn)
            consumer.commit(msg)
    finally:
        consumer.close()

Bash: GoldenGate health check and diagnostics

# Input:  GoldenGate installation with running processes
# Output: Status of all Extract and Replicat processes

echo "INFO ALL" | $OGG_HOME/ggsci
echo "INFO EXTRACT ext_erp, DETAIL" | $OGG_HOME/ggsci
du -sh $OGG_HOME/dirdat/
echo "STATS EXTRACT ext_erp, TOTAL, TABLE *.*" | $OGG_HOME/ggsci
tail -50 $OGG_HOME/dirrpt/ext_erp*.rpt

Error Handling & Failure Points

Common Error Patterns

| CDC Tool | Error | Cause | Resolution |
|---|---|---|---|
| Debezium | REPLICATION_SLOT_ALREADY_EXISTS | Previous connector not cleaned up | Drop slot: SELECT pg_drop_replication_slot('slot_name'); |
| Debezium | Kafka lag increasing, connector RUNNING | Slow consumer or insufficient partitions | Scale workers; increase max.batch.size and max.queue.size |
| GoldenGate | OGG-00868: Trail file corrupted | Disk full during trail write | Restore from checkpoint; ensure adequate disk space |
| GoldenGate | OGG-01028: Extract lag exceeds threshold | Source redo faster than Extract | Add parallel Extract threads; increase trail file size |
| Salesforce CDC | EXCEEDED_EVENT_DELIVERY_LIMIT | 50K daily allocation consumed | Purchase add-on license; reduce subscribed objects |
| Salesforce CDC | Missed events after downtime | Subscriber offline >72 hours | Full re-sync required; implement external event store |
| SAP SLT | IUUC_REPL_RUNTIME: Job cancelled | Source table locked or RFC timeout | Check SM21 logs; increase RFC timeout in SM59 |
| SAP SLT | Trigger creation failure | Missing authorizations | Grant S_TABU_DIS; verify tablespace availability |

Failure Points in Production

Anti-Patterns

Wrong: Using CDC for bulk data migration

# BAD -- CDC is designed for ongoing change streams, not one-time bulk loads.
# Using Debezium snapshot for 500M-row migration will:
# 1. Create massive Kafka lag
# 2. Hold a replication slot for hours/days, growing WAL
# 3. Block other replication slots from advancing
config = {
    "snapshot.mode": "initial",  # Scans entire 500M-row table
    "table.include.list": "public.massive_history_table",
}

Correct: Use bulk tools for migration, CDC for ongoing sync

# GOOD -- Use purpose-built bulk tools for initial load, then switch to CDC
# Step 1: Bulk migration with pg_dump/COPY or Spark
# Step 2: Start Debezium from current position
config = {
    "snapshot.mode": "no_data",  # Skip snapshot, start from current LSN
    "table.include.list": "public.massive_history_table",
}

Wrong: Ignoring schema evolution in CDC pipelines

# BAD -- Consuming CDC events as raw JSON without schema validation.
# When source adds a column, downstream breaks with KeyError.
event = json.loads(msg.value())
customer_name = event["after"]["customer_name"]  # Breaks when field renamed
order_total = event["after"]["total"]  # Breaks when type changes

Correct: Use schema registry with backward compatibility

# GOOD -- Schema Registry enforces compatibility and handles evolution
from confluent_kafka.schema_registry import SchemaRegistryClient
from confluent_kafka.schema_registry.avro import AvroDeserializer
from confluent_kafka.serialization import SerializationContext, MessageField

sr_client = SchemaRegistryClient({"url": "http://schema-registry:8081"})
deserializer = AvroDeserializer(sr_client)
event = deserializer(msg.value(), SerializationContext(msg.topic(), MessageField.VALUE))
customer_name = event.get("after", {}).get("customer_name", "UNKNOWN")

Wrong: Subscribing to all Salesforce CDC events without filtering

# BAD -- Subscribing to /data/ChangeEvents captures ALL enabled objects.
# Burns through 50K/day delivery allocation in minutes on active orgs.
topic = "/data/ChangeEvents"  # Every object, every field, every change

Correct: Subscribe to specific object change events

# GOOD -- Subscribe only to specific objects you need
topic = "/data/AccountChangeEvent"  # Only Account changes
# Or use a Custom Channel to filter specific objects and fields

Common Pitfalls

Diagnostic Commands

# === Debezium Diagnostics ===
# Check connector status and task state
curl -s http://localhost:8083/connectors/erp-postgres-cdc/status | jq '.'

# Check PostgreSQL replication slot health
psql -h erp-db.example.com -U debezium_repl -d erp_production \
  -c "SELECT slot_name, active, restart_lsn, confirmed_flush_lsn,
      pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), confirmed_flush_lsn))
      AS lag_bytes FROM pg_replication_slots;"

# Monitor Kafka consumer group lag
kafka-consumer-groups --bootstrap-server kafka:9092 \
  --describe --group erp-cdc-consumer-group

# === GoldenGate Diagnostics ===
echo "INFO ALL" | $OGG_HOME/ggsci
echo "SEND EXTRACT ext_erp, STATUS" | $OGG_HOME/ggsci
echo "STATS EXTRACT ext_erp, TOTALSONLY *.*" | $OGG_HOME/ggsci

# === Salesforce CDC Diagnostics ===
curl -s "https://yourorg.my.salesforce.com/services/data/v66.0/limits" \
  -H "Authorization: Bearer $ACCESS_TOKEN" | jq '.DailyDeliveredPlatformEvents'

# === SAP SLT Diagnostics ===
# Transaction LTRC -> Data Transfer Monitor
# Transaction LTRS -> Advanced Replication Settings (transformation rules)
# SE16 -> table IUUC_REPL_LOG (logging table sizes)
# SM37 -> job name IUUC_* (background jobs)

Version History & Compatibility

| Tool/Version | Release Date | Status | Breaking Changes | Migration Notes |
|---|---|---|---|---|
| Debezium 3.4.0 | 2025-12 | Current | Kafka 4.1 baseline | Update Kafka Connect to 4.1+ |
| Debezium 3.3.0 | 2025-10 | Supported | Exactly-once for all core connectors | Enable Kafka transactions |
| Debezium 3.0.0 | 2025-01 | Supported | Incremental snapshots GA | Replace ad-hoc snapshots |
| GoldenGate 23ai | 2025-03 | Current | PostgreSQL source, Pulsar output | New connector types require 23ai |
| GoldenGate 21c | 2022-01 | Supported | None | Minimum for OCI managed |
| Salesforce CDC (Spring '26) | 2026-02 | Current | Pub/Sub API v2 enhancements | Migrate from CometD to Pub/Sub API |
| SAP SLT DMIS 2018 SP4+ | 2023-01 | Current | New CDC mechanism (1:N targets) | Legacy triggers limited to 4 targets |

When to Use / When Not to Use

| Use When | Don't Use When | Use Instead |
|---|---|---|
| Need real-time change streaming from relational databases to Kafka | One-time bulk data migration (>100M rows) | pg_dump, Oracle Data Pump, Bulk API, SAP DMF |
| Need event-driven integration between ERP systems | Simple scheduled batch sync (daily/weekly) | business/erp-integration/batch-vs-realtime-integration/2026 |
| Source DB supports log-based capture (WAL, binlog, redo) | Source is SaaS with no DB access | Platform-native CDC or API polling |
| Need to capture every change including intermediate states | Only need current state / point-in-time snapshot | Full table replication or API-based sync |
| Budget allows Kafka infra or Oracle licensing | Zero infrastructure budget and no Kafka cluster | Platform-native webhooks or managed ETL |

Cross-System Comparison

| Capability | Debezium 3.4.x | Oracle GoldenGate 23ai | Salesforce CDC | SAP SLT | Notes |
|---|---|---|---|---|---|
| Architecture | Kafka Connect connector | Extract/Pump/Replicat pipeline | Platform-native event stream | ABAP triggers + logging table | |
| Capture method | Transaction log reading | Redo log mining | Internal object tracking | Database triggers | SLT only non-log-based tool |
| Source DB overhead | Minimal (reads logs) | Minimal (reads logs) | Zero (platform-managed) | Moderate (trigger writes) | |
| Latency (typical) | 100ms-2s | 50-200ms | 1-5 seconds | 5-60 seconds | GoldenGate lowest latency |
| Throughput ceiling | 1M+ events/sec | 100K+ events/sec | 50K-4.5M events/day | Hardware-dependent | Salesforce is daily, others per-second |
| Exactly-once | Yes (v3.3+) | No | No | No | Debezium only with Kafka txns |
| Schema evolution | Automatic (registry) | Manual DDL replication | Automatic | Manual (transformation rules) | |
| Data transformation | SMTs | Column mapping, filtering | Field selection only | ABAP transformation rules | |
| Replay / rewind | Kafka offset reset | Trail file replay | 3-day replay | No replay | |
| Bidirectional sync | No (custom logic) | Yes (native) | No | No | GoldenGate only |
| Monitoring | REST + JMX metrics | GGSCI + Enterprise Mgr | Setup > Platform Events | LTRC + SM37 | |
| Total cost (mid-market) | $0 + Kafka ($5K-50K/yr) | $50K-500K+/yr | Included; add-on $10K+/yr | Included with S/4HANA | GoldenGate most expensive |
| Skill requirements | Kafka + SQL DBA | Oracle DBA + GG specialist | SF admin + developer | SAP Basis + ABAP | |
| Cloud-native option | Confluent Cloud, AWS MSK | OCI GoldenGate (managed) | Native (Salesforce) | SAP BTP (limited) | |

Important Caveats

Related Units