CDC Implementation Patterns: Debezium vs Oracle GoldenGate vs Salesforce CDC vs SAP SLT
Type: ERP Integration
Systems: Debezium 3.4.x, GoldenGate 23ai, Salesforce CDC v66.0, SAP SLT DMIS 2018 SP4+
Confidence: 0.86
Sources: 8
Verified: 2026-03-07
Freshness: evolving
TL;DR
- Bottom line: Debezium is the best default for log-based CDC with open-source budgets and Kafka ecosystems; GoldenGate excels in Oracle-to-Oracle with sub-second latency at enterprise cost; Salesforce CDC is the only option for Salesforce object changes; SAP SLT is the only supported CDC path for SAP HANA replication but uses triggers, not log-based capture.
- Key limit: Salesforce CDC shares its 50K event/day delivery allocation with Platform Events (Enterprise edition); GoldenGate requires licensing both source AND target database processors, effectively doubling expected cost.
- Watch out for: SAP SLT uses database triggers -- not log-based CDC -- which adds write overhead to source systems. This is fundamentally different from Debezium and GoldenGate's log-reading approach.
- Best for: Cross-system comparison when selecting a CDC tool for ERP integration -- each tool owns a different niche and none is universally best.
- Authentication: Debezium uses database credentials (user/pass or SSL certs); GoldenGate uses Oracle DB credentials + trail file encryption; Salesforce CDC uses OAuth 2.0; SAP SLT uses RFC credentials (ABAP user).
System Profile
This card compares four CDC implementations that serve fundamentally different architectural niches. Debezium is an open-source, log-based CDC platform built on Kafka Connect. Oracle GoldenGate is a commercial, high-performance replication engine optimized for Oracle environments. Salesforce CDC is a proprietary, platform-native event stream for Salesforce object changes. SAP SLT is a trigger-based replication server purpose-built for SAP HANA data provisioning.
| System | Role | Capture Method | Primary Target |
| Debezium 3.4.x | Open-source log-based CDC | Transaction log reading (WAL, binlog, redo) | Kafka topics (any downstream consumer) |
| Oracle GoldenGate 23ai | Commercial log-based replication | Oracle redo log mining / LogMiner | Oracle, PostgreSQL, Kafka, Pulsar |
| Salesforce CDC (Spring '26) | Platform-native change events | Internal Salesforce change tracking | Pub/Sub API (gRPC) or CometD subscribers |
| SAP SLT (DMIS 2018 SP4+) | Trigger-based replication server | Database triggers + logging tables | SAP HANA, SAP BW/4HANA |
API Surfaces & Capabilities
| CDC Tool | Protocol | Best For | Capture Granularity | Latency | Schema Evolution | Bulk Capable? |
| Debezium (Kafka Connect) | HTTP REST (mgmt), Kafka (data) | Heterogeneous DB-to-Kafka CDC | Row-level DML + DDL | Sub-second to seconds | Yes (schema registry) | Snapshots only |
| Debezium Server | HTTP REST (mgmt), configurable sinks | Non-Kafka deployments | Row-level DML + DDL | Sub-second to seconds | Yes | Snapshots only |
| GoldenGate Extract | CLI (GGSCI), REST Admin API | Oracle-to-Oracle, Oracle-to-Kafka | Row-level DML + DDL (committed txns) | 50-200ms | Manual DDL replication | Via initial load |
| GoldenGate OCI (Managed) | Web console, REST API | Cloud-native Oracle replication | Same as Extract | 50-200ms | Manual | Via initial load |
| Salesforce CDC | gRPC (Pub/Sub API), Bayeux (CometD) | Salesforce object change streaming | Field-level change events | Seconds | Automatic | No |
| SAP SLT | ABAP RFC, DB triggers | SAP-to-HANA real-time replication | Row-level via trigger capture | Seconds to minutes | Manual (transformation rules) | Initial load + delta |
Rate Limits & Quotas
Per-Tool Throughput Limits
| Limit Type | Debezium | GoldenGate | Salesforce CDC | SAP SLT |
| Max events/sec (sustained) | 100K+ (Kafka-dependent) | 100K+ (hardware-dependent) | Limited by delivery allocation | Trigger overhead-limited |
| Max concurrent connectors | Unlimited (Kafka Connect cluster) | Per-license (source+target CPUs) | N/A (platform-managed) | Per-configuration (1:N from DMIS 2018 SP4) |
| Max message/event size | Kafka default 1 MB (configurable) | Trail file segment 100 MB default | 1 MB per change event | RFC packet size (~64 KB default) |
| Replay/rewind window | Kafka retention (configurable) | Trail file retention (configurable) | 3 days (72 hours) | No replay |
Salesforce CDC-Specific Allocation
| Limit Type | Value | Window | Edition Differences |
| Event delivery (CometD + Pub/Sub) | 50,000 baseline | 24h | Enterprise: 50K; with add-on: up to 4.5M/month |
| Event publishing | 250,000 | Per hour | Internal (triggers, flows) |
| Concurrent CometD clients | 1,000 | Per org | Shared across all streaming subscriptions |
| Event replay retention | 3 days | Rolling | Same across all editions |
| Max entities tracked | Admin-selected | N/A | Standard + custom objects; not all standard objects supported |
Debezium Scaling Reference
| Target Throughput | Kafka Connect Workers | Memory/Worker | Connector Tasks | Partitions |
| 1K events/sec | 1 | 2 GB | 1 | 3-6 |
| 10K events/sec | 2 | 4 GB | 2-4 | 12-24 |
| 100K events/sec | 4+ | 4-8 GB | 4-8 | 24-48 |
| 1M events/sec | 8+ | 8 GB | 8-16 | 48+ |
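The sizing table above can be expressed as a lookup helper for capacity-planning scripts. This is a sketch that simply mirrors the table's tiers; treat the returned numbers as starting points, not guarantees:

```python
def debezium_sizing(events_per_sec):
    """Starting-point Kafka Connect sizing for a target CDC throughput.

    Tiers mirror the Debezium Scaling Reference table (upper bounds of
    each row); real sizing depends on event size and consumer speed.
    """
    tiers = [  # (max events/sec, workers, GB/worker, tasks, partitions)
        (1_000, 1, 2, 1, 6),
        (10_000, 2, 4, 4, 24),
        (100_000, 4, 8, 8, 48),
    ]
    for limit, workers, mem_gb, tasks, partitions in tiers:
        if events_per_sec <= limit:
            return {"workers": workers, "memory_gb": mem_gb,
                    "tasks": tasks, "partitions": partitions}
    # 1M+ events/sec row
    return {"workers": 8, "memory_gb": 8, "tasks": 16, "partitions": 48}
```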
Authentication
| CDC Tool | Auth Method | Credential Type | Rotation | Notes |
| Debezium | Database native | User/password, SSL certs, Kerberos | Manual | Needs SELECT + REPLICATION privileges |
| GoldenGate | Oracle DB auth + OS auth | DB credentials + trail encryption keys | Manual; OCI uses IAM | Requires DBMS_GOLDENGATE_AUTH grants |
| Salesforce CDC | OAuth 2.0 | JWT bearer or web server flow | Refresh tokens auto-rotate | Connected App required |
| SAP SLT | RFC user auth | ABAP system user + auth objects | SAP user management (SU01) | Requires RFC trust between source and SLT server |
Authentication Gotchas
- Debezium PostgreSQL connector requires a dedicated replication slot; dropping and recreating the slot loses unprocessed WAL segments permanently. [src1]
- GoldenGate requires supplemental logging enabled at database level; forgetting this silently produces incomplete change records. [src2]
- Salesforce CDC subscriptions require "View All Data" or object-level read permissions; field-level security still applies and can silently exclude fields. [src3]
- SAP SLT RFC user needs exact authorization objects (S_RFC, S_TABU_DIS, S_BTCH_JOB) -- missing any one causes silent replication failures. [src5]
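For the Salesforce OAuth 2.0 JWT bearer flow referenced in the table above, the assertion is a short-lived JWT with four claims. A minimal sketch of building the claim set (the consumer key and username values are placeholders; signing the claims with the Connected App's private key, e.g. via PyJWT with RS256, is omitted):

```python
import time

def build_jwt_claims(consumer_key, username,
                     audience="https://login.salesforce.com",
                     validity_seconds=180):
    """Claim set for a Salesforce JWT bearer assertion (RFC 7523)."""
    now = int(time.time())
    return {
        "iss": consumer_key,   # Connected App consumer key
        "sub": username,       # Salesforce username to act as
        "aud": audience,       # use https://test.salesforce.com for sandboxes
        "exp": now + validity_seconds,  # keep the assertion short-lived
    }

claims = build_jwt_claims("3MVG9...", "integration@example.com")
```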
Constraints
- Debezium requires Apache Kafka (or Kafka-compatible broker) for Kafka Connect deployment; Debezium Server is the alternative but has fewer production deployments.
- Oracle GoldenGate licensing requires counting processors on BOTH source and target systems, effectively doubling expected license cost. On-premises ~$17,500/processor.
- Salesforce CDC is limited to objects explicitly enabled by admin in Setup; not all standard objects are supported, and junction objects are not available for CDC.
- SAP SLT trigger-based CDC adds write overhead to every INSERT, UPDATE, and DELETE on source tables -- each operation writes to a logging table, doubling write I/O.
- Salesforce CDC replay window is 3 days only -- if a subscriber is offline >72 hours, events are permanently lost with no recovery except full re-sync.
- GoldenGate requires supplemental logging on Oracle source; if redo logs cycle before Extract processes them, those transactions are permanently lost.
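Because the Salesforce replay window is only 72 hours, a common mitigation is to persist each event and its replay ID as it arrives, so a subscriber can resume from its own store rather than from Salesforce. A minimal sketch using SQLite (table and column names are illustrative):

```python
import sqlite3

def open_event_store(path=":memory:"):
    conn = sqlite3.connect(path)
    conn.execute("""CREATE TABLE IF NOT EXISTS cdc_events (
        replay_id BLOB PRIMARY KEY,  -- opaque replay ID from the event
        topic TEXT NOT NULL,
        payload TEXT NOT NULL,       -- decoded event as JSON
        received_at REAL NOT NULL)""")
    return conn

def record_event(conn, replay_id, topic, payload_json):
    # INSERT OR IGNORE makes redelivery after reconnect idempotent
    conn.execute(
        "INSERT OR IGNORE INTO cdc_events VALUES (?, ?, ?, strftime('%s','now'))",
        (replay_id, topic, payload_json))
    conn.commit()

def last_replay_id(conn, topic):
    # Resume point for a ReplayPreset.CUSTOM subscription
    row = conn.execute(
        "SELECT replay_id FROM cdc_events WHERE topic = ? "
        "ORDER BY received_at DESC, rowid DESC LIMIT 1", (topic,)).fetchone()
    return row[0] if row else None
```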
Integration Pattern Decision Tree
START -- Need CDC for ERP integration
+-- What is the source system?
| +-- Salesforce
| | +-- Salesforce CDC (only option for SF object changes)
| | +-- Need >50K events/day? -> Add-on license required
| | +-- Need >3-day replay? -> Store events externally
| | +-- Pub/Sub API (gRPC) preferred over CometD
| +-- SAP S/4HANA or ECC -> target is SAP HANA?
| | +-- YES -> SAP SLT (vendor-supported path)
| | +-- NO (target is Kafka/warehouse) -> Debezium or ODP extraction
| +-- Oracle Database
| | +-- Budget for licensing? (>$17.5K/processor)
| | | +-- YES -> GoldenGate (lowest latency, best Oracle integration)
| | | +-- NO -> Debezium with Oracle connector (LogMiner-based)
| | +-- Need bidirectional? -> GoldenGate (native support)
| +-- PostgreSQL / MySQL / SQL Server / MongoDB
| | +-- Debezium (best open-source coverage)
| +-- Multiple heterogeneous sources
| +-- All open-source DBs -> Debezium (one platform)
| +-- Mix Oracle + open-source -> Debezium for all or GoldenGate + Debezium
+-- What latency is required?
| +-- <100ms -> GoldenGate (tuned) or Debezium (tuned Kafka)
| +-- <1 second -> Debezium (default config)
| +-- <10 seconds -> Any tool works
| +-- Minutes acceptable -> SAP SLT or polling
+-- What is the budget?
+-- Zero license cost -> Debezium (Apache 2.0)
+-- Moderate ($10K-100K/yr) -> Debezium + Confluent Cloud or OCI GG BYOL
+-- Enterprise (>$100K/yr) -> GoldenGate on-prem or OCI managed
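The source-system branch of the decision tree can be condensed into a first-pass selection function. This sketch encodes only the routing logic above; a real selection should also weigh the latency and budget branches:

```python
def pick_cdc_tool(source, target=None, licensed_oracle=False):
    """First-pass CDC tool selection mirroring the decision tree."""
    source = source.lower()
    if source == "salesforce":
        return "Salesforce CDC"  # only option for SF object changes
    if source in ("sap s/4hana", "sap ecc"):
        # SLT is the vendor-supported path only when the target is HANA
        return "SAP SLT" if target == "sap hana" else "Debezium or ODP extraction"
    if source == "oracle":
        # GoldenGate assumes budget for per-processor licensing
        return "GoldenGate" if licensed_oracle else "Debezium (Oracle/LogMiner)"
    if source in ("postgresql", "mysql", "sql server", "mongodb"):
        return "Debezium"
    return "Debezium"  # default for heterogeneous open-source fleets
```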
Quick Reference
| Capability | Debezium 3.4.x | Oracle GoldenGate 23ai | Salesforce CDC (Spring '26) | SAP SLT (DMIS 2018 SP4+) |
| Capture method | Log-based (WAL, binlog, redo) | Log-based (redo log mining) | Platform-internal tracking | Database triggers + logging tables |
| Latency | Sub-second (Kafka-dependent) | 50-200ms (tuned) | Seconds (near real-time) | Seconds to minutes |
| Delivery semantics | At-least-once (exactly-once in 3.3+) | At-least-once (trail-based) | At-least-once (replay-based) | At-least-once (logging table) |
| Supported sources | PostgreSQL, MySQL, SQL Server, MongoDB, Oracle, DB2, Cassandra, Vitess, Spanner | Oracle, PostgreSQL (23ai), MySQL, SQL Server, DB2 | Salesforce objects only | SAP ECC, S/4HANA, BW (ABAP-based) |
| Supported targets | Kafka, Pulsar, Kinesis, EventHubs, Redis, HTTP | Oracle, PostgreSQL, Kafka, Pulsar (23ai), Big Data targets | Pub/Sub API / CometD subscribers | SAP HANA, SAP BW/4HANA |
| Schema DDL capture | Yes (schema history topic) | Yes (manual DDL replication) | No (data changes only) | No (schema pre-configured) |
| Replay / rewind | Kafka retention (configurable) | Trail file retention | 3 days only | No replay |
| Initial load / snapshot | Yes (incremental snapshots in 3.0+) | Yes (initial load mode) | No (CDC only) | Yes (full load + delta) |
| Bidirectional sync | Not native | Yes (native bidirectional) | No | No |
| Licensing | Apache 2.0 (free) | ~$17.5K/processor or ~$0.32/OCPU-hr | Included (Enterprise+); add-on for high-volume | Included with S/4HANA license |
| Deployment complexity | Moderate (Kafka cluster required) | High (specialized DBA skills) | Low (platform-managed) | Moderate (ABAP + RFC config) |
| Exactly-once support | Yes (Kafka txns, v3.3+) | No | No | No |
Step-by-Step Integration Guide
1. Configure Debezium CDC connector for PostgreSQL
Register a Debezium PostgreSQL connector via Kafka Connect REST API. Requires a running Kafka Connect cluster with the Debezium PostgreSQL plugin installed. [src1]
# Input: Running Kafka Connect cluster, PostgreSQL with wal_level=logical
# Output: Connector registered, change events flowing to Kafka topic
curl -X POST http://localhost:8083/connectors \
  -H "Content-Type: application/json" \
  -d '{
    "name": "erp-postgres-cdc",
    "config": {
      "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
      "database.hostname": "erp-db.example.com",
      "database.port": "5432",
      "database.user": "debezium_repl",
      "database.password": "${file:/secrets/pg-password.txt:password}",
      "database.dbname": "erp_production",
      "topic.prefix": "erp.cdc",
      "table.include.list": "public.orders,public.invoices,public.customers",
      "plugin.name": "pgoutput",
      "slot.name": "debezium_erp_slot",
      "publication.name": "dbz_erp_publication",
      "snapshot.mode": "initial",
      "max.batch.size": "4096",
      "max.queue.size": "16384",
      "poll.interval.ms": "100",
      "heartbeat.interval.ms": "10000",
      "schema.history.internal.kafka.bootstrap.servers": "kafka:9092",
      "schema.history.internal.kafka.topic": "erp.schema-history"
    }
  }'
Verify: curl http://localhost:8083/connectors/erp-postgres-cdc/status | jq '.connector.state' -> expected: "RUNNING"
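The verify step can be automated. A small helper that interprets the Kafka Connect status payload; note that a connector can report RUNNING while individual tasks are FAILED, so both levels are checked:

```python
import json

def connect_health(status_json):
    """Summarize a Kafka Connect /status payload.

    Accepts the parsed dict or the raw JSON string returned by
    GET /connectors/<name>/status. Returns (healthy, detail).
    """
    status = json.loads(status_json) if isinstance(status_json, str) else status_json
    conn_state = status.get("connector", {}).get("state")
    failed = [t for t in status.get("tasks", []) if t.get("state") == "FAILED"]
    if conn_state != "RUNNING":
        return False, f"connector state: {conn_state}"
    if failed:
        # Surface the first 80 chars of each stack trace for alerting
        return False, f"{len(failed)} task(s) FAILED: " + \
            "; ".join(t.get("trace", "")[:80] for t in failed)
    return True, "RUNNING"
```

Feed it the body of `curl -s http://localhost:8083/connectors/erp-postgres-cdc/status`.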
2. Subscribe to Salesforce CDC via Pub/Sub API (gRPC)
Use the Salesforce Pub/Sub API to subscribe to change events for a specific object. Requires OAuth 2.0 access token. [src3]
# Input: Salesforce Connected App with OAuth JWT bearer flow configured
# Output: Stream of change events for Account object
import grpc
# Client stubs generated from Salesforce's pubsub_api.proto (grpcio-tools)
from salesforce_pubsub_api import pubsub_api_pb2, pubsub_api_pb2_grpc

access_token = get_salesforce_token()  # Your OAuth implementation
instance_url = "https://yourorg.my.salesforce.com"
tenant_id = "00D..."  # Your org ID

channel = grpc.secure_channel(
    "api.pubsub.salesforce.com:7443",
    grpc.ssl_channel_credentials()
)
stub = pubsub_api_pb2_grpc.PubSubStub(channel)

# Auth headers required on every Pub/Sub API call
metadata = [
    ("accesstoken", access_token),
    ("instanceurl", instance_url),
    ("tenantid", tenant_id),
]

topic = "/data/AccountChangeEvent"
fetch_request = pubsub_api_pb2.FetchRequest(
    topic_name=topic,
    replay_preset=pubsub_api_pb2.ReplayPreset.LATEST,
    num_requested=100
)

for response in stub.Subscribe(iter([fetch_request]), metadata=metadata):
    for event in response.events:
        # get_schema / decode_avro are your helpers: fetch the Avro schema
        # by ID (stub.GetSchema), then decode the event payload with it.
        schema = get_schema(stub, response.schema_id, metadata)
        decoded = decode_avro(event.event.payload, schema)
        print(f"Change: {decoded['ChangeEventHeader']}")
Verify: Create/update an Account record -> expected: change event appears in subscriber output within seconds.
3. Configure Oracle GoldenGate Extract process
Set up an Extract process to capture changes from an Oracle source database. [src2]
-- Enable supplemental logging on Oracle source
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
-- Create GoldenGate admin user (DBA grant shown for brevity;
-- prefer least-privilege grants in production)
CREATE USER ggadmin IDENTIFIED BY "SecurePassword123";
GRANT DBA TO ggadmin;
EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE('ggadmin');
-- GoldenGate Extract parameter file (dirprm/ext_erp.prm)
EXTRACT ext_erp
SETENV (NLS_LANG = "AMERICAN_AMERICA.AL32UTF8")
USERID ggadmin, PASSWORD SecurePassword123, ENCRYPTKEY DEFAULT
EXTTRAIL ./dirdat/et
LOGALLSUPCOLS
UPDATERECORDFORMAT COMPACT
TABLE hr.employees;
TABLE fin.invoices;
TABLE fin.payments;
TABLE sales.orders;
# Register and start Extract via GGSCI
GGSCI> ADD EXTRACT ext_erp, INTEGRATED TRANLOG, BEGIN NOW
GGSCI> ADD EXTTRAIL ./dirdat/et, EXTRACT ext_erp, MEGABYTES 200
GGSCI> START EXTRACT ext_erp
Verify: GGSCI> INFO EXTRACT ext_erp, DETAIL -> expected: Status RUNNING, checkpoint advancing.
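A health check around the verify step can parse the GGSCI output programmatically. The line shapes below are approximations of INFO EXTRACT output; adjust the patterns to what your GoldenGate release actually prints:

```python
import re

def parse_extract_info(text):
    """Pull status and checkpoint lag (seconds) out of INFO EXTRACT output."""
    status = None
    m = re.search(r"Status\s+(\w+)", text)
    if m:
        status = m.group(1)
    lag_seconds = None
    m = re.search(r"Checkpoint Lag\s+(\d+):(\d{2}):(\d{2})", text)
    if m:
        h, mnt, s = map(int, m.groups())
        lag_seconds = h * 3600 + mnt * 60 + s
    return status, lag_seconds
```

Pipe `echo "INFO EXTRACT ext_erp, DETAIL" | $OGG_HOME/ggsci` into this and alert when status is not RUNNING or lag exceeds a threshold.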
4. Configure SAP SLT replication
Set up SAP SLT to replicate tables from an SAP ECC or S/4HANA source to SAP HANA. [src5]
-- Step 1: Open transaction LTRC (LT Replication Server Cockpit) in the SLT system
-- Define a new configuration:
-- Source system: RFC destination to SAP source (SM59 connection)
-- Target system: DB connection to SAP HANA
-- Mass transfer ID: unique identifier
-- Number of data transfer jobs: 3-5 (based on sizing)
-- Number of initial load jobs: 5-10
-- Step 2: Configure RFC destination (SM59)
-- Connection type: 3 (ABAP Connection)
-- Target host: source system hostname
-- Logon user: SLT replication user (S_RFC, S_TABU_DIS authorizations)
-- Step 3: Add tables for replication (LTRC)
-- Select configuration, add tables: VBAK, VBAP, BKPF
-- Choose replication mode: "Replicate" (initial load + real-time delta)
Verify: Transaction LTRC -> Data Transfer Monitor -> expected: tables show status "Green" (active replication).
Code Examples
Python: Debezium CDC consumer with error handling
# Input: Kafka cluster with Debezium CDC topics
# Output: Processed change events with idempotent handling
from confluent_kafka import Consumer, KafkaError # confluent-kafka==2.6.1
import json
def create_cdc_consumer(bootstrap_servers, group_id, topics):
    consumer = Consumer({
        "bootstrap.servers": bootstrap_servers,
        "group.id": group_id,
        "auto.offset.reset": "earliest",
        "enable.auto.commit": False,  # commit only after successful processing
        "max.poll.interval.ms": 300000,
    })
    consumer.subscribe(topics)
    processed_lsns = set()  # in-memory dedup; persist externally to survive restarts
    try:
        while True:
            msg = consumer.poll(timeout=1.0)
            if msg is None:
                continue
            if msg.error():
                if msg.error().code() == KafkaError._PARTITION_EOF:
                    continue
                raise Exception(f"Consumer error: {msg.error()}")
            if msg.value() is None:  # tombstone record (follows deletes); skip
                consumer.commit(msg)
                continue
            event = json.loads(msg.value().decode("utf-8"))
            lsn = event.get("source", {}).get("lsn")
            if lsn in processed_lsns:
                consumer.commit(msg)
                continue
            op = event.get("op")  # c=create, u=update, d=delete, r=snapshot
            if op in ("c", "u", "r"):  # snapshot reads are upserts too
                process_upsert(event["source"]["table"], event.get("after", {}))
            elif op == "d":
                process_delete(event["source"]["table"], event.get("before", {}))
            processed_lsns.add(lsn)
            consumer.commit(msg)
    finally:
        consumer.close()
Bash: GoldenGate health check and diagnostics
# Input: GoldenGate installation with running processes
# Output: Status of all Extract and Replicat processes
echo "INFO ALL" | $OGG_HOME/ggsci
echo "INFO EXTRACT ext_erp, DETAIL" | $OGG_HOME/ggsci
du -sh $OGG_HOME/dirdat/
echo "STATS EXTRACT ext_erp, TOTAL, TABLE *.*" | $OGG_HOME/ggsci
tail -50 $OGG_HOME/dirrpt/ext_erp*.rpt
Error Handling & Failure Points
Common Error Patterns
| CDC Tool | Error | Cause | Resolution |
| Debezium | REPLICATION_SLOT_ALREADY_EXISTS | Previous connector not cleaned up | Drop slot: SELECT pg_drop_replication_slot('slot_name'); |
| Debezium | Kafka lag increasing, connector RUNNING | Slow consumer or insufficient partitions | Scale workers; increase max.batch.size and max.queue.size |
| GoldenGate | OGG-00868: Trail file corrupted | Disk full during trail write | Restore from checkpoint; ensure adequate disk space |
| GoldenGate | OGG-01028: Extract lag exceeds threshold | Source redo faster than Extract | Add parallel Extract threads; increase trail file size |
| Salesforce CDC | EXCEEDED_EVENT_DELIVERY_LIMIT | 50K daily allocation consumed | Purchase add-on license; reduce subscribed objects |
| Salesforce CDC | Missed events after downtime | Subscriber offline >72 hours | Full re-sync required; implement external event store |
| SAP SLT | IUUC_REPL_RUNTIME: Job cancelled | Source table locked or RFC timeout | Check SM21 logs; increase RFC timeout in SM59 |
| SAP SLT | Trigger creation failure | Missing authorizations | Grant S_TABU_DIS; verify tablespace availability |
Failure Points in Production
- Debezium replication slot growth: If consumer stops, PostgreSQL WAL segments accumulate behind the replication slot, eventually crashing the database. Fix:
Monitor pg_replication_slots; set max_slot_wal_keep_size (PostgreSQL 13+). [src1]
- GoldenGate redo log gap: If Extract falls behind and Oracle recycles redo logs, transactions are permanently lost. Fix:
Size redo logs to hold 2x Extract processing time; use integrated Extract. [src2]
- Salesforce CDC silent field exclusion: Fields the integration user lacks FLS on are silently omitted from change events. Fix:
Grant explicit FLS on every needed field; audit on initial setup. [src3]
- SAP SLT trigger-induced deadlocks: On high-volume tables, SLT triggers can cause deadlocks with application transactions. Fix:
Deploy SLT standalone with dedicated work processes; schedule off-peak for high-volume tables. [src5]
- Debezium schema evolution breakage: Schema changes during active replication cause deserialization failures without schema registry. Fix:
Use Avro + Confluent Schema Registry with BACKWARD or FULL compatibility. [src1]
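Monitoring replication slot lag from application code needs the raw LSN arithmetic: PostgreSQL prints LSNs as two hex words, `high/low`, and the absolute byte position is `high * 2^32 + low`. A sketch (the LSN strings would come from `pg_replication_slots` and `pg_current_wal_lsn()` as shown in the Diagnostic Commands section):

```python
def lsn_to_bytes(lsn):
    """'16/B374D848' -> absolute WAL byte position."""
    high, low = lsn.split("/")
    return (int(high, 16) << 32) + int(low, 16)

def slot_lag_bytes(current_wal_lsn, confirmed_flush_lsn):
    """How far a replication slot trails the WAL writer, in bytes.

    Equivalent to pg_wal_lsn_diff(current, confirmed) done client-side.
    """
    return lsn_to_bytes(current_wal_lsn) - lsn_to_bytes(confirmed_flush_lsn)
```

Alert when the lag approaches your `max_slot_wal_keep_size` setting.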
Anti-Patterns
Wrong: Using CDC for bulk data migration
# BAD -- CDC is designed for ongoing change streams, not one-time bulk loads.
# Using Debezium snapshot for 500M-row migration will:
# 1. Create massive Kafka lag
# 2. Hold a replication slot for hours/days, growing WAL
# 3. Block other replication slots from advancing
config = {
    "snapshot.mode": "initial",  # Scans entire 500M-row table
    "table.include.list": "public.massive_history_table",
}
Correct: Use bulk tools for migration, CDC for ongoing sync
# GOOD -- Use purpose-built bulk tools for initial load, then switch to CDC
# Step 1: Bulk migration with pg_dump/COPY or Spark
# Step 2: Start Debezium from current position
config = {
    "snapshot.mode": "no_data",  # Skip snapshot, start from current LSN
    "table.include.list": "public.massive_history_table",
}
Wrong: Ignoring schema evolution in CDC pipelines
# BAD -- Consuming CDC events as raw JSON without schema validation.
# When a source column is renamed or retyped, downstream breaks at runtime.
event = json.loads(msg.value())
customer_name = event["after"]["customer_name"] # Breaks when field renamed
order_total = event["after"]["total"] # Breaks when type changes
Correct: Use schema registry with backward compatibility
# GOOD -- Schema Registry enforces compatibility and handles evolution
from confluent_kafka.schema_registry import SchemaRegistryClient
from confluent_kafka.schema_registry.avro import AvroDeserializer
sr_client = SchemaRegistryClient({"url": "http://schema-registry:8081"})
deserializer = AvroDeserializer(sr_client)
event = deserializer(msg.value(), None)
customer_name = event.get("after", {}).get("customer_name", "UNKNOWN")
Wrong: Subscribing to all Salesforce CDC events without filtering
# BAD -- Subscribing to /data/ChangeEvents captures ALL enabled objects.
# Burns through 50K/day delivery allocation in minutes on active orgs.
topic = "/data/ChangeEvents" # Every object, every field, every change
Correct: Subscribe to specific object change events
# GOOD -- Subscribe only to specific objects you need
topic = "/data/AccountChangeEvent" # Only Account changes
# Or use a Custom Channel to filter specific objects and fields
Common Pitfalls
- Debezium: PostgreSQL replication slot limit: Default max_replication_slots is 10; each connector uses one slot. Fix:
Set max_replication_slots = 2x expected connectors + headroom. [src1]
- GoldenGate: Trail file disk exhaustion: Trail files accumulate if Pump/Replicat lag behind. Fix:
Configure PURGEOLDEXTRACTS in Pump params; monitor dirdat/ size; alert at 70% disk. [src2]
- Salesforce CDC: Platform Events allocation sharing: CDC shares delivery allocation with Platform Events -- heavy PE usage starves CDC. Fix:
Monitor EventBus usage in Setup; separate high-volume PE to own allocation. [src4]
- SAP SLT: Source system performance impact: Unlike log-based CDC, SLT triggers execute synchronously -- high-frequency tables can see 15-30% write degradation. Fix:
Deploy SLT standalone; replicate only needed columns; monitor source response times. [src5]
- All tools: Ignoring timezone handling: CDC timestamps differ per tool. Debezium uses UTC, GoldenGate uses source DB timezone, Salesforce uses UTC, SAP SLT preserves source timezone. Fix:
Normalize all CDC timestamps to UTC at the consumer layer.
- Debezium: Connector task failure not auto-recovering: Kafka Connect does NOT auto-restart failed tasks. Connector shows RUNNING but task shows FAILED. Fix:
Configure "errors.retry.timeout": "-1"; deploy health check sidecar. [src7]
Diagnostic Commands
# === Debezium Diagnostics ===
# Check connector status and task state
curl -s http://localhost:8083/connectors/erp-postgres-cdc/status | jq '.'
# Check PostgreSQL replication slot health
psql -h erp-db.example.com -U debezium_repl -d erp_production \
-c "SELECT slot_name, active, restart_lsn, confirmed_flush_lsn,
pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), confirmed_flush_lsn))
AS lag_bytes FROM pg_replication_slots;"
# Monitor Kafka consumer group lag
kafka-consumer-groups --bootstrap-server kafka:9092 \
--describe --group erp-cdc-consumer-group
# === GoldenGate Diagnostics ===
echo "INFO ALL" | $OGG_HOME/ggsci
echo "SEND EXTRACT ext_erp, STATUS" | $OGG_HOME/ggsci
echo "STATS EXTRACT ext_erp, TOTALSONLY *.*" | $OGG_HOME/ggsci
# === Salesforce CDC Diagnostics ===
curl -s "https://yourorg.my.salesforce.com/services/data/v66.0/limits" \
-H "Authorization: Bearer $ACCESS_TOKEN" | jq '.DailyDeliveredPlatformEvents'
# === SAP SLT Diagnostics ===
# Transaction LTRC -> Data Transfer Monitor
# Transaction LTRS -> Advanced Replication Settings (performance options, transformation rules)
# SE16 -> table IUUC_REPL_LOG (logging table sizes)
# SM37 -> job name IUUC_* (background jobs)
Version History & Compatibility
| Tool/Version | Release Date | Status | Breaking Changes | Migration Notes |
| Debezium 3.4.0 | 2025-12 | Current | Kafka 4.1 baseline | Update Kafka Connect to 4.1+ |
| Debezium 3.3.0 | 2025-10 | Supported | Exactly-once for all core connectors | Enable Kafka transactions |
| Debezium 3.0.0 | 2025-01 | Supported | Incremental snapshots GA | Replace ad-hoc snapshots |
| GoldenGate 23ai | 2025-03 | Current | PostgreSQL source, Pulsar output | New connector types require 23ai |
| GoldenGate 21c | 2022-01 | Supported | None | Minimum for OCI managed |
| Salesforce CDC (Spring '26) | 2026-02 | Current | Pub/Sub API v2 enhancements | Migrate from CometD to Pub/Sub API |
| SAP SLT DMIS 2018 SP4+ | 2023-01 | Current | New CDC mechanism (1:N targets) | Legacy triggers limited to 4 targets |
When to Use / When Not to Use
| Use When | Don't Use When | Use Instead |
| Need real-time change streaming from relational databases to Kafka | One-time bulk data migration (>100M rows) | pg_dump, Oracle Data Pump, Bulk API, SAP DMF |
| Need event-driven integration between ERP systems | Simple scheduled batch sync (daily/weekly) | business/erp-integration/batch-vs-realtime-integration/2026 |
| Source DB supports log-based capture (WAL, binlog, redo) | Source is SaaS with no DB access | Platform-native CDC or API polling |
| Need to capture every change including intermediate states | Only need current state / point-in-time snapshot | Full table replication or API-based sync |
| Budget allows Kafka infra or Oracle licensing | Zero infrastructure budget and no Kafka cluster | Platform-native webhooks or managed ETL |
Cross-System Comparison
| Capability | Debezium 3.4.x | Oracle GoldenGate 23ai | Salesforce CDC | SAP SLT | Notes |
| Architecture | Kafka Connect connector | Extract/Pump/Replicat pipeline | Platform-native event stream | ABAP triggers + logging table | |
| Capture method | Transaction log reading | Redo log mining | Internal object tracking | Database triggers | SLT only non-log-based tool |
| Source DB overhead | Minimal (reads logs) | Minimal (reads logs) | Zero (platform-managed) | Moderate (trigger writes) | |
| Latency (typical) | 100ms-2s | 50-200ms | 1-5 seconds | 5-60 seconds | GoldenGate lowest latency |
| Throughput ceiling | 1M+ events/sec | 100K+ events/sec | 50K/day (up to 4.5M/month with add-on) | Hardware-dependent | Salesforce limits are per-day/month, others per-second |
| Exactly-once | Yes (v3.3+) | No | No | No | Debezium only with Kafka txns |
| Schema evolution | Automatic (registry) | Manual DDL replication | Automatic | Manual (transformation rules) | |
| Data transformation | SMTs | Column mapping, filtering | Field selection only | ABAP transformation rules | |
| Replay / rewind | Kafka offset reset | Trail file replay | 3-day replay | No replay | |
| Bidirectional sync | No (custom logic) | Yes (native) | No | No | GoldenGate only |
| Monitoring | REST + JMX metrics | GGSCI + Enterprise Mgr | Setup > Platform Events | LTRC + SM37 | |
| Total cost (mid-market) | $0 + Kafka ($5K-50K/yr) | $50K-500K+/yr | Included; add-on $10K+/yr | Included with S/4HANA | GoldenGate most expensive |
| Skill requirements | Kafka + SQL DBA | Oracle DBA + GG specialist | SF admin + developer | SAP Basis + ABAP | |
| Cloud-native option | Confluent Cloud, AWS MSK | OCI GoldenGate (managed) | Native (Salesforce) | SAP BTP (limited) | |
Important Caveats
- Debezium's "exactly-once" semantics (v3.3+) require Kafka transactions enabled on the broker -- most existing Kafka clusters run without transactions, and enabling them has a ~10-15% throughput cost.
- Oracle GoldenGate pricing is per-processor-core with Oracle's core factor table -- ARM and Intel cores count differently. Always verify the core factor for your specific hardware.
- Salesforce CDC delivery allocations are shared with Platform Events across the org. An org that uses Platform Events for internal automation may have very little allocation remaining for CDC.
- SAP SLT performance impact on the source system is highly variable -- simple tables may see <5% overhead while tables with many triggers or complex indexes can see 20-30% write degradation.
- This comparison covers CDC for ERP integration patterns. For analytics/data warehouse CDC (Fivetran, Airbyte, AWS DMS), the decision criteria are different.
- All throughput numbers are approximate and hardware-dependent. Actual performance depends on source database load, network bandwidth, event size, and downstream consumer speed.
- GoldenGate 23ai's PostgreSQL and Pulsar support is new (March 2025) and may have fewer production deployments than its mature Oracle-to-Oracle path.
Related Units