Batch vs Real-Time vs Event-Driven Integration Patterns for ERPs

Type: ERP Integration
System: Cross-Platform (Salesforce, SAP, Oracle, Dynamics 365, NetSuite, Workday)
Confidence: 0.90
Sources: 7
Verified: 2026-03-02
Freshness: evolving

TL;DR

System Profile

This card is a cross-platform architecture pattern guide covering the three fundamental integration timing patterns — batch (scheduled), real-time (synchronous), and event-driven (asynchronous near-real-time) — mapped to concrete capabilities across Salesforce, SAP S/4HANA, Oracle ERP Cloud, Microsoft Dynamics 365, NetSuite, and Workday.

| Property | Value |
| --- | --- |
| Scope | Cross-platform architecture pattern |
| Systems Covered | Salesforce, SAP S/4HANA, Oracle ERP Cloud, Dynamics 365, NetSuite, Workday |
| Pattern Types | Batch, Real-Time (Synchronous), Event-Driven (Asynchronous) |
| iPaaS Relevance | MuleSoft, Boomi, Workato, Celigo, SAP Integration Suite, OIC |
| Deployment | Cloud-first (applicable to hybrid/on-premise with middleware) |
| Status | GA — all major ERPs support all three patterns |

API Surfaces & Capabilities

Each ERP system maps its integration surfaces to the three timing patterns differently. [src1, src6]

| ERP System | Real-Time API | Batch/Bulk API | Event-Driven Mechanism | File-Based Import |
| --- | --- | --- | --- | --- |
| Salesforce | REST API (v62.0), Composite API | Bulk API 2.0 (CSV, 150MB/file) | Platform Events, CDC, Pub/Sub API | Data Loader, Bulk API CSV |
| SAP S/4HANA | OData v4, BAPI/RFC (on-prem only) | IDoc (batch documents) | Event Mesh, Business Events | BTP Integration Suite |
| Oracle ERP Cloud | REST API, SOAP | FBDI (File-Based Data Import), ESS jobs | Business Events, Oracle Integration Cloud | FBDI (CSV/XML upload) |
| Dynamics 365 | Dataverse Web API (OData v4) | Data Management Framework (DMF), Dual Write | Business Events, Dataverse webhooks | DMF packages (CSV/XML) |
| NetSuite | SuiteTalk REST/SOAP, RESTlets | SuiteQL bulk queries, CSV Import | User Event Scripts, SuiteScript workflows | CSV Import, SDF bundles |
| Workday | REST API, SOAP | EIB (Enterprise Interface Builder), RaaS | Business Process Events | EIB (file-based), Custom Reports |

Rate Limits & Quotas

Per-Pattern Throughput Limits

| ERP System | Real-Time Limit | Batch/Bulk Limit | Event Delivery Limit | Notes |
| --- | --- | --- | --- | --- |
| Salesforce | 100K API calls/24h (Enterprise), 5M (Unlimited) | 15,000 Bulk API batches/24h, 150MB per file | 50K event deliveries/day (base), 250K publish/hour | CDC + Platform Events share delivery allocation [src1] |
| SAP S/4HANA | Fair-use throttling, dialog work process pool | IDoc: no hard limit, throughput depends on system sizing | Event Mesh: subscription-based, typically 1M events/month | Cloud restricts RFC; OData is primary real-time surface [src6] |
| Oracle ERP Cloud | REST: throttled per tenant, no published hard limit | FBDI: 250MB per file, ESS job queue | Business Events: subscription-based | FBDI is the dominant bulk pattern |
| Dynamics 365 | 6,000 API calls/5min per user, 60K/5min per org | DMF: entity-dependent limits | Webhooks: 500 concurrent subscriptions | Dual Write bypasses some API limits |
| NetSuite | 10 concurrent requests, governance units per script | CSV Import: 25,000 rows/file, SuiteQL: no hard row limit | User Event Scripts: trigger-based | Governance units (not API call counts) are the real constraint |
| Workday | Throttled per tenant (undisclosed limits) | EIB: file size limits vary by integration | Business Process Events: subscription-based | RaaS is best for bulk reads |

Latency Expectations by Pattern

| Pattern | Typical Latency | Best Case | Worst Case | Suitable For |
| --- | --- | --- | --- | --- |
| Real-time (synchronous) | 200ms-2s | 50ms | 120s (Salesforce timeout) | User-facing transactions, credit checks, inventory locks |
| Event-driven (async) | 1s-60s | 500ms | Minutes (backpressure) | Order status propagation, record change sync, notifications |
| Near-real-time batch | 5-15 min | 1 min (micro-batch) | 1 hour | Dashboard updates, lead scoring, non-critical status sync |
| Scheduled batch | 1-24 hours | 15 min | 24+ hours | Financial reporting, data warehouse loads, reference data |
| File-based import | 30 min-24 hours | 15 min | Days (manual review) | Data migration, initial loads, regulatory reporting |

Authentication

Authentication is pattern-independent — the same auth flows apply regardless of integration timing pattern.

| Pattern | Auth Consideration | Gotcha |
| --- | --- | --- |
| Real-time | Token must be cached and refreshed proactively | OAuth token expiry mid-transaction can cause silent failures |
| Batch/Bulk | Service account with elevated permissions | Session timeout during long-running batch jobs |
| Event-driven | Subscription credentials must be long-lived and auto-renewing | Salesforce Pub/Sub API requires gRPC with OAuth |
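The proactive-refresh advice above can be sketched as a small token cache. This is a minimal illustration, not tied to any specific ERP SDK; `fetch_token` is an assumed callback that returns a token and its lifetime in seconds.

```python
import time

class TokenCache:
    """Caches an OAuth access token and refreshes it before expiry."""

    def __init__(self, fetch_token, refresh_margin=300):
        self._fetch_token = fetch_token        # assumed: returns (token, lifetime_seconds)
        self._refresh_margin = refresh_margin  # refresh this many seconds early
        self._token = None
        self._expires_at = 0.0

    def get(self):
        # Refresh proactively, well before the ERP invalidates the token,
        # so a long-running transaction never holds an expired credential.
        if self._token is None or time.time() >= self._expires_at - self._refresh_margin:
            self._token, lifetime = self._fetch_token()
            self._expires_at = time.time() + lifetime
        return self._token
```

Callers always go through `get()` rather than storing the raw token, so a refresh can never be skipped.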

Authentication Gotchas

Constraints

Integration Pattern Decision Tree

Use this decision tree to assign the correct integration pattern for each data flow. [src1, src4]

START — Assign integration pattern for a specific data flow
├── Is someone actively waiting for the result?
│   ├── YES — Is it a financial-impact or compliance decision?
│   │   ├── YES → REAL-TIME (synchronous API call)
│   │   │   Examples: credit check, payment authorization, inventory reservation
│   │   └── NO — Would a 5-minute delay cause genuine business impact?
│   │       ├── YES → REAL-TIME (synchronous API call)
│   │       └── NO → EVENT-DRIVEN (near-real-time async)
│   └── NO — Is the data flow triggered by a record change?
│       ├── YES — Do multiple downstream systems need this change?
│       │   ├── YES → EVENT-DRIVEN (pub/sub pattern)
│       │   └── NO → EVENT-DRIVEN or BATCH (based on volume)
│       │       ├── < 1,000 records/day → EVENT-DRIVEN
│       │       └── > 1,000 records/day → BATCH
│       └── NO — Is it a scheduled data synchronization?
│           ├── YES — Volume per run?
│           │   ├── < 2,000 records → REST API (batch-style)
│           │   ├── 2,000-150,000 records → BULK API (single job)
│           │   ├── > 150,000 records → BULK API with job chunking or FILE-BASED
│           │   └── Full data migration → FILE-BASED IMPORT
│           └── NO → Analyze case-by-case
├── Error tolerance?
│   ├── Zero-loss required → EVENT-DRIVEN with replay + dead letter queue
│   ├── Retry acceptable → EVENT-DRIVEN with at-least-once delivery
│   └── Best-effort → BATCH with reconciliation report
└── Bidirectional sync needed?
    ├── YES → Design conflict resolution strategy FIRST
    └── NO → Proceed with chosen pattern
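For teams automating pattern assignment, the tree above can be transcribed into a function. This is a sketch only; the flow-attribute names (`someone_waiting`, `change_triggered`, and so on) are assumptions about how you might model each flow.

```python
def assign_pattern(flow):
    """Map a data-flow description (a dict of booleans and volumes) to a pattern."""
    if flow.get("someone_waiting"):
        if flow.get("financial_impact") or flow.get("delay_causes_impact"):
            return "REAL-TIME"
        return "EVENT-DRIVEN"
    if flow.get("change_triggered"):
        if flow.get("multiple_consumers"):
            return "EVENT-DRIVEN"
        # Single consumer: choose by daily volume
        return "EVENT-DRIVEN" if flow.get("records_per_day", 0) < 1000 else "BATCH"
    if flow.get("scheduled"):
        if flow.get("full_migration"):
            return "FILE-BASED IMPORT"
        volume = flow.get("records_per_run", 0)
        if volume < 2000:
            return "REST API (batch-style)"
        if volume <= 150000:
            return "BULK API"
        return "BULK API (chunked) or FILE-BASED"
    return "ANALYZE CASE-BY-CASE"
```

Error tolerance and bidirectionality (the last two branches of the tree) are deliberately left out here; they refine the chosen pattern rather than select it.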

Quick Reference

Pattern Comparison Summary

| Criterion | Real-Time (Synchronous) | Event-Driven (Async) | Batch (Scheduled) |
| --- | --- | --- | --- |
| Latency | 50ms-2s | 1s-60s | Minutes to hours |
| Coupling | Tight — both systems must be available | Loose — broker decouples systems | Loose — no runtime dependency |
| Throughput | Low (API call per record) | Medium (event stream) | High (bulk processing) |
| Complexity | Low (simple API call) | High (broker, replay, ordering) | Medium (scheduling, monitoring) |
| Error handling | Immediate — caller gets error response | Deferred — dead letter queue, replay | Deferred — reconciliation reports |
| Data freshness | Real-time (current) | Near-real-time (seconds behind) | Stale (minutes to hours behind) |
| Infrastructure | API endpoint only | Message broker / event mesh | Job scheduler, bulk API access |
| Cost (relative) | High (per-call API charges, always-on) | Medium (broker licensing, compute) | Low (off-peak compute, batched calls) |
| Reliability | Low (cascading failures) | High (broker persistence, replay) | High (retry entire batch, reconcile) |
| Best for | User-facing, low-volume, critical decisions | Multi-system sync, change propagation | High-volume loads, reporting, migrations |
| Worst for | High-volume data loads, background jobs | Simple point-to-point, low frequency | Time-sensitive operations, user-facing |

Pattern-to-ERP Surface Mapping

| Pattern | Salesforce | SAP S/4HANA | Oracle ERP Cloud | Dynamics 365 | NetSuite |
| --- | --- | --- | --- | --- | --- |
| Real-time | REST/Composite API | OData v4, BAPI (on-prem) | REST API | Dataverse Web API | SuiteTalk REST, RESTlets |
| Event-driven | CDC, Platform Events, Pub/Sub API | Event Mesh, Business Events | Business Events, OIC | Business Events, Webhooks | User Event Scripts |
| Batch/Bulk | Bulk API 2.0 | IDoc, BTP Integration | FBDI, ESS jobs | DMF, Dual Write | CSV Import, SuiteQL |
| File-based | Data Loader CSV | BTP file upload | FBDI (CSV/XML) | DMF packages | CSV Import |

Step-by-Step Integration Guide

1. Audit and categorize all data flows

Enumerate every integration point. For each, capture: direction, volume, latency requirement, and business criticality. [src4]

# Integration audit spreadsheet structure
Flow ID | Source System | Target System | Direction | Records/Day | Latency Need | Business Impact | Pattern
F-001   | Salesforce    | NetSuite      | Outbound  | 500         | < 5 min      | High (revenue)  | Event-Driven
F-002   | Warehouse     | Salesforce    | Inbound   | 50,000      | Nightly OK   | Medium          | Batch
F-003   | Website       | ERP           | Inbound   | 200         | < 1s         | Critical (order)| Real-Time

Verify: Every integration point has a pattern assignment. No flow should be left as "TBD."
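One way to enforce that verification is a small check over the audit export. This sketch assumes the audit is exported as CSV with the columns shown above; the column names are illustrative.

```python
import csv
import io

def unassigned_flows(csv_text):
    """Return IDs of flows whose pattern is blank or still 'TBD'."""
    rows = csv.DictReader(io.StringIO(csv_text))
    return [r["flow_id"] for r in rows
            if r["pattern"].strip().upper() in ("", "TBD")]

AUDIT = """flow_id,source,target,records_per_day,pattern
F-001,Salesforce,NetSuite,500,Event-Driven
F-002,Warehouse,Salesforce,50000,Batch
"""
```

Run it as a CI gate on the audit file so an unassigned flow fails the build rather than surfacing during implementation.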

2. Apply the decision tree to each flow

Walk each flow through the decision tree. The four questions: (1) Is someone waiting? (2) Is it change-triggered? (3) What is the volume? (4) What is the error tolerance? [src4]

3. Design error handling per pattern

Each pattern requires a different error handling strategy: circuit breakers for real-time, dead letter queues for event-driven, reconciliation reports for batch. [src2, src3]
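As an illustration of the circuit-breaker strategy for real-time calls, here is a minimal sketch; the threshold and cooldown values are placeholders, not recommendations.

```python
import time

class CircuitBreaker:
    """Fails fast after `threshold` consecutive failures; half-opens after `cooldown` seconds."""

    def __init__(self, threshold=5, cooldown=30.0):
        self.threshold = threshold
        self.cooldown = cooldown
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.time() - self.opened_at < self.cooldown:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: allow one trial call
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.threshold:
                self.opened_at = time.time()
            raise
        self.failures = 0  # success closes the circuit
        return result
```

Wrap each synchronous ERP call in `breaker.call(...)` so that an ERP outage degrades to fast failures instead of piling up hung transactions.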

4. Validate rate limit headroom

Calculate daily API consumption per pattern and compare against ERP limits. Leave at least 30% headroom for spikes. [src1]
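A back-of-envelope check for the 30% headroom rule might look like this; it is a sketch, and the limits should come from the rate-limit tables above.

```python
def has_headroom(calls_per_day, daily_limit, headroom=0.30):
    """True if projected daily usage stays under (1 - headroom) of the limit."""
    return calls_per_day <= daily_limit * (1 - headroom)
```

For example, 60K projected calls against a 100K/day limit passes (under the 70K ceiling), while 75K does not.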

Code Examples

Python: Event-driven integration with retry and dead letter queue

# Input:  Event payload from message broker (Kafka, RabbitMQ, or platform events)
# Output: Processed event or dead-letter routing

import time
import logging

MAX_RETRIES = 5
BACKOFF_BASE = 2  # seconds

class RateLimitError(Exception):
    """Transient: the target API returned 429."""

class PermanentError(Exception):
    """Non-retryable: validation failure, malformed payload, etc."""

def route_to_dead_letter(event, reason):
    # In production, publish to a dead-letter topic/queue; logging shown for brevity
    logging.error("Dead-lettering event %s: %s",
                  event.get("event_id", "unknown"), reason)
    return {"status": "dead_letter", "reason": reason}

def process_event_with_retry(event, target_api_client):
    for attempt in range(MAX_RETRIES):
        try:
            result = target_api_client.upsert(
                object_type=event["object_type"],
                external_id_field=event["external_id_field"],
                external_id=event["external_id"],
                payload=event["data"]
            )
            return {"status": "success", "result": result}
        except RateLimitError:
            time.sleep(BACKOFF_BASE ** attempt)  # exponential backoff on 429s
        except PermanentError as e:
            return route_to_dead_letter(event, str(e))
    return route_to_dead_letter(event, f"Exhausted {MAX_RETRIES} retries")

JavaScript/Node.js: Batch integration with chunking

// Input:  Array of records to upsert via Bulk API
// Output: Job status with success/failure counts

const CHUNK_SIZE = 10000;
const MAX_CONCURRENT_JOBS = 3;

// Assumed helper: polls a bulk job until it reaches a terminal state.
// bulkClient.getJob is a placeholder for your client library's status call.
async function pollJobCompletion(bulkClient, jobId, intervalMs = 5000) {
  for (;;) {
    const job = await bulkClient.getJob(jobId);
    if (["JobComplete", "Failed", "Aborted"].includes(job.state)) return job;
    await new Promise(resolve => setTimeout(resolve, intervalMs));
  }
}

async function batchUpsert(records, bulkClient, objectType) {
  const chunks = [];
  for (let i = 0; i < records.length; i += CHUNK_SIZE) {
    chunks.push(records.slice(i, i + CHUNK_SIZE));
  }
  const results = { success: 0, failed: 0, errors: [] };
  for (let i = 0; i < chunks.length; i += MAX_CONCURRENT_JOBS) {
    const batch = chunks.slice(i, i + MAX_CONCURRENT_JOBS);
    const promises = batch.map(chunk =>
      bulkClient.createJob({ object: objectType, operation: "upsert",
        externalIdFieldName: "External_ID__c", data: chunk
      }).then(job => pollJobCompletion(bulkClient, job.id))
    );
    const batchResults = await Promise.allSettled(promises);
    for (const r of batchResults) {
      if (r.status === "fulfilled") {
        results.success += r.value.numberRecordsProcessed;
        results.failed += r.value.numberRecordsFailed;
      } else { results.errors.push(r.reason.message); }
    }
  }
  return results;
}

cURL: Test event subscription (Salesforce CDC via CometD)

# Input:  Salesforce access token, instance URL
# Output: CDC event stream for Account object changes

# Subscribe to CDC channel via CometD handshake
curl -X POST "https://YOUR_INSTANCE.salesforce.com/cometd/62.0" \
  -H "Authorization: Bearer $ACCESS_TOKEN" \
  -H "Content-Type: application/json" \
  -d '[{"channel":"/meta/handshake","version":"1.0",
       "supportedConnectionTypes":["long-polling"]}]'

Data Mapping

Pattern Selection Matrix by Data Flow Type

| Data Flow Type | Volume/Day | Latency Need | Recommended Pattern | ERP Surface | Error Strategy |
| --- | --- | --- | --- | --- | --- |
| Order creation | 100-10K | < 1s | Real-time | REST/OData API | Circuit breaker + sync error response |
| Order status update | 1K-100K | < 60s | Event-driven | CDC / Platform Events | Replay + dead letter queue |
| Customer master sync | 500-50K | < 15 min | Event-driven or micro-batch | CDC or scheduled REST | Idempotent upsert + reconciliation |
| Product catalog refresh | 10K-500K | Nightly OK | Batch | Bulk API / FBDI | Full reconciliation report |
| Financial posting (GL) | 1K-100K | End of day | Batch | Bulk API / IDoc / FBDI | Zero-loss: validate before commit |
| Inventory snapshot | 10K-1M | Hourly OK | Batch | Bulk query / RaaS | Stale data is expected — timestamp it |
| Compliance/audit data | Varies | Real-time | Real-time | REST API (synchronous) | Must succeed — block transaction if fail |

Data Type Gotchas

Error Handling & Failure Points

Common Error Patterns by Integration Type

| Pattern | Error Type | Frequency | Impact | Resolution |
| --- | --- | --- | --- | --- |
| Real-time | 429 Rate Limit | Common at scale | Blocked transactions | Exponential backoff; move to batch if persistent |
| Real-time | Timeout (504/408) | Occasional | Hung transactions | Circuit breaker; async fallback |
| Real-time | ERP down (503) | Rare but critical | Cascading failure | Circuit breaker; queue and retry |
| Event-driven | Missed events | Rare | Data inconsistency | Replay from last known ID; periodic reconciliation |
| Event-driven | Duplicate events | Common | Duplicate records | Idempotent receivers (external ID-based upsert) |
| Event-driven | Out-of-order events | Common | Data corruption | Sequence numbers; last-modified-wins logic |
| Batch | Partial failure | Common | Incomplete sync | Per-record error logging; retry failed subset |
| Batch | Job timeout | Occasional | No data synced | Chunk into smaller jobs; extend batch window |

Failure Points in Production

Anti-Patterns

Wrong: Defaulting to real-time for all integrations

# BAD — synchronous API call for 50K-record nightly product catalog sync
for product in all_products:  # 50,000 products
    response = erp_api.update_product(product)  # 1 API call per product
    if response.status_code == 429:
        time.sleep(60)  # Takes days to complete

Correct: Use batch/bulk for high-volume scheduled operations

# GOOD — Bulk API for the same 50K-record sync
for chunk in chunks(all_products, 10000):
    csv_data = build_csv(chunk)
    job = bulk_client.create_job("Product2", "upsert", "External_ID__c")
    bulk_client.upload_data(job.id, csv_data)
    bulk_client.close_job(job.id)
# 5 bulk jobs, completes in minutes

Wrong: Using batch for time-sensitive financial decisions

# BAD — hourly batch check for credit limits
# Between batches, exceeded-credit customers can still place orders
def check_credit(customer_id, order_amount):
    cached_limit = cache.get(f"credit:{customer_id}")  # Up to 1 hour stale
    return order_amount <= cached_limit

Correct: Use real-time for financial-impact decisions

# GOOD — real-time credit check at point of order
def check_credit(customer_id, order_amount):
    response = erp_api.get(f"/accounts/{customer_id}/credit_status")
    if response.status_code != 200:
        return False  # Fail-closed: deny if ERP unreachable
    credit = response.json()
    return order_amount <= (credit["limit"] - credit["used"])

Wrong: Event-driven without idempotency

# BAD — INSERT on every event (duplicates on redelivery)
def handle_order_event(event):
    db.execute("INSERT INTO orders (erp_id, amount) VALUES (?, ?)",
               (event["order_id"], event["amount"]))

Correct: Event-driven with idempotent upsert

# GOOD — UPSERT on external ID (safe for redelivery)
def handle_order_event(event):
    db.execute("""INSERT INTO orders (erp_id, amount, last_event_ts)
        VALUES (?, ?, ?)
        ON CONFLICT (erp_id) DO UPDATE SET amount = EXCLUDED.amount,
            last_event_ts = EXCLUDED.last_event_ts
        WHERE orders.last_event_ts < EXCLUDED.last_event_ts""",
        (event["order_id"], event["amount"], event["timestamp"]))

Common Pitfalls

Diagnostic Commands

# Check Salesforce API usage (remaining daily calls)
curl "https://YOUR_INSTANCE.salesforce.com/services/data/v62.0/limits" \
  -H "Authorization: Bearer $ACCESS_TOKEN" | jq '.DailyApiRequests, .DailyBulkV2QueryJobs'

# Check Salesforce Bulk API job status
curl "https://YOUR_INSTANCE.salesforce.com/services/data/v62.0/jobs/ingest/$JOB_ID" \
  -H "Authorization: Bearer $ACCESS_TOKEN" | jq '.state, .numberRecordsProcessed'

# Dynamics 365: Check API throttling headers
curl -v "https://YOUR_ORG.api.crm.dynamics.com/api/data/v9.2/accounts?\$top=1" \
  -H "Authorization: Bearer $ACCESS_TOKEN" 2>&1 | grep -i "x-ms-ratelimit"

Version History & Compatibility

| Era | Integration Style | Dominant Pattern | Key Technology | Status |
| --- | --- | --- | --- | --- |
| Pre-2010 | Point-to-point | Batch file transfer | FTP, flat files, IDocs | Legacy — still common in manufacturing |
| 2010-2015 | ESB-centric | SOA / real-time | SOAP, ESB (MuleSoft, TIBCO) | Declining — ESBs seen as bottlenecks |
| 2015-2020 | API-first | REST API + batch | REST, Bulk API, JSON, OAuth 2.0 | Current standard for most orgs |
| 2020-2025 | Event-driven | Hybrid (event + batch) | Kafka, CDC, Platform Events | Growing adoption, especially cloud-native |
| 2025+ | AI-augmented | Intelligent routing | ML-based pattern selection | Emerging |

When to Use / When Not to Use

| Use When | Don't Use When | Use Instead |
| --- | --- | --- |
| Starting a new ERP integration project — need to assign patterns to all data flows | Already know you need a specific API surface | System-specific API capability card |
| Evaluating whether to move from batch to real-time for a specific flow | Need implementation code for a specific ERP's event mechanism | Change Data Capture card |
| Designing a hybrid integration architecture across multiple ERPs | Comparing event-driven capabilities across specific ERP systems | Event-Driven Comparison card |
| Justifying pattern choices to stakeholders | Need rate limit numbers for a specific ERP edition | Rate Limits Comparison card |
| Troubleshooting pattern mismatch issues | Need error handling patterns specifically | Error Handling card |

Cross-System Comparison

| Capability | Salesforce | SAP S/4HANA | Oracle ERP Cloud | Dynamics 365 | NetSuite |
| --- | --- | --- | --- | --- | --- |
| Real-time API | REST API v62.0 | OData v4 (Cloud), BAPI/RFC (on-prem) | REST API | Dataverse Web API (OData v4) | SuiteTalk REST + RESTlets |
| Real-time limit | 100K-5M calls/24h by edition | Fair-use throttling | Per-tenant throttling | 6K/5min per user | 10 concurrent requests |
| Bulk/Batch API | Bulk API 2.0 (150MB/file) | IDoc (batch documents) | FBDI (250MB/file) | DMF (entity packages) | CSV Import (25K rows) |
| Event mechanism | CDC + Platform Events + Pub/Sub API | Event Mesh + Business Events | Business Events + OIC | Business Events + Webhooks | User Event Scripts |
| Event retention | 72h (Platform Events), 3d (CDC) | Subscription-dependent | Subscription-dependent | Subscription-dependent | N/A (trigger-based) |
| File-based import | Data Loader CSV | BTP file upload | FBDI (CSV/XML) | DMF packages | CSV Import |
| Best batch pattern | Bulk API 2.0 (async, parallel) | IDoc + Process Integration | FBDI + ESS scheduled jobs | DMF + recurring imports | SuiteQL + scheduled scripts |
| Best event pattern | CDC for record changes, Platform Events for custom | Event Mesh (cloud-first) | Business Events + OIC subscriptions | Dataverse webhooks | User Event Scripts |
| Hybrid maturity | High | High (Cloud), Medium (ECC) | Medium-High | Medium-High | Medium |

Important Caveats

Related Units