Salesforce Platform Events, Change Data Capture, and Pub/Sub API
Type: ERP Integration
System: Salesforce Platform (API v62.0)
Confidence: 0.90
Sources: 8
Verified: 2026-03-01
Freshness: 2026-03-01
TL;DR
- Bottom line: Salesforce provides three complementary event-driven mechanisms — Platform Events (custom event messages), Change Data Capture (automatic record-change events), and Pub/Sub API (gRPC-based subscribe/publish) — each with distinct limits and use cases. Pub/Sub API is the recommended modern interface for external subscribers. [src4]
- Key limit: 250,000 platform event publishes/hour (Performance/Unlimited); delivery allocation is per-subscriber (1,000 events × 10 subscribers = 10,000 deliveries). [src1]
- Watch out for: CDC has a 5-entity selection limit without add-on license, and events are retained for only 72 hours — if your subscriber goes offline beyond that window, you lose events permanently. [src3, src6]
- Best for: Event-driven architectures where external systems need near-real-time notification of Salesforce data changes without polling REST API.
- Authentication: OAuth 2.0 (JWT bearer for server-to-server) — same auth as REST API; Pub/Sub API connects via gRPC with OAuth access token in metadata. [src5]
System Profile
Salesforce's event-driven architecture centers on the Event Bus, a unified message backbone that carries Platform Events, Change Data Capture events, and standard platform events. External consumers can subscribe via the legacy CometD-based Streaming API or the newer gRPC-based Pub/Sub API (GA since Spring '22). All new custom platform events default to high-volume since Spring '23; legacy standard-volume events can no longer be created. [src1, src4]
| Property | Value |
| Vendor | Salesforce |
| System | Salesforce Platform (API v62.0, Spring '26) |
| API Surface | Platform Events, Change Data Capture, Pub/Sub API, Streaming API |
| Current API Version | v62.0 |
| Editions Covered | Enterprise, Performance, Unlimited, Developer |
| Deployment | Cloud |
| API Docs | Platform Events Developer Guide |
| Status | GA (Pub/Sub API GA since Spring '22; Streaming API GA, not deprecated) |
API Surfaces & Capabilities
| API Surface | Protocol | Best For | Event Retention | Rate Limit | Real-time? | Direction |
| Platform Events (High-Volume) | Event Bus | Custom event messages between systems | 72 hours | 250K publishes/hour (Perf/Unlim) | Yes | Publish + Subscribe |
| Change Data Capture (CDC) | Event Bus | Automatic record-change notifications | 72 hours (3 days) | Shared with PE delivery allocation | Yes | Subscribe only |
| Pub/Sub API | gRPC / HTTP/2 / Avro | External subscription, high throughput | N/A (consumes from Event Bus) | 1,000 concurrent gRPC streams/channel | Yes | Publish + Subscribe |
| Streaming API | CometD / Bayeux / HTTP/1.1 / JSON | Legacy external subscription | N/A (consumes from Event Bus) | 2,000 concurrent CometD clients/org | Yes | Subscribe only |
[src1, src2, src3, src4]
Rate Limits & Quotas
Per-Request Limits
| Limit Type | Value | Applies To | Notes |
| Max events per FetchRequest | 100 | Pub/Sub API Subscribe RPC | Maximum num_requested value per request [src2] |
| Max event message size | 1 MB | Pub/Sub API PublishRequest | Per individual event in a batch [src2] |
| Recommended batch size | 200 events | Pub/Sub API PublishRequest | Total batch < 3 MB recommended [src2] |
| Hard gRPC message limit | 4 MB | Pub/Sub API PublishRequest | Exceeding causes publish failure and stream closure [src5] |
| Concurrent gRPC streams | 1,000 | Pub/Sub API per channel | Active RPC calls on same gRPC channel [src5] |
| Max managed subscriptions | 200 | Pub/Sub API per org | Unique managed subscriptions per Salesforce org [src2] |
| Max custom fields per PE | 25 | Platform Event definition | Per platform event definition [src1] |
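The batch-size limits above can be enforced client-side before publishing. A minimal sketch (illustrative names, not Pub/Sub SDK calls) that chunks pre-encoded event payloads into batches respecting the 200-event / ~3 MB recommendation; note the separate 1 MB per-event cap still applies to each payload:

```python
# Hedged sketch: chunk encoded events into publish batches that respect the
# documented recommendations (<= 200 events, total batch under ~3 MB, which
# also keeps well clear of the 4 MB hard gRPC message limit).
RECOMMENDED_MAX_EVENTS = 200
RECOMMENDED_MAX_BYTES = 3 * 1024 * 1024

def chunk_events(encoded_events):
    """Yield lists of payloads sized for one PublishRequest each.
    Each individual payload must independently stay under the 1 MB event cap."""
    batch, batch_bytes = [], 0
    for payload in encoded_events:
        size = len(payload)
        # Start a new batch if adding this event would breach either limit
        if batch and (len(batch) >= RECOMMENDED_MAX_EVENTS
                      or batch_bytes + size > RECOMMENDED_MAX_BYTES):
            yield batch
            batch, batch_bytes = [], 0
        batch.append(payload)
        batch_bytes += size
    if batch:
        yield batch
```
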
Rolling / Daily Limits
| Limit Type | Value | Window | Edition Differences |
| Platform Event hourly publishing | 250,000 | 1 hour | Performance/Unlimited: 250K; add-on adds +25K/hour [src1] |
| Platform Event + CDC daily delivery | 50,000 base | 24 hours | Performance/Unlimited: 50K; add-on adds +100K/day (shifts to 3M/month) [src1] |
| CDC entity selection | 5 objects | Ongoing | All editions: 5 entities without add-on; add-on removes limit [src3] |
| Concurrent CometD clients | 2,000 | Per org | Shared across all Streaming API subscriptions [src7] |
| Event retention (high-volume PE) | 72 hours | Rolling | Events beyond 72h are permanently deleted [src6] |
| Event retention (CDC) | 72 hours (3 days) | Rolling | Same retention window as high-volume platform events [src3] |
| Event retention (legacy standard-volume PE) | 24 hours | Rolling | Standard-volume events can no longer be created [src1] |
Delivery Counting Methodology
Delivery allocations are consumed per subscriber, not per published event. Publishing 1,000 events to 10 subscribers consumes 10,000 deliveries (1,000 × 10) from the daily allocation. This is the single most impactful limit for fan-out architectures. [src1, src7]
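The counting rule above can be expressed directly. A toy helper for capacity planning (names are illustrative):

```python
# Delivery allocation is consumed per subscriber, so fan-out multiplies usage.
def deliveries_consumed(events_published, subscriber_count):
    return events_published * subscriber_count

def hours_until_exhausted(daily_allocation, events_per_hour, subscriber_count):
    """Rough estimate of how long a 24h delivery allocation lasts at a
    steady publish rate with N subscribers."""
    per_hour = deliveries_consumed(events_per_hour, subscriber_count)
    return daily_allocation / per_hour
```

At the base 50K/day allocation, 1,000 events/hour fanned out to 10 subscribers exhausts the allocation in about 5 hours.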
Authentication
| Flow | Use When | Token Lifetime | Refresh? | Notes |
| OAuth 2.0 JWT Bearer | Server-to-server Pub/Sub API connections | Session timeout (default 2h) | New JWT per request | Recommended for unattended integrations [src5] |
| OAuth 2.0 Web Server | User-context event publishing | Access: 2h, Refresh: until revoked | Yes | Requires callback URL |
| Connected App + Client Credentials | Service-to-service without user context | Configurable | Yes | Available since Winter '23 |
Authentication Gotchas
- Pub/Sub API requires the OAuth access token in the gRPC call metadata (not as a URL parameter). The token goes in the accesstoken metadata key, and the instance URL in the instanceurl key. [src5]
- JWT bearer flow connected apps require a digital certificate — self-signed works for development, but CA-signed is recommended for production. [src5]
- Session timeout is configurable by the Salesforce admin. Do not hardcode a 2-hour expiry — handle UNAUTHENTICATED gRPC errors with token refresh. [src5]
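For the JWT bearer flow, the assertion claims are the part most often gotten wrong. A hedged sketch of just the claims dictionary; RS256 signing with the connected app's private key (e.g. via a JWT library) and the POST to /services/oauth2/token are omitted:

```python
import time

# Assumption: production audience shown; sandboxes use https://test.salesforce.com.
TOKEN_AUDIENCE = "https://login.salesforce.com"

def jwt_bearer_claims(consumer_key, username, lifetime_seconds=180):
    """Claims Salesforce expects for the JWT bearer assertion:
    iss = connected app consumer key, sub = the integration username,
    aud = login URL, exp = a short expiry for the assertion itself
    (this is not the session timeout, which the admin controls)."""
    return {
        "iss": consumer_key,
        "sub": username,
        "aud": TOKEN_AUDIENCE,
        "exp": int(time.time()) + lifetime_seconds,
    }
```
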
Constraints
- CDC 5-entity limit: Without the Platform Events add-on license, CDC is limited to 5 selected standard/custom objects per org. This is a hard licensing constraint. [src3]
- 72-hour replay window: Events older than 72 hours cannot be replayed. If a subscriber is offline for more than 3 days, those events are permanently lost. Design for reconciliation. [src6]
- Delivery allocation is per-subscriber: Fan-out patterns multiply delivery consumption. 1 event to N subscribers = N deliveries counted. [src1, src7]
- Platform Events are immutable: Published events cannot be updated or deleted. Design event schemas carefully. [src1]
- Pub/Sub API payload is Avro binary: Unlike Streaming API (JSON), Pub/Sub API delivers events as Apache Avro binary payloads. Subscribers must fetch the schema and deserialize. [src4, src5]
- No guaranteed ordering across partitions: High-volume platform events may be delivered out of order across different transactions. Within a single Apex transaction, events are ordered. [src1]
Integration Pattern Decision Tree
START -- Need event-driven integration with Salesforce
|-- What triggers the event?
| |-- Custom business logic (order placed, status changed)
| | |-- Need custom event schema?
| | | |-- YES --> Platform Events (define __e custom event)
| | | |-- NO --> Standard platform events (LoginEvent, etc.)
| |-- Any record change on specific objects (create/update/delete/undelete)
| | |-- YES --> Change Data Capture (CDC)
| | | |-- Tracking <= 5 objects?
| | | | |-- YES --> CDC works without add-on license
| | | | |-- NO --> Requires Platform Events add-on license
| | |-- Need field-level change tracking?
| | |-- YES --> CDC (delivers changed fields + ChangeEventHeader)
| | |-- NO --> Platform Events (custom schema, publish explicitly)
|-- Where is the subscriber?
| |-- External system (outside Salesforce)
| | |-- Greenfield / new integration?
| | | |-- YES --> Pub/Sub API (gRPC) -- modern, efficient, recommended
| | | |-- NO (existing CometD) --> Streaming API still works, migrate when ready
| | |-- Need > 2,000 concurrent subscribers?
| | |-- YES --> Fan-out via middleware (Heroku, Kafka, EventBridge)
| | |-- NO --> Direct Pub/Sub API subscription
| |-- Inside Salesforce (Apex, Flow, LWC)
| |-- Apex trigger on platform event (after insert)
| |-- Flow: Platform Event-Triggered
| |-- LWC: empApi for real-time UI updates
|-- Volume and reliability?
|-- < 250K events/hour, < 50K deliveries/day?
| |-- YES --> Standard allocation sufficient
| |-- NO --> Purchase Platform Events add-on
|-- Need guaranteed delivery?
|-- YES --> Store replay ID, resubscribe on disconnect
|-- NO --> Fire-and-forget acceptable
Quick Reference
Platform Events vs CDC vs Pub/Sub API
| Capability | Platform Events | Change Data Capture | Pub/Sub API |
| What it does | Custom event messages you define and publish | Automatic events on record CUD operations | gRPC interface to subscribe/publish events |
| Event schema | Custom (__e object definition) | Automatic (mirrors object fields) | Consumes PE + CDC events |
| Publishing | Apex, API, Flow, Process Builder | Automatic (system-generated) | gRPC PublishRequest |
| Subscribing | Apex trigger, Flow, empApi, CometD, gRPC | Apex trigger, Flow, empApi, CometD, gRPC | gRPC Subscribe/ManagedSubscribe RPC |
| Retention | 72h (high-volume) | 72h (3 days) | N/A (reads from Event Bus) |
| Replay support | Yes (replay ID) | Yes (replay ID) | Yes (replay ID or custom preset) |
| Payload format (external) | JSON (CometD) / Avro (Pub/Sub) | JSON (CometD) / Avro (Pub/Sub) | Avro binary only |
| Protocol | Via Streaming API or Pub/Sub API | Via Streaming API or Pub/Sub API | gRPC / HTTP/2 |
| Ordering | Within single Apex transaction | Per object per transaction | Preserved from source |
Streaming API (CometD) vs Pub/Sub API (gRPC)
| Feature | Streaming API (CometD) | Pub/Sub API (gRPC) |
| Protocol | HTTP/1.1, Bayeux/CometD | HTTP/2, gRPC |
| Payload format | JSON | Apache Avro (binary, compressed) |
| Delivery model | Push (server pushes to client) | Pull (client requests N events) |
| Flow control | None (server controls pace) | Client-controlled (num_requested) |
| Efficiency | Higher bandwidth, larger payloads | ~7-10x faster; compressed byte buffers |
| Language support | JavaScript (CometD client) | 11+ languages (Python, Java, Node, C++, Go) |
| Publishing | Not supported (subscribe only) | Supported (PublishRequest RPC) |
| Concurrent limit | 2,000 CometD clients/org | 1,000 streams/gRPC channel (no org-wide cap) |
| Managed subscriptions | No | Yes (up to 200/org) |
| Deprecation status | Not deprecated, no new features | Actively developed, recommended |
[src4, src5]
Step-by-Step Integration Guide
1. Define a Platform Event (if using custom events)
Navigate to Setup > Integrations > Platform Events > New Platform Event. Define fields for your event payload. The API name ends with __e. For CDC, skip this step. [src1]
Setup > Integrations > Platform Events > New Platform Event
- Label: Order Placed
- API Name: Order_Placed__e
- Publish Behavior: Publish After Commit
- Add custom fields: Order_Id__c, Amount__c, Status__c
Verify: GET /services/data/v62.0/sobjects/Order_Placed__e/describe — returns field definitions.
2. Enable CDC for target objects (if using CDC)
Navigate to Setup > Integrations > Change Data Capture. Select up to 5 objects (without add-on). [src3]
Setup > Integrations > Change Data Capture
- Select objects: Account, Opportunity, Contact, Lead, Case
- Save -- CDC begins immediately
- Channel name: /data/AccountChangeEvent, /data/OpportunityChangeEvent, etc.
Verify: Update a record on a selected object — the change event should appear on the corresponding channel within seconds.
3. Connect via Pub/Sub API (gRPC)
Establish a gRPC connection to api.pubsub.salesforce.com:7443. Use OAuth access token in gRPC metadata. [src5]
import grpc
PUBSUB_ENDPOINT = "api.pubsub.salesforce.com:7443"
channel_credentials = grpc.ssl_channel_credentials()
channel = grpc.secure_channel(PUBSUB_ENDPOINT, channel_credentials)
metadata = (
("accesstoken", "YOUR_OAUTH_ACCESS_TOKEN"),
("instanceurl", "https://yourinstance.my.salesforce.com"),
("tenantid", "YOUR_ORG_ID"),
)
Verify: Call GetTopic RPC with your event channel name — returns topic metadata and schema ID.
4. Subscribe to events and handle replay
Use the Subscribe RPC to receive events. Store the replay ID after each batch. Max 100 events per FetchRequest. [src2, src6]
def fetch_requests(topic_name, replay_id=None):
    # First request: resume from a stored replay ID when available
    request = {"topic_name": topic_name, "num_requested": 100}  # 100 = max per FetchRequest
    if replay_id:
        request["replay_id"] = replay_id
        request["replay_preset"] = "CUSTOM"
    else:
        request["replay_preset"] = "LATEST"
    yield request
    while True:
        # Keep the stream alive: request the next batch as events are processed
        yield {"topic_name": topic_name, "num_requested": 100}

def subscribe_to_events(stub, topic_name, replay_id=None):
    for response in stub.Subscribe(fetch_requests(topic_name, replay_id), metadata=metadata):
        for event in response.events:
            payload = decode_avro(event.event.payload, response.schema_id)
            save_replay_id(topic_name, event.replay_id)  # Persist for reconnection (helper not shown)
            yield payload
Verify: Publish a test event — subscriber receives it within seconds. Store replay_id and verify reconnection resumes correctly.
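Step 4 depends on durable replay-ID storage. A minimal file-backed sketch; a real deployment would use a database, and the hex encoding exists only to make the opaque replay-ID bytes JSON-safe (never compute with replay IDs):

```python
import json
import os
import tempfile

# Assumption: a temp-dir JSON file stands in for a durable store.
STORE_PATH = os.path.join(tempfile.gettempdir(), "sf_replay_ids.json")

def save_replay_id(topic, replay_id):
    ids = _load_all()
    # Replay IDs arrive as opaque bytes from Pub/Sub API; hex-encode for JSON
    ids[topic] = replay_id.hex() if isinstance(replay_id, bytes) else replay_id
    with open(STORE_PATH, "w") as f:
        json.dump(ids, f)

def load_replay_id(topic):
    return _load_all().get(topic)

def _load_all():
    try:
        with open(STORE_PATH) as f:
            return json.load(f)
    except (FileNotFoundError, ValueError):
        return {}
```
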
Code Examples
JavaScript/Node.js: Subscribe to Platform Events via CometD
// Input: Salesforce OAuth token, event channel name
// Output: Real-time event notifications via CometD long-polling
const jsforce = require("jsforce"); // v2.x
const conn = new jsforce.Connection({
loginUrl: "https://login.salesforce.com",
accessToken: process.env.SF_ACCESS_TOKEN,
instanceUrl: process.env.SF_INSTANCE_URL,
});
// Subscribe to Platform Event
conn.streaming.topic("/event/Order_Placed__e").subscribe((message) => {
console.log("Event received:", JSON.stringify(message, null, 2));
const replayId = message.event.replayId;
// Store replayId persistently for reconnection
});
// Subscribe to CDC
conn.streaming.topic("/data/AccountChangeEvent").subscribe((message) => {
const header = message.payload.ChangeEventHeader;
console.log(`Change: ${header.changeType}, IDs: ${header.recordIds}`);
});
cURL: Publish a Platform Event via REST API
# Publish a single platform event
curl -X POST \
"https://yourinstance.my.salesforce.com/services/data/v62.0/sobjects/Order_Placed__e" \
-H "Authorization: Bearer ${SF_ACCESS_TOKEN}" \
-H "Content-Type: application/json" \
-d '{"Order_Id__c":"ORD-2026-001","Amount__c":1500.00,"Status__c":"Confirmed"}'
# Response: {"id":"e01xx0000000001AAA","success":true,"errors":[]}
# Check platform event usage limits
curl -s "https://yourinstance.my.salesforce.com/services/data/v62.0/limits" \
-H "Authorization: Bearer ${SF_ACCESS_TOKEN}" \
| jq '.HourlyPublishedPlatformEvents, .DailyDeliveredPlatformEvents'
Data Mapping
CDC ChangeEventHeader Fields
| Field | Type | Description | Gotcha |
| changeType | String | CREATE, UPDATE, DELETE, UNDELETE, GAP_* | GAP_ types indicate missed events — trigger full reconciliation [src3] |
| changedFields | String[] | API names of changed fields | Empty for CREATE (all fields in body) [src3] |
| recordIds | String[] | IDs of affected records | Can contain multiple IDs when batched [src3] |
| commitTimestamp | Long | Epoch timestamp of DML commit | Not the publish time — delivery delay is possible [src3] |
| transactionKey | String | Unique ID for the transaction | Use to group events from same Apex transaction [src3] |
| sequenceNumber | Integer | Order within transaction | Events with same transactionKey ordered by this [src3] |
| entityName | String | SObject API name | Use to route from compound channels [src3] |
| commitUser | String | User ID who made the change | May be automation user [src3] |
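Because sequenceNumber only orders events within a transactionKey, subscribers needing transactional grouping can regroup client-side. A sketch, assuming events arrive as already-decoded dicts:

```python
from collections import defaultdict

def group_by_transaction(events):
    """Group decoded CDC events by transactionKey and restore
    intra-transaction order using sequenceNumber."""
    groups = defaultdict(list)
    for e in events:
        groups[e["ChangeEventHeader"]["transactionKey"]].append(e)
    for txn_events in groups.values():
        txn_events.sort(key=lambda e: e["ChangeEventHeader"]["sequenceNumber"])
    return dict(groups)
```
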
Data Type Gotchas
- DateTime fields are ISO 8601 UTC strings in CometD JSON but Avro logical types (long milliseconds since epoch) in Pub/Sub API. Convert carefully. [src3, src4]
- Multi-select picklist values are semicolon-delimited strings in CDC events, matching Salesforce API convention. External systems may expect arrays. [src3]
- Currency fields in CDC include the value but not the currency code — in multi-currency orgs, CurrencyIsoCode must be tracked separately. [src3]
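The first two gotchas can be handled with small conversion helpers; a sketch:

```python
from datetime import datetime, timezone

def avro_millis_to_iso(ms):
    """Avro logical timestamp (epoch milliseconds) -> ISO 8601 UTC string,
    matching what CometD JSON subscribers would see."""
    return datetime.fromtimestamp(ms / 1000, tz=timezone.utc).isoformat()

def split_multiselect(value):
    """Semicolon-delimited multi-select picklist -> list for external systems."""
    return value.split(";") if value else []
```
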
Error Handling & Failure Points
Common Error Codes
| Code | Meaning | Cause | Resolution |
| OPERATION_TOO_LARGE | Event payload exceeds 1 MB | Single event too large | Reduce payload; split into multiple events [src2] |
| UNAVAILABLE (gRPC) | Server temporarily unavailable | Service disruption | Exponential backoff: 1s, 2s, 4s, 8s, max 60s [src5] |
| UNAUTHENTICATED (gRPC) | Invalid or expired token | OAuth token expired | Refresh token and reconnect [src5] |
| RESOURCE_EXHAUSTED (gRPC) | Rate limit exceeded | Exceeds concurrent stream or publishing limit | Reduce streams or publishing rate [src2] |
| GAP_OVERFLOW | CDC events lost | Event volume exceeded delivery capacity | Full data reconciliation required [src3] |
| INVALID_REPLAY_ID | Replay ID outside retention window | Replay ID older than 72 hours | Use EARLIEST preset or LATEST; full sync may be needed [src6] |
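The UNAVAILABLE retry schedule in the table can be precomputed; a sketch (production code would typically add jitter to avoid thundering herds):

```python
def backoff_delays(attempts, base=1.0, cap=60.0):
    """Exponential backoff: base, 2*base, 4*base, ... capped at `cap` seconds,
    matching the 1s, 2s, 4s, 8s, max 60s schedule above."""
    return [min(base * (2 ** i), cap) for i in range(attempts)]
```
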
Failure Points in Production
- Subscriber offline > 72 hours: Events permanently lost. Fix: Implement heartbeat + full reconciliation fallback when the gap exceeds 72 hours. [src6]
- Fan-out exhausts daily delivery allocation: 50K events to 10 subscribers = 500K deliveries. Fix: Use middleware fan-out (Kafka, EventBridge) with a single Salesforce subscriber. [src1, src7]
- CDC GAP events during high-volume DML: Salesforce emits GAP events instead of individual changes. Fix: Handle all GAP_ change types; on GAP_OVERFLOW, trigger a full sync. [src3]
- gRPC channel exhaustion: Hitting 1,000 concurrent streams. Fix: Create multiple gRPC channels (one per event type), each with its own connection pool. [src5]
- Avro schema mismatch: Schema changes cause deserialization failures. Fix: Always call GetSchema before processing; cache by schema_id; refresh on error. [src5]
- Publish callbacks silently failing: EventBus.PublishCallback swallows exceptions. Fix: Log callback results; use Database.SaveResult for individual event failures. [src1]
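The schema-mismatch fix amounts to a cache with refresh-on-error. A sketch in which fetch_schema is a stand-in for a call to the GetSchema RPC, not a real SDK function:

```python
class SchemaCache:
    """Cache Avro schemas by schema_id; refetch on demand."""
    def __init__(self, fetch_schema):
        self._fetch = fetch_schema  # callable: schema_id -> schema
        self._cache = {}

    def get(self, schema_id, refresh=False):
        if refresh or schema_id not in self._cache:
            self._cache[schema_id] = self._fetch(schema_id)
        return self._cache[schema_id]

def decode_with_retry(cache, schema_id, payload, decode):
    try:
        return decode(payload, cache.get(schema_id))
    except ValueError:
        # Deserialization failed: refetch the schema once and retry
        return decode(payload, cache.get(schema_id, refresh=True))
```
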
Anti-Patterns
Wrong: Polling REST API for changes instead of using CDC
# BAD -- Queries all records every 5 minutes to find changes
# Wastes API calls, misses changes between polls
while True:
results = sf.query(f"SELECT Id, Name FROM Opportunity WHERE SystemModstamp > {last_poll}")
process_changes(results)
time.sleep(300)
Correct: Subscribe to CDC for real-time change notifications
# GOOD -- CDC delivers changes in real-time, zero API call overhead
def handle_cdc_event(event):
header = event["ChangeEventHeader"]
if header["changeType"] in ("CREATE", "UPDATE"):
sync_to_target(header["recordIds"], event)
elif header["changeType"] == "DELETE":
delete_from_target(header["recordIds"])
elif header["changeType"].startswith("GAP_"):
trigger_full_reconciliation(header["entityName"])
Wrong: One CometD subscription per downstream consumer
// BAD -- Each CometD client counts against 2,000 org limit
services.forEach((svc) => {
const client = new CometDClient(sfCredentials);
client.subscribe("/event/Order_Placed__e", svc.handler);
});
Correct: Single subscriber with middleware fan-out
// GOOD -- One subscriber, fan-out via Kafka
const pubsubClient = new SalesforcePubSubClient(credentials);
pubsubClient.subscribe("/event/Order_Placed__e", async (event) => {
await kafkaProducer.send({
topic: "salesforce.order-placed",
messages: [{ value: JSON.stringify(event) }],
});
});
Wrong: Ignoring replay IDs on reconnect
# BAD -- Always subscribes from LATEST, losing events during downtime
subscriber.subscribe(topic="/data/AccountChangeEvent", replay_preset="LATEST")
Correct: Persisting replay IDs for gap-free reconnection
# GOOD -- Stores replay ID, resumes from last position
last_id = load_replay_id(topic)
if last_id:
subscriber.subscribe(topic=topic, replay_id=last_id)
else:
subscriber.subscribe(topic=topic, replay_preset="EARLIEST")
for event in subscriber.events():
process(event)
save_replay_id(topic, event.replay_id)
Common Pitfalls
- Exceeding delivery allocation without realizing it: Each subscriber counts separately. 10 triggers + 5 Flows + 3 external = 18 deliveries per event. Fix: Monitor via /services/data/v62.0/limits; consolidate subscribers. [src1, src7]
- CDC 5-entity limit surprise: Attempting to select a 6th object without the add-on may silently fail. Fix: Check selections via Setup > Change Data Capture before assuming CDC is active. [src3]
- Publish After Commit vs Publish Immediately: "Publish After Commit" only publishes if the transaction succeeds; "Publish Immediately" publishes even on rollback. Fix: Use Publish After Commit for data consistency. [src1]
- Replay ID arithmetic: IDs are not sequential or contiguous. Fix: Store the exact replay ID; never compute derived values. [src6]
- Avro schema evolution: New fields break old cached schemas. Fix: Fetch the schema via the GetSchema RPC using schema_id; cache with refresh-on-error. [src5]
- Sandbox event limits differ: Sandboxes may have lower allocations. Fix: Test with production-like volumes in a full sandbox; verify limits via the Limits API. [src1]
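Monitoring the delivery allocation (first pitfall) means reading Max/Remaining pairs out of the Limits API response. A sketch that assumes the response shape shown in the Diagnostic Commands section:

```python
def event_usage(limits_json, threshold=0.8):
    """Return ({limit_name: fraction_used}, [limits past threshold]) for
    the two platform event limits exposed by /services/data/vXX.0/limits."""
    usage, warnings = {}, []
    for name in ("HourlyPublishedPlatformEvents", "DailyDeliveredPlatformEvents"):
        entry = limits_json.get(name)
        if not entry or not entry.get("Max"):
            continue  # limit absent or unmetered in this org
        used = (entry["Max"] - entry["Remaining"]) / entry["Max"]
        usage[name] = used
        if used >= threshold:
            warnings.append(name)
    return usage, warnings
```
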
Diagnostic Commands
# Check platform event usage / remaining limits
curl -s "https://${SF_INSTANCE}/services/data/v62.0/limits" \
-H "Authorization: Bearer ${SF_ACCESS_TOKEN}" \
| jq '{HourlyPublishedPlatformEvents, DailyDeliveredPlatformEvents}'
# List all platform event definitions in the org
curl -s "https://${SF_INSTANCE}/services/data/v62.0/sobjects" \
-H "Authorization: Bearer ${SF_ACCESS_TOKEN}" \
| jq '[.sobjects[] | select(.name | endswith("__e")) | {name, label}]'
# Describe a specific platform event schema
curl -s "https://${SF_INSTANCE}/services/data/v62.0/sobjects/Order_Placed__e/describe" \
-H "Authorization: Bearer ${SF_ACCESS_TOKEN}" \
| jq '{name, fields: [.fields[] | {name, type, length}]}'
# Test publishing a platform event
curl -X POST "https://${SF_INSTANCE}/services/data/v62.0/sobjects/Order_Placed__e" \
-H "Authorization: Bearer ${SF_ACCESS_TOKEN}" \
-H "Content-Type: application/json" \
-d '{"Order_Id__c":"TEST-001","Amount__c":100,"Status__c":"Test"}'
Version History & Compatibility
| API/Feature | Release | Status | Key Changes | Notes |
| Pub/Sub API | Spring '22 (v54.0) | GA | Initial GA — gRPC Subscribe + Publish | Replaced need for external CometD libraries [src8] |
| Pub/Sub Managed Subscriptions | Spring '24 (v60.0) | Beta | Server-managed subscription state | Salesforce tracks replay position [src2] |
| High-Volume PE as default | Spring '23 (v57.0) | GA | All new PE definitions are high-volume | Standard-volume creation disabled [src1] |
| Change Data Capture | Winter '19 (v44.0) | GA | Initial GA for selected standard objects | Entity limit: 5 without add-on [src3] |
| Streaming API (CometD) | API v21.0 | GA | Long-standing, stable | Not deprecated; Pub/Sub API preferred [src4] |
| Platform Events | Winter '17 (v38.0) | GA | Initial GA | Apex triggers, Flow, API publishing [src1] |
When to Use / When Not to Use
| Use When | Don't Use When | Use Instead |
| Need real-time notification of Salesforce record changes | Need to bulk-export historical data | Bulk API 2.0 |
| External system needs to react to Salesforce events | Need to query current record state | REST API SOQL query |
| Building event-driven microservice architecture | Need request-response pattern | REST API or Composite API |
| Tracking field-level changes on specific objects (CDC) | Tracking > 5 objects without add-on | Platform Events with Apex triggers |
| Need guaranteed delivery with replay (within 72h) | Need event retention > 72 hours | External broker (Kafka, AWS SQS) |
| Fan-out to single middleware subscriber | Fan-out to > 2,000 direct subscribers | Middleware hub-and-spoke pattern |
Important Caveats
- Platform event allocation limits vary by Salesforce edition and are subject to change each release. Always verify via the Limits REST API (/services/data/v62.0/limits). [src1]
- The Platform Events add-on license changes both CDC entity limit (removes 5-object cap) and delivery allocation (+100K/day, shifts to monthly enforcement at 3M/month). Contact Salesforce for pricing. [src1, src3]
- Sandbox orgs have independent event allocations that may be lower than production. Performance testing may not reflect production capacity. [src1]
- Pub/Sub API is only for external connections — Apex code within Salesforce cannot use gRPC. Use Apex triggers on platform events or empApi in LWC for internal consumption. [src5]
- Replay IDs are org-specific and environment-specific. A replay ID from sandbox cannot be used in production. [src6]
Related Units