Poison Message Handling: Triage and Replay of Failed ERP Integration Messages

Type: ERP Integration
System: Cross-Platform (AWS SQS, Azure Service Bus, Kafka, MuleSoft, Boomi)
Confidence: 0.87
Sources: 7
Verified: 2026-03-07
Freshness: evolving

TL;DR

System Profile

This card covers poison message handling as a cross-platform architecture pattern for ERP integrations. It focuses specifically on what happens after a message exhausts its retry budget and lands in a dead letter queue — detection, classification, triage, remediation, and replay. For retry strategies that determine when a message becomes a poison message (exponential backoff, circuit breakers), see the companion card on error handling and DLQ fundamentals.

The patterns apply across all major message brokers (AWS SQS, Azure Service Bus, Apache Kafka, RabbitMQ) and iPaaS platforms (MuleSoft Anypoint MQ, Boomi Atom Queue, Workato, Celigo). The specific ERP system at either end (Salesforce, SAP, Oracle, NetSuite, Dynamics 365, Workday) does not change the poison message handling approach — it changes the error codes and data mapping fixes needed during remediation.

| System | Role | API Surface | Direction |
| --- | --- | --- | --- |
| Source ERP (e.g., Salesforce) | Event producer — generates change events or outbound messages | REST, Platform Events, CDC | Outbound |
| Message Broker (e.g., AWS SQS, Kafka) | Message transport + DLQ infrastructure | SQS API, Kafka Protocol | Transport |
| iPaaS (e.g., MuleSoft, Boomi) | Integration orchestrator — message transformation and routing | Anypoint MQ, Atom Queue | Orchestrator |
| Target ERP (e.g., SAP S/4HANA) | Message consumer — processes inbound records | OData, BAPI, IDoc | Inbound |

API Surfaces & Capabilities

Poison message handling capabilities vary significantly across platforms. The key differentiators are automatic DLQ routing, DLQ inspection APIs, and native replay/redrive support: [src3, src4, src5]

| Platform | DLQ Type | Auto-Route | Max Delivery Count | Inspection API | Native Replay | DLQ Retention |
| --- | --- | --- | --- | --- | --- | --- |
| AWS SQS | Separate queue | Yes (redrive policy) | Configurable (1-1000) | ReceiveMessage on DLQ | Yes (DLQ Redrive API) | Same as source (max 14 days) |
| Azure Service Bus | Sub-queue ($deadletterqueue) | Yes (MaxDeliveryCount) | Default 10, configurable | Peek/receive on sub-queue | Manual (receive + re-send) | Unlimited (Premium) |
| Apache Kafka | Separate topic (DLT) | Application-level | Application-level | Consumer on DLT topic | Application-level | Topic retention config |
| RabbitMQ | Separate queue (x-dead-letter-exchange) | Yes (x-delivery-limit) | Configurable via quorum queues | AMQP consume on DLQ | Manual (consume + re-publish) | Queue TTL config |
| MuleSoft Anypoint MQ | Separate queue | Yes (max delivery attempts) | Configurable | Anypoint MQ API | Yes (REM) | 7 days default |
| Boomi Atom Queue | Built-in DLQ | Yes (after 7 attempts) | 7 (6 retries + original) | Queue Management panel | Yes (resend dead letters) | Atom storage lifecycle |

Rate Limits & Quotas

DLQ Throughput Limits

| Platform | Replay Rate Limit | Concurrent Replays | Max DLQ Size | Notes |
| --- | --- | --- | --- | --- |
| AWS SQS | System-optimized or custom max velocity | 1 active redrive task per source queue | No hard limit (cost-based) | Redrive task max duration: 36 hours; max 100 active tasks per account [src4] |
| Azure Service Bus | No built-in rate limit on replay | N/A (manual process) | Entity size limit (Premium: 80 GB) | No automatic cleanup — messages persist until explicitly completed [src3] |
| Apache Kafka | Consumer throughput | Consumer group parallelism | Topic retention (size or time) | No native redrive — must implement consumer that reads DLT and produces to main topic [src5] |
| MuleSoft Anypoint MQ | API rate limits apply | Per-queue basis | 120,000 in-flight messages | REM feature provides managed replay with visibility [src6] |
| Boomi | Queue throughput | Per-atom basis | Atom storage capacity | Dead letters visible in Queue Management panel; batch resend available |

Monitoring Thresholds

| Metric | Target | Alert When |
| --- | --- | --- |
| DLQ ingestion rate | < 1% of incoming throughput | Sustained > 1% for 15 minutes [src1] |
| DLQ backlog (depth) | < 1,000 messages | Growing for > 1 hour without triage [src1] |
| Oldest message age in DLQ | < 24 hours for critical streams | Any message > 24 hours untriaged [src1] |
| Replay success rate | > 95% | Below 90% on any replay batch [src1] |
| Poison ratio (DLQ / total) | < 5% | Above 5% sustained [src1] |
| Time to first triage | < 4h (critical), < 24h (standard) | Exceeding SLA threshold [src1] |
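
These thresholds can be evaluated by a scheduled health check. A minimal sketch — metric collection and alert delivery are left out, and the input keys and alert levels are illustrative, not part of any platform API:

```python
def check_dlq_thresholds(metrics):
    """Evaluate DLQ health metrics against the targets above.

    `metrics` keys (all optional): dlq_ingest_rate (fraction of incoming
    throughput), dlq_depth, oldest_age_hours, replay_success_rate,
    poison_ratio. Returns a list of (metric, level) alerts.
    """
    alerts = []
    if metrics.get("dlq_ingest_rate", 0) > 0.01:        # > 1% of throughput
        alerts.append(("dlq_ingestion_rate", "warning"))
    if metrics.get("dlq_depth", 0) > 1000:              # backlog target
        alerts.append(("dlq_backlog", "warning"))
    if metrics.get("oldest_age_hours", 0) > 24:         # untriaged too long
        alerts.append(("oldest_message_age", "critical"))
    if metrics.get("replay_success_rate", 1.0) < 0.90:  # alert floor
        alerts.append(("replay_success_rate", "critical"))
    if metrics.get("poison_ratio", 0) > 0.05:           # > 5% sustained
        alerts.append(("poison_ratio", "warning"))
    return alerts
```

Run it on whatever interval matches your triage SLA (e.g. every 5-15 minutes) and feed the result to your paging system.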

Authentication

N/A — pattern-level card. Authentication is handled at the broker/iPaaS layer:

| Platform | Auth Method | Notes |
| --- | --- | --- |
| AWS SQS | IAM roles / policies | DLQ access requires sqs:ReceiveMessage + sqs:DeleteMessage + sqs:SendMessage on both source and DLQ |
| Azure Service Bus | SAS or Azure AD (RBAC) | DLQ is a sub-queue — same connection string, append /$deadletterqueue [src3] |
| Apache Kafka | SASL/SCRAM, mTLS, or ACLs | DLT is a regular topic — requires separate ACL for consumer group [src5] |
| MuleSoft | Anypoint Platform credentials | DLQ management requires Manage Queues permission [src6] |

Constraints

Integration Pattern Decision Tree

START — Message has failed processing and landed in DLQ
├── Step 1: Classify the failure
│   ├── Transient error? (timeout, 429, 503, network error)
│   │   ├── YES → Should NOT be in DLQ — investigate why retries exhausted
│   │   │   ├── maxDeliveryCount too low? → Increase to 3-5
│   │   │   ├── Backoff delay too short? → Increase max backoff
│   │   │   └── Upstream system down for extended period? → Expected; replay now
│   │   └── Action: REPLAY IMMEDIATELY (system has recovered)
│   ├── Data quality error? (schema violation, missing field, invalid reference)
│   │   ├── Can the message be fixed automatically?
│   │   │   ├── YES → Auto-remediate → REPLAY WITH IDEMPOTENCY CHECK
│   │   │   └── NO → Route to manual review queue
│   │   └── Action: FIX DATA → REPLAY WITH IDEMPOTENCY CHECK
│   ├── Permanent error? (invalid endpoint, auth failure, business rule violation)
│   │   ├── Code/config bug? → Fix, deploy → REPLAY ENTIRE BATCH
│   │   └── Business rule rejection? → Fix target state or DISCARD + ALERT
│   └── Unknown error? → QUARANTINE → MANUAL TRIAGE
├── Step 2: Remediate
│   ├── Automated fix possible? → Apply transform → validate → replay
│   └── Manual fix needed? → Alert ops → ticket → SLA clock starts
├── Step 3: Replay
│   ├── Verify idempotency key present
│   ├── Verify ordering (parent before child)
│   ├── Replay to original queue (NOT directly to consumer)
│   ├── Monitor replay success rate
│   └── If fails again → QUARANTINE (no infinite loop)
└── Step 4: Post-mortem
    ├── New failure category? → Add classifier rule
    ├── Recurring pattern? → Fix upstream validation
    └── Update monitoring thresholds
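
The classification step of the tree collapses to a small dispatch function. A minimal sketch — the category names match the classifier shown later in this card, while the action strings are purely illustrative:

```python
def triage_action(error_category, auto_fixable=False):
    """Map a classified failure to the next triage step per the tree above."""
    if error_category == "transient":
        # The downstream system has usually recovered by triage time
        return "replay_immediately"
    if error_category == "data_quality":
        # Auto-remediate when a deterministic transform exists
        return "auto_fix_then_replay" if auto_fixable else "manual_review"
    if error_category == "permanent":
        # Code/config bug or business-rule rejection: fix first, then replay
        return "fix_then_replay_batch"
    # Unknown errors are never replayed blindly
    return "quarantine"
```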

Quick Reference

| Scenario | Action | Replay? | Idempotency? | Alert Level |
| --- | --- | --- | --- | --- |
| Schema violation (missing field) | Fix data, validate, replay | Yes | Yes | Warning |
| Invalid foreign key reference | Create parent first, then replay | Yes (ordered) | Yes | Warning |
| Rate limit exhaustion (429) | Should not be in DLQ — increase retry budget | Yes (immediate) | Yes | Info |
| Authentication failure (401/403) | Fix credentials, replay batch | Yes | Yes | Critical |
| Business rule violation | Fix target ERP state or discard | Conditional | Yes | Warning |
| Malformed payload (unparseable) | Discard — cannot be fixed | No | N/A | Error |
| Target system decommissioned | Discard + archive for audit | No | N/A | Critical |
| Duplicate record conflict (409) | Already processed — safe to discard | No | N/A | Info |
| Cascading failure (parent failed) | Fix parent first, replay children in order | Yes (ordered) | Yes | Warning |
| Unknown/unclassified error | Quarantine for investigation | Pending triage | Yes | Error |

Step-by-Step Integration Guide

1. Classify errors at the consumer level

Before a message reaches the DLQ, classify the error type in your consumer. This metadata travels with the message and determines the triage path. [src1, src7]

from datetime import datetime

# ValidationError, SchemaError, and AuthenticationError stand in for the
# exception types raised by your validation layer and ERP client library.
def classify_error(exception, message):
    """Classify processing errors to determine DLQ triage path."""
    error_info = {
        "error_class": type(exception).__name__,
        "error_message": str(exception)[:500],
        "timestamp": datetime.utcnow().isoformat(),
        "message_id": message.get("message_id"),
        "attempt_count": message.get("approximate_receive_count", 0),
    }
    if isinstance(exception, (TimeoutError, ConnectionError)):
        error_info["category"] = "transient"
        error_info["retry_eligible"] = True
    elif isinstance(exception, (ValidationError, SchemaError)):
        error_info["category"] = "data_quality"
        error_info["retry_eligible"] = False
    elif isinstance(exception, (AuthenticationError, PermissionError)):
        error_info["category"] = "permanent"
        error_info["retry_eligible"] = False
    else:
        error_info["category"] = "unknown"
        error_info["retry_eligible"] = False
    return error_info

Verify: Check DLQ messages have category attribute set → confirms classification is running.

2. Configure platform-specific DLQ routing

Set up automatic dead-letter routing with appropriate delivery count thresholds. [src3, src4]

# AWS SQS — Create DLQ and attach redrive policy
aws sqs create-queue --queue-name erp-orders-dlq \
  --attributes '{"MessageRetentionPeriod":"1209600"}'

aws sqs set-queue-attributes \
  --queue-url https://sqs.us-east-1.amazonaws.com/123456789/erp-orders \
  --attributes '{
    "RedrivePolicy": "{\"deadLetterTargetArn\":\"arn:aws:sqs:us-east-1:123456789:erp-orders-dlq\",\"maxReceiveCount\":\"5\"}"
  }'

# Azure Service Bus — Set MaxDeliveryCount (recommend 5 for ERP)
az servicebus queue update \
  --resource-group erp-integration \
  --namespace-name erp-bus \
  --name erp-orders \
  --max-delivery-count 5

Verify: Send a message that always fails → confirm it appears in DLQ after 5 attempts.

3. Build the DLQ triage consumer

Create a dedicated consumer that reads from the DLQ, classifies messages, and routes them through the triage workflow. [src1, src7]

import json, boto3
from datetime import datetime

sqs = boto3.client("sqs")
DLQ_URL = "https://sqs.us-east-1.amazonaws.com/123456789/erp-orders-dlq"
SOURCE_URL = "https://sqs.us-east-1.amazonaws.com/123456789/erp-orders"

def triage_dlq_messages(max_messages=10):
    """Read DLQ, classify, and route for remediation or replay."""
    response = sqs.receive_message(
        QueueUrl=DLQ_URL,
        MaxNumberOfMessages=max_messages,
        MessageAttributeNames=["All"],
        AttributeNames=["All"],
    )
    for msg in response.get("Messages", []):
        error_category = msg.get("MessageAttributes", {}).get(
            "error_category", {}).get("StringValue", "unknown")
        receive_count = int(msg["Attributes"].get("ApproximateReceiveCount", 0))

        if receive_count > 3:  # Prevent infinite triage loops
            quarantine_message(msg, reason="triage_loop_detected")
            continue

        if error_category == "transient":
            replay_message(msg, json.loads(msg["Body"]), SOURCE_URL)
        elif error_category == "data_quality":
            attempt_auto_fix(msg, json.loads(msg["Body"]))
        elif error_category == "permanent":
            route_to_manual_review(msg, json.loads(msg["Body"]))
        else:
            quarantine_message(msg, reason="unclassified")

Verify: aws sqs get-queue-attributes --queue-url $DLQ_URL --attribute-names ApproximateNumberOfMessages → count decreasing as triage runs.
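
The `quarantine_message` helper referenced above is not defined in this card. One possible sketch: move the message to a dedicated quarantine queue (an assumption — any durable store works) so it cannot re-enter the triage loop, stamping the reason for the on-call engineer. The SQS client and queue URLs are passed in explicitly here for testability; the version above presumably reads them from module-level config:

```python
import json
from datetime import datetime, timezone

def quarantine_message(msg, reason, sqs_client, quarantine_url, dlq_url):
    """Move a DLQ message to a quarantine queue and record why."""
    body = json.loads(msg["Body"])
    body["_quarantine"] = {
        "reason": reason,
        "quarantined_at": datetime.now(timezone.utc).isoformat(),
    }
    sqs_client.send_message(QueueUrl=quarantine_url,
                            MessageBody=json.dumps(body))
    # Delete from the DLQ only after the quarantine copy has been accepted,
    # so a crash between the two calls duplicates rather than loses data.
    sqs_client.delete_message(QueueUrl=dlq_url,
                              ReceiptHandle=msg["ReceiptHandle"])
    return body
```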

4. Implement safe replay with idempotency check

Replay messages back to the source queue with idempotency verification. [src1, src4]

def replay_message(dlq_msg, body, target_queue_url):
    """Replay a DLQ message with idempotency safety."""
    idempotency_key = body.get("idempotency_key")
    if not idempotency_key:
        quarantine_message(dlq_msg, reason="missing_idempotency_key")
        return

    if is_already_processed(idempotency_key):
        sqs.delete_message(QueueUrl=DLQ_URL, ReceiptHandle=dlq_msg["ReceiptHandle"])
        return  # already handled

    body["_replay"] = {
        "replayed_at": datetime.utcnow().isoformat(),
        "replay_attempt": body.get("_replay", {}).get("replay_attempt", 0) + 1,
    }
    if body["_replay"]["replay_attempt"] > 3:
        quarantine_message(dlq_msg, reason="max_replay_attempts_exceeded")
        return

    sqs.send_message(
        QueueUrl=target_queue_url,
        MessageBody=json.dumps(body),
        MessageAttributes={
            "idempotency_key": {"DataType": "String", "StringValue": idempotency_key},
            "is_replay": {"DataType": "String", "StringValue": "true"},
        },
    )
    sqs.delete_message(QueueUrl=DLQ_URL, ReceiptHandle=dlq_msg["ReceiptHandle"])

Verify: Replay a known-good message → confirm no duplicate in target ERP.
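
`is_already_processed` and `mark_as_processed` are left undefined above. A minimal sketch of the contract, using an in-memory dict; a production version would use a durable shared store — e.g. a DynamoDB conditional put or Redis SETNX — with a TTL matching your replay window:

```python
class IdempotencyStore:
    """Tracks processed idempotency keys so replays never double-apply."""

    def __init__(self):
        self._seen = {}  # stand-in for a durable shared store

    def is_already_processed(self, key):
        return key in self._seen

    def mark_as_processed(self, key, result=None):
        # First writer wins — a concurrent replay of the same key is a no-op.
        # With a real store this would be a conditional/atomic write.
        self._seen.setdefault(key, result)
```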

Code Examples

Python: DLQ Depth Monitoring with CloudWatch Alerting

# Input:  DLQ queue name, alert threshold, SNS topic ARN
# Output: CloudWatch alarms for DLQ depth and message age

import boto3
cloudwatch = boto3.client("cloudwatch")

def create_dlq_depth_alarm(queue_name, threshold=100, sns_topic_arn=None):
    cloudwatch.put_metric_alarm(
        AlarmName=f"dlq-depth-{queue_name}",
        AlarmDescription=f"DLQ {queue_name} has > {threshold} messages",
        Namespace="AWS/SQS",
        MetricName="ApproximateNumberOfMessagesVisible",
        Dimensions=[{"Name": "QueueName", "Value": queue_name}],
        Statistic="Maximum",
        Period=300, EvaluationPeriods=2,
        Threshold=threshold,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=[sns_topic_arn] if sns_topic_arn else [],
    )

def create_dlq_age_alarm(queue_name, max_age_seconds=86400, sns_topic_arn=None):
    cloudwatch.put_metric_alarm(
        AlarmName=f"dlq-age-{queue_name}",
        AlarmDescription=f"DLQ {queue_name} has messages older than {max_age_seconds}s",
        Namespace="AWS/SQS",
        MetricName="ApproximateAgeOfOldestMessage",
        Dimensions=[{"Name": "QueueName", "Value": queue_name}],
        Statistic="Maximum",
        Period=300, EvaluationPeriods=1,
        Threshold=max_age_seconds,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=[sns_topic_arn] if sns_topic_arn else [],
    )

JavaScript/Node.js: Kafka DLT Consumer with Triage Logic

// Input:  Kafka connection config, DLT topic name
// Output: Triage consumer that classifies and routes failed messages

const { Kafka } = require("kafkajs");
const kafka = new Kafka({ brokers: ["broker:9092"] });
const consumer = kafka.consumer({ groupId: "dlq-triage" });
const producer = kafka.producer({ idempotent: true });

async function runDLTTriageConsumer(dltTopic, mainTopic) {
  await consumer.connect();
  await producer.connect();
  await consumer.subscribe({ topic: dltTopic, fromBeginning: false });

  await consumer.run({
    eachMessage: async ({ message }) => {
      const headers = message.headers || {};
      const errorType = headers["error-type"]?.toString() || "unknown";
      const retryCount = parseInt(headers["retry-count"]?.toString() || "0");
      const idempotencyKey = headers["idempotency-key"]?.toString();

      if (!idempotencyKey) {
        await logToQuarantine(message, "missing_idempotency_key");
        return;
      }
      if (retryCount > 3) {
        await logToQuarantine(message, "max_retries_exceeded");
        return;
      }
      switch (errorType) {
        case "transient":
          await producer.send({ topic: mainTopic, messages: [{
            key: message.key, value: message.value,
            headers: { ...headers, "is-replay": "true",
              "retry-count": String(retryCount + 1) },
          }] });
          break;
        case "data_quality":
          await routeToRemediationTopic(message);
          break;
        default:
          await logToQuarantine(message, errorType);
          await alertOpsTeam(message, errorType);
      }
    },
  });
}

cURL: Azure Service Bus DLQ Inspection

# Input:  Service Bus namespace, queue name, SAS token
# Output: Peek at dead-lettered messages for triage

SAS_TOKEN="SharedAccessSignature sr=..."

# Peek messages in DLQ (non-destructive)
curl -X POST \
  "https://erp-bus.servicebus.windows.net/erp-orders/\$deadletterqueue/messages/head?timeout=30" \
  -H "Authorization: $SAS_TOKEN"

# Complete (delete) a DLQ message after successful triage
curl -X DELETE \
  "https://erp-bus.servicebus.windows.net/erp-orders/\$deadletterqueue/messages/{messageId}/{lockToken}" \
  -H "Authorization: $SAS_TOKEN"

Data Mapping

Poison Message Context Preservation

When a message moves to the DLQ, critical context must be preserved for effective triage and replay:

| Field | Purpose | Required for Replay? | Notes |
| --- | --- | --- | --- |
| original_message_id | Trace back to original message | Yes | Idempotency dedup and audit trail |
| idempotency_key | Prevent duplicate processing | Yes | Without this, replay creates duplicates |
| error_category | Triage classification | Yes | Determines triage path |
| error_message | Root cause description | No (helpful) | Truncate to 500 chars |
| source_queue | Original queue/topic | Yes | Required for replay routing |
| original_timestamp | When first produced | Yes | Detect aging and retention deadline |
| attempt_count | Delivery attempt count | Yes | Helps tune retry budget |
| correlation_id | Links related messages | Conditional | Required for ordered replay |
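
A sketch of assembling that context into a DLQ envelope before dead-lettering. The field names follow the table above; how you attach them (message body wrapper, as here, versus broker attributes/headers) is a design choice, and the helper name is illustrative:

```python
from datetime import datetime, timezone

def build_dlq_envelope(payload, error_info, source_queue, correlation_id=None):
    """Wrap a failed message with the triage context from the table above."""
    return {
        "original_message_id": payload.get("message_id"),
        "idempotency_key": payload.get("idempotency_key"),
        "error_category": error_info.get("category", "unknown"),
        # Truncate per the table so oversized stack traces cannot bloat the DLQ
        "error_message": str(error_info.get("error_message", ""))[:500],
        "source_queue": source_queue,
        "original_timestamp": payload.get("timestamp"),
        "attempt_count": error_info.get("attempt_count", 0),
        "correlation_id": correlation_id,
        "dead_lettered_at": datetime.now(timezone.utc).isoformat(),
        "payload": payload,  # original message preserved intact for replay
    }
```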

Platform-Specific DLQ Metadata

| Platform | Auto-Captured Metadata | Custom Metadata | Access Pattern |
| --- | --- | --- | --- |
| AWS SQS | ApproximateReceiveCount, SentTimestamp | MessageAttributes (up to 10) | ReceiveMessage with AttributeNames=All [src4] |
| Azure Service Bus | DeliveryCount, EnqueuedTimeUtc, DeadLetterReason | Custom properties (unlimited) | Peek/receive on $deadletterqueue [src3] |
| Apache Kafka | Offset, partition, timestamp | Headers (key-value byte arrays) | Consumer on DLT topic [src5] |
| MuleSoft Anypoint MQ | deliveryCount, destination | Custom properties | Anypoint MQ API or REM console [src6] |

Error Handling & Failure Points

Common Error Codes That Create Poison Messages

| Code | Meaning | Source System | Triage Action |
| --- | --- | --- | --- |
| 400 | Payload validation failure | Target ERP API | Data quality fix → replay |
| 404 | Referenced record does not exist | Target ERP API | Create parent → replay children |
| 409 | Duplicate record — already exists | Target ERP API | Safe to discard |
| 422 | Business rule violation | Target ERP API | Fix target state → replay |
| INVALID_FIELD | Field not writable | Salesforce API | Update field mapping → replay |
| UNABLE_TO_LOCK_ROW | Record locked | Salesforce API | Transient — increase retry budget |
| GOVERNANCE_LIMIT | SuiteScript governance exhausted | NetSuite | Reduce batch size, replay |
| ERR_PARSE | Malformed XML/JSON | Any consumer | Permanent — discard + log |
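
The HTTP codes above can feed the classifier directly. A hedged sketch of one such mapping — treating retryable 4xx/5xx codes as transient is a common convention rather than a universal rule, and 500 is deliberately left unclassified because it can hide either a transient outage or a permanent bug:

```python
def classify_http_status(status_code):
    """Map common ERP API status codes to this card's triage categories."""
    if status_code in (408, 429, 502, 503, 504):
        # Timeouts, throttling, gateway errors: retry-budget problems,
        # not true poison messages
        return "transient"
    if status_code in (400, 404):
        # Bad payload or missing parent record: fix the data, then replay
        return "data_quality"
    if status_code in (401, 403, 422):
        # Auth failures and business-rule rejections: fix config/state first
        return "permanent"
    if status_code == 409:
        # Already exists — the earlier attempt succeeded; safe to discard
        return "duplicate"
    return "unknown"
```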

Failure Points in Production

Anti-Patterns

Wrong: Infinite retry loop with no DLQ

# BAD — schema violation retries forever, blocks queue, burns compute
def process_message(message):
    while True:
        try:
            call_erp_api(message)
            return
        except Exception:
            time.sleep(5)  # never gives up

Correct: Bounded retry with classification and DLQ routing

# GOOD — classify error, retry transient only, DLQ permanent failures
import random, time

def process_message(message, max_retries=5):
    for attempt in range(max_retries):
        try:
            call_erp_api(message)
            return
        except TransientError:
            delay = min(2 ** attempt + random.uniform(0, 1), 60)
            time.sleep(delay)
        except (ValidationError, SchemaError) as e:
            route_to_dlq(message, category="data_quality", error=str(e))
            return
        except Exception as e:
            route_to_dlq(message, category="permanent", error=str(e))
            return
    route_to_dlq(message, category="transient_exhausted", error="max retries")

Wrong: Silent message discard on failure

# BAD — failed messages logged and forgotten. Data is lost forever.
def process_message(message):
    try:
        call_erp_api(message)
    except Exception as e:
        logger.error(f"Failed: {e}")
        acknowledge(message)  # message deleted, data lost

Correct: Route to DLQ with full context for later triage

# GOOD — failed messages preserved with diagnostic context
def process_message(message):
    try:
        call_erp_api(message)
    except Exception as e:
        error_context = classify_error(e, message)
        route_to_dlq(message, category=error_context["category"],
            error=str(e), correlation_id=message.get("correlation_id"))
        acknowledge(message)  # now safely in DLQ

Wrong: Replaying without idempotency check

# BAD — replay sends to ERP without checking if already processed
def replay_from_dlq(dlq_messages):
    for msg in dlq_messages:
        call_erp_api(msg)  # may create duplicate invoice/order
        delete_from_dlq(msg)

Correct: Replay with idempotency verification

# GOOD — check if already processed before replay
def replay_from_dlq(dlq_messages):
    for msg in dlq_messages:
        idempotency_key = msg.get("idempotency_key")
        if is_already_processed(idempotency_key):
            delete_from_dlq(msg)
            continue
        try:
            call_erp_api_with_upsert(msg)  # upsert, not insert
            mark_as_processed(idempotency_key)
            delete_from_dlq(msg)
        except Exception as e:
            quarantine(msg, reason=str(e))  # no infinite loop

Common Pitfalls

Diagnostic Commands

# === AWS SQS DLQ Diagnostics ===
# Check DLQ message count
aws sqs get-queue-attributes \
  --queue-url https://sqs.us-east-1.amazonaws.com/123456789/erp-orders-dlq \
  --attribute-names ApproximateNumberOfMessages ApproximateNumberOfMessagesNotVisible

# Check oldest message age (seconds)
aws sqs get-queue-attributes \
  --queue-url https://sqs.us-east-1.amazonaws.com/123456789/erp-orders-dlq \
  --attribute-names ApproximateAgeOfOldestMessage

# Initiate DLQ redrive to source queue
aws sqs start-message-move-task \
  --source-arn arn:aws:sqs:us-east-1:123456789:erp-orders-dlq \
  --destination-arn arn:aws:sqs:us-east-1:123456789:erp-orders \
  --max-number-of-messages-per-second 50

# Check redrive task status
aws sqs list-message-move-tasks \
  --source-arn arn:aws:sqs:us-east-1:123456789:erp-orders-dlq

# === Azure Service Bus DLQ Diagnostics ===
# Check DLQ message count
az servicebus queue show \
  --resource-group erp-integration \
  --namespace-name erp-bus \
  --name erp-orders \
  --query "countDetails.deadLetterMessageCount"

# === Apache Kafka DLT Diagnostics ===
# Check DLT topic consumer lag
kafka-consumer-groups.sh --bootstrap-server broker:9092 \
  --describe --group dlq-triage

# === MuleSoft Anypoint MQ Diagnostics ===
curl -X GET "https://anypoint.mulesoft.com/mq/admin/api/v1/organizations/{orgId}/environments/{envId}/regions/{region}/destinations/erp-orders-dlq/stats" \
  -H "Authorization: Bearer $ANYPOINT_TOKEN"

Version History & Compatibility

| Feature | Release Date | Platform | Breaking Changes | Migration Notes |
| --- | --- | --- | --- | --- |
| SQS DLQ Redrive API | 2024-06 | AWS SQS | N/A (new feature) | Replaces custom redrive consumers; velocity control |
| Anypoint MQ REM | 2025-01 | MuleSoft | N/A (new feature) | Managed replay — replaces manual consume + re-publish |
| MaxDeliveryCount | GA | Azure Service Bus | N/A | Default 10; recommend 5 for ERP integrations |
| Spring @RetryableTopic + DLT | 2021 | Kafka/Spring | N/A | Auto-creates retry-N and -dlt topics |
| Quorum Queue delivery-limit | 2020 | RabbitMQ 3.8 | Classic queues unsupported | Must migrate to quorum queues |
| Boomi Event Streams DLQ | 2024 | Boomi | N/A | Configurable max retries with exponential backoff |

When to Use / When Not to Use

| Use When | Don't Use When | Use Instead |
| --- | --- | --- |
| Messages repeatedly fail and block queue processing | Simple transient failures that resolve with retry + backoff | Error handling & DLQ fundamentals |
| Failed messages must be diagnosed, fixed, and replayed | Fire-and-forget integrations (message loss acceptable) | Simple error logging + monitoring |
| Multi-step flows with parent/child message dependencies | Single API call with synchronous response | Direct API error handling with retry |
| Compliance requires no data loss in integration pipeline | High-throughput streaming where per-message triage is cost-prohibitive | Batch error aggregation + statistical monitoring |
| Multiple failure categories need different remediation | All failures have the same root cause | Single-path retry strategy |

Cross-System Comparison

| Capability | AWS SQS | Azure Service Bus | Apache Kafka | MuleSoft Anypoint MQ | Boomi |
| --- | --- | --- | --- | --- | --- |
| DLQ Architecture | Separate queue | Sub-queue ($deadletterqueue) | Separate topic (DLT) | Separate queue | Built-in DLQ |
| Auto Dead-Letter | Yes (redrive policy) | Yes (MaxDeliveryCount) | No (application-level) | Yes | Yes (after 7 attempts) |
| Max Delivery Config | 1-1000 | 1-2000 (default 10) | Application-defined | Configurable | Fixed at 7 |
| Native Replay | Yes (Redrive API) | No (manual) | No (application-level) | Yes (REM) | Yes (resend) |
| DLQ Retention | Max 14 days | Unlimited (Premium) | Topic config | 7 days default | Atom lifecycle |
| DLQ Reason Metadata | Custom attributes | DeadLetterReason header | Custom headers | Custom properties | Limited |
| Non-Destructive Peek | Visibility timeout | Peek-lock | Consumer offset mgmt | API browse | Panel view |
| FIFO Support | FIFO DLQ for FIFO queue | FIFO within sessions | Partition-ordered | FIFO queue | No |
| Monitoring | CloudWatch metrics | Azure Monitor | Consumer group lag | Anypoint Monitoring | Dashboard |

Important Caveats

Related Units