Salesforce Bulk API 2.0 Capabilities

Type: ERP Integration | System: Salesforce (API v62.0) | Confidence: 0.92 | Sources: 8 | Verified: 2026-03-01 | Freshness: 2026-03-01

TL;DR

Bulk API 2.0 is Salesforce's asynchronous REST API for high-volume data loads and extracts: create a job, upload CSV data, mark the upload complete, poll until a terminal state, then always check numberRecordsFailed. Headline limits: 150 MB per upload (base64 encoded), 100 million records per rolling 24 hours, 7-day query result retention.

System Profile

Salesforce Bulk API 2.0 is the asynchronous, high-volume data processing API for Salesforce CRM and Platform. It was introduced to simplify the developer experience of the original Bulk API (1.0) by eliminating manual batch management — you submit data and Salesforce handles the chunking, batching, retries, and parallel processing internally. It shares the same REST API framework and OAuth authentication as the Salesforce REST API.

This card covers Bulk API 2.0 as available in API v62.0 (Spring '26) for Enterprise, Unlimited, Performance, and Developer editions. Professional edition has limited Bulk API access. Essentials edition does not include Bulk API. The limits documented here apply to Salesforce Production and Sandbox orgs, though sandbox orgs may have lower limits.

| Property | Value |
| --- | --- |
| Vendor | Salesforce |
| System | Salesforce CRM / Platform (API v62.0, Spring '26) |
| API Surface | Bulk API 2.0 |
| Current API Version | v62.0 |
| Editions Covered | Enterprise, Unlimited, Performance, Developer |
| Deployment | Cloud |
| API Docs | Bulk API 2.0 Developer Guide |
| Status | GA (Generally Available) |

API Surfaces & Capabilities

Bulk API 2.0 supports two job types — ingest (write operations) and query (read operations). Here is where Bulk API 2.0 fits within the Salesforce API ecosystem:

| API Surface | Protocol | Best For | Max Records/Request | Rate Limit | Real-time? | Bulk? |
| --- | --- | --- | --- | --- | --- | --- |
| REST API | HTTPS/JSON | Individual record CRUD, <2K records | 200 (composite), 2,000 (query) | 100K calls/24h (Enterprise) | Yes | No |
| Bulk API 2.0 | HTTPS/CSV | ETL, data migration, >2K records | 150M records per file | 100M records/24h | No (async) | Yes |
| Bulk API 1.0 | HTTPS/CSV, XML, JSON | Serial processing, XML requirement | 10,000 per batch | 15,000 batches/24h | No (async) | Yes |
| SOAP API | HTTPS/XML | Metadata operations, legacy systems | 2,000 per call | Shared with REST | Yes | No |
| Composite API | HTTPS/JSON | Multi-object transactions | 25 subrequests | Shared with REST | Yes | No |
| Streaming API | Bayeux/CometD | Real-time notifications (CDC, PushTopics) | N/A | Edition-dependent | Yes | N/A |

Supported Operations (Ingest)

| Operation | Description | External ID Required? | Notes |
| --- | --- | --- | --- |
| insert | Creates new records | No | Fails on duplicate if no external ID |
| update | Modifies existing records | No | Requires Salesforce record ID in CSV |
| upsert | Insert or update based on external ID | Yes | Specify externalIdFieldName on job creation |
| delete | Moves records to Recycle Bin | No | Requires Salesforce record ID |
| hardDelete | Permanently deletes (bypasses Recycle Bin) | No | Requires "Bulk API Hard Delete" permission |

Supported Operations (Query)

| Operation | Description | Notes |
| --- | --- | --- |
| query | Executes SOQL, returns active records | Standard query semantics |
| queryAll | Executes SOQL, includes soft-deleted and archived records | Includes Recycle Bin records (up to 15-day retention) |
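
The choice between the two operations can be captured in a small helper. This is a sketch: `build_query_job` and its parameter names are ours, not part of any Salesforce SDK; the returned dict is the JSON body you would POST to the query-job endpoint.

```python
def build_query_job(soql: str, include_deleted: bool = False) -> dict:
    """Build the job-creation payload for a Bulk API 2.0 query job.

    'queryAll' also returns soft-deleted (Recycle Bin) and archived rows;
    'query' returns only active records.
    """
    return {
        "operation": "queryAll" if include_deleted else "query",
        "query": soql,
    }
```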

Rate Limits & Quotas

Per-Request Limits

| Limit Type | Value | Applies To | Notes |
| --- | --- | --- | --- |
| Max upload size per job | 150 MB (base64 encoded) | Ingest jobs | Keep unencoded CSV under 100 MB |
| Max records per file | 150,000,000 | Ingest jobs | Practical limit — files rarely approach this |
| Internal batch size | 10,000 records | Ingest jobs (internal) | Salesforce auto-chunks; not configurable |
| Query result chunk size | 100,000-250,000 records | Query jobs (internal) | Salesforce auto-chunks query output |
| Max query result size | 15 GB | Query jobs | Per single query job |
| Query result expiry | 7 days | Query jobs | Must download results within 7 days of job completion |
| Batch processing timeout | 5 minutes | Per internal batch | Batch paused/requeued if exceeded; retried up to 10 times |
| Max fields per record | 5,000 | All operations | Standard Salesforce object limit |
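
Because the upload cap applies to the base64-encoded payload, it helps to estimate the encoded size before submitting. A minimal sketch (function names are ours; whether Salesforce counts MB as decimal or binary megabytes is not stated here, so the default assumes the larger binary interpretation):

```python
import math

def base64_encoded_size(raw_bytes: int) -> int:
    # Base64 expands every 3 input bytes into 4 output characters.
    return math.ceil(raw_bytes / 3) * 4

def fits_in_one_job(raw_bytes: int, limit: int = 150 * 1024 * 1024) -> bool:
    # 100 MB of raw CSV encodes to roughly 133 MB, safely under the cap.
    return base64_encoded_size(raw_bytes) <= limit
```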

Rolling / Daily Limits

| Limit Type | Value | Window | Edition Differences |
| --- | --- | --- | --- |
| Total records processed | 100,000,000 | 24h rolling | Same across all editions with Bulk API access |
| Concurrent ingest jobs | 25 | Per org | Shared between Bulk API 2.0 and 1.0 |
| Concurrent query jobs | 25 | Per org | Shared between Bulk API 2.0 and 1.0 |
| Total jobs in any state | 100,000 | Per org | Delete completed/aborted jobs to free capacity |
| Bulk API 1.0 batches (if using v1) | 15,000 | 24h rolling | Only applies to Bulk API 1.0 |

Transaction / Governor Limits

Each internal batch of 10,000 records processes in transactions of 200 records. Standard Apex governor limits apply per 200-record chunk if triggers or flows fire:

| Limit Type | Per-Transaction Value | Notes |
| --- | --- | --- |
| SOQL queries | 100 | Includes queries from triggers — cascading triggers consume from the same pool |
| DML statements | 150 | Each insert/update/delete counts as 1 |
| Callouts | 100 | HTTP requests to external services within a transaction |
| CPU time | 10,000 ms (sync), 60,000 ms (async) | Exceeding it aborts the transaction |
| Heap size | 6 MB (sync), 12 MB (async) | Large record processing can hit this |
| Total email invocations | 10 | Workflow email actions per transaction |
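
To see how these limits interact with a bulk load: records are processed in 200-record transactions, and any trigger on the target object fires once per transaction with a fresh set of governor limits. A quick sketch of the arithmetic (helper names are ours):

```python
import math

def transaction_count(record_count: int, chunk_size: int = 200) -> int:
    # One Apex transaction (and one trigger invocation) per 200-record chunk,
    # so a 50,000-record job fires the object's triggers 250 times.
    return math.ceil(record_count / chunk_size)

def soql_headroom(queries_per_trigger: int, limit: int = 100) -> int:
    # Each transaction gets its own 100-query pool; cascading triggers
    # draw from the same pool within that transaction.
    return limit - queries_per_trigger
```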

Authentication

Bulk API 2.0 uses the same authentication as Salesforce REST API — all standard OAuth 2.0 flows are supported. This is a key improvement over Bulk API 1.0, which required SOAP-based session ID authentication.

| Flow | Use When | Token Lifetime | Refresh? | Notes |
| --- | --- | --- | --- | --- |
| OAuth 2.0 JWT Bearer | Server-to-server, no user context | Session timeout (default 2h) | New JWT per request | Recommended for integrations |
| OAuth 2.0 Web Server | User-context operations, interactive apps | Access: 2h, Refresh: until revoked | Yes | Requires callback URL |
| OAuth 2.0 Client Credentials | Machine-to-machine (no user) | Session timeout | No (request a new token) | Available since Winter '23 |
| Username-Password | Legacy, testing only | Session timeout | No | Do NOT use in production — no MFA support |
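
For the JWT Bearer flow, the assertion is a short-lived, RS256-signed token. Below is a sketch of just the claims payload (field values are placeholders; the signing step itself, e.g. via PyJWT with the connected app's private key, is omitted):

```python
import time

def jwt_bearer_claims(consumer_key: str, username: str,
                      audience: str = "https://login.salesforce.com",
                      lifetime_s: int = 180) -> dict:
    # iss = connected app consumer key, sub = Salesforce username,
    # aud = login (or test) endpoint, exp = short expiry in epoch seconds.
    now = int(time.time())
    return {"iss": consumer_key, "sub": username,
            "aud": audience, "exp": now + lifetime_s}
```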

Authentication Gotchas

Constraints

Integration Pattern Decision Tree

START — User needs to bulk-process data in Salesforce
├── What's the operation?
│   ├── Ingest (insert/update/upsert/delete/hardDelete)
│   │   ├── Data volume < 2,000 records?
│   │   │   ├── YES → Use REST API (simpler, synchronous)
│   │   │   └── NO ↓
│   │   ├── Need serial processing (record lock issues)?
│   │   │   ├── YES → Use Bulk API 1.0 (concurrencyMode: serial)
│   │   │   └── NO ↓
│   │   ├── Data volume < 150 MB per file?
│   │   │   ├── YES → Single Bulk API 2.0 job
│   │   │   └── NO → Split into multiple jobs (each ≤100 MB unencoded)
│   │   └── Need XML or JSON payloads?
│   │       ├── YES → Use Bulk API 1.0
│   │       └── NO → Bulk API 2.0 (CSV)
│   └── Query (extract data)
│       ├── Data volume < 2,000 records?
│       │   ├── YES → Use REST API SOQL query
│       │   └── NO ↓
│       ├── Need soft-deleted/archived records?
│       │   ├── YES → Bulk API 2.0 queryAll operation
│       │   └── NO → Bulk API 2.0 query operation
│       └── Result set > 15 GB?
│           ├── YES → Split SOQL with WHERE clause date ranges
│           └── NO → Single Bulk API 2.0 query job
├── CSV delimiter requirements?
│   ├── Standard comma → Default (no config needed)
│   └── Other (pipe, semicolon, tab, caret, backquote) → Set columnDelimiter
└── Error tolerance?
    ├── Zero-loss → Poll status + retrieve failedResults + reprocess
    └── Best-effort → Poll status, log failures, move on
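
The tree above can be sketched as a function; the signature and return strings are illustrative only:

```python
def choose_api(operation: str, record_count: int, needs_serial: bool = False,
               needs_xml: bool = False, include_deleted: bool = False) -> str:
    """Mirror of the decision tree; operation is 'ingest' or 'query'."""
    if record_count < 2_000:
        return "REST API"
    if operation == "query":
        return "Bulk API 2.0 queryAll" if include_deleted else "Bulk API 2.0 query"
    if needs_serial or needs_xml:
        return "Bulk API 1.0"
    return "Bulk API 2.0"
```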

Quick Reference

Ingest Job API Endpoints

| Operation | Method | Endpoint | Payload | Notes |
| --- | --- | --- | --- | --- |
| Create ingest job | POST | /services/data/v62.0/jobs/ingest | JSON (job config) | Returns id and contentUrl |
| Upload data | PUT | /services/data/v62.0/jobs/ingest/{jobId}/batches | CSV data | Content-Type: text/csv |
| Close job (start processing) | PATCH | /services/data/v62.0/jobs/ingest/{jobId} | {"state":"UploadComplete"} | Triggers async processing |
| Check job status | GET | /services/data/v62.0/jobs/ingest/{jobId} | N/A | Returns state + record counts |
| Get successful results | GET | /services/data/v62.0/jobs/ingest/{jobId}/successfulResults | N/A | CSV with sf__Id, sf__Created |
| Get failed results | GET | /services/data/v62.0/jobs/ingest/{jobId}/failedResults | N/A | CSV with sf__Error column |
| Get unprocessed records | GET | /services/data/v62.0/jobs/ingest/{jobId}/unprocessedrecords | N/A | Records not attempted |
| Abort job | PATCH | /services/data/v62.0/jobs/ingest/{jobId} | {"state":"Aborted"} | Stops processing |
| Delete job | DELETE | /services/data/v62.0/jobs/ingest/{jobId} | N/A | Frees job count quota |
| List all ingest jobs | GET | /services/data/v62.0/jobs/ingest | N/A | Filter by isPkChunkingEnabled, jobType |

Job States

| State | Meaning | Transitions To |
| --- | --- | --- |
| Open | Accepting data uploads | UploadComplete, Aborted |
| UploadComplete | Data received, queued for processing | InProgress |
| InProgress | Salesforce is processing internal batches | JobComplete, Failed, Aborted |
| JobComplete | All records processed (some may have failed individually) | (terminal) |
| Failed | Job-level failure (e.g., 10 batch retries exhausted) | (terminal) |
| Aborted | Manually or automatically aborted | (terminal) |
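
The table maps directly onto a transition dictionary that a client can use to decide when polling should stop. A sketch (names are ours):

```python
# Legal transitions from the job-state table; terminal states map to empty sets.
TRANSITIONS: dict[str, set[str]] = {
    "Open": {"UploadComplete", "Aborted"},
    "UploadComplete": {"InProgress"},
    "InProgress": {"JobComplete", "Failed", "Aborted"},
    "JobComplete": set(),
    "Failed": set(),
    "Aborted": set(),
}

def is_terminal(state: str) -> bool:
    # A known state with no outgoing transitions ends the polling loop;
    # unknown strings are treated as non-terminal so polling continues.
    return state in TRANSITIONS and not TRANSITIONS[state]
```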

Step-by-Step Integration Guide

1. Authenticate and obtain an OAuth access token

Obtain an access token using the JWT Bearer flow for server-to-server integration. [src4]

curl -X POST https://login.salesforce.com/services/oauth2/token \
  -d "grant_type=urn:ietf:params:oauth:grant-type:jwt-bearer" \
  -d "assertion=${JWT_TOKEN}"

Verify: curl -H "Authorization: Bearer ${ACCESS_TOKEN}" ${INSTANCE_URL}/services/data/v62.0/limits → expected: JSON with DailyBulkV2QueryJobs field.

2. Create a Bulk API 2.0 ingest job

Define the job parameters: object, operation, content type. [src1]

curl -X POST ${INSTANCE_URL}/services/data/v62.0/jobs/ingest \
  -H "Authorization: Bearer ${ACCESS_TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{"object":"Contact","operation":"upsert","externalIdFieldName":"External_ID__c","contentType":"CSV"}'

Verify: Response includes "state": "Open" and a valid id field.

3. Upload CSV data

Send CSV data as the request body. First line must be field API names. [src1]

curl -X PUT ${INSTANCE_URL}/services/data/v62.0/jobs/ingest/${JOB_ID}/batches \
  -H "Authorization: Bearer ${ACCESS_TOKEN}" \
  -H "Content-Type: text/csv" \
  --data-binary @contacts.csv

Verify: HTTP 201 Created response.

4. Close the job to start processing

Signal to Salesforce that all data has been uploaded. [src1]

curl -X PATCH ${INSTANCE_URL}/services/data/v62.0/jobs/ingest/${JOB_ID} \
  -H "Authorization: Bearer ${ACCESS_TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{"state": "UploadComplete"}'

Verify: Response shows "state": "UploadComplete".

5. Poll for job completion

Check status periodically. Recommended: 30s for <100K records, 60s for larger jobs. [src1]

curl ${INSTANCE_URL}/services/data/v62.0/jobs/ingest/${JOB_ID} \
  -H "Authorization: Bearer ${ACCESS_TOKEN}"
# Response: {"state":"JobComplete","numberRecordsProcessed":50000,"numberRecordsFailed":12,...}

Verify: state is JobComplete. Check numberRecordsFailed.

6. Retrieve failed records and reprocess

Download failed records CSV, fix issues, submit new job with corrected records. [src1]

curl ${INSTANCE_URL}/services/data/v62.0/jobs/ingest/${JOB_ID}/failedResults \
  -H "Authorization: Bearer ${ACCESS_TOKEN}" \
  -o failed_records.csv

Verify: Open failed_records.csv — each row includes sf__Error column.

Code Examples

Python: Bulk upsert with job monitoring and retry

# Input:  CSV file path, Salesforce credentials (instance_url, access_token)
# Output: Job completion summary with success/failure counts

import requests
import time

def bulk_upsert(instance_url, access_token, object_name, csv_path,
                external_id_field, api_version="v62.0"):
    base = f"{instance_url}/services/data/{api_version}/jobs/ingest"
    headers = {"Authorization": f"Bearer {access_token}"}

    # Create job
    job_resp = requests.post(base, headers={**headers, "Content-Type": "application/json"},
        json={"object": object_name, "operation": "upsert",
              "externalIdFieldName": external_id_field,
              "contentType": "CSV", "lineEnding": "LF"})
    job_resp.raise_for_status()
    job_id = job_resp.json()["id"]

    # Upload CSV
    with open(csv_path, "rb") as f:
        requests.put(f"{base}/{job_id}/batches",
            headers={**headers, "Content-Type": "text/csv"}, data=f).raise_for_status()

    # Close job
    requests.patch(f"{base}/{job_id}",
        headers={**headers, "Content-Type": "application/json"},
        json={"state": "UploadComplete"}).raise_for_status()

    # Poll until complete
    while True:
        info = requests.get(f"{base}/{job_id}", headers=headers).json()
        if info["state"] in ("JobComplete", "Failed", "Aborted"):
            break
        time.sleep(30)

    # Retrieve failed records if any
    failed = info.get("numberRecordsFailed", 0)
    if failed > 0:
        failed_csv = requests.get(f"{base}/{job_id}/failedResults", headers=headers).text
        print(f"{failed} failed records:\n{failed_csv[:2000]}")

    return {"job_id": job_id, "state": info["state"],
            "processed": info.get("numberRecordsProcessed", 0), "failed": failed}

JavaScript/Node.js: Bulk query with result pagination

// Input:  Salesforce credentials, SOQL query string
// Output: Array of CSV result chunks (handles locator-based pagination)

async function bulkQuery(instanceUrl, accessToken, soql, apiVersion = 'v62.0') {
  const base = `${instanceUrl}/services/data/${apiVersion}/jobs/query`;
  const headers = { 'Authorization': `Bearer ${accessToken}` };

  // Create query job
  const jobResp = await fetch(base, {
    method: 'POST',
    headers: { ...headers, 'Content-Type': 'application/json' },
    body: JSON.stringify({ operation: 'query', query: soql })
  });
  const { id: jobId } = await jobResp.json();

  // Poll for completion
  let state = 'UploadComplete';
  while (!['JobComplete', 'Failed', 'Aborted'].includes(state)) {
    await new Promise(r => setTimeout(r, 10000));
    const info = await (await fetch(`${base}/${jobId}`, { headers })).json();
    state = info.state;
  }
  if (state !== 'JobComplete') throw new Error(`Query job ${state}`);

  // Retrieve results with locator-based pagination
  let allRecords = [], locator = null;
  do {
    const url = locator ? `${base}/${jobId}/results?locator=${locator}`
                        : `${base}/${jobId}/results`;
    const resp = await fetch(url, { headers });
    locator = resp.headers.get('Sforce-Locator');
    if (locator === 'null') locator = null;
    allRecords.push(await resp.text());
  } while (locator);

  return allRecords;
}

cURL: Quick ingest job test

# Input:  Valid access token and instance URL
# Output: Job ID and processing status

# 1. Create job
JOB_ID=$(curl -s -X POST ${INSTANCE_URL}/services/data/v62.0/jobs/ingest \
  -H "Authorization: Bearer ${ACCESS_TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{"object":"Account","operation":"insert","contentType":"CSV"}' \
  | python3 -c "import sys,json; print(json.load(sys.stdin)['id'])")

# 2. Upload CSV
curl -X PUT ${INSTANCE_URL}/services/data/v62.0/jobs/ingest/${JOB_ID}/batches \
  -H "Authorization: Bearer ${ACCESS_TOKEN}" \
  -H "Content-Type: text/csv" \
  --data-binary 'Name,Industry,NumberOfEmployees
Acme Corp,Technology,500
Beta Inc,Finance,200'

# 3. Close job to start processing
curl -X PATCH ${INSTANCE_URL}/services/data/v62.0/jobs/ingest/${JOB_ID} \
  -H "Authorization: Bearer ${ACCESS_TOKEN}" \
  -H "Content-Type: application/json" -d '{"state":"UploadComplete"}'

# 4. Check status
curl -s ${INSTANCE_URL}/services/data/v62.0/jobs/ingest/${JOB_ID} \
  -H "Authorization: Bearer ${ACCESS_TOKEN}"

Data Mapping

CSV Format Requirements

| Property | Value | Notes |
| --- | --- | --- |
| Header row | Required — field API names | Not display labels |
| Encoding | UTF-8 | BOM characters cause job failure |
| Line endings | LF or CRLF | Configurable via lineEnding param |
| Column delimiter | Comma (default) | Options: COMMA, BACKQUOTE, CARET, PIPE, SEMICOLON, TAB |
| Null values | #N/A | Empty values are ignored on update; use #N/A to explicitly null a field |
| Boolean values | true / false | Case-insensitive |
| Date format | YYYY-MM-DD | ISO 8601 |
| DateTime format | YYYY-MM-DDThh:mm:ss.sssZ | UTC recommended |
| Number format | No thousands separator | 1234.56, not 1,234.56 |
| Escaping | Double-quote fields containing delimiters | Escape embedded quotes by doubling: "" |
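
These requirements can be enforced at serialization time. A minimal sketch (the function name is ours), applying the #N/A convention above for explicit nulls, LF line endings, and a header row of field API names:

```python
import csv
import io

def to_bulk_csv(records: list[dict], field_names: list[str]) -> str:
    """Serialize dicts into Bulk-API-friendly CSV: header row of field API
    names, '#N/A' for explicit nulls, LF line endings (write as UTF-8, no BOM)."""
    buf = io.StringIO()
    writer = csv.writer(buf, lineterminator="\n")  # csv module handles quoting
    writer.writerow(field_names)
    for rec in records:
        writer.writerow("#N/A" if rec.get(name) is None else rec[name]
                        for name in field_names)
    return buf.getvalue()
```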

Data Type Gotchas

Error Handling & Failure Points

Common Error Codes

| Code | Meaning | Cause | Resolution |
| --- | --- | --- | --- |
| DUPLICATE_VALUE | Duplicate external ID or unique field | Record with same value exists | Use upsert instead of insert |
| REQUIRED_FIELD_MISSING | Mandatory field not in CSV | Required field omitted or null | Ensure all required fields have values |
| FIELD_CUSTOM_VALIDATION_EXCEPTION | Validation rule failed | Data violates custom validation rule | Fix data or deactivate rule during migration |
| UNABLE_TO_LOCK_ROW | Row lock contention | Concurrent updates to same record | Use Bulk API 1.0 serial mode; add retry with jitter |
| INVALID_FIELD | Field doesn't exist or isn't writable | Wrong API name or field-level security | Verify field API names and permissions |
| STORAGE_LIMIT_EXCEEDED | Org data storage full | Org exceeded storage allocation | Free storage or purchase additional |
| REQUEST_LIMIT_EXCEEDED | Daily Bulk API limit hit | 100M record daily limit exceeded | Wait for 24h rolling window |
| InvalidBatch | Job-level failure after 10 retries | Internal batch failed repeatedly | Review org automation complexity |
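
Failed-record handling often starts by triaging the sf__Error column into retryable versus data-fix categories. A sketch — the category sets and the assumed "CODE:message" prefix format of sf__Error values are our assumptions, not a documented contract:

```python
# Transient errors worth retrying vs. errors that require data changes.
RETRYABLE = {"UNABLE_TO_LOCK_ROW", "REQUEST_LIMIT_EXCEEDED"}
DATA_FIXES = {"DUPLICATE_VALUE", "REQUIRED_FIELD_MISSING",
              "FIELD_CUSTOM_VALIDATION_EXCEPTION", "INVALID_FIELD"}

def classify_error(sf_error: str) -> str:
    # Assumes the sf__Error value starts with the error code, then ':'.
    code = sf_error.split(":", 1)[0]
    if code in RETRYABLE:
        return "retry"
    if code in DATA_FIXES:
        return "fix-data"
    return "investigate"
```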

Failure Points in Production

Anti-Patterns

Wrong: Polling job status every second

# BAD — hammers the API, consumes quota, does not speed up processing
while True:
    status = check_status(job_id)
    if status['state'] == 'JobComplete': break
    time.sleep(1)  # 3,600 API calls/hour wasted

Correct: Exponential backoff polling

# GOOD — starts at 10s, backs off to 60s max
wait = 10
while True:
    status = check_status(job_id)
    if status['state'] in ('JobComplete', 'Failed', 'Aborted'): break
    time.sleep(min(wait, 60))
    wait = min(wait * 1.5, 60)

Wrong: Uploading one giant 500 MB file

# BAD — exceeds 150 MB limit, job creation fails
with open('huge_export.csv', 'rb') as f:
    requests.put(f"{base}/{job_id}/batches", data=f)

Correct: Chunking files at 90 MB boundaries

# GOOD — split at ~90 MB (safe margin below 150 MB base64 limit)
import csv

def chunk_csv(input_path, max_bytes=90_000_000):
    chunks, current_chunk, current_size = [], [], 0
    with open(input_path, 'r', encoding='utf-8-sig') as f:
        reader = csv.reader(f)
        header = ','.join(next(reader)) + '\n'
        for row in reader:
            line = ','.join(row) + '\n'
            if current_size + len(line.encode()) > max_bytes and current_chunk:
                chunks.append(header + ''.join(current_chunk))
                current_chunk, current_size = [], 0
            current_chunk.append(line)
            current_size += len(line.encode())
    if current_chunk: chunks.append(header + ''.join(current_chunk))
    return chunks  # Submit each as separate job

Wrong: Ignoring failed records after JobComplete

# BAD — assumes JobComplete means 100% success
if status['state'] == 'JobComplete':
    print("All done!")  # Could have thousands of failed records

Correct: Always check failed record count

# GOOD — explicitly handle partial success
if status['state'] == 'JobComplete':
    failed = status.get('numberRecordsFailed', 0)
    if failed > 0:
        failed_csv = get_failed_results(job_id)
        save_for_retry(failed_csv)
        alert_team(f"Bulk job {job_id}: {failed} records failed")

Common Pitfalls

Diagnostic Commands

# Check API usage / remaining Bulk API limits
curl -s ${INSTANCE_URL}/services/data/v62.0/limits \
  -H "Authorization: Bearer ${ACCESS_TOKEN}" \
  | python3 -c "import sys,json; d=json.load(sys.stdin); [print(f'{k}: {v}') for k,v in d.items() if 'Bulk' in k or 'Daily' in k]"

# List all ingest jobs (most recent first)
curl -s "${INSTANCE_URL}/services/data/v62.0/jobs/ingest" \
  -H "Authorization: Bearer ${ACCESS_TOKEN}"

# Check specific job status
curl -s ${INSTANCE_URL}/services/data/v62.0/jobs/ingest/${JOB_ID} \
  -H "Authorization: Bearer ${ACCESS_TOKEN}" | python3 -m json.tool

# Test authentication
curl -s -o /dev/null -w "%{http_code}" ${INSTANCE_URL}/services/data/v62.0/ \
  -H "Authorization: Bearer ${ACCESS_TOKEN}"
# Expected: 200

# Describe target object fields
curl -s ${INSTANCE_URL}/services/data/v62.0/sobjects/Contact/describe \
  -H "Authorization: Bearer ${ACCESS_TOKEN}" \
  | python3 -c "import sys,json; [print(f'{f[\"name\"]:40} {f[\"type\"]:15}') for f in json.load(sys.stdin)['fields']]"

# Delete a completed job to free quota
curl -X DELETE ${INSTANCE_URL}/services/data/v62.0/jobs/ingest/${JOB_ID} \
  -H "Authorization: Bearer ${ACCESS_TOKEN}"

Version History & Compatibility

| API Version | Release | Status | Key Changes | Notes |
| --- | --- | --- | --- | --- |
| v62.0 | Spring '26 (Feb 2026) | Current | No breaking changes | Latest GA version |
| v61.0 | Winter '26 (Oct 2025) | Supported | No breaking changes | |
| v60.0 | Summer '25 (Jun 2025) | Supported | No breaking changes | |
| v56.0 | Spring '23 (Feb 2023) | Supported | Bulk API 2.0 became default in Data Loader | Major adoption milestone |
| v47.0 | Winter '20 (Oct 2019) | Supported | Query operations added | Previously ingest-only |
| v41.0 | Winter '18 (Oct 2017) | Minimum for Bulk API 2.0 | Initial GA release | Ingest operations only |

Deprecation Policy

Salesforce supports API versions for a minimum of 3 years. Versions are retired in groups — typically 10+ versions at once, with at least 1 year advance notice. No Bulk API 2.0-era versions have been retired as of Spring '26.

When to Use / When Not to Use

| Use When | Don't Use When | Use Instead |
| --- | --- | --- |
| Data migration or ETL of 2,000+ records | Real-time individual record operations needing <1s response | REST API |
| Scheduled nightly/hourly batch synchronization | You need serial processing to avoid lock contention | Bulk API 1.0 (serial mode) |
| Initial data load for new Salesforce org | You need XML format for legacy integration | Bulk API 1.0 (XML support) |
| Large SOQL query exports (>2,000 records) | You need immediate query results (sub-second) | REST API SOQL query |
| Extracting soft-deleted records (queryAll) | You need real-time change notifications | Streaming API / CDC |
| Idempotent bulk upserts via external ID | You need all-or-nothing transactional behavior | Composite API |

Important Caveats

Related Units