Oracle BICC Data Extraction Capabilities & Limits

Type: ERP Integration · System: Oracle Fusion Cloud Applications (Release 25C) · Confidence: 0.90 · Sources: 8 · Verified: 2026-03-01 · Freshness: 2026-03-01

TL;DR

Oracle BICC is Fusion Cloud's native bulk extraction service: it exports pre-built Public View Objects (PVOs) as compressed CSV to UCM or OCI Object Storage, on demand or on a schedule. It is batch-only (minimum recommended interval: 4 hours), output files silently truncate at 5GB, and incremental extraction depends on the LastUpdateDate column, so always validate row counts against the manifest.

System Profile

Oracle BICC is a native data extraction tool embedded in Oracle Fusion Cloud Applications (ERP, HCM, SCM, CX, and Procurement). It provides pre-built data stores, grouped into "offerings" that map to functional areas. Each offering contains one or more Public View Objects (PVOs) — specifically, ExtractPVOs designed for bulk extraction efficiency. BICC is NOT a general-purpose API — it is purpose-built for scheduled, high-volume outbound data movement to external systems. This card covers BICC extraction only (outbound). For inbound data loading, see the FBDI card. [src1, src7]

| Property | Value |
| --- | --- |
| Vendor | Oracle |
| System | Oracle Fusion Cloud Applications (Release 25C) |
| API Surface | BICC (BI Cloud Connector) — bulk CSV extraction |
| Current Version | 25C (Update 25.06) |
| Editions Covered | All Oracle Fusion Cloud editions (ERP, HCM, SCM, CX, Procurement) |
| Deployment | Cloud |
| API Docs | BICC Documentation |
| Status | GA — actively maintained |

API Surfaces & Capabilities

BICC is not a traditional API — it is a managed extraction service with a console UI, SOAP API for scheduling, and REST API for job management. Data is extracted from pre-built PVOs and written to storage targets as compressed CSV files. [src1, src2, src4]

| API Surface | Protocol | Best For | Max Records/Request | Rate Limit | Real-time? | Bulk? |
| --- | --- | --- | --- | --- | --- | --- |
| BICC Console (UI) | Web UI | Manual/ad-hoc extracts, configuration | Unlimited (PVO-dependent) | N/A | No | Yes |
| BICC SOAP API | SOAP/XML | Programmatic scheduling and triggering | Unlimited (PVO-dependent) | No explicit limit; 10h timeout per VO | No | Yes |
| BICC REST API | HTTPS/JSON | Job status monitoring, metadata queries | N/A (monitoring only) | No explicit limit | No | N/A |
| UCM WebDAV | WebDAV/HTTPS | Downloading extracted files from UCM storage | 5GB per file | Subject to UCM storage limits | No | Yes |
| OCI Object Storage API | HTTPS/JSON (S3-compatible) | Downloading extracted files from OCI storage | No per-file limit on download | OCI service limits apply | No | Yes |
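Since the REST surface is poll-only, job monitoring reduces to a status loop. A minimal sketch, with the HTTP call injected as a callable so the loop itself is the only logic shown (the endpoint path and status strings follow the examples later in this card; confirm them against your instance):

```python
import time

def poll_job_status(fetch_status, job_id, interval_s=60, max_wait_s=36000):
    """Poll a BICC job until it leaves the RUNNING state.

    fetch_status(job_id) -> str is injected so the HTTP layer (e.g. a
    requests.get against /biacm/api/v1/jobs/{id}/status) stays swappable.
    max_wait_s defaults to 10 hours, matching the default per-VO timeout.
    """
    waited = 0
    while waited <= max_wait_s:
        status = fetch_status(job_id)
        if status in ("COMPLETED", "FAILED"):
            return status
        time.sleep(interval_s)
        waited += interval_s
    return "TIMEOUT"
```

Injecting the fetch function also makes the loop unit-testable without a live Fusion pod.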

Rate Limits & Quotas

Per-Extract Limits

| Limit Type | Value | Applies To | Notes |
| --- | --- | --- | --- |
| Max file size per CSV | 5GB | All extracted files | Silently truncated if exceeded — no error raised [src2, src4] |
| Split file size (configurable) | 1GB-5GB (default 1GB) | Per-VO extraction output | Set in BICC Console under extract preferences [src2] |
| Extract timeout (default) | 10 hours per VO | Individual VO within an extract job | Configurable via Manage Offerings > Job Settings [src7] |
| Parallel processing degree | 1-8 (recommended max 8) | Per extract job | Beyond 8 threads, diminishing returns [src4] |
| Max concurrent extract jobs | 1 per offering | Per BICC offering | Multiple offerings can run concurrently [src1] |

Scheduling Limits

| Limit Type | Value | Window | Notes |
| --- | --- | --- | --- |
| Minimum recommended interval | 4 hours | Between extracts of same offering | Shorter intervals risk overlapping jobs and Fusion performance degradation [src4] |
| Scheduling recurrence options | Immediate, Hourly, Daily, Day of Month, Year | Per extract schedule | Hourly is the most granular UI option [src1, src2] |
| Prune time for incremental data | Configurable (hours) | Per offering | Critical for data consistency — recommended 2-4 hours [src7] |
| Job queue position | FIFO within offering | Per offering | Jobs queued if previous extract still running [src1] |
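The prune-time row deserves a concrete illustration. Assuming the common interpretation that an incremental run re-reads from (last successful run minus prune time) to catch rows whose LastUpdateDate was committed slightly in the past, the extraction window can be sketched as:

```python
from datetime import datetime, timedelta

def incremental_window(last_run: datetime, now: datetime, prune_hours: float = 2.0):
    """Return the (start, end) timestamps an incremental extract should cover.

    Widening the lower bound by the prune time trades a few duplicate rows
    (which the downstream load must de-duplicate on a key) for not missing
    late-committed records.
    """
    start = last_run - timedelta(hours=prune_hours)
    return start, now
```

With a 4-hour schedule and a 2-hour prune time, each run overlaps the previous window by 2 hours, which is why the card calls prune time critical for consistency.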

Storage Limits

| Limit Type | Value | Applies To | Notes |
| --- | --- | --- | --- |
| UCM storage | Shared with Fusion instance | All UCM content (not just BICC) | Monitor utilization — full UCM blocks all BICC extracts [src3] |
| OCI Object Storage | Per OCI tenancy limits | Dedicated BICC bucket | Recommended: separate compartment for BICC data [src3, src8] |
| File retention on UCM | No auto-cleanup | Extracted files accumulate | Must implement manual or scripted cleanup [src3] |
| File retention on OCI | Lifecycle policies available | Per bucket | Configure auto-archive/delete policies [src3, src8] |
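For OCI targets, the lifecycle row above maps to a bucket policy. A hedged config fragment using the OCI CLI (bucket name, prefix, and the 30-day retention are placeholders, not recommendations from the sources):

```shell
# Illustrative: auto-delete BICC extract files older than 30 days.
# Adjust bucket name, prefix, and retention to your tenancy's needs.
oci os object-lifecycle-policy put \
  --bucket-name bicc-extracts \
  --items '[{
      "name": "purge-old-bicc-extracts",
      "action": "DELETE",
      "timeAmount": 30,
      "timeUnit": "DAYS",
      "isEnabled": true,
      "objectNameFilter": {"inclusionPrefixes": ["data/"]}
    }]'
```

UCM has no equivalent policy mechanism, which is why the card recommends scripted cleanup there.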

Authentication

BICC requires Oracle Fusion Cloud credentials with specific roles provisioned. For API-driven scheduling, JWT token authentication is available. [src1, src8]

| Flow | Use When | Token Lifetime | Refresh? | Notes |
| --- | --- | --- | --- | --- |
| Oracle Fusion SSO | Interactive BICC Console access | Session-based | Yes (session refresh) | Standard Fusion login — requires BICC-specific roles [src1] |
| JWT Token Authentication | Programmatic SOAP/REST API access | Configurable | New token per request | Requires X.509 certificate registration; more secure [src8] |
| Basic Auth (username/password) | Legacy API access, testing | Session-based | No | Not recommended for production — no MFA support [src8] |
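For the JWT flow, the token is a signed header-plus-claims pair. The sketch below builds only the unsigned `header.payload` portion; the claim names (`iss`, `prn`, `iat`, `exp`) and the `x5t` header are the shape commonly used with Fusion JWT trust, but treat them as assumptions to confirm against your identity configuration. Signing is deliberately left out: it requires RS256 with the private key whose X.509 certificate is registered in Fusion.

```python
import base64
import json
import time

def _b64url(raw: bytes) -> str:
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

def jwt_signing_input(username: str, issuer: str, x5t: str, lifetime_s: int = 300) -> str:
    """Build the header.payload portion of a Fusion-style JWT.

    The result must still be signed with RS256 (e.g. via the `cryptography`
    package) using the registered private key; the base64url-encoded
    signature is appended as a third dot-separated segment.
    """
    header = {"alg": "RS256", "typ": "JWT", "x5t": x5t}
    now = int(time.time())
    claims = {"iss": issuer, "prn": username, "iat": now, "exp": now + lifetime_s}
    return _b64url(json.dumps(header).encode()) + "." + _b64url(json.dumps(claims).encode())
```

Short lifetimes (here 5 minutes) fit the "new token per request" model in the table above.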

Authentication Gotchas

Constraints

Integration Pattern Decision Tree

START -- Need to extract bulk data from Oracle Fusion Cloud
|-- What's the data volume?
|   |-- < 1,000 records/day
|   |   |-- Need real-time? --> REST API (different card)
|   |   |-- Batch OK? --> BICC works but may be overkill
|   |-- 1,000-100,000 records/day
|   |   |-- BICC is the right tool
|   |   |-- Use incremental extraction to minimize volume
|   |-- > 100,000 records/day
|   |   |-- BICC is required -- REST API cannot handle this volume
|   |   |-- Configure split file size = 2GB, parallel degree = 8
|   |   |-- Validate row counts post-extraction (5GB truncation risk)
|-- What's the target system?
|   |-- Oracle ADW / OCI Data Lakehouse
|   |   |-- Use OCI Object Storage target (native integration)
|   |-- Third-party data warehouse (Snowflake, BigQuery, Redshift)
|   |   |-- OCI Object Storage --> external stage or S3-compatible fetch
|   |-- On-premise data warehouse
|   |   |-- UCM WebDAV download or OCI Object Storage with VPN
|-- Extraction frequency?
|   |-- Once (initial load) --> Full extract, split at 2GB, 20h+ timeout
|   |-- Daily --> Incremental extraction with prune time
|   |-- More frequent than daily --> Minimum 4h interval recommended
|-- Functional areas?
|   |-- Financials --> Financial Analytics offering
|   |-- Procurement --> Procurement Analytics offering
|   |-- HCM --> HCM Analytics offering
|   |-- Cross-functional --> Multiple offerings (no transactional consistency)
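The volume and freshness branches of the tree can be captured in a small helper. Thresholds come straight from the tree; the function name and return strings are illustrative:

```python
def recommend_extraction_tool(records_per_day: int, needs_realtime: bool) -> str:
    """Apply the volume/freshness branch of the decision tree above."""
    if needs_realtime:
        # BICC is batch-only; sub-hour freshness needs the REST API.
        return "REST API"
    if records_per_day < 1_000:
        return "BICC works but may be overkill; REST API is simpler"
    if records_per_day <= 100_000:
        return "BICC with incremental extraction"
    return "BICC (full/incremental), split file size 2GB, parallel degree 8"
```

The target-system and frequency branches are deployment decisions rather than pure thresholds, so they are left to the tree.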

Quick Reference

| Feature | Value | Notes |
| --- | --- | --- |
| Output format | Compressed CSV (gzip) | One manifest + one or more CSV files per VO [src2, src5] |
| Storage targets | UCM (default) or OCI Object Storage | OCI recommended for new deployments [src3] |
| Extraction types | Full or Incremental | Incremental based on LastUpdateDate [src1, src7] |
| Scheduling | On-demand or scheduled (hourly/daily/monthly) | Minimum 4-hour interval recommended [src1, src4] |
| Data source | Pre-built PVOs per offering | Use ExtractPVOs only [src7] |
| Max file size | 5GB (silently truncated) | Configure split at 1-2GB [src2, src4] |
| Default split size | 1GB | Configurable up to 5GB [src2] |
| Default timeout | 10 hours per VO | Adjustable via Manage Offerings [src7] |
| Parallel threads | 1-8 recommended | Optimal is 4-8 [src4] |
| Compression | gzip | Reduces transfer size by ~70% [src4, src5] |
| Manifest file | JSON | Contains VO name, row count, file list, timestamps [src5, src6] |
| Incremental support | Yes (per PVO) | Requires LastUpdateDate column [src7] |
| Scheduling API | SOAP | For programmatic trigger/scheduling [src1] |
| Monitoring API | REST | For job status polling [src1] |

Step-by-Step Integration Guide

1. Configure BICC external storage target

Set up OCI Object Storage as the extraction target (recommended over UCM for new deployments). [src3, src8]

1. Log into Oracle Fusion Cloud as admin
2. Navigate to: Tools > BI Cloud Connector
3. Go to: Manage Extracts > Configure External Storage
4. Select: Oracle Cloud Infrastructure Object Storage
5. Enter: Tenancy OCID, User OCID, Fingerprint, Private key (PEM), Region, Namespace, Bucket name
6. Test Connection > Save

Verify: Navigate to Manage Extracts > External Storage tab. Status should show "Connected".

2. Select data store offerings and PVOs

Choose the functional area and specific view objects for extraction. Use ExtractPVOs only. [src1, src7]

1. In BICC Console: Manage Extracts > Select Offerings
2. Choose offering (e.g., "Financial Analytics")
3. Select specific ExtractPVOs (avoid OTBI PVOs)
4. Select required columns (include LastUpdateDate for incremental)
5. Set split file size: 2GB
6. Set parallel degree: 4-8

Verify: Selected PVOs appear under the offering with column selections saved.

3. Run initial full extract

Run the first full extraction to establish the baseline dataset. [src1, src2]

1. In BICC Console: Manage Extracts > Create Extract
2. Select offering, Extract type: Full
3. Storage target: OCI Object Storage
4. Timeout: 20 hours (for large initial loads)
5. Click: Submit > Monitor under Monitor Extracts

Verify: Job status shows "Completed". Download manifest and verify row counts match expected totals.

4. Set up incremental extraction schedule

Configure recurring incremental extracts for ongoing data sync. [src1, src7]

1. In BICC Console: Manage Extracts > Schedule
2. Extract type: Incremental
3. Recurrence: Daily (or every 4+ hours)
4. Prune time: 2 hours
5. Start time: During Fusion maintenance window

Verify: Scheduled job appears under Monitor Extracts. After first scheduled run, incremental files contain only changed records.

5. Download and validate extracted files

Retrieve files from OCI Object Storage and validate completeness. [src5, src6]

# List extracted files in the BICC bucket
oci os object list --bucket-name bicc-extracts --prefix "data/" --output json

# Download manifest and data files
oci os object get --bucket-name bicc-extracts --name "data/manifest.json" --file manifest.json

# Validate row count matches manifest
MANIFEST_COUNT=$(jq '.rowCount' manifest.json)
ACTUAL_COUNT=$(zcat extracted_file.csv.gz | wc -l)
echo "Manifest: $MANIFEST_COUNT, Actual: $((ACTUAL_COUNT - 1))"

Verify: Manifest row count equals actual CSV row count minus 1 (header). If actual is lower, file was truncated.

Code Examples

Python: Download and validate BICC extracts from OCI Object Storage

# Input:  OCI config, bucket name, extract prefix
# Output: Downloaded CSV files with row count validation

import oci  # oci>=2.119.0
import gzip
import json
import os

def download_bicc_extract(config_file, bucket_name, prefix, output_dir):
    """Download BICC extract from OCI Object Storage and validate."""
    config = oci.config.from_file(config_file)
    object_storage = oci.object_storage.ObjectStorageClient(config)
    namespace = object_storage.get_namespace().data
    os.makedirs(output_dir, exist_ok=True)

    objects = object_storage.list_objects(
        namespace, bucket_name, prefix=prefix
    ).data.objects

    manifest = None
    csv_files = []

    for obj in objects:
        local_path = os.path.join(output_dir, os.path.basename(obj.name))
        response = object_storage.get_object(namespace, bucket_name, obj.name)
        with open(local_path, "wb") as f:
            for chunk in response.data.raw.stream(1024 * 1024, decode_content=False):
                f.write(chunk)
        if obj.name.endswith("manifest.json"):
            with open(local_path) as f:
                manifest = json.load(f)
        elif obj.name.endswith(".csv.gz"):
            csv_files.append(local_path)

    # Validate total rows against the manifest. Split files are parts of one
    # logical extract, so sum data rows (minus headers) across all files
    # instead of comparing each split file to the VO's total row count.
    if manifest and "rowCount" in manifest:
        expected = int(manifest["rowCount"])
        actual = 0
        for csv_path in csv_files:
            with gzip.open(csv_path, "rt") as f:
                actual += sum(1 for _ in f) - 1  # subtract header row
        status = "OK" if actual >= expected else "TRUNCATED"
        print(f"rows: expected={expected}, actual={actual} [{status}]")
    return csv_files

Python: Trigger BICC extract via SOAP API

# Input:  Oracle Fusion SOAP endpoint, credentials, offering name
# Output: Extract job ID for monitoring

import requests  # requests>=2.31.0
from xml.etree import ElementTree

def trigger_bicc_extract(fusion_url, username, password, offering_name):
    """Trigger a BICC incremental extract via SOAP API."""
    soap_url = f"{fusion_url}/biacm/ws/BIACMService"  # path and operation names vary by release; confirm against your pod's WSDL
    soap_body = f"""<?xml version="1.0" encoding="UTF-8"?>
    <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                      xmlns:typ="http://xmlns.oracle.com/apps/biacm/types">
      <soapenv:Header/>
      <soapenv:Body>
        <typ:submitExtractRequest>
          <typ:offeringName>{offering_name}</typ:offeringName>
          <typ:extractType>INCREMENTAL</typ:extractType>
        </typ:submitExtractRequest>
      </soapenv:Body>
    </soapenv:Envelope>"""

    response = requests.post(
        soap_url, data=soap_body,
        headers={"Content-Type": "text/xml; charset=utf-8"},
        auth=(username, password), timeout=120)
    response.raise_for_status()
    root = ElementTree.fromstring(response.text)
    job_id = root.find(".//{http://xmlns.oracle.com/apps/biacm/types}jobId")
    return job_id.text if job_id is not None else None

cURL: Monitor BICC job status

# Input:  Oracle Fusion URL, credentials, job ID
# Output: Job status (RUNNING / COMPLETED / FAILED)

# Check job status
curl -s -u "bicc_user:password" \
  "https://your-instance.fa.us2.oraclecloud.com/biacm/api/v1/jobs/${JOB_ID}/status" \
  -H "Content-Type: application/json"

# List all BICC files in OCI Object Storage bucket
oci os object list --bucket-name bicc-extracts --prefix "data/" --output table

# Check for files approaching 5GB truncation limit
oci os object list --bucket-name bicc-extracts --prefix "data/" \
  --output json | jq '.data[] | select(.size > 4000000000) | {name, size}'

Data Mapping

BICC Output File Structure

| Component | Purpose | Format | Location |
| --- | --- | --- | --- |
| Manifest file | Metadata: VO name, row count, file list, timestamps | JSON | Root of extract directory [src5, src6] |
| Data files | Extracted records from each PVO | Compressed CSV (gzip) | One or more files per PVO (split by configured size) [src2] |
| Log files | Extraction status, errors, timings per VO | JSON | Per-job log directory [src6] |
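A sketch of pulling the manifest fields this card relies on into a summary dict. The key names (`voName`, `rowCount`, `files`) follow this card's examples and should be confirmed against a real manifest from your pod:

```python
import json

def summarize_manifest(manifest_json: str) -> dict:
    """Extract the validation-relevant fields from a BICC manifest.

    Key names are assumptions based on this card's examples; the exact
    manifest layout varies by release.
    """
    m = json.loads(manifest_json)
    return {
        "vo": m.get("voName", "unknown"),
        "expected_rows": int(m.get("rowCount", 0)),
        "file_count": len(m.get("files", [])),
    }
```

The `expected_rows` value is what post-extraction truncation checks compare against.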

Data Type Gotchas

Error Handling & Failure Points

Common Error Conditions

| Condition | Symptom | Cause | Resolution |
| --- | --- | --- | --- |
| 5GB silent truncation | Job shows "Completed" but CSV has fewer rows | File exceeded 5GB limit | Reduce split file size to 2GB; filter columns [src2, src4] |
| Extract timeout | Job shows "Failed" after 10+ hours | VO has too many records for initial load | Increase timeout; partition by business unit [src7] |
| UCM storage full | New extracts fail to write files | UCM shared storage exhausted | Clean up old extracts; migrate to OCI Object Storage [src3] |
| PVO performance degradation | Extract runs 10x slower than normal | Using OTBI PVO instead of ExtractPVO | Switch to designated ExtractPVO [src7] |
| Overlapping extract jobs | Second job queued indefinitely | Previous extract still running | Wait for completion or cancel; extend interval [src1] |
| Incremental returning no data | Empty CSV files | LastUpdateDate not selected or prune time too narrow | Verify column selection; adjust prune time [src7] |
| OCI Object Storage auth failure | Extract fails at file write | Expired API key or insufficient IAM | Rotate key; verify IAM policy [src8] |

Failure Points in Production

Anti-Patterns

Wrong: Using OTBI reporting PVOs for BICC extraction

-- BAD: Selecting OTBI PVO for bulk extraction
-- These PVOs join transactional tables, degrade Fusion performance for all users
Selected PVO: FscmTopModelAM.FinancialAnalysisAM.TransactionHeaderPVO
Result: 10x slower extraction, Fusion UI sluggish

Correct: Use designated ExtractPVOs

-- GOOD: Use the purpose-built ExtractPVO for the same data
-- ExtractPVOs read from denormalized views optimized for bulk extraction
Selected PVO: FscmTopModelAM.FinExtractAM.ApInvoicesExtractPVO
Result: Fast extraction with minimal Fusion performance impact

Wrong: Setting split file size to 5GB

-- BAD: Maximizing split size to reduce file count
-- Split file size: 5GB
-- A PVO producing 5.1GB will be silently truncated to 5GB
-- You lose data with a "Completed" status and no error

Correct: Set split file size to 2GB with post-extraction validation

-- GOOD: Keep split size well under the 5GB ceiling
-- Split file size: 2GB
-- A 5.1GB extraction produces three files: 2GB + 2GB + 1.1GB
-- All data preserved, no truncation risk
-- Always compare manifest rowCount vs actual CSV row count
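Assuming splitting produces ceil(size / split size) files, as in the 5.1GB walkthrough above, the expected file count is easy to sanity-check against what lands in the bucket:

```python
import math

def split_file_count(extract_gb: float, split_gb: float = 2.0) -> int:
    """Number of output files produced when an extract is split at split_gb."""
    return max(1, math.ceil(extract_gb / split_gb))
```

Fewer files in the bucket than this predicts is another truncation signal, alongside the manifest row-count check.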

Wrong: Running BICC extracts every 30 minutes

-- BAD: Treating BICC like a real-time integration
-- Schedule: Every 30 minutes
-- Previous extract may not finish, queue buildup,
-- Fusion performance degrades under continuous extraction load

Correct: Use 4+ hour intervals for BICC; REST API for real-time

-- GOOD: Right tool for the right interval
-- BICC schedule: Every 4-6 hours (or daily for most use cases)
-- For sub-hour data needs: Use Oracle REST API
-- For real-time events: Use Oracle Business Events / webhooks

Common Pitfalls

Diagnostic Commands

# Check OCI Object Storage for BICC extract files
oci os object list --bucket-name bicc-extracts --prefix "data/" --output table

# Validate extracted file sizes (check for files approaching 5GB)
oci os object list --bucket-name bicc-extracts --prefix "data/" \
  --output json | jq '.data[] | select(.size > 4000000000) | {name, size}'

# Download and inspect manifest for row counts
oci os object get --bucket-name bicc-extracts \
  --name "data/manifest.json" --file /tmp/manifest.json
cat /tmp/manifest.json | jq '.'

# Count rows in extracted CSV to detect truncation
zcat extracted_file.csv.gz | wc -l

# Check BICC job status via Fusion REST
curl -s -u "admin:password" \
  "https://your-instance.fa.us2.oraclecloud.com/biacm/api/v1/jobs?status=RUNNING" \
  -H "Accept: application/json" | jq '.'

Version History & Compatibility

| Release | Date | Status | Key BICC Changes | Notes |
| --- | --- | --- | --- | --- |
| 25C (Update 25.06) | 2025-06 | Current | OCI Object Storage enhancements | Improved connectivity and monitoring [src1] |
| 25B (Update 25.03) | 2025-03 | Supported | OCI Object Storage recommended over UCM | New deployments guided to OCI OS [src3] |
| 24D (Update 24.12) | 2024-12 | Supported | Enhanced extraction logging | JSON log files with per-VO timings [src6] |
| 24C (Update 24.09) | 2024-09 | Supported | Custom object extraction support | ExtractPVOs for custom objects [src5] |
| 24B (Update 24.06) | 2024-06 | Supported | Split file size configurability | 1-5GB configurable per VO [src2] |

When to Use / When Not to Use

| Use When | Don't Use When | Use Instead |
| --- | --- | --- |
| Bulk outbound data extraction for warehousing/analytics | Real-time individual record operations (<1s latency) | Oracle ERP Cloud REST API |
| Scheduled data sync with >100K records/day | Inbound data loading into Oracle Fusion | FBDI (File-Based Data Import) |
| Data lake population from Oracle Fusion | Sub-hour data freshness requirements | REST API + Business Events |
| Initial data migration to external systems | Writing data back to Oracle Fusion | REST API or FBDI |
| Cross-functional analytics data extraction | Small data volumes (<1,000 records/day) | REST API (simpler, real-time) |
| Populating Oracle ADW or OCI data lakehouse | Need JSON/XML/Parquet output format | REST API or custom extraction |

Important Caveats

Related Units