Oracle BICC is a native data extraction tool embedded in Oracle Fusion Cloud Applications (ERP, HCM, SCM, CX, and Procurement). It provides pre-built extract groupings called "offerings" that map to functional areas. Each offering contains one or more data stores backed by Public View Objects (PVOs) — specifically, ExtractPVOs designed for bulk extraction efficiency. BICC is NOT a general-purpose API — it is purpose-built for scheduled, high-volume outbound data movement to external systems. This card covers BICC extraction only (outbound). For inbound data loading, see the FBDI card. [src1, src7]
| Property | Value |
|---|---|
| Vendor | Oracle |
| System | Oracle Fusion Cloud Applications (Release 25C) |
| API Surface | BICC (BI Cloud Connector) — bulk CSV extraction |
| Current Version | 25C (Update 25.06) |
| Editions Covered | All Oracle Fusion Cloud editions (ERP, HCM, SCM, CX, Procurement) |
| Deployment | Cloud |
| API Docs | BICC Documentation |
| Status | GA — actively maintained |
BICC is not a traditional API — it is a managed extraction service with a console UI, SOAP API for scheduling, and REST API for job management. Data is extracted from pre-built PVOs and written to storage targets as compressed CSV files. [src1, src2, src4]
| API Surface | Protocol | Best For | Max Records/Request | Rate Limit | Real-time? | Bulk? |
|---|---|---|---|---|---|---|
| BICC Console (UI) | Web UI | Manual/ad-hoc extracts, configuration | Unlimited (PVO-dependent) | N/A | No | Yes |
| BICC SOAP API | SOAP/XML | Programmatic scheduling and triggering | Unlimited (PVO-dependent) | No explicit limit; 10h timeout per VO | No | Yes |
| BICC REST API | HTTPS/JSON | Job status monitoring, metadata queries | N/A (monitoring only) | No explicit limit | No | N/A |
| UCM WebDAV | WebDAV/HTTPS | Downloading extracted files from UCM storage | 5GB per file | Subject to UCM storage limits | No | Yes |
| OCI Object Storage API | HTTPS/JSON (S3-compatible) | Downloading extracted files from OCI storage | No per-file limit on download | OCI service limits apply | No | Yes |
| Limit Type | Value | Applies To | Notes |
|---|---|---|---|
| Max file size per CSV | 5GB | All extracted files | Silently truncated if exceeded — no error raised [src2, src4] |
| Split file size (configurable) | 1GB-5GB (default 1GB) | Per-VO extraction output | Set in BICC Console under extract preferences [src2] |
| Extract timeout (default) | 10 hours per VO | Individual VO within an extract job | Configurable via Manage Offerings > Job Settings [src7] |
| Parallel processing degree | 1-8 (recommended max 8) | Per extract job | Beyond 8 threads, diminishing returns [src4] |
| Max concurrent extract jobs | 1 per offering | Per BICC offering | Multiple offerings can run concurrently [src1] |
| Limit Type | Value | Window | Notes |
|---|---|---|---|
| Minimum recommended interval | 4 hours | Between extracts of same offering | Shorter intervals risk overlapping jobs and Fusion performance degradation [src4] |
| Scheduling recurrence options | Immediate, Hourly, Daily, Day of Month, Year | Per extract schedule | Hourly is the most granular UI option [src1, src2] |
| Prune time for incremental data | Configurable (hours) | Per offering | Critical for data consistency — recommended 2-4 hours [src7] |
| Job queue position | FIFO within offering | Per offering | Jobs queued if previous extract still running [src1] |
| Limit Type | Value | Applies To | Notes |
|---|---|---|---|
| UCM storage | Shared with Fusion instance | All UCM content (not just BICC) | Monitor utilization — full UCM blocks all BICC extracts [src3] |
| OCI Object Storage | Per OCI tenancy limits | Dedicated BICC bucket | Recommended: separate compartment for BICC data [src3, src8] |
| File retention on UCM | No auto-cleanup | Extracted files accumulate | Must implement manual or scripted cleanup [src3] |
| File retention on OCI | Lifecycle policies available | Per bucket | Configure auto-archive/delete policies [src3, src8] |
BICC requires Oracle Fusion Cloud credentials with specific roles provisioned. For API-driven scheduling, JWT token authentication is available. [src1, src8]
| Flow | Use When | Token Lifetime | Refresh? | Notes |
|---|---|---|---|---|
| Oracle Fusion SSO | Interactive BICC Console access | Session-based | Yes (session refresh) | Standard Fusion login — requires BICC-specific roles [src1] |
| JWT Token Authentication | Programmatic SOAP/REST API access | Configurable | New token per request | Requires X.509 certificate registration; more secure [src8] |
| Basic Auth (username/password) | Legacy API access, testing | Session-based | No | Not recommended for production — no MFA support [src8] |
For OCI Object Storage targets, the registered OCI user also needs an IAM policy granting `manage objects` in the compartment on the target bucket — a separate permission layer from Fusion roles. [src8]

START -- Need to extract bulk data from Oracle Fusion Cloud
|-- What's the data volume?
| |-- < 1,000 records/day
| | |-- Need real-time? --> REST API (different card)
| | |-- Batch OK? --> BICC works but may be overkill
| |-- 1,000-100,000 records/day
| | |-- BICC is the right tool
| | |-- Use incremental extraction to minimize volume
| |-- > 100,000 records/day
| | |-- BICC is required -- REST API cannot handle this volume
| | |-- Configure split file size = 2GB, parallel degree = 8
| | |-- Validate row counts post-extraction (5GB truncation risk)
|-- What's the target system?
| |-- Oracle ADW / OCI Data Lakehouse
| | |-- Use OCI Object Storage target (native integration)
| |-- Third-party data warehouse (Snowflake, BigQuery, Redshift)
| | |-- OCI Object Storage --> external stage or S3-compatible fetch
| |-- On-premise data warehouse
| | |-- UCM WebDAV download or OCI Object Storage with VPN
|-- Extraction frequency?
| |-- Once (initial load) --> Full extract, split at 2GB, 20h+ timeout
| |-- Daily --> Incremental extraction with prune time
| |-- More frequent than daily --> Minimum 4h interval recommended
|-- Functional areas?
| |-- Financials --> Financial Analytics offering
| |-- Procurement --> Procurement Analytics offering
| |-- HCM --> HCM Analytics offering
| |-- Cross-functional --> Multiple offerings (no transactional consistency)
| Feature | Value | Notes |
|---|---|---|
| Output format | Compressed CSV (gzip) | One manifest + one or more CSV files per VO [src2, src5] |
| Storage targets | UCM (default) or OCI Object Storage | OCI recommended for new deployments [src3] |
| Extraction types | Full or Incremental | Incremental based on LastUpdateDate [src1, src7] |
| Scheduling | On-demand or scheduled (hourly/daily/monthly) | Minimum 4-hour interval recommended [src1, src4] |
| Data source | Pre-built PVOs per offering | Use ExtractPVOs only [src7] |
| Max file size | 5GB (silently truncated) | Configure split at 1-2GB [src2, src4] |
| Default split size | 1GB | Configurable up to 5GB [src2] |
| Default timeout | 10 hours per VO | Adjustable via Manage Offerings [src7] |
| Parallel threads | 1-8 recommended | Optimal is 4-8 [src4] |
| Compression | gzip | Reduces transfer size by ~70% [src4, src5] |
| Manifest file | JSON | Contains VO name, row count, file list, timestamps [src5, src6] |
| Incremental support | Yes (per PVO) | Requires LastUpdateDate column [src7] |
| Scheduling API | SOAP | For programmatic trigger/scheduling [src1] |
| Monitoring API | REST | For job status polling [src1] |
Set up OCI Object Storage as the extraction target (recommended over UCM for new deployments). [src3, src8]
1. Log into Oracle Fusion Cloud as admin
2. Navigate to: Tools > BI Cloud Connector
3. Go to: Manage Extracts > Configure External Storage
4. Select: Oracle Cloud Infrastructure Object Storage
5. Enter: Tenancy OCID, User OCID, Fingerprint, Private key (PEM), Region, Namespace, Bucket name
6. Test Connection > Save
Verify: Navigate to Manage Extracts > External Storage tab. Status should show "Connected".
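The connection values in step 5 map onto a standard OCI API-key configuration. An illustrative fragment with placeholder OCIDs (not real values — substitute your tenancy's details):

```ini
; Illustrative ~/.oci/config entries matching the step-5 fields (placeholders)
[DEFAULT]
user=ocid1.user.oc1..exampleuniqueID
fingerprint=aa:bb:cc:dd:ee:ff:00:11:22:33:44:55:66:77:88:99
tenancy=ocid1.tenancy.oc1..exampleuniqueID
region=us-ashburn-1
key_file=~/.oci/bicc_api_key.pem
```

The region and namespace entered in the BICC Console must match the bucket's actual location; a mismatch is a common cause of the "Test Connection" step failing.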
Choose the functional area and specific view objects for extraction. Use ExtractPVOs only. [src1, src7]
1. In BICC Console: Manage Extracts > Select Offerings
2. Choose offering (e.g., "Financial Analytics")
3. Select specific ExtractPVOs (avoid OTBI PVOs)
4. Select required columns (include LastUpdateDate for incremental)
5. Set split file size: 2GB
6. Set parallel degree: 4-8
Verify: Selected PVOs appear under the offering with column selections saved.
Run the first full extraction to establish the baseline dataset. [src1, src2]
1. In BICC Console: Manage Extracts > Create Extract
2. Select offering, Extract type: Full
3. Storage target: OCI Object Storage
4. Timeout: 20 hours (for large initial loads)
5. Click: Submit > Monitor under Monitor Extracts
Verify: Job status shows "Completed". Download manifest and verify row counts match expected totals.
Configure recurring incremental extracts for ongoing data sync. [src1, src7]
1. In BICC Console: Manage Extracts > Schedule
2. Extract type: Incremental
3. Recurrence: Daily (or every 4+ hours)
4. Prune time: 2 hours
5. Start time: During Fusion maintenance window
Verify: Scheduled job appears under Monitor Extracts. After first scheduled run, incremental files contain only changed records.
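The prune-time behavior above can be sketched as follows: each incremental run rewinds its lower bound by the prune time, re-capturing rows whose LastUpdateDate was stamped before the previous run but committed after it started. A minimal illustration of the windowing logic, not BICC's internal implementation:

```python
from datetime import datetime, timedelta, timezone

def incremental_window(last_extract_start: datetime, prune_hours: int) -> datetime:
    """Lower bound for LastUpdateDate in the next incremental extract.

    Rewinding by the prune time creates a safety overlap so late-committing
    transactions are not silently skipped between runs.
    """
    return last_extract_start - timedelta(hours=prune_hours)

# Example: last run started at 02:00 UTC with a 2-hour prune time
last_run = datetime(2025, 6, 1, 2, 0, tzinfo=timezone.utc)
print(incremental_window(last_run, 2))  # 2025-06-01 00:00:00+00:00
```

The overlap means incremental files can contain duplicates of already-loaded rows; the downstream load should merge on primary key rather than blindly append.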
Retrieve files from OCI Object Storage and validate completeness. [src5, src6]
# List extracted files in the BICC bucket
oci os object list --bucket-name bicc-extracts --prefix "data/" --output json
# Download manifest and data files
oci os object get --bucket-name bicc-extracts --name "data/manifest.json" --file manifest.json
# Validate row count matches manifest
MANIFEST_COUNT=$(jq '.rowCount' manifest.json)
ACTUAL_COUNT=$(zcat extracted_file.csv.gz | wc -l)
echo "Manifest: $MANIFEST_COUNT, Actual: $((ACTUAL_COUNT - 1))"
Verify: Manifest row count equals actual CSV row count minus 1 (header). If actual is lower, file was truncated.
# Input: OCI config, bucket name, extract prefix
# Output: Downloaded CSV files with row count validation
import oci # oci>=2.119.0
import gzip
import json
import os
def download_bicc_extract(config_file, bucket_name, prefix, output_dir):
    """Download BICC extract from OCI Object Storage and validate row counts."""
    config = oci.config.from_file(config_file)
    object_storage = oci.object_storage.ObjectStorageClient(config)
    namespace = object_storage.get_namespace().data
    objects = object_storage.list_objects(
        namespace, bucket_name, prefix=prefix
    ).data.objects

    manifest = None
    csv_files = []
    for obj in objects:
        local_path = os.path.join(output_dir, os.path.basename(obj.name))
        response = object_storage.get_object(namespace, bucket_name, obj.name)
        with open(local_path, "wb") as f:
            for chunk in response.data.raw.stream(1024 * 1024):
                f.write(chunk)
        if obj.name.endswith("manifest.json"):
            with open(local_path) as f:
                manifest = json.load(f)
        elif obj.name.endswith(".csv.gz"):
            csv_files.append(local_path)

    # Validate total row count against the manifest. Split files are parts of
    # one logical dataset, so sum data rows across all of them.
    if manifest and "rowCount" in manifest:
        expected = int(manifest["rowCount"])
        actual_rows = 0
        for csv_path in csv_files:
            with gzip.open(csv_path, "rt") as f:
                actual_rows += sum(1 for _ in f) - 1  # subtract header row
        status = "OK" if actual_rows >= expected else "TRUNCATED"
        print(f"expected={expected}, actual={actual_rows} [{status}]")
    return csv_files
# Input: Oracle Fusion SOAP endpoint, credentials, offering name
# Output: Extract job ID for monitoring
import requests # requests>=2.31.0
from xml.etree import ElementTree
def trigger_bicc_extract(fusion_url, username, password, offering_name):
    """Trigger a BICC incremental extract via SOAP API."""
    soap_url = f"{fusion_url}/biacm/ws/BIACMService"
    soap_body = f"""<?xml version="1.0" encoding="UTF-8"?>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                  xmlns:typ="http://xmlns.oracle.com/apps/biacm/types">
  <soapenv:Header/>
  <soapenv:Body>
    <typ:submitExtractRequest>
      <typ:offeringName>{offering_name}</typ:offeringName>
      <typ:extractType>INCREMENTAL</typ:extractType>
    </typ:submitExtractRequest>
  </soapenv:Body>
</soapenv:Envelope>"""
    response = requests.post(
        soap_url,
        data=soap_body,
        headers={"Content-Type": "text/xml; charset=utf-8"},
        auth=(username, password),
        timeout=120,
    )
    response.raise_for_status()
    root = ElementTree.fromstring(response.text)
    job_id = root.find(".//{http://xmlns.oracle.com/apps/biacm/types}jobId")
    return job_id.text if job_id is not None else None
# Input: Oracle Fusion URL, credentials, job ID
# Output: Job status (RUNNING / COMPLETED / FAILED)
# Check job status
curl -s -u "bicc_user:password" \
"https://your-instance.fa.us2.oraclecloud.com/biacm/api/v1/jobs/${JOB_ID}/status" \
-H "Content-Type: application/json"
# List all BICC files in OCI Object Storage bucket
oci os object list --bucket-name bicc-extracts --prefix "data/" --output table
# Check for files approaching 5GB truncation limit
oci os object list --bucket-name bicc-extracts --prefix "data/" \
--output json | jq '.data[] | select(.size > 4000000000) | {name, size}'
| Component | Purpose | Format | Location |
|---|---|---|---|
| Manifest file | Metadata: VO name, row count, file list, timestamps | JSON | Root of extract directory [src5, src6] |
| Data files | Extracted records from each PVO | Compressed CSV (gzip) | One or more files per PVO (split by configured size) [src2] |
| Log files | Extraction status, errors, timings per VO | JSON | Per-job log directory [src6] |
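The exact manifest schema varies by release; an illustrative shape, with field names assumed from the descriptions above rather than taken from a live extract:

```json
{
  "voName": "FscmTopModelAM.FinExtractAM.ApInvoicesExtractPVO",
  "rowCount": 1250000,
  "files": ["file_1.csv.gz", "file_2.csv.gz"],
  "extractStartTime": "2025-06-01T02:00:00Z",
  "extractEndTime": "2025-06-01T02:41:13Z"
}
```

Whatever the field names in your release, the row count and file list are the two values every downstream loader should check before ingesting.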
| Condition | Symptom | Cause | Resolution |
|---|---|---|---|
| 5GB silent truncation | Job shows "Completed" but CSV has fewer rows | File exceeded 5GB limit | Reduce split file size to 2GB; filter columns [src2, src4] |
| Extract timeout | Job shows "Failed" after 10+ hours | VO has too many records for initial load | Increase timeout; partition by business unit [src7] |
| UCM storage full | New extracts fail to write files | UCM shared storage exhausted | Clean up old extracts; migrate to OCI Object Storage [src3] |
| PVO performance degradation | Extract runs 10x slower than normal | Using OTBI PVO instead of ExtractPVO | Switch to designated ExtractPVO [src7] |
| Overlapping extract jobs | Second job queued indefinitely | Previous extract still running | Wait for completion or cancel; extend interval [src1] |
| Incremental returning no data | Empty CSV files | LastUpdateDate not selected or prune time too narrow | Verify column selection; adjust prune time [src7] |
| OCI Object Storage auth failure | Extract fails at file write | Expired API key or insufficient IAM | Rotate key; verify IAM policy [src8] |
Mitigations:
- 5GB truncation: set split file size to 2GB; implement post-extraction row count validation. [src2, src4]
- Extract timeout: increase timeout to 24+ hours; partition extraction by ledger/business unit. [src7]
- UCM storage full: implement automated cleanup via UCM WebDAV API, or migrate to OCI Object Storage with lifecycle policies. [src3]
- Offering reset: monitor row counts after any offering reset; temporarily reduce split file size. [src7]
- Wrong PVO type: audit PVO selections -- ExtractPVO names follow *ExtractPVO or *BICVO pattern. [src7]

-- BAD: Selecting OTBI PVO for bulk extraction
-- These PVOs join transactional tables, degrade Fusion performance for all users
Selected PVO: FscmTopModelAM.FinancialAnalysisAM.TransactionHeaderPVO
Result: 10x slower extraction, Fusion UI sluggish
-- GOOD: Use the purpose-built ExtractPVO for the same data
-- ExtractPVOs read from denormalized views optimized for bulk extraction
Selected PVO: FscmTopModelAM.FinExtractAM.ApInvoicesExtractPVO
Result: Fast extraction with minimal Fusion performance impact
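The naming-pattern audit suggested in the mitigations (*ExtractPVO / *BICVO) can be scripted. A minimal sketch, assuming you already have the list of selected PVO names exported from the BICC Console:

```python
def flag_suspect_pvos(pvo_names):
    """Return PVO names that do not follow the bulk-extract naming patterns."""
    return [
        name for name in pvo_names
        if not (name.endswith("ExtractPVO") or name.endswith("BICVO"))
    ]

selected = [
    "FscmTopModelAM.FinExtractAM.ApInvoicesExtractPVO",
    "FscmTopModelAM.FinancialAnalysisAM.TransactionHeaderPVO",  # OTBI-style
]
print(flag_suspect_pvos(selected))
# ['FscmTopModelAM.FinancialAnalysisAM.TransactionHeaderPVO']
```

Flagged names are not automatically wrong, but each one deserves a manual check against the offering's documented ExtractPVO list before it goes into a scheduled extract.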
-- BAD: Maximizing split size to reduce file count
-- Split file size: 5GB
-- A PVO producing 5.1GB will be silently truncated to 5GB
-- You lose data with a "Completed" status and no error
-- GOOD: Keep split size well under the 5GB ceiling
-- Split file size: 2GB
-- A 5.1GB extraction produces three files: 2GB + 2GB + 1.1GB
-- All data preserved, no truncation risk
-- Always compare manifest rowCount vs actual CSV row count
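The split arithmetic above is just a ceiling division: file count equals the extract size divided by the split size, rounded up. A quick check using the 5.1GB example:

```python
import math

def split_file_count(extract_gb: float, split_gb: float) -> int:
    """Number of output files for a given extract size and split size."""
    return math.ceil(extract_gb / split_gb)

print(split_file_count(5.1, 2))  # 3 files: 2GB + 2GB + 1.1GB
```

Note the math only holds below the 5GB ceiling: at a 5GB split size, an oversized PVO is truncated rather than spilled into a second file, which is exactly why the 2GB setting is recommended.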
-- BAD: Treating BICC like a real-time integration
-- Schedule: Every 30 minutes
-- Previous extract may not finish, queue buildup,
-- Fusion performance degrades under continuous extraction load
-- GOOD: Right tool for the right interval
-- BICC schedule: Every 4-6 hours (or daily for most use cases)
-- For sub-hour data needs: Use Oracle REST API
-- For real-time events: Use Oracle Business Events / webhooks
# Check OCI Object Storage for BICC extract files
oci os object list --bucket-name bicc-extracts --prefix "data/" --output table
# Validate extracted file sizes (check for files approaching 5GB)
oci os object list --bucket-name bicc-extracts --prefix "data/" \
--output json | jq '.data[] | select(.size > 4000000000) | {name, size}'
# Download and inspect manifest for row counts
oci os object get --bucket-name bicc-extracts \
--name "data/manifest.json" --file /tmp/manifest.json
cat /tmp/manifest.json | jq '.'
# Count rows in extracted CSV to detect truncation
zcat extracted_file.csv.gz | wc -l
# Check BICC job status via Fusion REST
curl -s -u "admin:password" \
"https://your-instance.fa.us2.oraclecloud.com/biacm/api/v1/jobs?status=RUNNING" \
-H "Accept: application/json" | jq '.'
| Release | Date | Status | Key BICC Changes | Notes |
|---|---|---|---|---|
| 25C (Update 25.06) | 2025-06 | Current | OCI Object Storage enhancements | Improved connectivity and monitoring [src1] |
| 25B (Update 25.03) | 2025-03 | Supported | OCI Object Storage recommended over UCM | New deployments guided to OCI OS [src3] |
| 24D (Update 24.12) | 2024-12 | Supported | Enhanced extraction logging | JSON log files with per-VO timings [src6] |
| 24C (Update 24.09) | 2024-09 | Supported | Custom object extraction support | ExtractPVOs for custom objects [src5] |
| 24B (Update 24.06) | 2024-06 | Supported | Split file size configurability | 1-5GB configurable per VO [src2] |
| Use When | Don't Use When | Use Instead |
|---|---|---|
| Bulk outbound data extraction for warehousing/analytics | Real-time individual record operations (<1s latency) | Oracle ERP Cloud REST API |
| Scheduled data sync with >100K records/day | Inbound data loading into Oracle Fusion | FBDI (File-Based Data Import) |
| Data lake population from Oracle Fusion | Sub-hour data freshness requirements | REST API + Business Events |
| Initial data migration to external systems | Writing data back to Oracle Fusion | REST API or FBDI |
| Cross-functional analytics data extraction | Small data volumes (<1,000 records/day) | REST API (simpler, real-time) |
| Populating Oracle ADW or OCI data lakehouse | Need JSON/XML/Parquet output format | REST API or custom extraction |