Oracle BI Publisher as Data Source: When to Use BIP vs BICC vs REST

Type: ERP Integration | System: Oracle Fusion Cloud Applications (25A/25B) | Confidence: 0.87 | Sources: 7 | Verified: 2026-03-09 | Freshness: 2026-03-09

TL;DR

System Profile

Oracle BI Publisher (rebranded as Oracle Analytics Publisher in 24C) is the embedded reporting engine in Oracle Fusion Cloud Applications. It sits atop Oracle Transactional Business Intelligence (OTBI) data models and can generate output in PDF, CSV, XML, Excel, and HTML. While designed for formatted reporting, BIP is frequently used as a data extraction mechanism because it can reach Fusion data that BICC view objects do not expose. This card covers Oracle Fusion Cloud SaaS deployments (ERP, HCM, SCM, CX) running Release 25A/25B.

| Property | Value |
|---|---|
| Vendor | Oracle |
| System | Oracle Fusion Cloud Applications + BI Publisher |
| API Surface | SOAP (ReportService, ScheduleService) + REST (v1/reports) |
| Current API Version | v2 (SOAP), v1 (REST) |
| Editions Covered | Enterprise (all Fusion Cloud SKUs include BIP) |
| Deployment | Cloud (SaaS) |
| API Docs | BI Publisher Web Services |
| Status | GA — but data extraction pattern is officially unsupported |

API Surfaces & Capabilities

| API Surface | Protocol | Best For | Max Records/Request | Timeout | Real-time? | Bulk? |
|---|---|---|---|---|---|---|
| SOAP ReportService (runReport) | HTTPS/XML, base64 response | Small reports <50K rows, synchronous | Memory-guarded | 240s | Yes | No |
| SOAP ExternalReportWSSService | HTTPS/XML, WS-Security headers | Same as ReportService, secure auth | Memory-guarded | 240s | Yes | No |
| SOAP ScheduleService | HTTPS/XML | Scheduling reports, retrieving completed output | N/A (async) | None (async) | No | Partial |
| REST v1/reports/{path}/run | HTTPS/JSON + multipart | Ad-hoc report execution | Memory-guarded | 240s | Yes | No |
| ESS Job (BIPJobType) | Oracle Enterprise Scheduler | Large extractions via async execution | No hard cap | None (async) | No | Yes |
| OCI Data Integration BIP Connector | REST + Object Storage staging | ETL pipelines, incremental loads | Report-dependent | Configurable | No | Yes |

Rate Limits & Quotas

Per-Request Limits

| Limit Type | Value | Applies To | Notes |
|---|---|---|---|
| Synchronous timeout | 240 seconds | runReport (SOAP and REST) | Request fails with SOAP fault after timeout; no partial results returned |
| Memory guard | Server-enforced (no published limit) | All BIP report execution | Oracle enforces in-memory caps to protect shared BI domain |
| Base64 response overhead | ~33% size inflation | SOAP responses | 100MB CSV becomes ~133MB in SOAP envelope |
| Max concurrent BI threads | Shared pool (typically 10-25) | All BIP operations per pod | Heavy extraction competes with interactive reporting users |
| sizeOfDataChunkDownload | Configurable (integer, -1 = all) | ReportRequest parameter | Use chunking for large outputs |
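The ~33% inflation figure follows directly from how base64 works: every 3 input bytes become 4 output characters. A quick sketch to verify the ratio (the 3 MB payload is an arbitrary stand-in for a CSV report):

```python
import base64

payload = b"x" * 3_000_000  # stand-in for a 3 MB CSV report payload
encoded = base64.b64encode(payload)

# 4 output bytes per 3 input bytes => ratio of 4/3 (~33% inflation)
ratio = len(encoded) / len(payload)
print(f"inflation: {ratio:.2%}")
```

Budget for this when sizing memory and network transfer for SOAP responses: a report near the memory guard in raw form may trip it once wrapped in the envelope.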

Rolling / Daily Limits

| Limit Type | Value | Window | Notes |
|---|---|---|---|
| API call limit | No published per-user limit | N/A | Shared thread pool acts as natural throttle |
| ESS concurrent jobs | Pod-dependent (typically 16-32 threads) | Per ESS domain | Heavy BIP ESS jobs reduce available threads |
| Report scheduling | No hard limit on schedule count | N/A | Oracle recommends max 20 jobs per job set |
| UCM storage | Subject to UCM quotas | Per Fusion pod | Async report output has retention policies |

Authentication

| Flow | Use When | Token Lifetime | Refresh? | Notes |
|---|---|---|---|---|
| SOAP WS-Security (username token) | Server-to-server, recommended | Session-based | Per-request credentials | Use ExternalReportWSSService |
| SOAP payload credentials | Legacy, simple integrations | Per-request | No | ReportService v2 — less secure |
| REST Basic Auth | Ad-hoc REST report execution | Per-request | No | Base64-encoded username:password |
| IDCS/OCI IAM federation | Enterprise SSO integrations | Token-based (configurable) | Yes | Requires IDCS configuration |
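For the REST Basic Auth row, the Authorization header is just the base64 of `username:password`. A minimal sketch (credentials are placeholders; the `/xmlpserver/services/rest/v1/reports` base path is the one listed above):

```python
import base64

def basic_auth_header(username: str, password: str) -> dict:
    """Build the HTTP Basic Auth header for BIP's REST surface."""
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

headers = basic_auth_header("INTEGRATION_USER", "PASSWORD")
# Pass `headers` to e.g. requests.get(f"https://{host}/xmlpserver/services/rest/v1/reports", headers=headers)
```

Libraries such as `requests` do this for you via `auth=(username, password)`; building the header by hand is only useful when your HTTP client does not.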

Authentication Gotchas

Constraints

Integration Pattern Decision Tree

START — Need to extract data from Oracle Fusion Cloud
|
+-- Is the data available via BICC view objects?
|   +-- YES --> Use BICC (bulk extraction, incremental support, no timeout)
|   +-- NO --> Continue
|
+-- Is the data available via REST API?
|   +-- YES, volume < 1K records/day --> Use REST API (real-time)
|   +-- YES, volume > 1K records/day --> REST API with pagination
|   +-- NO --> Continue (BIP may be your only option)
|
+-- Does an existing BIP report cover the required data?
|   +-- YES --> Continue with BIP extraction approach
|   +-- NO --> Create custom BIP report (requires BI Author role)
|
+-- Expected data volume?
|   +-- < 10K rows --> Synchronous runReport via SOAP/REST
|   +-- 10K-50K rows --> ESS Job (BIPJobType) async
|   +-- 50K-500K rows --> ESS Job with date-range chunking
|   +-- > 500K rows --> STOP: BIP is wrong tool; escalate for BICC
|
+-- Scheduling requirements?
|   +-- One-time/ad-hoc --> SOAP runReport or REST
|   +-- Recurring --> ESS Job via Scheduled Processes
|   +-- ETL pipeline --> OCI Data Integration BIP Connector
|
+-- Output format?
    +-- Raw data for processing --> CSV or XML (data)
    +-- Need formatted output --> PDF, Excel, HTML
    +-- Need type-safe parsing --> XML (preserves data types)
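The volume branch of the tree above can be encoded as a simple guard in orchestration code. Thresholds come from the tree; the function name and return labels are illustrative:

```python
def choose_extraction_pattern(expected_rows: int) -> str:
    """Map expected row count to the extraction pattern from the decision tree."""
    if expected_rows < 10_000:
        return "sync_runReport"   # synchronous SOAP/REST runReport
    if expected_rows <= 50_000:
        return "ess_async"        # ESS Job (BIPJobType), async
    if expected_rows <= 500_000:
        return "ess_chunked"      # ESS Job with date-range chunking
    return "escalate_bicc"        # BIP is the wrong tool at this volume
```

Putting the thresholds in code (rather than tribal knowledge) makes it easy to fail fast before submitting a request that is doomed to time out.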

Quick Reference

BIP vs BICC vs REST — Decision Matrix

| Criterion | BI Publisher | BICC | REST API |
|---|---|---|---|
| Primary purpose | Reporting | Bulk data extraction | Transactional CRUD |
| Oracle recommendation | Reporting only | Primary extraction tool | Real-time operations |
| Extraction support | Unsupported (but functional) | Fully supported | Supported (low volume) |
| Max practical volume | ~50K rows per report | Millions of rows | ~1K records per request |
| Incremental extraction | Manual (date parameter) | Built-in (CDC/delta) | Manual (filter by date) |
| Timeout risk | High (240s sync limit) | None (async) | Low (paginated) |
| Output formats | CSV, XML, Excel, PDF, HTML | CSV (Object Storage) | JSON |
| Async support | Via ESS job wrapper | Native | No (sync only) |
| Thread pool impact | Shares with interactive BI | Dedicated threads | Dedicated threads |
| Data coverage | Any data model with BIP report | Limited to published VOs | REST-enabled objects |
| Scheduling | ESS/ScheduleService | Built-in BICC scheduling | External orchestration |
| Third-party connectors | Fivetran, Celonis, OCI DI | OCI DI, most iPaaS | Most iPaaS |
| When to use | Data not in BICC, <50K rows | Bulk/warehouse loads | Real-time, <1K records/day |

Step-by-Step Integration Guide

1. Identify the correct BIP report path

Navigate to Tools > Reports and Analytics in Oracle Fusion, find the report, and note its absolute path (e.g., /Custom/Financials/AP_Invoice_Listing.xdo). [src4]

# Report paths follow this pattern:
# /shared/{module}/{report_name}.xdo   (seeded Oracle reports)
# /Custom/{module}/{report_name}.xdo   (custom reports)

Verify: Confirm the report runs manually in Fusion UI with expected parameters and output before automating.
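The path pattern from step 1 can be sanity-checked before any request is submitted. A simple guard (the regex mirrors the two documented patterns, nothing more):

```python
import re

# /shared/{module}/{report}.xdo (seeded) or /Custom/{module}/{report}.xdo (custom)
REPORT_PATH_RE = re.compile(r"^/(shared|Custom)(/[^/]+)+\.xdo$")

def is_valid_report_path(path: str) -> bool:
    """Return True if the path matches a BIP catalog report path."""
    return bool(REPORT_PATH_RE.match(path))
```

A malformed path otherwise surfaces only as a "Report not found" fault after the round trip, so validating locally saves a request.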

2. Authenticate and run report via SOAP (synchronous)

Use ExternalReportWSSService for secure authentication with credentials in the SOAP header. [src5]

<!-- SOAP Envelope for ExternalReportWSSService runReport -->
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                  xmlns:pub="http://xmlns.oracle.com/oxp/service/PublicReportService">
  <soapenv:Header>
    <wsse:Security xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/...">
      <wsse:UsernameToken>
        <wsse:Username>INTEGRATION_USER</wsse:Username>
        <wsse:Password>PASSWORD</wsse:Password>
      </wsse:UsernameToken>
    </wsse:Security>
  </soapenv:Header>
  <soapenv:Body>
    <pub:runReport>
      <pub:reportRequest>
        <pub:attributeFormat>csv</pub:attributeFormat>
        <pub:reportAbsolutePath>/Custom/Financials/AP_Invoice_Listing.xdo</pub:reportAbsolutePath>
        <pub:parameterNameValues>
          <pub:listOfParamNameValues>
            <pub:item>
              <pub:name>p_from_date</pub:name>
              <pub:values><pub:item>2026-01-01</pub:item></pub:values>
            </pub:item>
          </pub:listOfParamNameValues>
        </pub:parameterNameValues>
      </pub:reportRequest>
    </pub:runReport>
  </soapenv:Body>
</soapenv:Envelope>

Verify: HTTP 200 response with <reportBytes> element containing base64-encoded report output.

3. Decode the base64 response

SOAP response wraps report output in base64 encoding. Decode to obtain the actual file. [src4]

import base64
from lxml import etree

tree = etree.fromstring(soap_response_bytes)
ns = {'pub': 'http://xmlns.oracle.com/oxp/service/PublicReportService'}
report_bytes_b64 = tree.find('.//pub:reportBytes', ns).text
raw_output = base64.b64decode(report_bytes_b64)

with open('ap_invoices.csv', 'wb') as f:
    f.write(raw_output)

Verify: wc -l ap_invoices.csv returns expected row count.

4. Switch to async ESS job for large datasets

For reports exceeding 10K rows, register the BIP report as an ESS job of type BIPJobType to eliminate the 240-second timeout. [src2, src7]

Steps to register BIP report as ESS job:
1. Navigate to Setup and Maintenance > Manage Enterprise Scheduler Jobs
2. Create new job definition (Job Type: BIPJobType)
3. Set Report ID to BIP report path
4. Define parameters matching the BIP report parameters
5. Submit via Scheduled Processes UI or ERP Integration Service SOAP
6. Poll job status; download output from UCM or Object Storage

Verify: Job status = "Succeeded" in Scheduled Processes monitor.
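Step 6's poll-and-download loop can be sketched with a generic helper. `get_status` stands in for whichever status call you use (the Scheduled Processes REST resource shown under Diagnostic Commands, or the ERP Integration Service); the terminal state names are assumptions based on common ESS statuses and should be checked against your pod:

```python
import time

def wait_for_completion(get_status, job_id, poll_seconds=30, timeout_seconds=3600):
    """Poll an ESS job until it reaches a terminal state, or give up."""
    terminal = {"SUCCEEDED", "ERROR", "WARNING", "CANCELLED"}
    deadline = time.monotonic() + timeout_seconds
    while time.monotonic() < deadline:
        status = get_status(job_id).upper()
        if status in terminal:
            return status
        time.sleep(poll_seconds)
    raise TimeoutError(f"ESS job {job_id} did not finish within {timeout_seconds}s")
```

Usage would look like `wait_for_completion(fetch_ess_status, 12345)`, where `fetch_ess_status` is a hypothetical wrapper around the ess-job-details REST call. Keep `poll_seconds` modest (30-60s); aggressive polling adds load to the shared pod for no benefit.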

5. Implement date-range chunking for incremental loads

Since BIP has no native CDC, implement manual incremental extraction using date parameters. [src1]

from datetime import datetime, timedelta

def generate_date_chunks(start_date, end_date, chunk_days=7):
    chunks = []
    current = start_date
    while current < end_date:
        chunk_end = min(current + timedelta(days=chunk_days), end_date)
        chunks.append((current.strftime('%Y-%m-%d'),
                       chunk_end.strftime('%Y-%m-%d')))
        current = chunk_end
    return chunks

# Extract last 90 days in 7-day chunks
chunks = generate_date_chunks(datetime(2026, 1, 1), datetime(2026, 3, 9), chunk_days=7)

Verify: Each chunk returns within timeout; union of all chunks matches expected total row count.

Code Examples

Python: Extract BIP report via SOAP with retry logic

# Input:  BIP report path, parameters, Fusion Cloud host
# Output: Decoded report bytes (CSV by default)

import requests, base64, time
from lxml import etree

PUB_NS = "http://xmlns.oracle.com/oxp/service/PublicReportService"
WSSE_NS = ("http://docs.oasis-open.org/wss/2004/01/"
           "oasis-200401-wss-wssecurity-secext-1.0.xsd")

def extract_bip_report(host, username, password, report_path,
                       params, output_format='csv', max_retries=3):
    url = f"https://{host}/xmlpserver/services/ExternalReportWSSService"
    # Parameter items for the ReportRequest payload
    param_xml = "".join(
        f"<pub:item><pub:name>{name}</pub:name>"
        f"<pub:values><pub:item>{value}</pub:item></pub:values></pub:item>"
        for name, value in params.items())
    # SOAP envelope with WS-Security UsernameToken header
    envelope = f"""<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                  xmlns:pub="{PUB_NS}">
  <soapenv:Header>
    <wsse:Security xmlns:wsse="{WSSE_NS}">
      <wsse:UsernameToken>
        <wsse:Username>{username}</wsse:Username>
        <wsse:Password>{password}</wsse:Password>
      </wsse:UsernameToken>
    </wsse:Security>
  </soapenv:Header>
  <soapenv:Body>
    <pub:runReport>
      <pub:reportRequest>
        <pub:attributeFormat>{output_format}</pub:attributeFormat>
        <pub:reportAbsolutePath>{report_path}</pub:reportAbsolutePath>
        <pub:parameterNameValues>
          <pub:listOfParamNameValues>{param_xml}</pub:listOfParamNameValues>
        </pub:parameterNameValues>
      </pub:reportRequest>
    </pub:runReport>
  </soapenv:Body>
</soapenv:Envelope>"""
    headers = {'Content-Type': 'text/xml; charset=utf-8', 'SOAPAction': ''}
    for attempt in range(max_retries):
        try:
            resp = requests.post(url, data=envelope.encode('utf-8'),
                                 headers=headers, timeout=230)
            resp.raise_for_status()
            tree = etree.fromstring(resp.content)
            b64_data = tree.find(f'.//{{{PUB_NS}}}reportBytes').text
            return base64.b64decode(b64_data)
        except requests.exceptions.Timeout:
            if attempt < max_retries - 1:
                time.sleep(2 ** attempt * 5)  # exponential backoff: 5s, 10s, ...
                continue
            raise RuntimeError("BIP report timed out — switch to ESS async")

cURL: Quick test of BIP SOAP connectivity

# Test BIP SOAP endpoint availability
curl -s -o /dev/null -w "%{http_code}" \
  "https://fa-xxxx.fa.ocs.oraclecloud.com/xmlpserver/services/ExternalReportWSSService?wsdl"
# Expected: HTTP 200 with WSDL response

# Test REST API endpoint
curl -s -o /dev/null -w "%{http_code}" \
  "https://fa-xxxx.fa.ocs.oraclecloud.com/xmlpserver/services/rest/v1/reports"
# Expected: HTTP 401 (confirms endpoint exists)

Data Mapping

Output Format Selection Guide

| Format | MIME Type | Best For | Parsing Complexity | Type Preservation |
|---|---|---|---|---|
| csv | text/plain | ETL pipelines, data warehouse loads | Low | None (all strings) |
| xml (data) | application/xml | Structured parsing, type-safe extraction | Medium | Full (typed elements) |
| excel | application/vnd.ms-excel | Human review + extraction hybrid | High | Partial |
| pdf | application/pdf | Compliance/audit output only | N/A | N/A (not parseable) |
| html | text/html | Web display, email delivery | High | None |
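Because CSV output preserves no types, every conversion must be applied explicitly after parsing. A sketch with hypothetical column names (real names come from your report's data model); `Decimal` is used for amounts to avoid float rounding on currency:

```python
import csv, io
from datetime import date
from decimal import Decimal

# Hypothetical columns — real names come from the report's data model
SCHEMA = {
    "Invoice Amount": Decimal,
    "Invoice Date": date.fromisoformat,
}

def typed_rows(csv_text: str):
    """Yield CSV rows with schema conversions applied; unlisted columns stay str."""
    for row in csv.DictReader(io.StringIO(csv_text)):
        yield {k: SCHEMA.get(k, str)(v) for k, v in row.items()}

sample = "Invoice Number,Invoice Amount,Invoice Date\nINV-001,1250.50,2026-01-15\n"
rows = list(typed_rows(sample))
```

If type fidelity matters end to end, prefer the xml (data) format from the table above, which carries typed elements and avoids the conversion step entirely.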

Data Type Gotchas

Error Handling & Failure Points

Common Error Codes

| Error | Meaning | Cause | Resolution |
|---|---|---|---|
| SOAP Fault: Timeout | Report execution exceeded 240s | Dataset too large for sync execution | Switch to ESS async pattern; reduce date range |
| HTTP 401 | Authentication failure | Invalid credentials or missing WS-Security header | Verify credentials; confirm BI Consumer role |
| HTTP 500 + oracle.xdo stack trace | BIP internal error | Memory guard triggered or data model SQL error | Reduce data volume; check data model SQL |
| "Report not found" | Invalid report path | Wrong absolute path or missing permissions | Verify path in BI Catalog; check folder security |
| "Invalid parameter" | Parameter name mismatch | Name in request doesn't match data model | Use getReportParameters to discover exact names |
| ESS "Error" status | ESS job failed | Various — check ESS job log | Review ESS diagnostic log for root cause |

Failure Points in Production

Anti-Patterns

Wrong: Synchronous BIP extraction for large datasets

# BAD — synchronous SOAP call for large dataset extraction
# This will time out at 240s and consume BI thread pool capacity
def extract_all_invoices():
    return call_bip_soap(
        report='/Custom/Financials/ALL_AP_Invoices.xdo',
        params={'p_from_date': '2020-01-01'},  # 6 years of data!
        format='csv'
    )  # Fails with a SOAP fault at 240s; no partial results are returned

Correct: ESS async job with date chunking

# GOOD — register as ESS BIPJobType, chunk by date range
def extract_invoices_incremental():
    chunks = generate_date_chunks(start=datetime(2026, 1, 1),
                                  end=datetime(2026, 3, 9), chunk_days=7)
    for from_date, to_date in chunks:
        job_id = submit_ess_job('AP_INVOICE_EXTRACT_ESS',
                               params={'p_from_date': from_date,
                                       'p_to_date': to_date})
        wait_for_completion(job_id)
        download_output(job_id)

Wrong: Using BIP when BICC view objects exist

# BAD — extracting GL balances via BIP when BICC covers them
data = call_bip_soap(
    report='/Custom/GL/GL_Balances_Extract.xdo',
    params={'p_ledger': 'US Primary'}, format='csv'
)

Correct: Use BICC for any data it covers

# GOOD — BICC handles bulk extraction natively with incremental support
bicc_extract = submit_bicc_offering(
    offering='FinancialsTransactionalBusinessIntelligence',
    view_objects=['FscmTopModelAM.GlBalancesAM.GlBalancesVO'],
    mode='incremental', target='object_storage_bucket'
)

Wrong: Parsing BIP CSV by column position

# BAD — column positions change when Oracle patches update templates
invoice_number = row[0]
vendor_name = row[3]
amount = row[7]

Correct: Parse by header name

# GOOD — header-based parsing survives template column changes
import csv

with open('invoices.csv', newline='') as f:
    for row in csv.DictReader(f):
        invoice_number = row['Invoice Number']
        vendor_name = row['Vendor Name']
        amount = float(row['Invoice Amount'])

Common Pitfalls

Diagnostic Commands

# Test BIP SOAP connectivity (expect HTTP 200 with WSDL)
curl -s -o /dev/null -w "%{http_code}" \
  "https://fa-xxxx.fa.ocs.oraclecloud.com/xmlpserver/services/ExternalReportWSSService?wsdl"

# Test REST API connectivity (expect HTTP 401, confirms endpoint exists)
curl -s -o /dev/null -w "%{http_code}" \
  "https://fa-xxxx.fa.ocs.oraclecloud.com/xmlpserver/services/rest/v1/reports"

# Discover report parameters via SOAP getReportParameters
# Returns parameter names, types, default values, and LOV options

# Check ESS job status via REST (Fusion Cloud 25A+)
curl -u "INTEGRATION_USER:PASSWORD" \
  "https://fa-xxxx.fa.ocs.oraclecloud.com/hcmRestApi/resources/11.13.18.05/ess-job-details?q=RequestId=12345"

# Monitor BI server health (admin role required)
curl -u "ADMIN_USER:PASSWORD" \
  "https://fa-xxxx.fa.ocs.oraclecloud.com/xmlpserver/services/rest/v1/server/health"

Version History & Compatibility

| Release | Date | Status | Breaking Changes | Migration Notes |
|---|---|---|---|---|
| 25B | 2026-02 | Current | None | -- |
| 25A | 2025-11 | Supported | None | ESS REST API enhanced |
| 24D | 2025-08 | Supported | None | -- |
| 24C | 2025-05 | Supported | Rebranded to Oracle Analytics Publisher | UI changed; API backward-compatible |
| 24B | 2025-02 | Supported | None | -- |
| 24A | 2024-11 | EOL | REST v1 enhanced | -- |

When to Use / When Not to Use

| Use When | Don't Use When | Use Instead |
|---|---|---|
| Data is only available via BIP reports (no BICC VO coverage) | Data is available via BICC view objects | BICC bulk extraction |
| Need pre-formatted output (PDF, Excel) for compliance/audit | Need raw transactional data for warehouse loading | BICC or REST API |
| Dataset is <50K rows per extraction run | Dataset exceeds 100K rows | BICC (no row limits) |
| Existing BIP reports already model the required data | Need real-time individual record operations | REST API |
| One-time or low-frequency extractions (daily or less) | Need near-real-time data feeds (<5 min latency) | Business Events or REST API |
| Need complex report logic (calculated fields, cross-module joins) | Simple single-table extraction | REST API or BICC |
| Integration team has BIP/OTBI expertise | Team has no Oracle reporting expertise | REST API (simpler learning curve) |

Cross-System Comparison

| Capability | BI Publisher (BIP) | BICC | REST API | FBDI |
|---|---|---|---|---|
| Direction | Outbound only | Outbound only | Inbound + outbound | Inbound only |
| Volume sweet spot | 1K-50K rows | 50K-10M+ rows | 1-1K records | 1K-500K rows |
| Timeout risk | High (240s sync) | None (async) | Low | None (async) |
| CDC/Incremental | Manual (date params) | Built-in | Manual (filters) | N/A |
| Output formats | CSV, XML, Excel, PDF, HTML | CSV | JSON | CSV (template-based) |
| Scheduling | ESS jobs / ScheduleService | Built-in | External | ESS jobs |
| Auth methods | WS-Security, Basic | OAuth 2.0 | OAuth 2.0, Basic | N/A (Fusion internal) |
| Thread pool impact | Shared with reporting | Dedicated | Dedicated | ESS thread pool |
| Data coverage | Broadest (any data model) | Limited to published VOs | REST-enabled objects | Import templates only |
| Oracle support status | Unsupported for extraction | Fully supported | Fully supported | Fully supported |
| Third-party connectors | Fivetran, Celonis, OCI DI | OCI DI, most iPaaS | Most iPaaS | Limited |

Important Caveats

Related Units