Competitor Compliance Benchmarking

Type: Execution Recipe · Confidence: 0.85 · Sources: 5 · Verified: 2026-03-30

Purpose

This recipe produces a structured competitor compliance scorecard assessing each competitor across six dimensions: proof maturity level, adaptation speed, decoupling risk, regulatory arbitrage exploitation, SupTech threat exposure, and catch-up time. The output enables the client to identify compliance gaps they can exploit as competitive moats. [src1, src2]

Prerequisites

Constraints

Tool Selection Decision

Which path?
├── Competitors are public companies
│   └── PATH A: Filing-Based — SEC EDGAR, annual reports, ESG disclosures
├── Competitors are private, limited public data
│   └── PATH B: Observable Evidence — certifications, product features, job postings
├── Industry analyst reports available
│   └── PATH C: Analyst-Augmented — reports + public data + contacts
└── AI-augmented research
    └── PATH D: AI Research + Manual Validation
| Path | Tools | Cost | Speed | Output Quality |
|---|---|---|---|---|
| A: Filing-Based | SEC EDGAR, annual reports | $0-$500 | 5-6 days | Excellent |
| B: Observable Evidence | Certifications, product analysis | $0-$200 | 5-7 days | Good |
| C: Analyst-Augmented | Gartner/Forrester + public data | $500-$2K | 4-5 days | Excellent |
| D: AI + Manual | LLM research + expert validation | $200-$500 | 3-5 days | Good |

Execution Flow

Step 1: Competitor Data Collection

Duration: 2-3 days · Tool: Public filing databases + web research

Collect compliance-relevant data: regulatory filings, certifications, product features, job postings, press releases, enforcement history.

Verify: Data collected for 80%+ of competitors across 4+ data categories. · If failed: Use industry averages as proxy, flag as estimated.
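The Step 1 verification gate can be sketched as a simple coverage check. The data structure and function name here are illustrative assumptions, not part of the recipe's tooling; the thresholds are the recipe's own (80% of competitors, 4+ categories).

```python
# Verify Step 1: data collected for 80%+ of competitors across 4+ categories.
# Hypothetical input shape: {competitor: set of data categories collected}.

def step1_passes(collected: dict[str, set[str]], min_categories: int = 4,
                 min_coverage: float = 0.80) -> bool:
    """Return True when enough competitors have enough data categories."""
    if not collected:
        return False
    covered = sum(1 for cats in collected.values() if len(cats) >= min_categories)
    return covered / len(collected) >= min_coverage

data = {
    "CompA": {"filings", "certifications", "job_postings", "press"},
    "CompB": {"filings", "certifications", "job_postings", "enforcement"},
    "CompC": {"filings"},  # thin coverage -> flag as estimated
}
print(step1_passes(data))  # 2 of 3 competitors covered (~67%) -> False
```

Competitors below the category threshold fall to the recipe's fallback: score them from industry averages and flag the row as estimated.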

Step 2: Proof Maturity Scoring (Dimension 1)

Duration: 1 day · Tool: Scoring framework + spreadsheet

Score each competitor Level 1-5: Absent, Reactive, Systematic, Integrated, Strategic. Score client on same scale. [src2]

Verify: All competitors and client scored with evidence citations. · If failed: Assign ranges for insufficient evidence.

Step 3: Adaptation Speed Measurement (Dimension 2)

Duration: 1 day · Tool: Timeline analysis + public records

Measure response time to the last 2-3 major regulatory changes. Categorize: Fast (< 6 months), Standard (6-18 months), Slow (> 18 months), Unknown. [src3]

Verify: Speed calculated for 60%+ of competitors with 2+ data points. · If failed: Report single observation with low confidence.
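The categorization above can be sketched as follows. Averaging observed response times and the confidence labels are my assumptions; the month thresholds and the single-observation fallback are the recipe's.

```python
# Categorize regulatory adaptation speed from observed response times (months).
# Thresholds per Step 3: Fast < 6, Standard 6-18, Slow > 18.

def adaptation_speed(response_months: list[float]) -> tuple[str, str]:
    """Return (category, confidence); a single data point is low confidence."""
    if not response_months:
        return ("Unknown", "none")
    avg = sum(response_months) / len(response_months)
    confidence = "low" if len(response_months) < 2 else "normal"
    if avg < 6:
        return ("Fast", confidence)
    if avg <= 18:
        return ("Standard", confidence)
    return ("Slow", confidence)

print(adaptation_speed([4, 5]))       # ('Fast', 'normal')
print(adaptation_speed([8, 14, 20]))  # ('Standard', 'normal')
print(adaptation_speed([24]))         # single observation -> low confidence
```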

Step 4: Decoupling Risk Assessment (Dimension 3)

Duration: 0.5-1 day · Tool: Gap analysis

Assess gap between formal compliance claims and operational reality. Compare stated policies vs. observed behavior (data collection, product features). [src1]

Verify: Decoupling risk assessed for all competitors with confidence levels. · If failed: Default to medium risk, flag for investigation.

Step 5: Arbitrage Window and SupTech Assessment (Dimensions 4-5)

Duration: 0.5-1 day · Tool: Analysis + regulatory intelligence

Identify regulatory arbitrage strategies competitors exploit. Assess SupTech sophistication per jurisdiction (high/medium/low threat). [src3, src4]

Verify: Arbitrage windows identified, SupTech levels assigned. · If failed: Use enforcement frequency as proxy.

Step 6: Catch-Up Time Calculation (Dimension 6)

Duration: 1 day · Tool: Financial modeling + capability assessment

Calculate per-dimension catch-up time: (maturity gap) x (average time per level) x (complexity multiplier). Present as optimistic/baseline/pessimistic ranges. [src2]

Verify: Catch-up ranges calculated for all advantage dimensions. · If failed: Use industry averages for similar-size companies.
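The Step 6 formula can be sketched directly. The months-per-level figure, complexity multiplier, and the 0.7/1.5 scenario factors are illustrative assumptions to be replaced with client-specific values; the formula shape and the three-scenario output are the recipe's.

```python
# Catch-up time per dimension: (maturity gap) x (avg time per level) x (complexity).
# months_per_level, complexity, and the scenario factors are placeholder assumptions.

def catch_up_range(client_level: int, competitor_level: int,
                   months_per_level: float = 9.0,
                   complexity: float = 1.2) -> dict[str, float]:
    """Return optimistic/baseline/pessimistic catch-up months for one dimension."""
    gap = max(client_level - competitor_level, 0)  # only dimensions where client leads
    baseline = gap * months_per_level * complexity
    return {
        "optimistic": round(baseline * 0.7, 1),
        "baseline": round(baseline, 1),
        "pessimistic": round(baseline * 1.5, 1),
    }

# Client at Level 4 (Integrated), competitor at Level 2 (Reactive):
print(catch_up_range(4, 2))  # {'optimistic': 15.1, 'baseline': 21.6, 'pessimistic': 32.4}
```

Presenting the three scenarios with documented assumptions is what keeps the deliverable clear of the single-number false-precision anti-pattern below.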

Output Schema

{
  "output_type": "competitor_compliance_scorecard",
  "format": "spreadsheet + PDF + JSON",
  "sections": [
    {"name": "competitor_scores", "type": "array", "description": "6-dimension scores with evidence"},
    {"name": "relative_positioning_map", "type": "object", "description": "Client vs. competitor positioning"},
    {"name": "catch_up_analysis", "type": "array", "description": "Per-competitor catch-up time ranges"},
    {"name": "arbitrage_windows", "type": "array", "description": "Active arbitrage with closure estimates"},
    {"name": "data_quality_flags", "type": "array", "description": "Per-competitor confidence indicators"}
  ]
}
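A minimal conformance check for the JSON deliverable, using only the section names declared in the schema above (the function name and error strings are illustrative):

```python
import json

# Section names from the output schema.
REQUIRED_SECTIONS = {
    "competitor_scores", "relative_positioning_map", "catch_up_analysis",
    "arbitrage_windows", "data_quality_flags",
}

def validate_scorecard(doc: dict) -> list[str]:
    """Return a list of problems; an empty list means the document conforms."""
    problems = []
    if doc.get("output_type") != "competitor_compliance_scorecard":
        problems.append("wrong output_type")
    present = {s.get("name") for s in doc.get("sections", [])}
    for missing in sorted(REQUIRED_SECTIONS - present):
        problems.append(f"missing section: {missing}")
    return problems

doc = json.loads('{"output_type": "competitor_compliance_scorecard", "sections": []}')
print(validate_scorecard(doc))  # five missing-section problems
```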

Quality Benchmarks

| Quality Metric | Minimum Acceptable | Good | Excellent |
|---|---|---|---|
| Competitor coverage | > 60% | > 80% | 100% |
| Dimensions per competitor | > 4 of 6 | > 5 of 6 | All 6 |
| Evidence citations per score | > 1 | > 2 | > 3 |
| Catch-up time completeness | > 50% | > 75% | 100% |
| Data quality confidence | > 60% high/medium | > 75% | > 90% |

If below minimum: Extend data collection by 2-3 days, narrow competitor list, or supplement with analyst reports.
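The tiering can be applied mechanically per metric; this sketch covers the competitor-coverage row, and the other metrics follow the same shape with their own thresholds (the function name is illustrative).

```python
# Grade the "Competitor coverage" metric against the benchmark tiers above.

def grade_coverage(coverage: float) -> str:
    """Map a coverage fraction to its benchmark tier."""
    if coverage >= 1.0:
        return "Excellent"
    if coverage > 0.80:
        return "Good"
    if coverage > 0.60:
        return "Minimum Acceptable"
    # Below minimum: extend collection, narrow list, or add analyst reports.
    return "Below minimum"

print(grade_coverage(0.85))  # Good
print(grade_coverage(0.55))  # Below minimum
```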

Error Handling

| Error | Likely Cause | Recovery Action |
|---|---|---|
| No public data for competitor | Private company or early stage | Use proxies: job postings, product features, reviews |
| Conflicting evidence | Different sources disagree | Weight by reliability, document conflict, assign medium confidence |
| Client scores lower than competitors | Genuine compliance gaps | Report honestly: gaps are as valuable as advantages |
| Insufficient adaptation speed data | New competitor or few regulatory changes | Use maturity level as proxy for readiness |
| Decoupling assessment challenged | Client disagrees with scoring | Present evidence basis, adjust with client information |

Cost Breakdown

| Component | Focused ($2K-$4K) | Standard ($4K-$7K) | Comprehensive ($7K-$10K) |
|---|---|---|---|
| Data collection | $1K-$1.5K | $1.5K-$3K | $3K-$4K |
| Scoring and analysis | $500-$1K | $1K-$2K | $2K-$3K |
| Catch-up time modeling | $500-$1K | $1K-$1.5K | $1.5K-$2K |
| Report and visualization | $0-$500 | $500-$1K | $1K-$1.5K |
| Total | $2K-$4K | $4K-$7K | $7K-$10K |

Anti-Patterns

Wrong: Scoring based on assumptions

Assigning high maturity because competitor is a large, well-known company. [src1]

Correct: Require observable evidence for every score

Every level assignment must cite specific evidence. No evidence = unknown.

Wrong: Ignoring decoupling risk

Accepting competitor compliance claims at face value without checking operational alignment. [src1]

Correct: Assess gap between claims and operations

Use product-level evidence to validate or challenge formal claims.

Wrong: Single-number catch-up time

Stating competitors need exactly 18 months. Result: false precision drives bad decisions.

Correct: Present ranges with documented assumptions

Use optimistic/baseline/pessimistic scenarios. List every assumption so client can adjust.

When This Matters

Use when an agent needs to assess competitor compliance posture relative to the client. Requires regulatory landscape map as input. Output feeds into constraint weaponization workshop and compliance moat scorecard.

Related Units