This recipe produces a structured competitor compliance scorecard assessing each competitor across six dimensions: proof maturity level, adaptation speed, decoupling risk, regulatory arbitrage exploitation, SupTech threat exposure, and catch-up time. The output enables the client to identify competitors' compliance gaps that can be exploited as competitive moats. [src1, src2]
Which path?
├── Competitors are public companies
│   └── PATH A: Filing-Based — SEC EDGAR, annual reports, ESG disclosures
├── Competitors are private, limited public data
│   └── PATH B: Observable Evidence — certifications, product features, job postings
├── Industry analyst reports available
│   └── PATH C: Analyst-Augmented — reports + public data + contacts
└── AI-augmented research
    └── PATH D: AI Research + Manual Validation
| Path | Tools | Cost | Speed | Output Quality |
|---|---|---|---|---|
| A: Filing-Based | SEC EDGAR, annual reports | $0-$500 | 5-6 days | Excellent |
| B: Observable Evidence | Certifications, product analysis | $0-$200 | 5-7 days | Good |
| C: Analyst-Augmented | Gartner/Forrester + public data | $500-$2K | 4-5 days | Excellent |
| D: AI + Manual | LLM research + expert validation | $200-$500 | 3-5 days | Good |
Duration: 2-3 days · Tool: Public filing databases + web research
Collect compliance-relevant data: regulatory filings, certifications, product features, job postings, press releases, enforcement history.
Verify: Data collected for 80%+ of competitors across 4+ data categories. · If failed: Use industry averages as proxy, flag as estimated.
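The data-collection gate above (80%+ of competitors covered across 4+ categories) can be sketched as a simple check. This is a minimal illustration, assuming collected data is tracked as a mapping from competitor to the set of data categories obtained; the category names and company names are placeholders.

```python
# Verification gate for the data-collection step: data collected for
# 80%+ of competitors across 4+ of the six data categories.
# Category names below are illustrative, not a fixed taxonomy.
CATEGORIES = {"filings", "certifications", "product_features",
              "job_postings", "press_releases", "enforcement_history"}

def coverage_ok(collected: dict[str, set[str]], total_competitors: int,
                min_share: float = 0.8, min_categories: int = 4) -> bool:
    """True when enough competitors have enough data categories."""
    covered = sum(1 for cats in collected.values()
                  if len(cats & CATEGORIES) >= min_categories)
    return covered / total_competitors >= min_share

data = {
    "CompA": {"filings", "certifications", "product_features", "job_postings"},
    "CompB": {"filings", "press_releases"},
}
print(coverage_ok(data, total_competitors=2))  # CompB falls short -> False
```

A failed check triggers the fallback in the step: substitute industry averages and flag those scores as estimated.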
Duration: 1 day · Tool: Scoring framework + spreadsheet
Score each competitor on a Level 1-5 maturity scale: 1 Absent, 2 Reactive, 3 Systematic, 4 Integrated, 5 Strategic. Score the client on the same scale. [src2]
Verify: All competitors and client scored with evidence citations. · If failed: Assign ranges for insufficient evidence.
Duration: 1 day · Tool: Timeline analysis + public records
Measure response time to the last 2-3 major regulatory changes. Categorize: Fast (< 6 months), Standard (6-18 months), Slow (> 18 months), Unknown. [src3]
Verify: Speed calculated for 60%+ of competitors with 2+ data points. · If failed: Report single observation with low confidence.
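The speed buckets and the two-data-point verification rule above can be sketched as one function. A minimal illustration, assuming response times are measured in months; averaging multiple observations is one reasonable aggregation choice, not a prescribed one.

```python
# Adaptation-speed buckets: Fast (< 6 months), Standard (6-18),
# Slow (> 18), Unknown (fewer than the required 2 data points).
def speed_category(response_months: list[float], min_points: int = 2) -> str:
    if len(response_months) < min_points:
        return "Unknown"  # report the single observation with low confidence
    avg = sum(response_months) / len(response_months)
    if avg < 6:
        return "Fast"
    if avg <= 18:
        return "Standard"
    return "Slow"

print(speed_category([4, 5]))    # Fast
print(speed_category([20, 26]))  # Slow
print(speed_category([9]))       # Unknown: only one observation
```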
Duration: 0.5-1 day · Tool: Gap analysis
Assess gap between formal compliance claims and operational reality. Compare stated policies vs. observed behavior (data collection, product features). [src1]
Verify: Decoupling risk assessed for all competitors with confidence levels. · If failed: Default to medium risk, flag for investigation.
Duration: 0.5-1 day · Tool: Analysis + regulatory intelligence
Identify regulatory arbitrage strategies competitors exploit. Assess SupTech sophistication per jurisdiction (high/medium/low threat). [src3, src4]
Verify: Arbitrage windows identified, SupTech levels assigned. · If failed: Use enforcement frequency as proxy.
Duration: 1 day · Tool: Financial modeling + capability assessment
Calculate per-dimension catch-up time: (maturity gap) x (average time per level) x (complexity multiplier). Present as optimistic/baseline/pessimistic ranges. [src2]
Verify: Catch-up ranges calculated for all advantage dimensions. · If failed: Use industry averages for similar-size companies.
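The catch-up formula above, (maturity gap) x (average time per level) x (complexity multiplier), presented as three scenarios, can be sketched as follows. The 9-months-per-level average, the 1.2 complexity multiplier, and the scenario factors are illustrative assumptions, not industry benchmarks; list whichever values you use so the client can adjust them.

```python
# Catch-up time per dimension, reported as optimistic/baseline/pessimistic.
# Scenario factors are assumptions for illustration.
SCENARIOS = {"optimistic": 0.75, "baseline": 1.0, "pessimistic": 1.5}

def catch_up_months(client_level: int, competitor_level: int,
                    months_per_level: float = 9.0,
                    complexity: float = 1.2) -> dict[str, float]:
    """Months a competitor needs to close its maturity gap to the client."""
    gap = max(client_level - competitor_level, 0)  # no gap -> zero catch-up time
    base = gap * months_per_level * complexity
    return {name: round(base * factor, 1) for name, factor in SCENARIOS.items()}

print(catch_up_months(client_level=4, competitor_level=2))
```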
{
"output_type": "competitor_compliance_scorecard",
"format": "spreadsheet + PDF + JSON",
"sections": [
{"name": "competitor_scores", "type": "array", "description": "6-dimension scores with evidence"},
{"name": "relative_positioning_map", "type": "object", "description": "Client vs. competitor positioning"},
{"name": "catch_up_analysis", "type": "array", "description": "Per-competitor catch-up time ranges"},
{"name": "arbitrage_windows", "type": "array", "description": "Active arbitrage with closure estimates"},
{"name": "data_quality_flags", "type": "array", "description": "Per-competitor confidence indicators"}
]
}
| Quality Metric | Minimum Acceptable | Good | Excellent |
|---|---|---|---|
| Competitor coverage | > 60% | > 80% | 100% |
| Dimensions per competitor | > 4 of 6 | > 5 of 6 | All 6 |
| Evidence citations per score | > 1 | > 2 | > 3 |
| Catch-up time completeness | > 50% | > 75% | 100% |
| Data quality confidence | > 60% high/medium | > 75% | > 90% |
If below minimum: Extend data collection by 2-3 days, narrow competitor list, or supplement with analyst reports.
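The minimum-acceptable column of the quality table can be applied as a gate before delivery. A minimal sketch; metric names mirror the table rows (the data-quality-confidence row works the same way and is omitted for brevity), and thresholds are exclusive per the table's ">" notation.

```python
# Minimum-acceptable quality gate from the table above.
MINIMUMS = {
    "competitor_coverage": 0.60,        # > 60% of competitors covered
    "dimensions_per_competitor": 4,     # > 4 of 6 dimensions scored
    "citations_per_score": 1,           # > 1 evidence citation per score
    "catch_up_completeness": 0.50,      # > 50% of advantage dimensions modeled
}

def below_minimum(metrics: dict[str, float]) -> list[str]:
    """Return metrics failing the gate; non-empty means extend collection."""
    return [name for name, floor in MINIMUMS.items()
            if metrics.get(name, 0) <= floor]

print(below_minimum({"competitor_coverage": 0.85,
                     "dimensions_per_competitor": 5,
                     "citations_per_score": 2,
                     "catch_up_completeness": 0.4}))
```

A non-empty result maps directly to the recovery options above: extend data collection, narrow the competitor list, or supplement with analyst reports.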
| Error | Likely Cause | Recovery Action |
|---|---|---|
| No public data for competitor | Private company or early stage | Use proxies: job postings, product features, reviews |
| Conflicting evidence | Different sources disagree | Weight by reliability, document conflict, assign medium confidence |
| Client scores lower than competitors | Genuine compliance gaps | Report honestly — gaps are as valuable as advantages |
| Insufficient adaptation speed data | New competitor or few regulatory changes | Use maturity level as proxy for readiness |
| Decoupling assessment challenged | Client disagrees with scoring | Present evidence basis, adjust with client information |
| Component | Focused ($2K-$4K) | Standard ($4K-$7K) | Comprehensive ($7K-$10K) |
|---|---|---|---|
| Data collection | $1K-$1.5K | $1.5K-$3K | $3K-$4K |
| Scoring and analysis | $500-$1K | $1K-$2K | $2K-$3K |
| Catch-up time modeling | $500-$1K | $1K-$1.5K | $1.5K-$2K |
| Report and visualization | $0-$500 | $500-$1K | $1K-$1.5K |
| Total | $2K-$4K | $4K-$7K | $7K-$10K |
Assigning high maturity because competitor is a large, well-known company. [src1]
Every level assignment must cite specific evidence. No evidence = unknown.
Accepting competitor compliance claims at face value without checking operational alignment. [src1]
Use product-level evidence to validate or challenge formal claims.
Stating that a competitor needs exactly 18 months to catch up. Result: false precision drives bad decisions.
Use optimistic/baseline/pessimistic scenarios. List every assumption so client can adjust.
Use when an agent needs to assess competitor compliance posture relative to the client. Requires regulatory landscape map as input. Output feeds into constraint weaponization workshop and compliance moat scorecard.