Distress Dossier Template

Type: Execution Recipe · Confidence: 0.85 · Sources: 7 · Verified: 2026-03-30

Purpose

This recipe generates a personalized, evidence-based dossier for retailers showing operational distress signals — inventory write-downs, supply chain disruption, workforce instability, or financial deterioration. The output is a 2-page document that names specific signals with dates, quantifies business impact using industry benchmarks, and offers 3 graduated action options — ready for delivery to the decision-maker identified during enrichment. [src1, src2]

Prerequisites

Constraints

Tool Selection Decision

Which path?
├── Output format = PDF AND review mode = auto-deliver
│   └── PATH A: Automated PDF — Claude API + WeasyPrint + email delivery
├── Output format = PDF AND review mode = human review
│   └── PATH B: Reviewed PDF — Claude API + WeasyPrint + review queue
├── Output format = HTML email AND review mode = auto-deliver
│   └── PATH C: Automated HTML — Claude API + HTML template + email API
└── Output format = Markdown AND review mode = human review
    └── PATH D: Draft Markdown — Claude API + Markdown output + review queue
| Path | Tools | Cost | Speed | Output Quality |
|---|---|---|---|---|
| A: Automated PDF | Claude API + WeasyPrint | $0.02-0.10/dossier | 2-5 min | High — requires confidence threshold (score 8+) |
| B: Reviewed PDF | Claude API + WeasyPrint | $0.02-0.10/dossier | 5-15 min | Excellent — human catches edge cases |
| C: Automated HTML | Claude API + HTML template | $0.02-0.08/dossier | 1-3 min | High — inline rendering, no attachment friction |
| D: Draft Markdown | Claude API | $0.02-0.05/dossier | 1-2 min | Good — fastest for review-heavy workflows |

Execution Flow

Step 1: Assemble Input Data

Duration: 1-2 minutes per dossier · Tool: Python/Node.js script

Collect all required inputs from the enrichment and detection pipeline outputs into a single structured context object for the LLM. Required fields: company name, domain, revenue, decision-maker name and title, at least 2 signals with source, date, and data point.

Verify: All required fields populated — company name, at least 1 decision-maker, at least 2 signals with source + date + data_point. · If failed: Return to the detection pipeline and lower the threshold, or flag the company for monitoring rather than outreach.
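The Step 1 gate can be sketched in a few lines of Python. Field names mirror the personalization schema later in this recipe, but the exact shape of the enrichment output is an assumption:

```python
REQUIRED_SIGNAL_FIELDS = {"source", "date", "data_point"}

def validate_context(ctx: dict) -> list[str]:
    """Return a list of problems; an empty list means the dossier can be generated."""
    problems = []
    for field in ("company_name", "domain", "revenue"):
        if not ctx.get(field):
            problems.append(f"missing {field}")
    if not (ctx.get("decision_maker_name") and ctx.get("decision_maker_title")):
        problems.append("missing decision-maker")
    # A signal counts only if source, date, and data_point are all present.
    complete = [s for s in ctx.get("signals", []) if REQUIRED_SIGNAL_FIELDS <= s.keys()]
    if len(complete) < 2:
        problems.append("fewer than 2 complete signals; return to detection or flag for monitoring")
    return problems
```

Running this before any LLM call keeps malformed companies out of the expensive steps entirely.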

Step 2: Generate Executive Summary

Duration: 1-2 minutes · Tool: Claude/GPT API

Generate a 1-paragraph executive summary that names specific signals with dates and hooks the reader with quantified impact. Structure: Name 2-3 specific signals with dates → state what the pattern typically costs using industry benchmarks → close with a quantified impact statement.

Example output: “In Q3 2025, Acme Retail Corp disclosed $45M in inventory write-downs with days-inventory-outstanding rising from 95 to 132. In the following 30 days, 6 urgent supply chain roles appeared on LinkedIn and Indeed — 3x the normal hiring rate. Glassdoor supply chain team ratings dropped 0.8 points in Q4. Among the 127 retailers McKinsey tracked with this compound pattern, 12-18% margin erosion followed within 2 quarters.”

Verify: Summary contains at least 2 specific data points with dates. No promotional language detected. · If failed: Regenerate with stricter system prompt constraints.
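Part of the Step 2 check can be mechanized. The phrase lists below are illustrative starting points, not the full tone policy:

```python
import re

# Phrases drawn from the tone-compliance benchmark; extend as needed.
PROMO_PATTERNS = [r"\bwe can help\b", r"\bour (solution|firm|team)\b", r"\bschedule a (demo|call)\b"]
# A "dated data point" here is a quarter-year ("Q3 2025") or ISO date ("2025-11-15").
DATED_POINT = re.compile(r"\b(Q[1-4] 20\d{2}|20\d{2}-\d{2}-\d{2})\b")

def summary_passes(summary: str) -> bool:
    """Step 2 gate: at least 2 dated data points and no promotional phrasing."""
    if any(re.search(p, summary, re.IGNORECASE) for p in PROMO_PATTERNS):
        return False
    return len(DATED_POINT.findall(summary)) >= 2
```

A failed check triggers regeneration with a stricter system prompt rather than manual editing, so the fix is reproducible.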

Step 3: Generate Evidence Pack

Duration: 2-3 minutes · Tool: Claude/GPT API

Generate one evidence entry per signal with 5 fields: source (exact document/page), date, data point (specific number), context (1-2 sentence conservative interpretation), and verification path (how to independently confirm).

Verify: Each signal entry has all 5 fields populated. Verification paths are actionable (e.g., “Search SEC EDGAR for CIK 0001234567, Form 10-Q, filed 2025-11-15”). · If failed: Remove any signal lacking a verification path — it was likely inferred rather than observed.
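The 5-field completeness rule, including the "drop anything without a verification path" recovery, is a simple filter. Field names match the step description; the entry shape is an assumption:

```python
EVIDENCE_FIELDS = ("source", "date", "data_point", "context", "verification_path")

def filter_evidence(entries: list[dict]) -> list[dict]:
    """Keep only entries with all 5 fields populated. An empty or missing
    verification_path usually means the signal was inferred, not observed."""
    return [e for e in entries if all(e.get(f) for f in EVIDENCE_FIELDS)]
```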

Step 4: Generate Impact Analysis

Duration: 1-2 minutes · Tool: Claude/GPT API

Quantify business impact using industry benchmarks with full citation. Show the math: “At $380M revenue, 12-18% margin erosion = $45.6M-$68.4M annual impact.” State time horizon explicitly and include confidence qualifier. [src1, src6]

Available benchmarks:

Verify: At least 1 dollar-denominated impact figure derived from company revenue + benchmark range. Full citation present. · If failed: If revenue data missing, state benchmark as percentage with note.
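The worked impact math generalizes to one function; the benchmark percentages are inputs with their own citation, not constants:

```python
def impact_range(revenue: float, low_pct: float, high_pct: float) -> tuple[float, float]:
    """Dollar-denominated impact from company revenue and a benchmark percentage range."""
    return revenue * low_pct / 100, revenue * high_pct / 100

# At $380M revenue and the 12-18% benchmark: $45.6M-$68.4M, matching the worked example.
low, high = impact_range(380_000_000, 12, 18)
```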

Step 5: Generate Recommended Actions

Duration: 1 minute · Tool: Claude/GPT API

Generate 3 graduated options:

Verify: All 3 options present. Option 1 is genuinely no-cost. Option 3 cites a benchmark. No urgency language detected. · If failed: Strip urgency language and regenerate Option 3 only.
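The mechanical parts of the Step 5 check can be scripted; whether Option 1 is genuinely no-cost stays a human or LLM judgment. Both regexes below are illustrative sketches, not exhaustive policies:

```python
import re

URGENCY = re.compile(r"\b(act now|limited time|don't wait|urgent(ly)?|before it'?s too late)\b", re.I)
# Crude proxy for a benchmark citation: a parenthetical containing a year or an n-count.
BENCHMARK_CITE = re.compile(r"\(.*\b(20\d{2}|n\s*=\s*\d+)\b.*\)")

def options_pass(options: list[str]) -> bool:
    """Step 5 gate: exactly 3 options, no urgency language, last option cites a benchmark."""
    if len(options) != 3:
        return False
    if any(URGENCY.search(o) for o in options):
        return False
    return bool(BENCHMARK_CITE.search(options[2]))
```

On failure, the recipe regenerates Option 3 only, so the other two options are never churned unnecessarily.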

Step 6: Assemble and Format Dossier

Duration: 1-3 minutes · Tool: PDF generation (WeasyPrint/Puppeteer) or HTML template

Assemble generated sections into the final dossier format: Executive Summary + Evidence Pack on page 1, Impact Analysis + Recommended Actions on page 2, optional appendix with methodology note and benchmark references.

Output files:

Verify: PDF renders at 2 pages or fewer. All personalization fields populated. All source citations present. · If failed: Reduce evidence pack to top 2 signals and regenerate.
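A sketch of PATH A/B assembly, assuming WeasyPrint for the PDF step. The HTML template is illustrative, not the production layout; one useful property is that `Template.substitute` raises `KeyError` on any unfilled personalization field, which doubles as the "all fields populated" check:

```python
from string import Template

# Illustrative 2-page layout; page-break CSS pushes Impact Analysis to page 2.
PAGE_CSS = "@page { size: letter; margin: 2cm } h1 { page-break-before: always }"
DOSSIER_HTML = Template("""<html><body>
<h2>$company_name — Distress Dossier</h2>
<p>Prepared for $decision_maker_name, $decision_maker_title</p>
<h3>Executive Summary</h3><p>$executive_summary</p>
<h3>Evidence Pack</h3>$evidence_html
<h1>Impact Analysis</h1><p>$impact_analysis</p>
<h3>Recommended Actions</h3>$actions_html
</body></html>""")

def render_dossier(sections: dict) -> str:
    """Raises KeyError if any personalization field is missing (deliberate fail-fast)."""
    html = DOSSIER_HTML.substitute(sections)
    # PDF step (WeasyPrint installed separately):
    # from weasyprint import HTML, CSS
    # HTML(string=html).write_pdf("dossier.pdf", stylesheets=[CSS(string=PAGE_CSS)])
    return html
```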

Output Schema

{
  "output_type": "distress_dossier",
  "format": "PDF + JSON",
  "sections": [
    {"name": "executive_summary", "type": "string", "required": true},
    {"name": "evidence_pack", "type": "array", "required": true},
    {"name": "impact_analysis", "type": "string", "required": true},
    {"name": "recommended_actions", "type": "array", "required": true},
    {"name": "appendix", "type": "object", "required": false}
  ],
  "personalization_fields": [
    {"name": "company_name", "type": "string", "required": true},
    {"name": "decision_maker_name", "type": "string", "required": true},
    {"name": "decision_maker_title", "type": "string", "required": true},
    {"name": "signal_dates", "type": "array", "required": true},
    {"name": "signal_data_points", "type": "array", "required": true}
  ],
  "expected_page_count": "2 + optional appendix",
  "sort_order": "sections in fixed order: summary, evidence, impact, actions",
  "deduplication_key": "company_name + generated_at"
}
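The schema's `deduplication_key` can be computed as follows; treating `generated_at` as a calendar date (one dossier per company per day) is an assumption:

```python
from datetime import date

def dedup_key(company_name: str, generated_at: date) -> str:
    """Key from the schema's deduplication_key field: company_name + generated_at."""
    return f"{company_name.strip().lower()}|{generated_at.isoformat()}"
```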

Quality Benchmarks

| Quality Metric | Minimum Acceptable | Good | Excellent |
|---|---|---|---|
| Falsifiability rate | > 80% of claims have verifiable source | > 90% | 100% |
| Personalization completeness | All required fields populated | + industry-specific context | + competitor references |
| Benchmark citation accuracy | 1 cited benchmark with source | 2-3 benchmarks with n-counts | All benchmarks with full citation |
| Tone compliance | No “we/our/schedule” detected | + no superlatives or urgency | + reads as independent analyst report |
| Page length | 2 pages or fewer | 1.5-2 pages | 1.5 pages + focused appendix |
| Signal-to-noise ratio | 2+ signals with evidence | 3+ signals, all falsifiable | 3+ signals + trend analysis |

If below minimum: Regenerate the failing section with stricter system prompt constraints. If falsifiability rate is below 80%, audit each claim against the input data and remove any that lack a specific source.
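The falsifiability audit can be partially automated by diffing numbers in the draft against the input data. This crude regex sketch flags candidates for removal rather than rendering a final verdict:

```python
import re

# Matches dollar figures, percentages, and bare numbers, e.g. "$45M" -> "$45", "132".
NUM = re.compile(r"\$?\d[\d,.]*%?")

def unfalsifiable_claims(draft: str, input_blob: str) -> list[str]:
    """Numbers in the draft that never appear in the input data: likely hallucinated."""
    in_input = set(NUM.findall(input_blob))
    return [n for n in NUM.findall(draft) if n not in in_input]
```

Any flagged number is then traced to a source by hand; if no source exists, the claim is removed before regeneration.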

Error Handling

| Error | Likely Cause | Recovery Action |
|---|---|---|
| LLM generates promotional language | System prompt not restrictive enough | Add explicit negative examples: “NEVER write: 'we can help', 'our solution', 'schedule a demo'” |
| Impact analysis lacks dollar figures | Revenue data missing from enrichment | Use percentage-only impact with note; flag company for manual revenue lookup |
| Evidence pack has unverifiable claims | LLM hallucinated data points not in input | Cross-check every data point against input JSON; remove unverifiable claims |
| PDF exceeds 2 pages | Too many signals or verbose generation | Limit evidence pack to top 2 signals by composite score; reduce executive summary to 3 sentences |
| Personalization fields contain placeholders | Enrichment data incomplete | Do not generate dossier — return to enrichment pipeline to fill gaps |
| LLM refuses impact projections | Safety filter triggered by financial predictions | Reframe as “industry benchmark range” rather than “prediction”; cite study explicitly |
| Benchmark data is outdated | Source study older than 2 years | Flag as “based on {year} data” and note recalibration recommended |

Cost Breakdown

| Component | Per Dossier | 50 Dossiers/Month | 500 Dossiers/Month |
|---|---|---|---|
| LLM API (Claude Sonnet) | $0.02-0.05 | $1-2.50 | $10-25 |
| LLM API (Claude Opus) | $0.05-0.10 | $2.50-5 | $25-50 |
| PDF generation (WeasyPrint) | $0 | $0 | $0 |
| Email delivery (SendGrid) | $0 | $0 (up to 100/day) | $15/mo |
| Total (Sonnet + free tools) | $0.02-0.05 | $1-2.50 | $10-25 |

Anti-Patterns

Wrong: Leading with product pitch

Dossier opens with “Our firm specializes in retail supply chain optimization” or closes with “We'd love to schedule a demo.” Result: recipient classifies the dossier as spam and never reads the evidence. Diagnostic credibility is destroyed in the first sentence. [src2]

Correct: Lead with their data, not your capabilities

Open with specific signals the recipient will recognize (“Your Q3 10-Q disclosed $45M in inventory write-downs”). The evidence sells the conversation — not the sender.

Wrong: Including speculative or unfalsifiable claims

Dossier states “Your supply chain is likely underperforming” or “Most retailers in your position struggle with inventory.” These claims cannot be verified and sound like generic sales copy. [src1]

Correct: Every claim cites source, date, and data point

“Your DIO increased from 95 to 132 days between Q2 and Q3 2025 (SEC 10-Q, filed 2025-11-15, page 23).” The recipient can verify this in 2 minutes.

Wrong: Overloading with signals

Including 6-8 signals to appear thorough. Result: the dossier exceeds 2 pages, buries the key insight, and the executive stops reading after page 1. [src3]

Correct: Top 2-3 signals that form a compound pattern

Select the 2-3 signals that together tell a coherent story. Inventory write-down + urgent supply chain hiring + declining Glassdoor scores form a narrative. Adding unrelated signals dilutes the message.

Wrong: Generic benchmark without attribution

“Companies in your situation typically lose 15% of their margins.” No source, no sample size, no year. Indistinguishable from a hallucinated claim. [src4]

Correct: Full benchmark citation

“Among 127 retailers tracked by McKinsey (State of Fashion 2025), those showing compound inventory + workforce distress signals experienced 12-18% margin erosion within 2 quarters.”

When This Matters

Use this recipe when the signal detection pipeline has identified a retailer showing inventory, supply chain, workforce, or financial distress signals and the enrichment pipeline has resolved the signals to a specific company with decision-maker contacts. This is the asset generation step between enrichment (upstream) and outreach delivery (downstream). Without the dossier, outreach lacks the evidence-based positioning that differentiates signal-driven selling from cold outreach.

Related Units