This recipe executes the full Signal Stack consulting lifecycle, from identifying raw intent signals in a target industry to building an automated intelligence pipeline that converts those signals into qualified sales dossiers. The end state is a self-reinforcing flywheel: each delivered dossier generates feedback data that improves signal classification accuracy. [src1, src2]
Which path?

```
├── Client has existing sales data + CRM
│   └── PATH A: Data-enriched pipeline
├── Client has target vertical but no CRM data
│   └── PATH B: Signal-first pipeline
├── Client needs multiple verticals simultaneously
│   └── PATH C: Platform-first (only if 3+ paying customers exist)
└── Client unsure about vertical selection
    └── PATH D: Discovery engagement
```
| Path | Tools | Cost | Speed | Output Quality |
|---|---|---|---|---|
| A: Data-enriched | CRM + Clearbit/Apollo + LLM | $1K-$3K/month | 8-12 weeks | Excellent |
| B: Signal-first | Public data + LLM + manual | $200-$500/month | 10-16 weeks | Good |
| C: Platform-first | Reusable engine + config | $2K-$5K/month | 4-6 weeks/vertical | Excellent |
| D: Discovery | Research + workshops | $0-$500 | 3-4 weeks | Assessment only |
Duration: 5-10 days · Tool: Web research + structured scoring framework
Execute a full signal source audit for the target vertical. Inventory every available data source: regulatory databases, behavioral signals, visual signals, and unstructured media. Score each source on accessibility, cost, refresh rate, and signal-to-noise ratio. [src1]
Verify: Signal audit report with minimum 15 scored sources across 3+ categories. · If failed: Vertical may lack signal density — run viability scoring.
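The audit step above can be sketched as a small scoring framework. This is a minimal illustration, not a prescribed implementation: the 1-5 scales, the dimension weights, and the field names are assumptions; only the four scoring dimensions and the 15-source / 3-category verify gate come from the recipe.

```python
from dataclasses import dataclass

# Dimension weights are illustrative assumptions, not prescribed by the recipe.
WEIGHTS = {"accessibility": 0.3, "cost": 0.2, "refresh_rate": 0.2, "signal_to_noise": 0.3}

@dataclass
class SignalSource:
    name: str
    category: str          # e.g. "regulatory", "behavioral", "visual", "media"
    accessibility: int     # 1-5: how easy is programmatic access?
    cost: int              # 1-5: 5 = free, 1 = expensive
    refresh_rate: int      # 1-5: 5 = near real-time, 1 = annual
    signal_to_noise: int   # 1-5: proportion of entries that are true intent signals

    def score(self) -> float:
        # Weighted sum across the four audit dimensions.
        return sum(getattr(self, dim) * w for dim, w in WEIGHTS.items())

def audit_passes(sources: list[SignalSource],
                 min_sources: int = 15, min_categories: int = 3) -> bool:
    """Mirror the verify gate: 15+ scored sources across 3+ categories."""
    categories = {s.category for s in sources}
    return len(sources) >= min_sources and len(categories) >= min_categories
```

Ranking sources by `score()` then gives the priority order for the MVP pipeline build.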
Duration: 2 days (workshop) + 3 days (validation) · Tool: Workshop facilitation + live data testing
Day 1: domain expert interviews, data source deep-dives, trigger event brainstorming. Day 2: signal strength scoring, false positive threshold calibration, compound signal design, taxonomy validation against live data. [src5]
Verify: Taxonomy validated against 50+ real-world examples. False positive rate < 30%. · If failed: Add qualifying signals, tighten thresholds, revalidate.
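A weighted, compound-signal taxonomy of the kind the workshop produces might look like the sketch below. The signal names, weights, and threshold are hypothetical placeholders; the shape (weighted signals, a qualification threshold, and a false-positive check against human-labeled examples) follows the workshop outputs described above.

```python
# Hypothetical taxonomy fragment: weighted signals plus a qualification threshold.
TAXONOMY = {
    "permit_filed": 0.5,
    "job_posting_ops_role": 0.3,
    "new_location_announced": 0.4,
}
THRESHOLD = 0.6  # calibrated in the workshop against live data

def classify(observed: set[str]) -> bool:
    """A prospect qualifies when its compound signal score clears the threshold."""
    return sum(TAXONOMY.get(sig, 0.0) for sig in observed) >= THRESHOLD

def false_positive_rate(labeled: list[tuple[set[str], bool]]) -> float:
    """labeled: (observed signals, human-verified true intent).
    Run against 50+ real-world examples per the verify gate; target < 30%."""
    flagged = [(obs, truth) for obs, truth in labeled if classify(obs)]
    if not flagged:
        return 0.0
    return sum(1 for _, truth in flagged if not truth) / len(flagged)
```

If the measured rate exceeds the gate, the recovery is the one named above: add qualifying signals or raise the threshold, then revalidate.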
Duration: 2-4 weeks · Tool: Python + LLM API + enrichment APIs + cron
Build a minimal pipeline: a cron job pulling public data, LLM classification using the taxonomy, enrichment via Clearbit/Apollo, dossier generation as PDF, and email delivery with tracking. No UI required. [src2]
Verify: Pipeline producing 10+ dossiers/week with < 5% error rate. · If failed: When classification accuracy < 70%, return to the taxonomy step.
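The MVP's stage order can be sketched as a skeleton where each stage is a stub standing in for the real integration. Everything here is placeholder: the record shape, the stub return values, and the company name are invented for illustration; only the pull → classify → enrich → dossier → deliver sequence comes from the recipe.

```python
# Minimal pipeline skeleton. Each stub marks where a real integration goes.
def pull_public_data() -> list[dict]:
    # Real version: scrape/ingest the audited public sources on a cron schedule.
    return [{"company": "Acme Paving", "signals": ["permit_filed"]}]

def classify(record: dict) -> bool:
    # Real version: LLM classification against the validated taxonomy.
    return "permit_filed" in record["signals"]

def enrich(record: dict) -> dict:
    # Real version: firmographic enrichment via Clearbit/Apollo.
    record["firmographics"] = {"employees": None}
    return record

def build_dossier(record: dict) -> dict:
    # Real version: render to PDF and hand off to tracked email delivery.
    return {"subject": record["company"], "body": record}

def run_once() -> list[dict]:
    """One cron tick: pull, classify, enrich, assemble. Delivery omitted."""
    dossiers = []
    for rec in pull_public_data():
        if classify(rec):
            dossiers.append(build_dossier(enrich(rec)))
    return dossiers
```

Keeping each stage a plain function makes the later platform extraction (swapping hardcoded stubs for config-driven implementations) a mechanical refactor.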
Duration: 4 weeks · Tool: Pipeline + tracking + weekly iteration
Select 2-3 pilot customers with measurable baselines. Deliver 10-20 dossiers/week. Track conversion rates. Iterate the taxonomy weekly. A human validates the first 100 dossiers. [src1, src3]
Verify: 2x conversion improvement vs cold outreach after 4 weeks. · If failed: Diagnose signal quality vs dossier format.
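The pilot's verify gate is a ratio of two conversion rates. A minimal sketch, assuming "conversion" is counted as meetings booked per outreach sent (the recipe does not fix the unit, only the 2x gate):

```python
def conversion_lift(dossier_meetings: int, dossier_sent: int,
                    cold_meetings: int, cold_sent: int) -> float:
    """Ratio of dossier-driven conversion to the cold-outreach baseline."""
    dossier_rate = dossier_meetings / dossier_sent
    cold_rate = cold_meetings / cold_sent
    return dossier_rate / cold_rate

def pilot_passes(lift: float) -> bool:
    """Gate from the recipe: 2x improvement after the 4-week pilot."""
    return lift >= 2.0
```

A lift at or above 1x but below 2x points at dossier format; below 1x points at signal quality or audience, per the error table below.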
Duration: 3-4 weeks · Tool: Software architecture + refactoring
Hard gate: 3 paying customers required. Refactor into a reusable engine plus a vertical-specific config layer. Target: each subsequent vertical launches with < 50% of the vertical #1 effort. [src4]
Verify: Second vertical launched in < 50% of vertical #1 timeline. · If failed: Identify tightly coupled components, refactor before vertical #3.
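One way to picture the engine/config split: everything vertical-specific lives in a config object, and the engine consumes it generically. The field names and the example HVAC vertical are hypothetical; the design principle (new vertical = new config, not new engine code) is the recipe's.

```python
from dataclasses import dataclass

@dataclass
class VerticalConfig:
    """Everything vertical-specific lives here; the engine stays generic."""
    name: str
    data_sources: list[str]
    taxonomy: dict[str, float]   # signal -> weight
    threshold: float
    dossier_template: str        # template identifier, not hardcoded markup

class SignalEngine:
    """Reusable core: scoring logic shared across all verticals."""
    def __init__(self, config: VerticalConfig):
        self.config = config

    def classify(self, observed: set[str]) -> bool:
        score = sum(self.config.taxonomy.get(s, 0.0) for s in observed)
        return score >= self.config.threshold

# Launching vertical #2 should mean writing a new config, not new engine code.
hvac = VerticalConfig(
    name="commercial_hvac",
    data_sources=["mechanical_permits", "bid_boards"],
    taxonomy={"permit_filed": 0.6, "bid_posted": 0.5},
    threshold=0.6,
    dossier_template="hvac_v1",
)
```

If launching a vertical still requires touching `SignalEngine`, that is the tight coupling the verify gate tells you to refactor away.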
Duration: 4-6 weeks per vertical · Tool: Platform + config + domain advisors
Launch additional verticals using the platform. Per vertical: recruit domain advisor, verify data sources, build classifier rules, design templates, identify pilot customers, ensure compliance. [src3, src5]
Verify: Each new vertical achieves > 1.5x conversion improvement within 4-week pilot. · If failed: Evaluate signal density — kill non-viable verticals quickly.
```json
{
  "output_type": "signal_stack_engagement",
  "format": "multi-deliverable",
  "sections": [
    {"name": "signal_audit", "type": "object", "description": "Scored inventory of data sources"},
    {"name": "signal_taxonomy", "type": "object", "description": "Classification schema with weighted scoring"},
    {"name": "mvp_pipeline", "type": "object", "description": "Deployed pipeline specification"},
    {"name": "pilot_results", "type": "object", "description": "Conversion metrics and ROI"},
    {"name": "platform_architecture", "type": "object", "description": "Reusable engine design"},
    {"name": "vertical_playbooks", "type": "array", "description": "Per-vertical launch configs"}
  ]
}
```
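A deliverable bundle can be checked against this spec before handoff. A small sketch, assuming the engagement output is assembled as a Python dict keyed by section name (the function and constant names are my own):

```python
# Required sections and their expected Python types, mirroring the output spec
# ("object" -> dict, "array" -> list).
REQUIRED_SECTIONS = {
    "signal_audit": dict,
    "signal_taxonomy": dict,
    "mvp_pipeline": dict,
    "pilot_results": dict,
    "platform_architecture": dict,
    "vertical_playbooks": list,
}

def validate_engagement(output: dict) -> list[str]:
    """Return missing or mistyped section names; an empty list means valid."""
    problems = []
    for name, expected_type in REQUIRED_SECTIONS.items():
        if not isinstance(output.get(name), expected_type):
            problems.append(name)
    return problems
```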
| Quality Metric | Minimum Acceptable | Good | Excellent |
|---|---|---|---|
| Signal sources per vertical | > 15 | > 25 | > 40 |
| Taxonomy false positive rate | < 30% | < 20% | < 10% |
| Dossier delivery rate | 10/week | 20/week | 50+/week |
| Conversion rate vs cold outreach | > 1.5x | > 2x | > 3x |
| Time to launch new vertical | < 6 weeks | < 4 weeks | < 2 weeks |
If below minimum: Stop and diagnose. Signal density may be insufficient or taxonomy needs recalibration.
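The "stop and diagnose" rule can be made mechanical by encoding the minimum-acceptable column as gates. The metric keys and the `GATES` structure are my own naming; the thresholds are taken directly from the quality table above.

```python
# Minimum-acceptable thresholds from the quality table.
# "min" means higher is better; "max" means lower is better.
GATES = {
    "signal_sources": (15, "min"),
    "false_positive_rate": (0.30, "max"),
    "dossiers_per_week": (10, "min"),
    "conversion_lift": (1.5, "min"),
    "weeks_to_new_vertical": (6, "max"),
}

def below_minimum(metrics: dict) -> list[str]:
    """Return metrics failing the minimum bar; a non-empty list means stop and diagnose."""
    failures = []
    for name, (bar, direction) in GATES.items():
        value = metrics[name]
        ok = value >= bar if direction == "min" else value <= bar
        if not ok:
            failures.append(name)
    return failures
```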
| Error | Likely Cause | Recovery Action |
|---|---|---|
| Signal audit finds < 10 sources | Vertical lacks public data density | Pivot vertical or add paid data sources |
| False positive rate > 40% | Signals too noisy or taxonomy too broad | Add qualifying signals, tighten thresholds |
| Pipeline delivery failures > 10% | API rate limits or source changes | Add retry logic, maintain backup sources |
| Pilot conversion < 1x | Wrong signals or wrong audience | A/B test formats, interview sales team |
| Platform extraction > 2x estimate | Tight coupling in vertical #1 | Decouple before adding verticals |
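The recovery action for delivery failures names retry logic. A minimal retry-with-exponential-backoff sketch for a flaky source pull (the function shape and parameter names are illustrative, not part of the recipe):

```python
import time

def fetch_with_retry(fetch, retries: int = 3, base_delay: float = 1.0):
    """Retry a flaky source pull with exponential backoff.

    Re-raises the last exception after the final attempt, so the caller can
    fall back to a backup source per the error table.
    """
    for attempt in range(retries):
        try:
            return fetch()
        except Exception:
            if attempt == retries - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)  # 1s, 2s, 4s, ...
```

For API rate limits specifically, honoring a `Retry-After` header (when the source provides one) beats blind backoff.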
| Component | Lean ($15K-$25K) | Standard ($25K-$40K) | Enterprise ($40K+) |
|---|---|---|---|
| Signal audit | $2K-$3K | $3K-$5K | $5K-$8K |
| Taxonomy workshop | $3K-$5K | $5K-$8K | $8K-$12K |
| MVP pipeline build | $5K-$8K | $8K-$12K | $12K-$18K |
| Pilot execution | $3K-$5K | $5K-$8K | $8K-$12K |
| Platform extraction | $0 (deferred) | $5K-$10K | $10K-$15K |
| Total engagement | $15K-$25K | $25K-$40K | $40K-$65K |
| Monthly data/compute | $500-$1K | $1K-$2K | $2K-$5K |
Investing in reusable architecture before confirming product-market fit. Result: over-engineered system nobody uses. [src2, src3]
Build the simplest pipeline for vertical #1. Hardcode everything. Extract a platform only after 3 paying customers.
Automating delivery from day one without review. Result: false positives destroy credibility with pilot customers. [src1]
Every dossier gets human review for first 100 deliveries. This calibrates taxonomy and catches edge cases.
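The review rule above is simple enough to enforce in the delivery path itself. A sketch, assuming dossiers are routed to one of two queues (the class and queue names are my own):

```python
class DeliveryGate:
    """Routes every dossier to human review until `review_window` have shipped."""
    def __init__(self, review_window: int = 100):
        self.review_window = review_window
        self.delivered = 0

    def route(self, dossier: dict) -> str:
        # First `review_window` deliveries always pass through a human;
        # after that, automated send is permitted.
        self.delivered += 1
        return "human_review" if self.delivered <= self.review_window else "auto_send"
```

Reviewer corrections logged at this gate are exactly the feedback data that feeds the taxonomy-improvement flywheel described at the top of this recipe.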
Trying to prove the model in 3 verticals at once. Result: none get enough attention, all produce mediocre results. [src3]
Fully prove vertical #1 (3 paying customers), extract platform, then launch vertical #2.
Use when an agent needs to plan or execute a full Signal Stack consulting engagement — from identifying intent signals through building an automated intelligence pipeline. This is the master recipe orchestrating all sub-recipes into a cohesive engagement lifecycle.