Demand Signal Testing
Purpose
This recipe produces a quantified Demand Signal Report by guiding you through selecting, building, launching, and measuring one of five demand validation tests: fake door (painted door), waitlist/landing page, crowdfunding campaign, concierge MVP, or Wizard of Oz MVP. The output is a structured dataset of engagement metrics — click-through rates, signup conversion rates, payment intent signals — with a clear go/pivot/kill recommendation based on pre-defined thresholds. [src1]
Prerequisites
- Product concept description — 2-3 sentence summary of what you are building and for whom (output of the Startup Idea Structuring Template)
- Target customer profile — demographics, pain points, and where they gather online
- Landing page builder — Carrd (free), Webflow, Framer, or Unbounce account
- Analytics tracking — Google Analytics property or Mixpanel project configured
- Ad platform account (if using paid traffic) — Google Ads or Meta Ads with billing enabled
- Pre-defined success thresholds — written down BEFORE launching the test
Constraints
- Cap ad spend at $500-2,000 per individual demand test. If the signal is unclear at that budget, the problem is test design or audience targeting, not sample size. [src1]
- Fake door tests placed inside existing products can erode user trust if overused. Limit to one active fake door test at a time and always show a graceful fallback message. [src2]
- Waitlist signups are an intent signal, not a demand signal. The real metric is conversion from waitlist to paid or active user. Average waitlist-to-customer conversion is only 2-5% for cold traffic. [src3]
- Crowdfunding campaigns require 4-8 weeks of pre-launch audience building. Launching cold on Kickstarter typically fails regardless of product quality. [src4]
- All demand tests must define quantitative success/failure thresholds before launch. Moving goalposts after seeing data invalidates the experiment.
Tool Selection Decision
Which test type?
├── User wants fastest signal (1-3 days) AND has ad budget
│ └── TEST A: Fake Door / Painted Door — landing page + paid traffic
├── User wants pre-launch buzz AND 1-2 week timeline
│ └── TEST B: Waitlist Landing Page — signup page + email sequence
├── User has physical/hardware product AND 4-8 week timeline
│ └── TEST C: Crowdfunding Campaign — Kickstarter or Indiegogo
├── User has service/consulting product AND wants deep validation
│ └── TEST D: Concierge MVP — manual delivery to 5-15 customers
└── User has software product AND wants to simulate automation
└── TEST E: Wizard of Oz MVP — human-powered backend, real frontend
| Test | Best For | Cost | Timeline | Signal Strength |
|---|---|---|---|---|
| A: Fake Door | Any product, fastest | $200-1,000 ad spend | 1-7 days | Moderate (measures intent) |
| B: Waitlist | SaaS, apps, digital products | $0-500 | 7-14 days | Moderate (email = weak commitment) |
| C: Crowdfunding | Physical products, hardware | $1,000-5,000 | 30-60 days | Strong (payment = real commitment) |
| D: Concierge MVP | Services, consulting, marketplaces | $0-200 | 7-21 days | Very strong (actual delivery) |
| E: Wizard of Oz | Software with complex backend | $0-500 | 14-30 days | Very strong (real usage) |
Execution Flow
Step 1: Define Success Thresholds
Duration: 30 minutes · Tool: Spreadsheet or text document
Before building anything, write down your pass/fail criteria. This prevents confirmation bias after you see data.
DEMAND SIGNAL THRESHOLDS
| Metric | KILL | PIVOT | GO |
|---|---|---|---|
| Fake door CTR | <2% | 2-5% | >5% |
| Waitlist signup rate | <3% | 3-8% | >8% |
| Waitlist → paid conversion | <2% | 2-10% | >10% |
| Crowdfunding goal reached | <30% | 30-100% | >100% |
| Concierge NPS | <30 | 30-50 | >50 |
| Wizard of Oz retention | <20% | 20-40% | >40% |
Sample size needed: minimum 200 visitors (quantitative) or 5 completed customer engagements (qualitative).
Verify: Thresholds written down and shared with at least one advisor · If failed: Do not proceed without written thresholds.
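To make the Step 5 verdict mechanical rather than post-hoc, the thresholds can be encoded in a small script saved with the test plan. A minimal Python sketch mirroring the table above (the metric keys and the classify helper are illustrative, not part of any specific tool):

```python
# Sketch: encode the pre-defined thresholds so the Step 5 verdict is mechanical,
# not a post-hoc judgment. Values mirror the table above; edit them BEFORE launch.
THRESHOLDS = {
    # metric: (pivot_floor, go_floor) -- below pivot_floor = KILL, at or above go_floor = GO
    "fake_door_ctr": (0.02, 0.05),
    "waitlist_signup_rate": (0.03, 0.08),
    "waitlist_to_paid": (0.02, 0.10),
    "crowdfund_goal_pct": (0.30, 1.00),
    "concierge_nps": (30, 50),
    "wizard_retention": (0.20, 0.40),
}

def classify(metric: str, value: float) -> str:
    """Return KILL, PIVOT, or GO for one measured metric."""
    pivot_floor, go_floor = THRESHOLDS[metric]
    if value < pivot_floor:
        return "KILL"
    if value >= go_floor:
        return "GO"
    return "PIVOT"

print(classify("fake_door_ctr", 0.034))  # PIVOT
```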
Step 2: Build the Test Asset
Duration: 2-8 hours · Tool: Landing page builder (Carrd, Webflow, Framer) + analytics
For fake door and waitlist tests, build a single-page landing page with a clear value-proposition headline, a single CTA button, 3-4 feature bullets, and analytics tracking. For crowdfunding, build a Kickstarter campaign page with a product video. For a concierge MVP, identify 5-15 target customers and offer manual service delivery. For Wizard of Oz, build a real frontend with manual backend processing. [src2]
Verify: Page loads in under 3 seconds, CTA visible above fold, analytics firing · If failed: Use Google PageSpeed Insights to diagnose.
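Part of this verification can be scripted. A minimal sketch, assuming the landing page is publicly reachable and you know your analytics measurement ID (both values below are placeholders):

```python
# Sketch: rough automated check that the page responds quickly and the analytics
# tag is present in the HTML. This measures server response time, not full render
# time -- use PageSpeed Insights for the real load metric. Values are placeholders.
import time
import urllib.request

URL = "https://example.com/landing"   # hypothetical landing page URL
ANALYTICS_TAG = "G-XXXXXXXXXX"        # hypothetical GA4 measurement ID

start = time.monotonic()
with urllib.request.urlopen(URL, timeout=10) as resp:
    html = resp.read().decode("utf-8", errors="replace")
elapsed = time.monotonic() - start

print(f"Server response time: {elapsed:.2f}s (page should fully load in under 3s)")
print("Analytics tag found in HTML:", ANALYTICS_TAG in html)
```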
Step 3: Drive Traffic to the Test
Duration: 1-14 days · Tool: Ad platform (Google Ads, Meta Ads) or organic channels
Run paid ads at $20-50/day for 7-14 days, targeting keywords or interests that match your ideal customer profile (ICP). Alternatively, post in 5-10 relevant online communities for organic traffic. Minimum sample: 500 ad clicks or 200 page visitors.
Verify: Analytics shows traffic arriving and events firing; ad-to-landing-page CTR should be 1-5% · If failed: Rewrite ad copy to be more specific about the problem.
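Before launching, it helps to sanity-check that the budget and timeline can actually reach the minimum sample. A minimal sketch, assuming an illustrative cost per click:

```python
# Sketch: back out the duration and spend needed to hit the minimum sample.
# CPC and daily budget are assumptions -- replace with your ad platform's estimates.
import math

target_clicks = 500     # minimum ad clicks for this step
assumed_cpc = 1.00      # assumed cost per click in USD (hypothetical)
daily_budget = 50.00    # top of the $20-50/day range

total_spend = target_clicks * assumed_cpc
days_needed = math.ceil(total_spend / daily_budget)

print(f"Estimated spend: ${total_spend:,.0f} over about {days_needed} days")
```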
Step 4: Measure and Record Signals
Duration: Ongoing + 1-2 hours for final analysis · Tool: Analytics dashboard + spreadsheet
Track daily: visitors, CTA clicks, signups, CTR, cost. Also track qualitative signals: unsolicited replies, users asking about launch date, users sharing the page, users offering to pay. For concierge/wizard tests, track engagement completion, NPS, willingness to pay, and retention. [src3]
Verify: Minimum 200 visitors or 5 completed customer engagements before drawing conclusions · If failed: Double daily ad spend or broaden targeting.
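The daily log can live in a spreadsheet; the derived rates are simple to compute. A minimal sketch with illustrative numbers:

```python
# Sketch: derive the core rates from a simple daily log. All numbers are illustrative.
daily_log = [
    # (visitors, cta_clicks, signups, spend_usd)
    (88, 6, 3, 42.10),
    (95, 9, 4, 45.60),
    (102, 7, 2, 44.30),
]

visitors = sum(day[0] for day in daily_log)
clicks = sum(day[1] for day in daily_log)
signups = sum(day[2] for day in daily_log)
spend = sum(day[3] for day in daily_log)

print(f"CTA click-through rate: {clicks / visitors:.1%}")
print(f"Signup conversion rate: {signups / visitors:.1%}")
print(f"Cost per signup: ${spend / signups:.2f}")
```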
Step 5: Analyze Results and Generate Recommendation
Duration: 1-2 hours · Tool: Spreadsheet
Apply pre-defined thresholds from Step 1 to measured data. Generate structured Demand Signal Report with GO (exceeds threshold — proceed to MVP), PIVOT (ambiguous — iterate and re-test), or KILL (below threshold — explore different problem or segment) recommendation. Include confidence level based on sample size.
Verify: Recommendation matches pre-set thresholds, not post-hoc rationalization · If failed: Run a second test with different messaging before committing to a verdict.
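The report can be emitted as JSON matching the Output Schema below, with the recommendation taken from the Step 1 thresholds rather than judgment. A minimal sketch with illustrative values:

```python
# Sketch: assemble the report in the Output Schema format below.
# All values are illustrative; the recommendation must come from the Step 1 thresholds.
import json

total_visitors = 412
signups = 26
total_spend = 380.00

signup_rate = signups / total_visitors   # ~6.3% -> PIVOT band (3-8%)
confidence = ("HIGH" if total_visitors > 500
              else "MEDIUM" if total_visitors >= 200
              else "LOW")

report = {
    "test_type": "waitlist",
    "test_duration_days": 14,
    "total_spend": total_spend,
    "total_visitors": total_visitors,
    "primary_metric_value": round(signup_rate, 4),
    "recommendation": "PIVOT",            # from pre-defined thresholds, not judgment
    "confidence_level": confidence,
    "cost_per_signup": round(total_spend / signups, 2),
}
print(json.dumps(report, indent=2))
```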
Step 6: Preserve Test Artifacts for Launch
Duration: 30 minutes · Tool: File storage
Archive landing page URL and screenshot, ad copy variants and performance data, email list of signups (GDPR-compliant), analytics export, and the Demand Signal Report. Successful demand tests produce reusable assets for actual product launch.
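One lightweight way to keep the artifacts together is a manifest written next to the exported files. A minimal sketch; paths and filenames are placeholders:

```python
# Sketch: write a manifest alongside the archived artifacts so they stay findable.
# Paths and filenames are placeholders for your own exports.
import json
from datetime import date

manifest = {
    "archived_on": date.today().isoformat(),
    "landing_page_url": "https://example.com/landing",  # hypothetical
    "screenshot": "artifacts/landing_page.png",
    "ad_copy_performance": "artifacts/ad_variants.csv",
    "signup_list": "artifacts/signups_gdpr_export.csv",
    "analytics_export": "artifacts/analytics_export.csv",
    "demand_signal_report": "artifacts/demand_signal_report.json",
}

with open("demand_test_manifest.json", "w") as f:
    json.dump(manifest, f, indent=2)
```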
Output Schema
{
"output_type": "demand_signal_report",
"format": "JSON",
"columns": [
{"name": "test_type", "type": "string", "description": "fake_door, waitlist, crowdfunding, concierge, wizard_of_oz"},
{"name": "test_duration_days", "type": "number"},
{"name": "total_spend", "type": "number", "description": "Total cost in USD"},
{"name": "total_visitors", "type": "number"},
{"name": "primary_metric_value", "type": "number"},
{"name": "recommendation", "type": "string", "description": "GO, PIVOT, or KILL"},
{"name": "confidence_level", "type": "string", "description": "HIGH (>500), MEDIUM (200-500), LOW (<200)"},
{"name": "cost_per_signup", "type": "number"}
]
}
Quality Benchmarks
| Quality Metric | Minimum Acceptable | Good | Excellent |
|---|---|---|---|
| Fake door CTR (ad click to CTA click) | > 2% | > 5% | > 10% |
| Waitlist signup rate (visitor to email) | > 3% | > 8% | > 15% |
| Waitlist to paid conversion | > 2% | > 10% | > 20% |
| Crowdfunding goal reached | > 30% funded | > 100% funded | > 200% funded |
| Concierge NPS score | > 30 | > 50 | > 70 |
| Wizard of Oz week-2 retention | > 20% | > 40% | > 60% |
| Sample size (quantitative) | 200 visitors | 500 visitors | 1,000+ visitors |
| Sample size (qualitative) | 5 customers | 10 customers | 15+ customers |
If below minimum: Check (1) was the headline clear about the problem? (2) was traffic targeted to the right audience? (3) was the CTA specific and low-friction? If all three are yes and metrics are still below minimum, demand is likely insufficient. [src6]
Error Handling
| Error | Likely Cause | Recovery Action |
|---|---|---|
| Zero ad clicks after 48 hours | Ad rejected, targeting too narrow, or bid too low | Check ad approval status, broaden audience by 2x, increase bid to suggested range |
| High bounce rate (>85%) on landing page | Slow load time, headline mismatch with ad copy, or mobile rendering broken | Run PageSpeed test, ensure headline matches ad promise, test on mobile device |
| High signups but zero email responses | Email going to spam, or signup was low-intent | Check sender reputation with mail-tester.com, add double opt-in |
| Crowdfunding stalls after day 3 | No pre-launch audience, or the normal mid-campaign slump | Activate press outreach, post updates, email pre-launch list again |
| Concierge customers ghost after first session | Value was unclear or delivery was too manual/slow | Send brief survey asking why, simplify offering, reduce time-to-value |
| Conflicting data (high CTR but low signup) | Friction in signup flow or asking for too much information | Reduce form to email-only, remove all optional fields |
Cost Breakdown
| Component | Free Tier | Paid Tier | At Scale |
|---|---|---|---|
| Landing page builder | Carrd ($0), Framer ($0) | Unbounce ($99/mo), Webflow ($16/mo) | N/A |
| Analytics | Google Analytics ($0) | Mixpanel ($28/mo) | N/A |
| Ad spend (fake door/waitlist) | $0 (organic only) | $200-500 per test | $1,000-2,000 per test |
| Ad spend (crowdfunding pre-launch) | $0 (organic) | $1,000-2,000 | $3,000-5,000 |
| Email tool | Mailchimp free (500 contacts) | ConvertKit ($29/mo) | N/A |
| Crowdfunding platform fee | N/A | 5-8% of funds raised | 5-8% of funds raised |
| Total for one demand test | $0 | $200-1,000 | $1,000-5,000 |
Anti-Patterns
Wrong: Declaring demand validated from clicks alone
A 10% click-through rate on a "Start Free Trial" button means people are curious, not that they will pay. Flexport's founder validated with 300 company signups to a fake product — the real signal was that companies filled out detailed onboarding forms, not just that they clicked. [src1]
Correct: Layer commitment depth into every test
After the initial click, add a second action: email signup, survey completion, or scheduling a call. Each additional step filters for real intent.
Wrong: Interpreting waitlist size as demand validation
100,000 waitlist signups is a vanity metric on its own. Waitlist-to-active conversion averages approximately 50% when access is granted within a month but drops below 20% after three months. [src3]
Correct: Measure waitlist-to-active conversion within a 30-day window
Invite waitlist users in cohorts and measure activation rate per cohort. Keep the waitlist warm with weekly progress updates if access cannot be granted within 30 days.
Wrong: Launching a crowdfunding campaign without a pre-launch audience
Cold launches on Kickstarter almost always fail. The platform rewards early momentum — campaigns that hit 30% of goal in the first 48 hours get algorithmic promotion. [src4]
Correct: Build a 500+ email list before pressing launch
Spend 4-6 weeks running landing page ads that collect emails. On launch day, email the entire list. First 48 hours determine campaign trajectory.
When This Matters
Use this recipe when a founder or agent needs to produce quantified evidence of customer demand before committing to building a product. It replaces gut feelings and anecdotal interest with structured, measurable signals. The output feeds directly into MVP planning (if GO), pivot strategy (if PIVOT), or idea retirement (if KILL). Requires a product concept and target customer profile as input.