Demand Signal Testing

Type: Execution Recipe · Confidence: 0.88 · Sources: 7 · Verified: 2026-03-11

Purpose

This recipe produces a quantified Demand Signal Report by guiding you through selecting, building, launching, and measuring one of five demand validation tests: fake door (painted door), waitlist/landing page, crowdfunding campaign, concierge MVP, or Wizard of Oz MVP. The output is a structured dataset of engagement metrics — click-through rates, signup conversion rates, payment intent signals — with a clear go/pivot/kill recommendation based on pre-defined thresholds. [src1]

Prerequisites

Constraints

Tool Selection Decision

Which test type?
├── User wants fastest signal (1-3 days) AND has ad budget
│   └── TEST A: Fake Door / Painted Door — landing page + paid traffic
├── User wants pre-launch buzz AND 1-2 week timeline
│   └── TEST B: Waitlist Landing Page — signup page + email sequence
├── User has physical/hardware product AND 4-8 week timeline
│   └── TEST C: Crowdfunding Campaign — Kickstarter or Indiegogo
├── User has service/consulting product AND wants deep validation
│   └── TEST D: Concierge MVP — manual delivery to 5-15 customers
└── User has software product AND wants to simulate automation
    └── TEST E: Wizard of Oz MVP — human-powered backend, real frontend
Test              Best For                            Cost                 Timeline    Signal Strength
A: Fake Door      Any product, fastest                $200-1,000 ad spend  1-7 days    Moderate (measures intent)
B: Waitlist       SaaS, apps, digital products        $0-500               7-14 days   Moderate (email = weak commitment)
C: Crowdfunding   Physical products, hardware         $1,000-5,000         30-60 days  Strong (payment = real commitment)
D: Concierge MVP  Services, consulting, marketplaces  $0-200               7-21 days   Very strong (actual delivery)
E: Wizard of Oz   Software with complex backend       $0-500               14-30 days  Very strong (real usage)
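The decision tree above can be sketched as a small selector function. This is an illustrative sketch, not part of the recipe's tooling; the flag names are assumptions, and the first matching branch wins, mirroring the top-down order of the tree.

```python
def select_demand_test(fast_signal_with_budget=False, wants_prelaunch_buzz=False,
                       physical_product=False, service_product=False,
                       software_product=False):
    """Walk the decision tree top-down; the first matching branch wins."""
    if fast_signal_with_budget:
        return "A: Fake Door"
    if wants_prelaunch_buzz:
        return "B: Waitlist Landing Page"
    if physical_product:
        return "C: Crowdfunding Campaign"
    if service_product:
        return "D: Concierge MVP"
    if software_product:
        return "E: Wizard of Oz MVP"
    return "No match: revisit constraints"
```

For example, a hardware founder with no urgency would land on `select_demand_test(physical_product=True)`, i.e. Test C.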

Execution Flow

Step 1: Define Success Thresholds

Duration: 30 minutes · Tool: Spreadsheet or text document

Before building anything, write down your pass/fail criteria. This prevents confirmation bias after you see data.

DEMAND SIGNAL THRESHOLDS
                   KILL     PIVOT      GO
Fake Door CTR:     <2%      2-5%       >5%
Waitlist signup:   <3%      3-8%       >8%
Waitlist→paid:     <2%      2-10%      >10%
Crowdfund goal:    <30%     30-100%    >100%
Concierge NPS:     <30      30-50      >50
Wizard retention:  <20%     20-40%     >40%

Sample size needed: minimum 200 visitors (quantitative)
                    minimum 5 customers (qualitative)

Verify: Thresholds written down and shared with at least one advisor · If failed: Do not proceed without written thresholds.
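One way to make the pre-registration concrete is to commit the thresholds as data before the test runs. A minimal sketch, with values taken from the table above (each tuple is kill-below / go-above; anything in between is PIVOT):

```python
# Pre-registered pass/fail criteria from Step 1. Commit this file before
# launching the test so the recommendation cannot be rationalized post hoc.
THRESHOLDS = {
    "fake_door_ctr":    (0.02, 0.05),
    "waitlist_signup":  (0.03, 0.08),
    "waitlist_to_paid": (0.02, 0.10),
    "crowdfund_goal":   (0.30, 1.00),
    "concierge_nps":    (30, 50),
    "wizard_retention": (0.20, 0.40),
}

def classify(metric: str, value: float) -> str:
    """Map a measured value onto KILL / PIVOT / GO per the thresholds."""
    kill_below, go_above = THRESHOLDS[metric]
    if value < kill_below:
        return "KILL"
    if value > go_above:
        return "GO"
    return "PIVOT"
```

A 6% fake-door CTR classifies as GO; a concierge NPS of 40 lands in PIVOT.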

Step 2: Build the Test Asset

Duration: 2-8 hours · Tool: Landing page builder (Carrd, Webflow, Framer) + analytics

For fake door and waitlist tests, build a single-page landing page with: clear value proposition headline, single CTA button, 3-4 feature bullets, and analytics tracking. For crowdfunding, build a Kickstarter campaign page with product video. For concierge MVP, identify 5-15 target customers and offer manual service delivery. For Wizard of Oz, build a real frontend with manual backend processing. [src2]

Verify: Page loads in under 3 seconds, CTA visible above fold, analytics firing · If failed: Use Google PageSpeed Insights to diagnose.

Step 3: Drive Traffic to the Test

Duration: 1-14 days · Tool: Ad platform (Google Ads, Meta Ads) or organic channels

Run paid ads at $20-50/day for 7-14 days targeting keywords or interests matching your ICP. Alternatively, post in 5-10 relevant online communities for organic traffic. Minimum sample: 500 ad clicks or 200 page visitors.

Verify: Analytics shows traffic arriving and events firing; ad-to-landing-page CTR should be 1-5% · If failed: Rewrite ad copy to be more specific about the problem.
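Before launching, it can help to sanity-check whether the daily budget will actually reach the minimum sample within the test window. A sketch, assuming a cost-per-click you should replace with your platform's observed CPC:

```python
import math

def traffic_plan(target_clicks: int = 500, cpc_usd: float = 1.50,
                 daily_budget_usd: float = 35.0) -> dict:
    """Estimate days and total spend needed to hit the minimum click sample.

    cpc_usd is an assumption for illustration; real CPCs vary widely by
    keyword and audience.
    """
    clicks_per_day = daily_budget_usd / cpc_usd
    days = math.ceil(target_clicks / clicks_per_day)
    return {
        "clicks_per_day": round(clicks_per_day, 1),
        "days_needed": days,
        "total_spend_usd": round(days * daily_budget_usd, 2),
    }
```

At $35/day and a $1.50 CPC, 500 clicks takes 22 days ($770) — longer than the 7-14 day window, which tells you to raise the budget or accept the smaller 200-visitor minimum.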

Step 4: Measure and Record Signals

Duration: Ongoing + 1-2 hours for final analysis · Tool: Analytics dashboard + spreadsheet

Track daily: visitors, CTA clicks, signups, CTR, cost. Also track qualitative signals: unsolicited replies, users asking about launch date, users sharing the page, users offering to pay. For concierge/wizard tests, track engagement completion, NPS, willingness to pay, and retention. [src3]

Verify: Minimum 200 visitors or 5 completed customer engagements before drawing conclusions · If failed: Double daily ad spend or broaden targeting.
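The daily tracking log can be rolled up into the quantitative metrics with a few lines. A sketch, assuming each day's entry is a dict with the fields named in the tracking list above:

```python
def summarize_daily_log(rows):
    """Aggregate daily tracking rows into the Step 4 summary metrics.

    Each row is a dict: visitors, cta_clicks, signups, spend_usd.
    """
    visitors = sum(r["visitors"] for r in rows)
    clicks = sum(r["cta_clicks"] for r in rows)
    signups = sum(r["signups"] for r in rows)
    spend = sum(r["spend_usd"] for r in rows)
    return {
        "visitors": visitors,
        "ctr": clicks / visitors if visitors else 0.0,
        "signup_rate": signups / visitors if visitors else 0.0,
        "cost_per_signup": spend / signups if signups else None,
    }
```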

Step 5: Analyze Results and Generate Recommendation

Duration: 1-2 hours · Tool: Spreadsheet

Apply pre-defined thresholds from Step 1 to measured data. Generate structured Demand Signal Report with GO (exceeds threshold — proceed to MVP), PIVOT (ambiguous — iterate and re-test), or KILL (below threshold — explore different problem or segment) recommendation. Include confidence level based on sample size.

Verify: Recommendation matches pre-set thresholds, not post-hoc rationalization · If failed: If ambiguous, run a second test with different messaging.
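The analysis step can be automated end-to-end: apply the Step 1 thresholds, derive the sample-size confidence level, and emit a JSON record matching the Output Schema. A sketch; the function signature is illustrative, and the kill/go bounds must come from your pre-registered Step 1 values:

```python
import json

def demand_signal_report(test_type: str, duration_days: int, spend_usd: float,
                         visitors: int, metric_value: float,
                         kill_below: float, go_above: float,
                         signups: int = 0) -> str:
    """Emit a JSON Demand Signal Report using pre-registered thresholds."""
    if metric_value < kill_below:
        recommendation = "KILL"
    elif metric_value > go_above:
        recommendation = "GO"
    else:
        recommendation = "PIVOT"
    # Confidence bands match the schema: HIGH (>500), MEDIUM (200-500), LOW (<200).
    if visitors > 500:
        confidence = "HIGH"
    elif visitors >= 200:
        confidence = "MEDIUM"
    else:
        confidence = "LOW"
    return json.dumps({
        "test_type": test_type,
        "test_duration_days": duration_days,
        "total_spend": spend_usd,
        "total_visitors": visitors,
        "primary_metric_value": metric_value,
        "recommendation": recommendation,
        "confidence_level": confidence,
        "cost_per_signup": round(spend_usd / signups, 2) if signups else None,
    })
```

A fake-door test with 600 visitors, a 6% CTA click rate, and 35 signups on $350 spend would report GO with HIGH confidence at $10 per signup.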

Step 6: Preserve Test Artifacts for Launch

Duration: 30 minutes · Tool: File storage

Archive landing page URL and screenshot, ad copy variants and performance data, email list of signups (GDPR-compliant), analytics export, and the Demand Signal Report. Successful demand tests produce reusable assets for actual product launch.

Output Schema

{
  "output_type": "demand_signal_report",
  "format": "JSON",
  "columns": [
    {"name": "test_type", "type": "string", "description": "fake_door, waitlist, crowdfunding, concierge, wizard_of_oz"},
    {"name": "test_duration_days", "type": "number"},
    {"name": "total_spend", "type": "number", "description": "Total cost in USD"},
    {"name": "total_visitors", "type": "number"},
    {"name": "primary_metric_value", "type": "number"},
    {"name": "recommendation", "type": "string", "description": "GO, PIVOT, or KILL"},
    {"name": "confidence_level", "type": "string", "description": "HIGH (>500), MEDIUM (200-500), LOW (<200)"},
    {"name": "cost_per_signup", "type": "number"}
  ]
}

Quality Benchmarks

Quality Metric                           Minimum Acceptable   Good            Excellent
Fake door CTR (ad click to CTA click)    > 2%                 > 5%            > 10%
Waitlist signup rate (visitor to email)  > 3%                 > 8%            > 15%
Waitlist to paid conversion              > 2%                 > 10%           > 20%
Crowdfunding goal reached                > 30% funded         > 100% funded   > 200% funded
Concierge NPS score                      > 30                 > 50            > 70
Wizard of Oz week-2 retention            > 20%                > 40%           > 60%
Sample size (quantitative)               200 visitors         500 visitors    1,000+ visitors
Sample size (qualitative)                5 customers          10 customers    15+ customers

If below minimum: Check (1) was the headline clear about the problem? (2) was traffic targeted to the right audience? (3) was the CTA specific and low-friction? If all three are yes and metrics are still below minimum, demand is likely insufficient. [src6]

Error Handling

Error: Zero ad clicks after 48 hours
Likely cause: Ad rejected, targeting too narrow, or bid too low
Recovery: Check ad approval status, broaden audience by 2x, increase bid to suggested range

Error: High bounce rate (>85%) on landing page
Likely cause: Slow load time, headline mismatch with ad copy, or broken mobile rendering
Recovery: Run PageSpeed test, ensure headline matches ad promise, test on a mobile device

Error: High signups but zero email responses
Likely cause: Email going to spam, or signup was low-intent
Recovery: Check sender reputation with mail-tester.com, add double opt-in

Error: Crowdfunding stalls after day 3
Likely cause: No pre-launch audience; a mid-campaign slump is also normal
Recovery: Activate press outreach, post updates, email the pre-launch list again

Error: Concierge customers ghost after first session
Likely cause: Value was unclear or delivery was too manual/slow
Recovery: Send a brief survey asking why, simplify the offering, reduce time-to-value

Error: Conflicting data (high CTR but low signup)
Likely cause: Friction in the signup flow or asking for too much information
Recovery: Reduce the form to email-only, remove all optional fields

Cost Breakdown

Component                           Free Tier                      Paid Tier                            At Scale
Landing page builder                Carrd ($0), Framer free        Unbounce ($99/mo), Webflow ($16/mo)  N/A
Analytics                           Google Analytics ($0)          Mixpanel ($28/mo)                    N/A
Ad spend (fake door/waitlist)       $0 (organic only)              $200-500 per test                    $1,000-2,000 per test
Ad spend (crowdfunding pre-launch)  $0 (organic)                   $1,000-2,000                         $3,000-5,000
Email tool                          Mailchimp free (500 contacts)  ConvertKit ($29/mo)                  N/A
Crowdfunding platform fee           N/A                            5-8% of funds raised                 5-8% of funds raised
Total for one demand test           $0                             $200-1,000                           $1,000-5,000

Anti-Patterns

Wrong: Declaring demand validated from clicks alone

A 10% click-through rate on a "Start Free Trial" button means people are curious, not that they will pay. Flexport's founder validated with 300 company signups to a fake product — the real signal was that companies filled out detailed onboarding forms, not just that they clicked. [src1]

Correct: Layer commitment depth into every test

After the initial click, add a second action: email signup, survey completion, or scheduling a call. Each additional step filters for real intent.

Wrong: Interpreting waitlist size as demand validation

100,000 waitlist signups is a vanity metric. Conversion averages approximately 50% if access is granted within a month but drops below 20% after three months. [src3]

Correct: Measure waitlist-to-active conversion within a 30-day window

Invite waitlist users in cohorts and measure activation rate per cohort. Keep the waitlist warm with weekly progress updates if access cannot be granted within 30 days.
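Per-cohort activation is a one-liner once invitations and activations are logged. A sketch, assuming each cohort records how many were invited and how many activated within the 30-day window:

```python
def cohort_activation(cohorts):
    """Waitlist-to-active conversion per invitation cohort.

    cohorts: {label: (invited_count, activated_within_30_days)}.
    """
    return {label: activated / invited if invited else 0.0
            for label, (invited, activated) in cohorts.items()}
```

A week-2 cohort converting at 30% against week 1's 55% is the early-warning sign that the list is going cold.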

Wrong: Launching a crowdfunding campaign without a pre-launch audience

Cold launches on Kickstarter almost always fail. The platform rewards early momentum — campaigns that hit 30% of goal in the first 48 hours get algorithmic promotion. [src4]

Correct: Build a 500+ email list before pressing launch

Spend 4-6 weeks running landing page ads that collect emails. On launch day, email the entire list. First 48 hours determine campaign trajectory.

When This Matters

Use this recipe when a founder or agent needs to produce quantified evidence of customer demand before committing to building a product. It replaces gut feelings and anecdotal interest with structured, measurable signals. The output feeds directly into MVP planning (if GO), pivot strategy (if PIVOT), or idea retirement (if KILL). Requires a product concept and target customer profile as input.

Related Units