Lead Scoring Implementation
Purpose
This recipe creates a weighted lead scoring model that assigns 0-100 composite scores based on ICP fit and engagement signals, classifying leads into Nurture (0-49), MQL (50-74), and SQL (75+) tiers for prioritized outreach.
Prerequisites
- Enriched lead list — Lead Enrichment Pipeline
- ICP definition — ICP Definition
- Historical deal data (optional) — closed-won/lost deals for calibration
- Scoring platform — Sheets, HubSpot, Salesforce, or Python
Constraints
Tool Selection Decision
| Path | Tools | Cost | Automation | Scalability |
|---|---|---|---|---|
| A: Spreadsheet | Google Sheets | $0 | Manual | Up to ~2,000 leads |
| B: HubSpot | Marketing Hub Pro | $890/mo | Full | Unlimited |
| C: Salesforce | SF + Einstein | $25-$300/mo | Full | Unlimited |
| D: Python | Python + pandas | $0 | Semi-auto | Unlimited |
Execution Flow
Step 1: Define Scoring Criteria and Weights
Duration: 15-20 min
Build 100-point scoring model: Fit criteria (60 pts) — title match (20), company size (15), industry (10), geography (5), revenue (5), tech stack (5). Engagement criteria (40 pts) — email verified (10), phone available (10), data completeness (10), LinkedIn connected (5), data freshness (5). [src5]
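Assuming Path D (Python), the Step 1 model can be encoded as two weight tables; the key names are illustrative field labels, not any platform's schema:

```python
# Hypothetical encoding of the 100-point model from Step 1.
# Fit criteria cap at 60 points, engagement criteria at 40.
FIT_WEIGHTS = {
    "title_match": 20,
    "company_size": 15,
    "industry": 10,
    "geography": 5,
    "revenue": 5,
    "tech_stack": 5,
}

ENGAGEMENT_WEIGHTS = {
    "email_verified": 10,
    "phone_available": 10,
    "data_completeness": 10,
    "linkedin_connected": 5,
    "data_freshness": 5,
}

# Sanity-check the caps before scoring anything.
assert sum(FIT_WEIGHTS.values()) == 60
assert sum(ENGAGEMENT_WEIGHTS.values()) == 40
```

Keeping weights in plain dicts makes quarterly recalibration a one-line change per criterion.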
Step 2: Implement Scoring Formula
Duration: 15-20 min | Tool: Python or Sheets
Apply the weighted model to each lead: map each ICP attribute to a point value, with explicit scoring bands per criterion (full, partial, or zero credit) so two reviewers would assign the same score.
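A minimal Python sketch of the formula and the tier boundaries from the Purpose section; it assumes each lead is a dict mapping criterion names to a 0-1 fraction of that criterion's points (the attribute-to-fraction mapping itself comes from your scoring bands):

```python
def score_lead(lead, weights):
    """Weighted composite score: each criterion contributes a 0-1
    fraction of its maximum point value; missing criteria score 0."""
    return round(sum(
        max(0.0, min(1.0, lead.get(criterion, 0.0))) * points
        for criterion, points in weights.items()
    ))

def classify(score):
    # Tier boundaries from the recipe: Nurture 0-49, MQL 50-74, SQL 75+.
    if score >= 75:
        return "SQL"
    if score >= 50:
        return "MQL"
    return "Nurture"
```

Clamping each fraction to [0, 1] keeps a bad enrichment value from pushing a single criterion past its cap.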
Step 3: Set and Calibrate Thresholds
Duration: 10-15 min
Target: SQL 10-20%, MQL 20-35%, Nurture 45-70%. Adjust thresholds if distribution falls outside ranges. [src1]
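The calibration check in Step 3 can be automated: compute the tier distribution, then flag any tier that falls outside its target band. This is a sketch assuming a list of numeric scores; the target ranges are the ones stated above:

```python
from collections import Counter

# Target share ranges from Step 3.
TARGETS = {"SQL": (0.10, 0.20), "MQL": (0.20, 0.35), "Nurture": (0.45, 0.70)}

def tier_distribution(scores, sql_cut=75, mql_cut=50):
    """Share of leads in each tier at the given thresholds."""
    tiers = Counter(
        "SQL" if s >= sql_cut else "MQL" if s >= mql_cut else "Nurture"
        for s in scores
    )
    n = len(scores)
    return {tier: tiers[tier] / n for tier in TARGETS}

def out_of_range(dist):
    """Tiers whose share falls outside the target band, with their shares."""
    flags = {}
    for tier, (lo, hi) in TARGETS.items():
        if not lo <= dist[tier] <= hi:
            flags[tier] = dist[tier]
    return flags
```

If `out_of_range` flags SQL, nudge `sql_cut` (e.g. toward 80-85 when over, 70 when under, per the Error Handling table) and re-check.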
Step 4: Validate Against Historical Data
Duration: 15-20 min (if data available)
Won deal average score should be 15+ points higher than lost deal average. If difference is less than 10, model has poor discrimination. [src4]
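The Step 4 validation reduces to comparing mean scores of won and lost cohorts; a small helper, with verdict labels chosen here to match the 10- and 15-point cutoffs above:

```python
def discrimination(won_scores, lost_scores):
    """Gap between average won-deal and lost-deal scores.
    >= 15 points meets the Step 4 target; < 10 signals poor
    discrimination and a model that needs reweighting."""
    gap = sum(won_scores) / len(won_scores) - sum(lost_scores) / len(lost_scores)
    if gap < 10:
        verdict = "poor"
    elif gap < 15:
        verdict = "weak"
    else:
        verdict = "good"
    return gap, verdict
```

A "weak" result (10-15 points) is usable but worth revisiting at the next quarterly recalibration.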
Step 5: Export Scored Leads
Duration: 5-10 min
Export scored leads sorted by score descending with tier classification and model version metadata.
Output Schema
CSV: first_name, last_name, company, score (0-100), tier (SQL/MQL/Nurture), per-criterion scores, model_version. Sorted by score descending.
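Assuming Path D, Step 5 plus this schema can be sketched with the stdlib `csv` module; the per-criterion column names (`fit_score`, `engagement_score`) are illustrative stand-ins for however you break out criterion scores:

```python
import csv

def export_scored_leads(leads, path, model_version="v1.0"):
    """Write scored leads to CSV matching the output schema,
    sorted by score descending. `leads` is a list of dicts that
    already carry the schema fields."""
    fieldnames = ["first_name", "last_name", "company", "score", "tier",
                  "fit_score", "engagement_score", "model_version"]
    rows = sorted(leads, key=lambda lead: lead["score"], reverse=True)
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames, extrasaction="ignore")
        writer.writeheader()
        for row in rows:
            # Stamp every row with the model version for later audits.
            writer.writerow({**row, "model_version": model_version})
```

Stamping `model_version` on every row lets you tell which recalibration produced a given export when comparing conversion rates later.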
Quality Benchmarks
| Metric | Minimum | Good | Excellent |
|---|---|---|---|
| SQL tier rate | 5-25% | 10-20% | 12-18% |
| MQL tier rate | 15-40% | 20-35% | 25-30% |
| Won vs lost diff | > 10 pts | > 15 pts | > 20 pts |
| Criteria count | 6-15 | 8-12 | 8-10 |
| Recalibration | Quarterly | Quarterly + sales feedback | Quarterly + conversion tracking |
Error Handling
| Error | Cause | Recovery |
|---|---|---|
| Low score variance | Criteria lack differentiation | Add granularity to top criteria |
| SQL > 30% | Threshold too loose | Raise to 80-85 |
| SQL < 5% | Too tight or narrow ICP | Lower to 70 or broaden ICP |
| Missing data | Incomplete enrichment | Re-run enrichment pipeline |
| Sales disagrees | Wrong weights | Interview sales, adjust |
Cost Breakdown
| Component | Free Tier | Paid Tier | At Scale |
|---|---|---|---|
| Spreadsheet | $0 | $0 | $0 |
| HubSpot scoring | N/A | $890/mo | $890/mo |
| Salesforce | N/A | $25-$300/mo | $300/mo |
| Python | $0 | $0 | $0 |
| Total | $0 | $25-$890/mo | $300-$890/mo |
Anti-Patterns
Wrong: Scoring with 15+ criteria
Creates noise and obscures predictive factors. [src5]
Correct: 8-12 high-signal criteria only
Focus on criteria correlated with closed-won deals. [src5]
Wrong: Static thresholds without recalibration
MQL-to-SQL conversion degrades as ICP evolves. [src4]
Correct: Quarterly recalibration with sales feedback
Review with closed-deal data and adjust if conversion drops below 15%. [src4]
When This Matters
Use when the agent has enriched leads that need prioritization. Converts raw data into SQL/MQL/Nurture tiers for outreach sequencing.