This assessment evaluates the quality of a product roadmap across six critical dimensions: strategy alignment, customer input quality, technical feasibility validation, resource balance, timeline realism, and stakeholder communication. It is designed for product leaders (VP Product, CPO, Group PM) who need to diagnose whether their roadmap is strategically sound, evidence-based, and executable before committing engineering resources. The output identifies specific roadmap weaknesses and routes to improvement playbooks for each dimension scoring below threshold. [src1]
What this measures: How directly roadmap initiatives trace back to company strategy, OKRs, or stated business objectives — and whether that traceability is explicit, not assumed.
| Score | Level | Description | Evidence |
|---|---|---|---|
| 1 | Ad hoc | Roadmap is a feature wish list with no connection to company strategy | No OKRs referenced; items lack strategic rationale |
| 2 | Emerging | Some items reference strategic goals but linkage is informal and inconsistent | Verbal strategy connection exists but is not documented |
| 3 | Defined | Every initiative explicitly maps to a strategic objective or OKR; mapping is documented | Roadmap tool shows objective-to-initiative linkage; each item has a "why" statement |
| 4 | Managed | Strategy alignment is scored and weighted during prioritization; orphan initiatives flagged | Prioritization framework includes alignment as a weighted factor; quarterly audit removes misaligned items |
| 5 | Optimized | Roadmap is derived from strategy, not mapped after the fact; strategy changes trigger re-evaluation | Strategy-first planning documented; alignment score tracked quarterly; <10% of items lack direct linkage |
Red flags: PM cannot state which objective each roadmap item serves; roadmap unchanged after a strategy pivot; 30%+ of items are "carry-over" without re-validation. [src2]
Quick diagnostic question: "Pick any three items on your roadmap — can you name the specific company OKR or strategic objective each one serves?"
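Level-4 behavior treats alignment as an explicit, weighted prioritization factor and flags orphan initiatives. A minimal sketch of what that can look like — the weights, factor names, and items below are hypothetical examples, not a prescribed framework:

```python
# Hypothetical weighted-prioritization sketch: strategy alignment is one
# weighted factor among several, and low-alignment items are flagged as orphans.
WEIGHTS = {"alignment": 0.40, "customer_impact": 0.35, "effort_inverse": 0.25}

def priority_score(item: dict) -> float:
    """Weighted sum of 1-5 factor scores."""
    return sum(WEIGHTS[factor] * item[factor] for factor in WEIGHTS)

items = [
    {"name": "SSO support", "alignment": 5, "customer_impact": 4, "effort_inverse": 2},
    {"name": "Dark mode",   "alignment": 1, "customer_impact": 3, "effort_inverse": 5},
]

for item in sorted(items, key=priority_score, reverse=True):
    orphan = "  [ORPHAN: no strategic linkage]" if item["alignment"] < 2 else ""
    print(f"{item['name']}: {priority_score(item):.2f}{orphan}")
```

The quarterly audit described at level 4 would then review every flagged orphan and either attach a strategic rationale or remove the item.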
What this measures: The rigor and breadth of customer evidence informing roadmap decisions — from no input to systematic, quantified customer signal.
| Score | Level | Description | Evidence |
|---|---|---|---|
| 1 | Ad hoc | Roadmap driven by internal opinions, competitor copying, or executive mandates; no structured customer input | No feedback repository; "we think customers want this" is the standard justification |
| 2 | Emerging | Some customer input exists but is anecdotal — based on a few loud customers rather than systematic research | Customer quotes appear but are cherry-picked; feedback from <5% of customer base |
| 3 | Defined | Structured customer feedback collection in place; roadmap items reference research, survey data, or analytics | Feedback tool in use; at least monthly customer interviews; NPS data available |
| 4 | Managed | Customer input quantified and weighted in prioritization; segment-specific needs distinguished | Scoring includes customer impact factor; feedback tagged by segment, MRR, frequency |
| 5 | Optimized | Predictive customer signal: usage data, churn indicators, and win/loss analysis proactively surface opportunities | Product analytics drive discovery; churn prediction informs roadmap; evidence confidence scored |
Red flags: No customer validation step; the largest customer dictates 40%+ of the roadmap; product team has not spoken to a customer in the last month. [src5]
Quick diagnostic question: "For your top roadmap initiative, what specific customer evidence supports building it — and how many customers have expressed this need?"
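Level-4 practice tags feedback by segment, MRR, and frequency so customer signal can be quantified per roadmap theme rather than cherry-picked. A minimal sketch, assuming a simple feedback record shape — the field names and example data are illustrative:

```python
# Illustrative aggregation: raw feedback records tagged by theme, segment, and
# MRR are rolled up into a per-theme impact summary for prioritization.
from collections import defaultdict

feedback = [
    {"theme": "reporting", "segment": "enterprise", "mrr": 4000},
    {"theme": "reporting", "segment": "smb",        "mrr": 300},
    {"theme": "mobile",    "segment": "smb",        "mrr": 250},
]

impact = defaultdict(lambda: {"requests": 0, "mrr": 0, "segments": set()})
for record in feedback:
    entry = impact[record["theme"]]
    entry["requests"] += 1
    entry["mrr"] += record["mrr"]
    entry["segments"].add(record["segment"])

# Rank themes by affected MRR (one possible weighting; count or segment
# breadth are equally valid sort keys depending on strategy).
for theme, e in sorted(impact.items(), key=lambda kv: kv[1]["mrr"], reverse=True):
    print(f"{theme}: {e['requests']} requests, ${e['mrr']} MRR, segments={sorted(e['segments'])}")
```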
What this measures: Whether engineering has meaningfully validated the feasibility, complexity, and technical risk of roadmap initiatives before they are committed.
| Score | Level | Description | Evidence |
|---|---|---|---|
| 1 | Ad hoc | No engineering input on feasibility; PM commits timelines without technical validation | Engineering learns about items at sprint planning; 50%+ of estimates off by 2x or more |
| 2 | Emerging | Engineering provides rough estimates on request but is not involved in roadmap construction | T-shirt sizing exists but under time pressure; debt and dependencies not factored in |
| 3 | Defined | Engineering participates in roadmap planning; feasibility reviews happen before commitment | Formal feasibility gate before "committed" status; spike tickets for uncertain items |
| 4 | Managed | Architecture review catches systemic risks; dependency mapping maintained; tech debt has explicit allocation | Cross-team dependency map; 15-25% capacity for technical debt; architecture review |
| 5 | Optimized | Continuous feasibility: engineering proactively surfaces constraints and opportunities; prototyping validates assumptions | PoC budget for high-risk items; engineering contributes roadmap ideas; feasibility confidence scored |
Red flags: Engineering consistently calls items "impossible" after commitment; no spike budget; tech debt never on roadmap; one architect leaving invalidates 30%+ of the roadmap. [src4]
Quick diagnostic question: "When was the last time engineering vetoed or significantly changed a roadmap item's scope or timeline — and what happened?"
What this measures: How effectively the roadmap balances investment across new features, improvements, technical debt, and innovation/exploration.
| Score | Level | Description | Evidence |
|---|---|---|---|
| 1 | Ad hoc | 100% of capacity to new features or reactive work; no deliberate balance | All items are net-new features; no debt or exploration category exists |
| 2 | Emerging | Awareness that balance is needed but no explicit allocation; debt addressed only during incidents | "We know we should pay down debt" but no capacity reserved |
| 3 | Defined | Explicit capacity ratios documented (e.g., 70/20/10); ratios reviewed quarterly | Roadmap shows investment by category; team can state their split |
| 4 | Managed | Allocation ratios enforced and tracked; balance adjusted by product lifecycle stage | Dashboard tracks actual vs planned; ratio adjusts by quarter; leadership reviews balance |
| 5 | Optimized | Dynamic allocation based on data: platform health triggers debt investment; experimentation ROI informs budget | Automated health scores influence debt allocation; portfolio-level optimization |
Red flags: Cannot state allocation split; technical debt has no roadmap representation; team has not shipped infrastructure improvement in two quarters. [src6]
Quick diagnostic question: "What percentage of engineering capacity last quarter went to new features vs improvements vs technical debt vs exploration?"
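The explicit allocation ratios from levels 3-4 can be checked mechanically against actuals each quarter. A hedged sketch — the planned percentages, category names, and story-point actuals below are hypothetical:

```python
# Illustrative actual-vs-planned capacity check. Planned split is an example
# allocation with tech debt as its own bucket; actuals are story points
# delivered last quarter, tagged by category.
planned = {"new_features": 0.55, "improvements": 0.20, "tech_debt": 0.15, "exploration": 0.10}
actual_points = {"new_features": 84, "improvements": 18, "tech_debt": 6, "exploration": 12}

total = sum(actual_points.values())
for category, target in planned.items():
    share = actual_points[category] / total
    drift = share - target
    flag = "  <-- off target" if abs(drift) > 0.05 else ""  # 5pp tolerance, adjustable
    print(f"{category}: actual {share:.0%} vs planned {target:.0%}{flag}")
```

A level-4 team would review this drift report with leadership each quarter and rebalance the next quarter's commitments accordingly.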
What this measures: Whether roadmap timelines are credible based on historical delivery data, team capacity, and dependency management.
| Score | Level | Description | Evidence |
|---|---|---|---|
| 1 | Ad hoc | Timelines are executive mandates with no connection to team capacity; delivery is rarely on time | Deadlines set before scope understood; 60%+ items miss dates |
| 2 | Emerging | Some estimation exists but is unreliable; timelines set by PM intuition; scope creep common | Past estimates off by 50%+; no buffer; dates shift every review cycle |
| 3 | Defined | Timelines informed by velocity data and historical accuracy; buffer built in; scope bounded | Team tracks velocity; estimation accuracy measured; ranges used for far-horizon items |
| 4 | Managed | Probabilistic timelines: confidence intervals on dates; Monte Carlo or reference-class forecasting | 80% confidence intervals published; cross-team dependency calendar; accuracy >70% |
| 5 | Optimized | Predictive delivery modeling with continuous recalibration; timelines update on velocity changes | Automated forecasting; timeline risk alerts; historical accuracy >85% |
Red flags: Every item has a specific date but no confidence interval; team has never measured estimation accuracy; roadmap dates unchanged despite missing last three commitments.
Quick diagnostic question: "What percentage of roadmap items from last quarter were delivered within one sprint of the originally committed date?"
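The Monte Carlo forecasting named at level 4 can be approximated by resampling historical sprint velocity instead of committing a single point date. An illustrative sketch — the velocity history and remaining backlog size are made-up example numbers:

```python
# Illustrative Monte Carlo delivery forecast: simulate many possible futures by
# resampling past sprint velocities, then read off 50% and 80% confidence points.
import random

random.seed(42)  # fixed seed so the sketch is reproducible
historical_velocity = [21, 18, 25, 17, 23, 20, 19, 24]  # points per sprint
remaining_points = 120

def sprints_to_finish() -> int:
    done, sprints = 0, 0
    while done < remaining_points:
        done += random.choice(historical_velocity)  # resample a past sprint
        sprints += 1
    return sprints

runs = sorted(sprints_to_finish() for _ in range(10_000))
p50 = runs[len(runs) // 2]
p80 = runs[int(len(runs) * 0.8)]
print(f"50% confident by sprint {p50}; 80% confident by sprint {p80}")
```

Publishing the 80% point rather than the median is what turns a date into the confidence interval the level-4 row describes.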
What this measures: How effectively the roadmap is communicated to stakeholders and whether feedback flows back into roadmap decisions.
| Score | Level | Description | Evidence |
|---|---|---|---|
| 1 | Ad hoc | Roadmap exists only in PM's head; stakeholders discover priorities through hallway conversations | No shared artifact; different people answer "when is feature X?" differently |
| 2 | Emerging | Roadmap shared periodically in a format stakeholders cannot easily consume; one-way communication | Quarterly presentation exists but is a dense spreadsheet; no feedback mechanism |
| 3 | Defined | Audience-appropriate views exist (executive, sales, engineering); regular review cadence with feedback | Multiple roadmap views maintained; monthly/quarterly reviews with action items |
| 4 | Managed | Two-way communication: stakeholder feedback systematically captured and influences roadmap | Feedback from reviews tracked; stakeholder alignment survey; sales input has clear path |
| 5 | Optimized | Continuous alignment: real-time visibility with self-service access; proactive change communication | Self-service portal; automated change notifications; alignment score >80% |
Red flags: Sales promises features not on roadmap; executives surprised by what ships; no one outside product can describe next quarter's priorities. [src1]
Quick diagnostic question: "If I asked your head of sales and CTO separately what the top 3 product priorities are next quarter, would they give the same answer?"
Overall Score = (Strategy Alignment + Customer Input Quality + Technical Feasibility + Resource Balance + Timeline Realism + Stakeholder Communication) / 6
| Overall Score | Maturity Level | Interpretation | Recommended Next Step |
|---|---|---|---|
| 1.0 - 1.9 | Critical | Roadmap is a feature wish list with no strategic foundation; high risk of building wrong things | Stop and rebuild: establish strategy linkage, customer feedback, engineering feasibility |
| 2.0 - 2.9 | Developing | Basic structure exists but significant gaps in evidence, feasibility, or alignment | Close biggest gap first: run dimension-level routing to fix weakest link |
| 3.0 - 3.9 | Competent | Solid process with defined practices; ready to optimize from compliance to leverage | Move weakest dimensions to "managed"; introduce data-driven prioritization |
| 4.0 - 4.5 | Advanced | Well-structured, evidence-based, effectively communicated; shift to predictive capabilities | Implement predictive modeling, dynamic allocation, continuous alignment measurement |
| 4.6 - 5.0 | Best-in-class | Roadmap is a strategic asset driving organizational alignment | Pioneer AI-assisted optimization, automated feasibility, portfolio balancing |
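The averaging formula and maturity bands above can be applied as a runnable sketch. The six dimension scores below are example inputs:

```python
# Example scoring run: average the six dimension scores, map the result to a
# maturity band, and list the dimensions that score below the routing threshold.
scores = {
    "strategy_alignment": 3, "customer_input_quality": 2,
    "technical_feasibility": 3, "resource_balance": 2,
    "timeline_realism": 2, "stakeholder_communication": 4,
}
overall = sum(scores.values()) / len(scores)

bands = [(1.0, "Critical"), (2.0, "Developing"), (3.0, "Competent"),
         (4.0, "Advanced"), (4.6, "Best-in-class")]
level = next(name for lower, name in reversed(bands) if overall >= lower)
print(f"Overall score: {overall:.2f} -> {level}")

weak = [dim for dim, s in scores.items() if s < 3]  # dimensions needing a playbook
print("Route to playbooks for:", ", ".join(weak))
```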
| Weak Dimension (Score < 3) | Fetch This Card |
|---|---|
| Strategy Alignment | OKR-to-Roadmap Alignment Playbook |
| Customer Input Quality | Customer Discovery Process Playbook |
| Technical Feasibility | Engineering-Product Collaboration Framework |
| Resource Balance | Portfolio Investment Balance Framework |
| Timeline Realism | Estimation Accuracy Improvement Playbook |
| Stakeholder Communication | Roadmap Communication Playbook |
| Segment | Expected Average Score | "Good" Threshold | "Alarm" Threshold |
|---|---|---|---|
| Seed / Series A | 2.0 | 2.8 | 1.3 |
| Series B-C | 2.8 | 3.5 | 2.0 |
| Growth / Scale-up | 3.4 | 4.0 | 2.5 |
| Enterprise / Public | 3.8 | 4.3 | 3.0 |
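The stage benchmarks above can be applied mechanically once an overall score is in hand. An illustrative helper — the segment keys are hypothetical labels for the table's rows:

```python
# Illustrative stage-benchmark check: classify an overall score against the
# "good" and "alarm" thresholds for the company's stage.
benchmarks = {
    "seed_series_a":     {"expected": 2.0, "good": 2.8, "alarm": 1.3},
    "series_b_c":        {"expected": 2.8, "good": 3.5, "alarm": 2.0},
    "growth_scale_up":   {"expected": 3.4, "good": 4.0, "alarm": 2.5},
    "enterprise_public": {"expected": 3.8, "good": 4.3, "alarm": 3.0},
}

def classify(score: float, segment: str) -> str:
    b = benchmarks[segment]
    if score <= b["alarm"]:
        return "alarm"
    if score >= b["good"]:
        return "good"
    return "typical"

print(classify(2.6, "series_b_c"))
```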
Fetch when a user asks to evaluate their product roadmap's quality, diagnose why roadmap execution is failing, prepare for a board-level product strategy review, benchmark roadmap practices against industry standards, or assess whether a product team's planning process is mature enough for the company's current stage.