This assessment evaluates the maturity of an organization's UX and design capabilities across six critical dimensions: design system maturity, UX research methodology, accessibility compliance, mobile experience quality, design-development collaboration, and user testing and iteration rigor. The output is a maturity score (1-5) for each dimension, plus an overall composite, that identifies the weakest links in the product design function and routes to specific improvement actions. [src1]
What this measures: How well-established, adopted, and governed the organization's design system is across components, documentation, and cross-team usage.
| Score | Level | Description | Evidence |
|---|---|---|---|
| 1 | Ad hoc | No shared design system; each designer creates components from scratch | No component library; developers rebuild UI from screenshots |
| 2 | Emerging | Basic component library exists but incomplete and unmaintained; limited adoption | Less than 40% pattern coverage; no versioning; informal maintenance |
| 3 | Defined | Documented components with usage guidelines and code equivalents; dedicated owner | 60%+ coverage; Storybook or equivalent; 50-70% adoption across teams |
| 4 | Managed | Dedicated team; design tokens enforced; versioning and release process; adoption tracked | Semantic versioning; 80%+ usage; automated visual regression testing |
| 5 | Optimized | Design system is a product with its own roadmap; multi-brand support; automated checks | Quarterly roadmaps; theming API; component analytics; linting in CI |
Red flags: Designers copy-paste from old files; developers create custom components for every feature; no source of truth for colors or typography. [src2]
Quick diagnostic question: "Does your design system have a dedicated owner, documented components with code equivalents, and do you track adoption rates?"
What this measures: How systematically the organization conducts user research, integrates findings into product decisions, and builds research capabilities.
| Score | Level | Description | Evidence |
|---|---|---|---|
| 1 | Ad hoc | No structured UX research; decisions based on stakeholder opinions | No research repository; no user interviews in past 6 months |
| 2 | Emerging | Occasional usability tests; research happens reactively; no dedicated researcher | 1-3 studies per quarter; findings shared via email, not referenced later |
| 3 | Defined | Regular research cadence; dedicated researcher(s); findings stored in repository | Research repository exists; research briefs inform PRDs; PMs request research |
| 4 | Managed | Mixed-methods approach; research democratized with templates; impact tracked | Researchers train PMs; insights tagged and searchable; impact tracked quarterly |
| 5 | Optimized | Research embedded in strategy; continuous discovery; unmoderated testing at scale | Weekly discovery cadence; insights linked to OKRs; dedicated research ops |
Red flags: No user interviews in the past quarter; team cannot name 3 recent findings; research only validates existing decisions. [src4]
Quick diagnostic question: "How many user research studies did your team conduct last quarter, and where are the findings stored?"
What this measures: How comprehensively the organization addresses digital accessibility across design, development, testing, and organizational commitment.
| Score | Level | Description | Evidence |
|---|---|---|---|
| 1 | Ad hoc | No accessibility awareness; no WCAG testing; no alt text | Lighthouse a11y score below 50; no keyboard navigation support |
| 2 | Emerging | Awareness exists but action sporadic; occasional fixes when reported | Some alt text; basic contrast checks; not enforced in reviews |
| 3 | Defined | WCAG 2.1 AA targeted; automated scans in CI; design guidelines include a11y | Lighthouse 80-90; Axe in CI; accessibility checklist in design reviews |
| 4 | Managed | WCAG 2.2 AA compliance; manual screen reader testing; a11y champions on teams | Quarterly manual audits; a11y backlog tracked; VPAT maintained |
| 5 | Optimized | A11y embedded in culture; inclusive design from ideation; testing with disabled users | Participants with disabilities in testing; WCAG AAA for critical flows |
Red flags: No accessibility testing; Lighthouse a11y below 60; never tested with a screen reader; cannot explain WCAG levels. [src3]
Quick diagnostic question: "What is your Lighthouse accessibility score, and when did someone last test with a screen reader?"
What this measures: The quality, performance, and user experience of mobile products measured by technical metrics and user satisfaction.
| Score | Level | Description | Evidence |
|---|---|---|---|
| 1 | Ad hoc | Mobile is an afterthought; desktop designs shrunk for mobile | No responsive breakpoints; mobile load above 5s; app rating below 3.0 |
| 2 | Emerging | Basic responsive design; some mobile optimization; occasional measurement | Layouts break at breakpoints; LCP above 4s; crash rate above 1% |
| 3 | Defined | Mobile considered throughout design; Core Web Vitals monitored; crash-free sessions above 99% | LCP under 2.5s; INP under 200ms; mobile QA; app rating 3.5-4.0 |
| 4 | Managed | Mobile excellence is KPI; A/B testing on mobile; performance budgets enforced | Performance budgets in CI; session replay; app rating above 4.0 |
| 5 | Optimized | Best-in-class mobile; adaptive design; predictive optimization; offline support | Crash-free sessions above 99.95%; LCP under 1.5s; app rating above 4.5 |
Red flags: No Core Web Vitals monitoring; crash rate above 2%; no mobile test cases; designs only at desktop resolution. [src6]
Quick diagnostic question: "What are your mobile Core Web Vitals (LCP, INP), and what is your crash-free session rate?"
What this measures: How effectively designers and developers work together, from handoff quality to shared tooling and mutual understanding.
| Score | Level | Description | Evidence |
|---|---|---|---|
| 1 | Ad hoc | Designers throw mockups over the wall; developers interpret freely | Static image handoffs; devs guess spacing and colors; blame cycle |
| 2 | Emerging | Figma shared but specs incomplete; design review happens after shipping | Missing states and edge cases; one-way handoff meetings |
| 3 | Defined | Structured handoff with specs; design QA step before release; regular syncs | Specs include all states; Figma inspect used; bi-weekly design-dev syncs |
| 4 | Managed | Designers and devs co-create; design tokens shared; pair sessions | Tokens auto-synced; devs attend critiques; visual regression automated |
| 5 | Optimized | Design and development unified; shared component ownership | Full-stack design system in sync; designers code or devs prototype |
Red flags: Devs never open Figma; shipped features differ significantly from designs; no design review before release. [src2]
Quick diagnostic question: "What happens between design approval and feature shipping — who reviews implementation against the design?"
What this measures: How rigorously the team validates design decisions with real users and how effectively insights drive iteration.
| Score | Level | Description | Evidence |
|---|---|---|---|
| 1 | Ad hoc | No user testing; features ship based on internal opinions | No usability tests; no beta program; HiPPO-driven decisions (highest-paid person's opinion) |
| 2 | Emerging | Occasional testing for major features; some post-launch analytics | 1-2 tests per quarter; NPS survey response rate below 10% |
| 3 | Defined | Testing integrated for major features; post-launch metrics reviewed | Prototype testing before dev; task success rate measured; iteration backlog |
| 4 | Managed | Continuous testing culture; A/B testing; unmoderated at scale | Weekly unmoderated tests; feature adoption tracked per release |
| 5 | Optimized | Testing at every stage; rapid experimentation; predictive analytics | 5+ experiments per week; ML-driven personalization; closed-loop data-to-design |
Red flags: Cannot cite task success rate for any flow; no usability testing in past quarter; no A/B testing capability. [src5]
Quick diagnostic question: "For your last major feature, what user validation occurred before development, and what metrics did you review after launch?"
Overall Score = (Design System + UX Research + Accessibility + Mobile Quality + Design-Dev Collaboration + User Testing) / 6
| Overall Score | Maturity Level | Interpretation | Next Step |
|---|---|---|---|
| 1.0 - 1.9 | Critical | Design is ad hoc; product quality depends on individual heroics | Foundational design system + basic research cadence |
| 2.0 - 2.9 | Developing | Capabilities exist in pockets but are inconsistent; UX debt accumulating | Standardize lowest-scoring dimension first |
| 3.0 - 3.9 | Competent | Solid foundations; ready for systematic optimization | Design ops and research ops infrastructure |
| 4.0 - 4.5 | Advanced | High-performing design org; marginal gains focus | WCAG 2.2 AA full compliance; benchmark against top-decile peers |
| 4.6 - 5.0 | Best-in-class | Industry-leading design maturity; design drives strategy | Maintain; evaluate AI design tools quarterly |
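A sketch of the scoring rule and the banding above, with an illustrative set of dimension scores:

```typescript
interface DimensionScores {
  designSystem: number; // each dimension scored 1-5
  uxResearch: number;
  accessibility: number;
  mobileQuality: number;
  designDevCollaboration: number;
  userTesting: number;
}

function overallScore(s: DimensionScores): number {
  const values = Object.values(s);
  return values.reduce((sum, v) => sum + v, 0) / values.length;
}

function maturityLevel(score: number): string {
  if (score < 2.0) return "Critical";
  if (score < 3.0) return "Developing";
  if (score < 4.0) return "Competent";
  if (score <= 4.5) return "Advanced";
  return "Best-in-class";
}

// Example: a team strong on collaboration, weak on accessibility.
const s: DimensionScores = {
  designSystem: 3, uxResearch: 2, accessibility: 1,
  mobileQuality: 3, designDevCollaboration: 4, userTesting: 2,
};
console.log(overallScore(s).toFixed(2), maturityLevel(overallScore(s))); // "2.50 Developing"
```

Note how a single 1 drags the composite down; the routing table below sends each sub-3 dimension to a dedicated card.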
| Weak Dimension (Score < 3) | Fetch This Card |
|---|---|
| Design System Maturity | Tech Stack Architecture Assessment |
| UX Research Methodology | Data Strategy Maturity Assessment |
| Accessibility Compliance | Security Posture Assessment — compliance dimension |
| Mobile Quality | Tech Stack Architecture Assessment — mobile infra |
| Design-Dev Collaboration | Engineering Productivity Benchmarks |
| User Testing & Iteration | PLG Readiness Assessment |
| Segment | Expected Average Overall Score | "Good" Threshold | "Alarm" Threshold |
|---|---|---|---|
| Seed/Series A (<$5M ARR) | 1.6 | 2.2 | 1.0 |
| Series B ($5M-$25M ARR) | 2.5 | 3.2 | 1.8 |
| Growth ($25M-$100M ARR) | 3.3 | 4.0 | 2.5 |
| Scale/Public ($100M+ ARR) | 3.9 | 4.4 | 3.0 |
[src1]
Fetch when a user asks to evaluate product design capabilities, diagnose inconsistent product quality across teams, assess readiness for scaling the design organization, prepare for enterprise sales requiring accessibility compliance, or evaluate whether UX practices support product-led growth.