This assessment evaluates the effectiveness of an organization's performance management system across five critical dimensions: goal-setting architecture, review cadence and feedback quality, calibration rigor, promotion velocity and career progression, and manager capability. The output is a composite maturity score (1-5) that identifies systemic weaknesses in how the organization sets expectations, evaluates contributions, and makes talent decisions. [src1]
What this measures (Goal-Setting Architecture): How effectively the organization cascades strategic objectives into individual goals with measurable outcomes.
| Score | Level | Description | Evidence |
|---|---|---|---|
| 1 | Ad hoc | No formal goal-setting; objectives vague or nonexistent | No documented goals in HRIS; goals are activity-based |
| 2 | Emerging | Annual goals set but disconnected from strategy; SMART criteria applied inconsistently | Fewer than 50% of goals measurable; no cascading from company OKRs |
| 3 | Defined | Goals cascade from company to team to individual; OKR framework adopted | 80%+ documented goals; quarterly progress reviews; stretch goals present |
| 4 | Managed | Dynamic goal-setting with mid-cycle adjustments; weighted by priority | Goals updated when priorities shift; completion rates monitored |
| 5 | Optimized | Continuous alignment with real-time strategy; AI-assisted goal recommendations | Goals auto-adjust; historical data calibrates difficulty |
Red flags: Employees cannot name their top 3 goals; goals copy-pasted from prior year; no connection to company strategy. [src2]
Quick diagnostic question: "Can you show me how your company's top 3 strategic priorities cascade into individual goals?"
What this measures (Review Cadence & Feedback Quality): The frequency, structure, and quality of performance feedback.
| Score | Level | Description | Evidence |
|---|---|---|---|
| 1 | Ad hoc | Annual review only or none; feedback reactive and crisis-driven | Reviews once a year if at all; no structured templates |
| 2 | Emerging | Semi-annual reviews with basic templates; quality varies by manager | Completion below 70%; no manager training on feedback |
| 3 | Defined | Quarterly check-ins; structured templates; 360-degree feedback available | 90%+ completion; forward-looking development questions; review training |
| 4 | Managed | Continuous performance management with weekly 1:1s; real-time feedback tools | 1:1 cadence tracked; feedback frequency measured; peer feedback integrated |
| 5 | Optimized | AI-augmented feedback with sentiment analysis; real-time coaching prompts | AI flags underperforming managers; feedback multi-directional and continuous |
Red flags: Completion below 60%; reviews submitted in bulk on deadline; no written narrative; employees surprised by rating. [src1]
Quick diagnostic question: "What percentage of managers completed reviews on time, and what does a typical written review look like?"
What this measures (Calibration Rigor): How consistently and fairly performance ratings are applied across teams and demographics.
| Score | Level | Description | Evidence |
|---|---|---|---|
| 1 | Ad hoc | No calibration; managers rate independently; distributions vary wildly | Some teams rate 90% of employees "exceeds" while others rate 50%; grade inflation unchecked |
| 2 | Emerging | HR reviews distributions after the fact; forced curve without discussion | Distribution targets exist but not discussed; managers blindsided |
| 3 | Defined | Formal calibration sessions; managers present evidence; distribution guidelines | Calibration sessions each cycle; talent profiles prepared; HR facilitates |
| 4 | Managed | Multi-round calibration; demographic equity analysis; bias training completed | Equity lens applied; bias training prerequisite; appeal process documented |
| 5 | Optimized | AI-assisted bias detection; real-time distribution monitoring | AI flags anomalous patterns by demographic; equity is a leadership KPI |
Red flags: No calibration sessions; identical distributions across all managers; no demographic analysis; no bias training. [src5]
Quick diagnostic question: "Walk me through your last calibration session — who was in the room, what data was presented, were demographics reviewed?"
What this measures (Promotion Velocity & Career Progression): How transparently and equitably the organization manages promotions and career levels.
| Score | Level | Description | Evidence |
|---|---|---|---|
| 1 | Ad hoc | No defined career levels; promotions based on tenure or advocacy; criteria opaque | Employees cannot explain requirements; no career ladder documentation |
| 2 | Emerging | Career levels exist but criteria vague; manager-driven without committee review | Generic competency expectations; promotion is an annual budget exercise |
| 3 | Defined | Clear career ladders with documented competencies; promotion committees; time-to-promotion tracked | Frameworks published; criteria reference competencies and impact; average time-in-level known |
| 4 | Managed | Promotion tied to performance data and competencies; equity analysis applied | Evidence-based packets; demographic rates tracked; internal mobility measured |
| 5 | Optimized | AI-assisted promotion readiness; predictive flight-risk modeling for under-promoted talent | AI identifies promotion-ready employees; retention models flag stalled talent |
Red flags: Average time to promotion unknown; promotion rates differ by demographic; no published career ladders. [src3]
Quick diagnostic question: "What is the average time from hire to first promotion, and how do rates compare across demographics?"
What this measures (Manager Capability): How well managers are equipped, trained, and held accountable for performance management.
| Score | Level | Description | Evidence |
|---|---|---|---|
| 1 | Ad hoc | No manager training; reviews seen as admin burden; no accountability | Reviews completed as compliance exercise; HR pushes, managers resist |
| 2 | Emerging | Basic onboarding training; effectiveness depends on personal skill | One-time training; some managers are effective, but capability is individual rather than systemic |
| 3 | Defined | Structured training on conversations, feedback, and bias; effectiveness measured | Annual training program; upward feedback surveys; HR coaches underperformers |
| 4 | Managed | Manager effectiveness is a formal KPI; poor performers coached; best practices shared | Manager scorecards; improvement plans for low scores; peer learning cohorts |
| 5 | Optimized | AI-assisted coaching; real-time nudges; manager capability is a competitive advantage | AI prompts managers about missed 1:1s; capability correlated with retention; best managers amplified |
Red flags: No training exists; managers view reviews as HR's job; no upward feedback; the same managers receive repeated complaints without consequence. [src6]
Quick diagnostic question: "What training do new managers receive on performance management, and how do you measure whether managers do it well?"
Overall Score = (Goal-Setting + Review Cadence + Calibration + Promotion Velocity + Manager Capability) / 5, a simple average of the five dimension scores, each rated 1-5; a scoring sketch follows the table below.
| Overall Score | Maturity Level | Interpretation | Next Step |
|---|---|---|---|
| 1.0 - 1.9 | Critical | Performance management exists in name only; high performer attrition likely | Basic goal-setting framework; structured review templates |
| 2.0 - 2.9 | Developing | Foundation exists but execution inconsistent; manager capability is bottleneck | Manager training; calibration sessions; career ladders |
| 3.0 - 3.9 | Competent | Solid system with room for optimization; data-driven improvements possible | Equity analysis in calibration; continuous feedback tools |
| 4.0 - 4.5 | Advanced | High-performing system; focus on marginal gains and predictive capabilities | AI-assisted feedback; predictive retention modeling |
| 4.6 - 5.0 | Best-in-class | Industry-leading performance management | Maintain excellence; evaluate emerging AI coaching |
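The formula and the band mapping above translate directly into code. A minimal sketch (the dimension keys are illustrative):

```python
def overall_maturity(scores: dict[str, int]) -> tuple[float, str]:
    """Average the five dimension scores (each 1-5), then map the result
    to a maturity band from the interpretation table."""
    overall = sum(scores.values()) / len(scores)
    bands = [(1.9, "Critical"), (2.9, "Developing"), (3.9, "Competent"),
             (4.5, "Advanced"), (5.0, "Best-in-class")]
    level = next(name for ceiling, name in bands if overall <= ceiling)
    return round(overall, 1), level

print(overall_maturity({"goals": 3, "reviews": 2, "calibration": 2,
                        "promotion": 3, "managers": 2}))  # (2.4, 'Developing')
```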
If any single dimension scores below 3, fetch the matching deep-dive card:
| Weak Dimension (Score < 3) | Fetch This Card |
|---|---|
| Goal-Setting Architecture | OKR Implementation Playbook |
| Review Cadence & Feedback | Review Cadence Deep-Dive |
| Calibration Rigor | DEI Program Assessment |
| Promotion Velocity | People Analytics Maturity Assessment |
| Manager Capability | L&D Maturity Assessment |
Benchmarks by organization size (overall maturity score on the 1-5 scale):
| Segment | Expected Average | "Good" Threshold | "Alarm" Threshold |
|---|---|---|---|
| Startup (<50 employees) | 1.5 | 2.2 | 1.0 |
| Growth (50-500 employees) | 2.6 | 3.2 | 1.8 |
| Enterprise (500-5,000) | 3.3 | 4.0 | 2.5 |
| Large enterprise (5,000+) | 3.8 | 4.3 | 3.0 |
[src4]
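The benchmark thresholds above can be applied mechanically; a sketch, with segment keys chosen for illustration:

```python
# (good_threshold, alarm_threshold) per segment, from the table above.
BENCHMARKS = {
    "startup": (2.2, 1.0),
    "growth": (3.2, 1.8),
    "enterprise": (4.0, 2.5),
    "large_enterprise": (4.3, 3.0),
}

def classify(segment: str, overall: float) -> str:
    """Position an overall maturity score against its segment's thresholds."""
    good, alarm = BENCHMARKS[segment]
    if overall >= good:
        return "good"
    if overall <= alarm:
        return "alarm"
    return "expected range"

print(classify("growth", 2.6))  # expected range
```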
Fetch when a user asks to evaluate their performance management system, diagnose why high performers are leaving despite competitive compensation, prepare for a people strategy overhaul, or assess whether ratings are applied equitably across the organization.