This diagnostic evaluates the operational efficiency of an organization across five critical dimensions: process cycle time performance, error rates and quality management, automation coverage, capacity utilization, and continuous improvement culture. The output is a composite efficiency score (1-5) that pinpoints bottlenecks, quantifies waste, and routes to specific improvement frameworks. [src1]
What this measures: How well the organization manages and optimizes end-to-end process cycle times relative to benchmarks and SLAs.
| Score | Level | Description | Evidence |
|---|---|---|---|
| 1 | Ad hoc | No visibility into cycle times; no SLAs defined | No process timing data; customer complaints about delays |
| 2 | Emerging | Basic tracking for core processes; SLAs frequently missed (>20% breach) | Manual time tracking; 20-40% SLA breach rate |
| 3 | Defined | Automated tracking; SLAs met 85%+; bottlenecks systematically identified | Workflow tools tracking times; SLA dashboards; bottleneck heat maps |
| 4 | Managed | Benchmarked against peers; proactive bottleneck resolution; 95%+ SLA compliance | Benchmarking reports; predictive delay alerts |
| 5 | Optimized | Real-time monitoring with AI-driven optimization; top quartile performance | Process digital twins; AI-optimized routing |
Red flags: Cannot state the average cycle time for the top 5 processes; team says "it depends" when asked how long things take. [src2]
Quick diagnostic question: "What are your average cycle times for the top 5 core processes, and how do they compare to SLA targets?"
What this measures: How effectively the organization prevents, detects, and corrects errors across operational processes.
| Score | Level | Description | Evidence |
|---|---|---|---|
| 1 | Ad hoc | Errors discovered by customers; no systematic tracking | No defect logs; rework is constant but unmeasured |
| 2 | Emerging | Basic error logging; quality checks at end of process | Error spreadsheets; post-mortem after major incidents only |
| 3 | Defined | Systematic tracking with categorization; quality gates at critical stages | Defect tracking system; monthly error trend reporting |
| 4 | Managed | Statistical process control; error rates <2%; preventive quality controls | SPC charts; FMEA for new processes; quality built into workflows |
| 5 | Optimized | AI-powered anomaly detection; near-zero defect rates | ML-based defect prediction; error rates <0.5% |
Red flags: Cannot state error rates for key processes; same errors recur without resolution; customers find errors before the team does. [src4]
Quick diagnostic question: "What is the error rate for your top 3 processes, and how are errors detected?"
What this measures: The degree to which repetitive, rule-based, and high-volume processes are automated.
| Score | Level | Description | Evidence |
|---|---|---|---|
| 1 | Ad hoc | Almost all processes manual; no workflow automation | Manual data entry; copy-paste between systems |
| 2 | Emerging | Some automation (e-signatures, basic approvals); siloed efforts | Individual department automation; 10-20% of tasks automated |
| 3 | Defined | Automation strategy; 40-60% of repetitive tasks automated; platform deployed | Automation roadmap; centralized platform; system integration |
| 4 | Managed | 70-80% of repetitive tasks automated; intelligent automation; automation COE established | Center of excellence; AI document processing; ROI tracking |
| 5 | Optimized | Hyperautomation; autonomous routine decisions; human-in-the-loop for exceptions | End-to-end automated workflows; AI agents; continuous automation discovery |
Red flags: Team spends 60%+ time on manual data entry; processes require handoffs between 3+ systems; "need more headcount" is the default answer. [src3]
Quick diagnostic question: "What percentage of your team's time is spent on repetitive tasks, and do you have an automation roadmap?"
What this measures: How effectively the organization balances workload with available capacity.
| Score | Level | Description | Evidence |
|---|---|---|---|
| 1 | Ad hoc | No capacity visibility; work assigned ad hoc; chronic overtime or idle time | No resource planning; overtime common; cannot forecast needs |
| 2 | Emerging | Basic headcount planning; capacity managed reactively | Annual plans; capacity crises trigger hiring |
| 3 | Defined | Capacity dashboards; utilization targets (70-85%); demand forecasting | Resource management tools; quarterly capacity planning |
| 4 | Managed | Dynamic allocation; cross-training; predictive demand modeling | Real-time dashboards; flex resource pools; scenario planning |
| 5 | Optimized | AI-driven demand sensing; elastic workforce; zero-bottleneck operations | Predictive models; automated workload balancing; 78-82% utilization sustained |
Red flags: Utilization above 90% (burnout) or below 60% (waste); cannot forecast next quarter's needs; single points of failure. [src5]
Quick diagnostic question: "What is the average utilization rate, and how do you forecast capacity needs for next quarter?"
What this measures: Whether the organization has embedded a systematic culture of process improvement.
| Score | Level | Description | Evidence |
|---|---|---|---|
| 1 | Ad hoc | No improvement methodology; changes reactive; status quo accepted | No improvement backlog; process changes only after failures |
| 2 | Emerging | Occasional improvement initiatives (often consultant-driven) | Annual projects; suggestions not tracked |
| 3 | Defined | CI methodology adopted (Lean, Six Sigma); regular improvement cycles | CI methodology deployed; improvement backlog; retrospectives |
| 4 | Managed | Improvement embedded in daily operations; cross-functional teams; ROI tracked | Daily standups include improvements; Kaizen events; ROI measured |
| 5 | Optimized | AI identifies improvement opportunities; self-optimizing processes | Process mining; A/B testing variants; learning organization |
Red flags: Last improvement was 12+ months ago; no one owns CI; frontline feedback ignored. [src6]
Quick diagnostic question: "When was the last process improvement, who owns continuous improvement, and how do employees submit ideas?"
Overall Score = (Cycle Time + Error Rates + Automation Coverage + Capacity Utilization + CI Culture) / 5
| Overall Score | Maturity Level | Interpretation | Recommended Next Step |
|---|---|---|---|
| 1.0 - 1.9 | Critical | Operations reactive and unmanaged; high waste and no visibility | Establish baseline metrics; implement basic workflow and error tracking |
| 2.0 - 2.9 | Developing | Some discipline but inconsistent; 15-25% capacity lost to waste | Standardize core processes; deploy tracking; begin error reduction |
| 3.0 - 3.9 | Competent | Defined processes and metrics; shift to optimization | Benchmark against peers; deploy automation; establish CI program |
| 4.0 - 4.5 | Advanced | Operations as competitive advantage | Pursue hyperautomation; build predictive capabilities |
| 4.6 - 5.0 | Best-in-class | World-class with autonomous optimization | Maintain through innovation; contribute to industry benchmarks |
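A minimal sketch of the composite calculation and the maturity-band lookup above, assuming the five dimension scores have already been assessed as integers from 1 to 5 (example values only):

```python
# Example dimension scores from the rubric tables above.
scores = {
    "cycle_time": 3,
    "error_rates": 2,
    "automation": 2,
    "capacity": 3,
    "ci_culture": 2,
}

# Maturity bands from the interpretation table: (lower bound, label).
BANDS = [
    (4.6, "Best-in-class"),
    (4.0, "Advanced"),
    (3.0, "Competent"),
    (2.0, "Developing"),
    (1.0, "Critical"),
]

overall = sum(scores.values()) / len(scores)
maturity = next(label for floor, label in BANDS if overall >= floor)

print(f"Overall score: {overall:.1f} -> {maturity}")  # 2.4 -> Developing
```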
| Weak Dimension (Score < 3) | Fetch This Card |
|---|---|
| Process Cycle Time | Process Optimization Playbook |
| Error Rates and Quality | Quality Management Implementation Guide |
| Automation Coverage | Automation Strategy Playbook |
| Capacity Utilization | Capacity Planning Framework |
| Continuous Improvement | CI Culture Building Playbook |
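Routing follows directly from the table above: any dimension scoring below 3 fetches its card. A sketch, reusing the example scores from the previous snippet:

```python
# Example dimension scores (same illustrative values as above).
scores = {
    "Process Cycle Time": 3,
    "Error Rates and Quality": 2,
    "Automation Coverage": 2,
    "Capacity Utilization": 3,
    "Continuous Improvement": 2,
}

# Routing table: weak dimension -> card to fetch.
CARDS = {
    "Process Cycle Time": "Process Optimization Playbook",
    "Error Rates and Quality": "Quality Management Implementation Guide",
    "Automation Coverage": "Automation Strategy Playbook",
    "Capacity Utilization": "Capacity Planning Framework",
    "Continuous Improvement": "CI Culture Building Playbook",
}

for dimension, score in scores.items():
    if score < 3:
        print(f"{dimension} scored {score} -> fetch '{CARDS[dimension]}'")
```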
| Segment | Expected Average Score | "Good" Threshold | "Alarm" Threshold |
|---|---|---|---|
| Startup/SMB (<50 employees) | 1.7 | 2.3 | 1.0 |
| Mid-market (50-500 employees) | 2.5 | 3.2 | 1.8 |
| Large enterprise (500-5000) | 3.2 | 3.8 | 2.4 |
| Global enterprise (5000+) | 3.7 | 4.2 | 2.8 |
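Benchmark interpretation is a segment-specific threshold check. A sketch, using the thresholds from the table above and the example composite score computed earlier:

```python
# Segment benchmarks from the table above: (expected average, good, alarm).
BENCHMARKS = {
    "startup_smb":       (1.7, 2.3, 1.0),
    "mid_market":        (2.5, 3.2, 1.8),
    "large_enterprise":  (3.2, 3.8, 2.4),
    "global_enterprise": (3.7, 4.2, 2.8),
}

def interpret(overall_score, segment):
    expected, good, alarm = BENCHMARKS[segment]
    if overall_score <= alarm:
        return "alarm: well below segment norms"
    if overall_score >= good:
        return "good: above segment norms"
    return f"unremarkable: near the segment average of {expected}"

print(interpret(2.4, "mid_market"))  # the 2.4 composite from the earlier example
```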
Fetch when a user asks to evaluate operational efficiency, diagnose rising costs despite stable revenue, prepare for scaling, or benchmark against peers. Also relevant when a new COO needs to baseline current state.