This assessment evaluates the maturity of an organization's API strategy across six dimensions: API design quality, documentation and developer experience, governance and versioning, security and authentication, monetization readiness, and analytics and observability. Most companies plateau at levels 2-3 without a deliberate maturity improvement strategy. [src1]
What this measures: Consistency, usability, and standards compliance of the API surface, including naming, error handling, and resource modeling.
| Score | Level | Description | Evidence |
|---|---|---|---|
| 1 | Ad hoc | No design standards; inconsistent endpoints; no schema validation | Different naming per endpoint; no shared error schema; no style guide |
| 2 | Emerging | Basic conventions documented but inconsistently applied; some OpenAPI specs | Style guide exists but not enforced; error formats vary; no CI linting |
| 3 | Defined | Design linting in CI/CD; all APIs have OpenAPI 3.x; consistent errors (RFC 7807); standardized pagination | Spectral linter in pipeline; spec-first design; versioning strategy documented |
| 4 | Managed | Design-first with mocks; API design review board; contract testing; hypermedia controls | Design reviews before code; mock servers; contract tests in CI; HATEOAS |
| 5 | Optimized | APIs designed as products with user research; automated quality scoring; composable APIs | Design quality score tracked; APIs compose into products; event-driven patterns standardized |
Red flags: No OpenAPI specs; different error formats; verb-based URLs; no pagination; breaking changes without notice. [src6]
Quick diagnostic question: "Do you have a documented API style guide enforced through automated linting in CI/CD?"
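The shared error schema called out at level 3 can be sketched with RFC 7807 problem details; a minimal example, where the `type` URI, endpoint path, and helper function are illustrative, not prescribed by the RFC:

```python
import json

def problem_details(type_uri, title, status, detail, instance):
    """Build an RFC 7807 "application/problem+json" error body.

    All five members are defined by the RFC; servers may add
    extension members (e.g. a trace ID) alongside them.
    """
    return {
        "type": type_uri,
        "title": title,
        "status": status,
        "detail": detail,
        "instance": instance,
    }

# One error shape for every endpoint, instead of ad hoc
# per-endpoint formats (a level-1 red flag).
body = problem_details(
    type_uri="https://api.example.com/errors/quota-exceeded",
    title="Quota exceeded",
    status=429,
    detail="Monthly call quota of 10,000 requests has been used.",
    instance="/v1/orders/12345",
)
print(json.dumps(body, indent=2))
```

A Spectral ruleset in CI can then assert that every documented error response references this schema.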
What this measures: Quality of API documentation, developer onboarding, and tools (SDKs, sandboxes, samples) enabling successful integration.
| Score | Level | Description | Evidence |
|---|---|---|---|
| 1 | Ad hoc | No docs or auto-generated stubs only; no portal; no SDKs; onboarding requires support | Auto-generated Swagger with no descriptions; no getting-started guide; no samples |
| 2 | Emerging | Basic API reference; developer portal with auth setup; some code samples; onboarding >1 day | Reference docs lack context; few samples in one language; no sandbox |
| 3 | Defined | Complete reference with guides; interactive explorer; SDKs in 2-3 languages; sandbox; time-to-first-call (TTFC) <30 min | Contextual guides; try-it console; SDKs maintained; sandbox with test data; changelog |
| 4 | Managed | Portal with analytics; versioned docs; auto-generated SDKs in 5+ languages; developer NPS tracked | Portal analytics dashboard; auto-generated SDKs; dedicated DX team; error message quality audited |
| 5 | Optimized | AI-assisted DX; personalized onboarding; self-sustaining developer community; TTFC <5 min | AI-powered search and code generation; active community; DX metrics in product OKRs |
Red flags: No getting-started guide; docs only after signup; no code samples; abandoned SDKs; email-only support with multi-day response. [src3]
Quick diagnostic question: "How long does it take a new developer to make their first successful API call, and do you have a sandbox?"
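The TTFC targets in the rubric can be measured from developer-portal event logs; a minimal sketch, assuming events carry ISO 8601 timestamps and hypothetical `event`/`status` field names:

```python
from datetime import datetime

def time_to_first_call(events):
    """Return TTFC in minutes: elapsed time from developer signup to
    the first successful (2xx) API call, or None if either is missing."""
    signup = first_ok = None
    for e in sorted(events, key=lambda e: e["ts"]):
        t = datetime.fromisoformat(e["ts"])
        if e["event"] == "signup" and signup is None:
            signup = t
        elif (e["event"] == "api_call" and 200 <= e["status"] < 300
              and first_ok is None):
            first_ok = t
    if signup is None or first_ok is None:
        return None
    return (first_ok - signup).total_seconds() / 60

events = [
    {"ts": "2024-01-10T09:00:00", "event": "signup"},
    {"ts": "2024-01-10T09:12:00", "event": "api_call", "status": 401},
    {"ts": "2024-01-10T09:25:00", "event": "api_call", "status": 200},
]
print(time_to_first_call(events))  # 25.0 -- within the level-3 target of <30 min
```

Tracking the failed 401 before the first 200 is itself a DX signal: auth setup is where onboarding usually stalls.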
What this measures: API lifecycle management from design through deprecation, including versioning, change management, and cross-team coordination.
| Score | Level | Description | Evidence |
|---|---|---|---|
| 1 | Ad hoc | No versioning; breaking changes without warning; no API catalog; zombie APIs | No version in URL or headers; no central registry; APIs without owners |
| 2 | Emerging | URL-based versioning (v1, v2); basic changelog; some deprecation notices; partial catalog | Version in URL path; irregular changelog; catalog lists some APIs |
| 3 | Defined | Formal versioning policy; deprecation with 6+ month notice; complete catalog with owners; breaking change review | Versioning policy followed; deprecation policy documented; API catalog with lifecycle status |
| 4 | Managed | Automated lifecycle management; CI/CD gates for breaking changes; consumer impact analysis; governance board | Automated lifecycle tracking; consumer analysis before deprecation; governance board meets regularly |
| 5 | Optimized | Additive-only evolution; automated consumer migration; API roadmap published; executive-level strategy | Zero-downtime evolution; automated migration tools; governance in platform engineering |
Red flags: No versioning; breaking changes without notice; no deprecation policy; 20%+ APIs without owner; no API catalog. [src4]
Quick diagnostic question: "What is your API versioning strategy and minimum deprecation notice period for breaking changes?"
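The level-3 deprecation policy (6+ months notice) can be enforced as a CI gate; a minimal sketch, where the 180-day constant is one illustrative reading of "six months":

```python
from datetime import date

MIN_NOTICE_DAYS = 180  # level-3 policy: 6+ months between announcement and sunset

def deprecation_ok(announced: date, sunset: date) -> bool:
    """True if the gap between the deprecation announcement and the
    sunset date meets the minimum notice period."""
    return (sunset - announced).days >= MIN_NOTICE_DAYS

print(deprecation_ok(date(2024, 1, 1), date(2024, 8, 1)))  # True  (213 days)
print(deprecation_ok(date(2024, 1, 1), date(2024, 4, 1)))  # False (91 days)
```

The sunset date itself can be advertised to consumers via the HTTP `Sunset` header (RFC 8594), which pairs naturally with deprecation notices in the changelog.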
What this measures: Robustness of authentication, authorization, rate limiting, input validation, and threat detection for APIs.
| Score | Level | Description | Evidence |
|---|---|---|---|
| 1 | Ad hoc | API keys only; no rate limiting; no input validation; no security testing; HTTPS not enforced | Static keys with no rotation; SQL injection possible; some endpoints on HTTP |
| 2 | Emerging | OAuth 2.0 for some APIs; basic rate limiting; HTTPS everywhere; annual pen test | OAuth inconsistent; global rate limit only; basic input validation |
| 3 | Defined | OAuth 2.0 with scopes for all APIs; per-consumer rate limits; automated security scanning in CI/CD | Granular scopes; per-endpoint rate limits; schema validation; API threat model documented |
| 4 | Managed | Zero-trust; mutual TLS service-to-service; real-time threat detection; automated key rotation | mTLS between services; anomaly detection; automated rotation; SOC 2/ISO 27001 covers APIs |
| 5 | Optimized | AI-powered threat detection; adaptive rate limiting; automated incident response; bug bounty covers APIs | AI detects abuse patterns; adaptive limits; automated response playbook |
Red flags: API keys as sole auth with no rotation; no rate limiting; HTTP endpoints; no API security testing; OWASP API Security Top 10 vulnerabilities present in production. [src2]
Quick diagnostic question: "What authentication do your APIs use, and do you have per-consumer rate limiting with automated key rotation?"
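Per-consumer rate limiting at level 3 is commonly implemented as a token bucket keyed by API key; a minimal in-memory sketch (a production gateway would back this with a shared store such as Redis, and the capacity/rate numbers are illustrative):

```python
import time

class TokenBucket:
    """Per-consumer rate limiter: each consumer gets `capacity` tokens
    that refill at `rate` tokens/second, so limits apply per API key
    rather than one global limit (a level-2 shortcoming)."""

    def __init__(self, capacity, rate):
        self.capacity, self.rate = capacity, rate
        self.buckets = {}  # consumer_id -> (tokens, last_refill_ts)

    def allow(self, consumer_id, now=None):
        now = time.monotonic() if now is None else now
        tokens, last = self.buckets.get(consumer_id, (self.capacity, now))
        tokens = min(self.capacity, tokens + (now - last) * self.rate)
        if tokens >= 1:
            self.buckets[consumer_id] = (tokens - 1, now)
            return True
        self.buckets[consumer_id] = (tokens, now)
        return False

limiter = TokenBucket(capacity=2, rate=1.0)
print(limiter.allow("key-a", now=0.0))  # True
print(limiter.allow("key-a", now=0.0))  # True
print(limiter.allow("key-a", now=0.0))  # False (bucket empty)
print(limiter.allow("key-b", now=0.0))  # True  (separate consumer)
```

Exceeding a limit should return a 429 in the shared error schema, keeping the design and security dimensions consistent.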
What this measures: Preparedness to derive revenue from APIs — including metering, billing, pricing strategy, and value articulation.
| Score | Level | Description | Evidence |
|---|---|---|---|
| 1 | Ad hoc | No monetization consideration; APIs free and unmetered; no per-consumer tracking | No usage metering; APIs as engineering tools not business assets |
| 2 | Emerging | Usage tracked per consumer; free tier but no paid plans; API recognized as revenue potential | Per-consumer dashboards; leadership discussions; cost-per-call estimated |
| 3 | Defined | Pricing model designed; metering captures billable events; ToS for commercial use; tier-based rate limits | Pricing page published; metering captures calls/data/compute; revenue attribution possible |
| 4 | Managed | Self-service signup and billing; usage-based billing automated; pricing A/B tested; partner program | Portal with billing integration; pricing optimized; API revenue as P&L line item |
| 5 | Optimized | API is strategic revenue driver; dynamic pricing; ecosystem revenue exceeds direct; enables partner business models | Material P&L line; value-based pricing; developer ecosystem as moat |
Red flags: No per-consumer metering; no cost-to-serve understanding; no commercial terms; leadership views APIs as cost center. [src5]
Quick diagnostic question: "Do you meter API usage per consumer, and do you have a pricing model — even if currently free?"
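Tier-based metering and billing at level 3 reduces to a usage-to-invoice calculation; a minimal sketch, where the free-tier size and per-1k price are made-up numbers:

```python
import math

def monthly_bill(calls, free_tier=10_000, price_per_1k=0.50):
    """Usage-based bill in dollars: calls beyond the free tier are
    charged per 1,000 calls, with partial blocks rounded up."""
    billable = max(0, calls - free_tier)
    return math.ceil(billable / 1000) * price_per_1k

print(monthly_bill(8_000))   # 0.0 -- within free tier
print(monthly_bill(12_500))  # 1.5 -- 2,500 billable calls -> 3 blocks x $0.50
```

The prerequisite is the per-consumer metering flagged at level 1: without a `consumer_id` on every request record, no pricing model can be billed.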
What this measures: Depth of API monitoring, business analytics, and developer behavior insights.
| Score | Level | Description | Evidence |
|---|---|---|---|
| 1 | Ad hoc | No API-specific monitoring; general app logs only; errors found from consumer complaints | No API metrics; no latency tracking; no usage trends; log aggregation not API-aware |
| 2 | Emerging | Basic health metrics (uptime, error rate, latency); API logging; downtime alerts; monthly reports | Uptime dashboard; p50 latency tracked; monthly request volume report |
| 3 | Defined | Per-endpoint analytics; consumer usage patterns; SLO/SLI defined; degradation alerts before SLA breach | Per-endpoint p95/p99; consumer heatmaps; SLOs documented; error budget tracked |
| 4 | Managed | Business analytics from API data; distributed tracing; API health informs product decisions | Usage correlated with business outcomes; Jaeger/Datadog tracing; consumer health scores |
| 5 | Optimized | AI-driven anomaly detection and root cause; predictive analytics; analytics feeds product strategy | AI detects anomalies; predictive scaling; usage drives roadmap; analytics on portal |
Red flags: No API-specific monitoring; p95/p99 not tracked; no per-consumer visibility; SLAs promised but SLOs not defined; errors from complaints only. [src7]
Quick diagnostic question: "Do you track per-endpoint p95/p99 latency, have SLOs defined, and can you see usage per consumer?"
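The per-endpoint p95/p99 and error-budget figures in the rubric can be computed from raw samples; a minimal sketch using nearest-rank percentiles (the latency values are made up):

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile (p in 0-100) of a list of samples."""
    s = sorted(samples)
    k = max(0, math.ceil(p / 100 * len(s)) - 1)
    return s[k]

def error_budget_remaining(slo, total_requests, failed_requests):
    """Remaining error budget for an availability SLO such as 0.999:
    allowed failures minus observed failures over the window."""
    return (1 - slo) * total_requests - failed_requests

latencies = [12, 15, 18, 22, 25, 30, 45, 60, 120, 480]  # ms, one endpoint
print(percentile(latencies, 50))  # 25
print(percentile(latencies, 95))  # 480 -- the tail latency that averages hide
print(error_budget_remaining(0.999, 1_000_000, 400))  # ~600 requests of budget left
```

The p50/p95 gap above is why the level-2 practice of tracking only p50 is insufficient: the median looks healthy while the tail breaches the SLA.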
Formula: Overall Score = (API Design + Documentation & DX + Governance + Security + Monetization + Analytics) / 6. For API-first companies, weight API Design and Documentation & DX at 1.5x each and divide the weighted sum by 7.
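The formula translates directly to code; the example scores below are illustrative:

```python
def overall_score(design, dx, governance, security, monetization, analytics,
                  api_first=False):
    """Overall maturity score per the formula above; for API-first
    companies, Design and DX are weighted 1.5x and the sum divided by 7."""
    if api_first:
        return (1.5 * design + 1.5 * dx + governance + security
                + monetization + analytics) / 7
    return (design + dx + governance + security + monetization + analytics) / 6

print(round(overall_score(3, 3, 2, 3, 1, 2), 2))                  # 2.33 -> Developing
print(round(overall_score(3, 3, 2, 3, 1, 2, api_first=True), 2))  # 2.43 -> Developing
```

Note the weighted variant rewards strong Design/DX scores: the same inputs score slightly higher when those two dimensions are the company's strengths.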
| Overall Score | Maturity Level | Interpretation | Next Step |
|---|---|---|---|
| 1.0 - 1.9 | Critical | APIs are technical debt — inconsistent, undocumented, security liability | Establish style guide; implement OpenAPI specs; deploy gateway with auth; basic portal |
| 2.0 - 2.9 | Developing | API program exists but engineering-driven not product-driven | Enforce design linting; build docs with guides; formalize versioning; per-consumer analytics |
| 3.0 - 3.9 | Competent | Solid foundation with consistent standards; ready for productization | Launch self-service portal; design pricing; build business analytics; contract testing |
| 4.0 - 4.5 | Advanced | API is a product with measurable business impact; ecosystem growing | Optimize pricing; AI-powered DX; partner ecosystem; publish API roadmap |
| 4.6 - 5.0 | Best-in-class | API is strategic driver and competitive moat; self-sustaining ecosystem | Maintain excellence; innovate with AI-native patterns; expand ecosystem |
| Segment | Expected Average | "Good" Threshold | "Alarm" Threshold |
|---|---|---|---|
| Startup (pre-Series B) | 1.8 | 2.5 | 1.2 |
| Growth (Series B-D) | 2.8 | 3.5 | 2.0 |
| Scale-up (post-IPO / $50M+ ARR) | 3.5 | 4.0 | 2.8 |
| Enterprise (1,000+ employees) | 3.2 | 3.8 | 2.5 |
| API-first company (any stage) | 3.8 | 4.3 | 3.0 |
[src1]
Fetch when a user asks to evaluate their API program, is launching a public or partner API, is considering API monetization, is experiencing developer complaints about integration difficulty, is preparing for a platform strategy shift, or is evaluating API-first acquisition targets during due diligence.