This assessment evaluates the maturity of a company's marketing attribution and measurement capabilities across five dimensions: data infrastructure, model sophistication, cross-channel integration, privacy-readiness, and organizational alignment. The output is a composite maturity score (1-5) that identifies gaps in measurement capability and routes to specific improvement paths. [src1]
What this measures: The completeness and reliability of marketing data capture across channels, touchpoints, and conversion events.
| Score | Level | Description | Evidence |
|---|---|---|---|
| 1 | Ad hoc | No unified tracking; each channel measured in silo; no UTM standards | Different teams report different numbers; no single source of truth |
| 2 | Emerging | Basic UTM tagging; primary ad platforms tracked; gaps in offline tracking | 50-70% of traffic properly tagged; no offline attribution |
| 3 | Defined | Standardized UTM taxonomy enforced; server-side tracking; CRM integrated | 80%+ touchpoint coverage; first-party data strategy in place |
| 4 | Managed | CDP unifies cross-device identity; data warehouse stores raw events | Identity resolution active; data latency under 24 hours |
| 5 | Optimized | Real-time data pipeline; probabilistic identity matching; clean-room partnerships | Sub-hour data freshness; automated data quality monitoring |
Red flags: Marketing and finance report different ROI numbers; no UTM naming convention; conversion tracking not audited in 12+ months. [src2]
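Moving from level 2 to level 3 hinges on an enforced UTM taxonomy. A minimal validation sketch is below; the required parameters and allowed values are illustrative assumptions, not a prescribed standard — substitute your organization's own taxonomy.

```python
from urllib.parse import urlparse, parse_qs

# Illustrative taxonomy: required UTM parameters and, where constrained,
# the set of allowed values. Replace with your own naming convention.
UTM_TAXONOMY = {
    "utm_source": {"google", "linkedin", "newsletter", "partner"},
    "utm_medium": {"cpc", "social", "email", "referral"},
    "utm_campaign": None,  # free-form, but must be present
}

def validate_utm(url: str) -> list[str]:
    """Return a list of taxonomy violations for a landing-page URL."""
    params = parse_qs(urlparse(url).query)
    errors = []
    for key, allowed in UTM_TAXONOMY.items():
        values = params.get(key)
        if not values:
            errors.append(f"missing {key}")
        elif allowed is not None and values[0].lower() not in allowed:
            errors.append(f"{key}={values[0]} not in taxonomy")
    return errors
```

Running a check like this against ad-platform exports or landing-page logs is one way to produce the "80%+ touchpoint coverage" evidence the level-3 row calls for, and to catch untagged or mistagged traffic before it reaches reporting.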
What this measures: The complexity and accuracy of the attribution methodology used to assign credit to touchpoints.
| Score | Level | Description | Evidence |
|---|---|---|---|
| 1 | Ad hoc | No formal model; last-click by default or unmeasured | "Google brought 80% of revenue" based on last-click only |
| 2 | Emerging | Single-touch model used intentionally; aware of limitations | Defined first/last-touch model; some assisted conversion reporting |
| 3 | Defined | Multi-touch attribution implemented (linear, time-decay, or position-based) | At least one MTA model active; can compare model outputs |
| 4 | Managed | Data-driven attribution or algorithmic model; supplemented by MMM | ML model assigns credit; MMM covers brand/offline channels |
| 5 | Optimized | Unified measurement: MTA + MMM + incrementality testing continuously calibrated | Measurement triangle operational; budget decisions backed by incrementality |
Red flags: Team cannot explain attribution credit assignment; 100% reliance on ad platform self-reporting. [src3]
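The level-3 evidence ("can compare model outputs") is easy to demonstrate with the rule-based models named in the table. The sketch below implements linear, position-based, and time-decay credit assignment; the 40/40/20 U-shape weights and 7-day half-life are common defaults, assumed here for illustration.

```python
from collections import defaultdict

def linear(path):
    """Equal credit to every touch in the path."""
    credit = defaultdict(float)
    for channel in path:
        credit[channel] += 1 / len(path)
    return dict(credit)

def position_based(path, first=0.4, last=0.4):
    """U-shaped: 40% to first touch, 40% to last, rest shared by the middle."""
    if len(path) == 1:
        return {path[0]: 1.0}
    credit = defaultdict(float)
    credit[path[0]] += first
    credit[path[-1]] += last
    remainder = 1 - first - last
    middle = path[1:-1]
    if middle:
        for channel in middle:
            credit[channel] += remainder / len(middle)
    else:  # two-touch path: split the remainder between first and last
        credit[path[0]] += remainder / 2
        credit[path[-1]] += remainder / 2
    return dict(credit)

def time_decay(path, days_before_conversion, half_life=7.0):
    """Weight each touch by 2^(-days/half_life), then normalize to sum to 1."""
    weights = [2 ** (-d / half_life) for d in days_before_conversion]
    total = sum(weights)
    credit = defaultdict(float)
    for channel, w in zip(path, weights):
        credit[channel] += w / total
    return dict(credit)
```

Running the same conversion paths through all three models and inspecting where they disagree (typically on upper-funnel channels) is exactly the comparison the level-3 row asks for.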
What this measures: Whether attribution spans the entire journey from awareness through retention, across online and offline.
| Score | Level | Description | Evidence |
|---|---|---|---|
| 1 | Ad hoc | Only direct-response digital channels attributed | Only paid search and social attributed; brand/events unmeasured |
| 2 | Emerging | Major digital channels plus email and organic; brand/events remain dark | Digital funnel tracked but upper-funnel brand impact unmeasured |
| 3 | Defined | Full digital funnel including content and webinars; annual brand studies | Content attributed via UTM+CRM; sales vs marketing pipeline split |
| 4 | Managed | Online and offline integrated; account-based attribution for B2B | ABM platform integrated with CRM; events feed into MMM |
| 5 | Optimized | Full-lifecycle attribution from first touch to expansion revenue | Predictive path analysis; expansion revenue attributed |
Red flags: 40%+ of pipeline marked "unknown source"; events budget has no ROI measurement; no post-sale touchpoint tracking. [src5]
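The 40% unknown-source red flag is straightforward to check against a CRM export. A minimal sketch, assuming hypothetical `amount` and `source` fields on each opportunity record:

```python
def unknown_source_share(opportunities):
    """Share of pipeline value whose source is missing or 'unknown'.

    `opportunities` is a list of dicts with hypothetical keys
    'amount' (pipeline value) and 'source' (None/'unknown' = unattributed).
    """
    total = sum(o["amount"] for o in opportunities)
    unknown = sum(
        o["amount"]
        for o in opportunities
        if not o.get("source") or o["source"].lower() == "unknown"
    )
    return unknown / total if total else 0.0
```

Weighting by pipeline value rather than opportunity count matters: a few large unattributed deals can hide a serious coverage gap in a count-based view.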
What this measures: How well the measurement framework adapts to privacy regulations and cookie deprecation.
| Score | Level | Description | Evidence |
|---|---|---|---|
| 1 | Ad hoc | Fully dependent on third-party cookies and pixels; no first-party data strategy | No CMP; relying on deprecated tracking; iOS ATT impact unknown |
| 2 | Emerging | CMP implemented; aware of cookie deprecation; some first-party data | CMP live but not optimized; first-party data limited to emails |
| 3 | Defined | First-party data strategy partially implemented; server-side tracking live | Consent Mode v2 implemented; enhanced conversions configured |
| 4 | Managed | Privacy-by-design architecture; clean-room partnerships; MMM provides cookie-independent measurement | Measurement not dependent on third-party cookies |
| 5 | Optimized | Fully privacy-compliant; differential privacy techniques; federated learning | Measurement accuracy maintained post-cookie; ahead of regulations |
Red flags: No consent management; iOS opt-in rate not measured; measurement broke after Safari ITP. [src6]
What this measures: Whether attribution insights actually drive budget allocation and how well teams are aligned on measurement.
| Score | Level | Description | Evidence |
|---|---|---|---|
| 1 | Ad hoc | Attribution data does not influence budget; teams use different numbers | Budget set by precedent or gut feel; reports not reviewed |
| 2 | Emerging | Data reviewed monthly; some budget shifts based on performance | Monthly reviews reference attribution; ongoing debate on numbers |
| 3 | Defined | Attribution directly informs quarterly budget; teams agree on KPIs | QBRs use agreed attribution data; ROAS targets set by channel |
| 4 | Managed | Real-time dashboards inform in-flight optimization; 10-15% testing budget | Budget reallocation in days not months; incrementality budget protected |
| 5 | Optimized | Algorithmic budget recommendations; scenario modeling; board-level attribution | AI recommends optimal budget mix; what-if analysis operational |
Red flags: Finance and marketing disagree on CAC; no mid-year budget adjustment; attribution never shared beyond marketing team. [src4]
Formula: Overall Score = (Data Infrastructure + Model Sophistication + Cross-Channel + Privacy-Readiness + Organizational Alignment) / 5
| Overall Score | Maturity Level | Interpretation | Next Step |
|---|---|---|---|
| 1.0 - 1.9 | Critical | Marketing spend decisions essentially uninformed; 30-50% budget waste risk | Implement UTM taxonomy and conversion tracking basics |
| 2.0 - 2.9 | Developing | Single-touch gives directional insight but misattributes multi-touch journeys | Implement MTA; begin first-party data strategy |
| 3.0 - 3.9 | Competent | Solid MTA foundation; ready for incrementality testing and MMM | Add incrementality testing; implement server-side tracking |
| 4.0 - 4.5 | Advanced | Sophisticated MTA + MMM; ready for unified measurement | Calibrate models; implement automated budget optimization |
| 4.6 - 5.0 | Best-in-class | Unified measurement framework with continuous calibration | Maintain accuracy; evaluate emerging privacy-preserving tech |
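The scoring formula and routing table above can be sketched as a single function; the dimension key names are hypothetical shorthand, and the next-step strings are condensed from the table.

```python
DIMENSIONS = ("data_infrastructure", "model_sophistication",
              "cross_channel", "privacy_readiness", "org_alignment")

# (lower bound, maturity level, next step) from the interpretation table
BANDS = [
    (4.6, "Best-in-class", "Maintain accuracy; evaluate privacy-preserving tech"),
    (4.0, "Advanced", "Calibrate models; automate budget optimization"),
    (3.0, "Competent", "Add incrementality testing; server-side tracking"),
    (2.0, "Developing", "Implement MTA; begin first-party data strategy"),
    (1.0, "Critical", "Implement UTM taxonomy and tracking basics"),
]

def maturity(scores: dict) -> tuple[float, str, str]:
    """Average the five dimension scores (1-5) and map to a maturity band."""
    missing = [d for d in DIMENSIONS if d not in scores]
    if missing:
        raise ValueError(f"missing dimension scores: {missing}")
    overall = sum(scores[d] for d in DIMENSIONS) / len(DIMENSIONS)
    for lower, level, next_step in BANDS:
        if overall >= lower:
            return round(overall, 1), level, next_step
    return round(overall, 1), "Critical", BANDS[-1][2]
```

Note the equal weighting: a team at level 4 on modeling but level 1 on privacy-readiness averages to "Competent", which is why the per-dimension scores, not just the composite, should drive the improvement path.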
| Segment | Expected Average | "Good" Threshold | "Alarm" Threshold |
|---|---|---|---|
| Seed/Series A (<$2M ARR) | 1.6 | 2.2 | 1.0 |
| Series B ($2M-$20M ARR) | 2.4 | 3.0 | 1.8 |
| Growth ($20M-$100M ARR) | 3.2 | 3.8 | 2.5 |
| Scale/Public ($100M+ ARR) | 3.8 | 4.3 | 3.0 |
[src3]
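The segment benchmarks above can be applied mechanically once the composite score is known. A sketch, with assumed shorthand keys for the four ARR segments:

```python
# ARR-segment benchmarks from the table above: (expected, good, alarm)
BENCHMARKS = {
    "seed_series_a": (1.6, 2.2, 1.0),   # <$2M ARR
    "series_b": (2.4, 3.0, 1.8),        # $2M-$20M ARR
    "growth": (3.2, 3.8, 2.5),          # $20M-$100M ARR
    "scale_public": (3.8, 4.3, 3.0),    # $100M+ ARR
}

def benchmark(segment: str, overall: float) -> str:
    """Classify an overall maturity score against its ARR segment."""
    expected, good, alarm = BENCHMARKS[segment]
    if overall >= good:
        return "ahead of segment"
    if overall <= alarm:
        return "alarm: well below segment"
    return "above expected" if overall >= expected else "below expected"
```

The segment context matters for the improvement path: a 2.4 is on-benchmark for Series B but an alarm for a scale-stage company, so the same composite score routes to very different urgency.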
Fetch when a user asks which attribution model to use, wants to evaluate marketing measurement capabilities, is preparing for cookie deprecation, cannot reconcile ROI numbers across channels, or needs to justify marketing spend to finance or the board.