This assessment evaluates the health of a company's email marketing program across five critical dimensions: deliverability and authentication, list hygiene and growth, engagement and performance, automation and lifecycle, and compliance and privacy. The output is a composite health score (1-5) that pinpoints where the email program is leaking value. [src1]
### Dimension 1: Deliverability & Authentication

What this measures: Whether emails reach the inbox and whether sending infrastructure is properly authenticated.
| Score | Level | Description | Evidence |
|---|---|---|---|
| 1 | Ad hoc | No authentication; high bounce rates (>5%); emails land in spam | No SPF/DKIM/DMARC; bounce rate unknown; no seed testing |
| 2 | Emerging | SPF/DKIM configured; DMARC missing or p=none; bounce rate 2-5% | Basic auth but not enforced; occasional spam reports |
| 3 | Defined | SPF/DKIM/DMARC (quarantine+); deliverability monitored; bounce <2% | DMARC at p=quarantine; inbox placement monitored monthly |
| 4 | Managed | BIMI implemented; DMARC p=reject; real-time ISP monitoring; bounce <0.5% | BIMI logo active; real-time ISP dashboards; proactive IP warming |
| 5 | Optimized | Automated deliverability optimization; 98%+ inbox placement | AI-driven ISP throttling; automated IP rotation; near-perfect placement |
Red flags: No DMARC or DMARC p=none; bounce rate above 3%; no inbox placement monitoring. [src1]
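The scoring table above can be approximated mechanically. A minimal sketch, assuming the thresholds in the table (the function name and input shape are hypothetical, and `bounce_rate` / `inbox_placement` are fractions, e.g. `0.02` for 2%):

```python
def deliverability_score(spf, dkim, dmarc_policy=None,
                         bounce_rate=1.0, inbox_placement=None):
    """Map authentication evidence and bounce rate to the 1-5 rubric.

    dmarc_policy: None, "none", "quarantine", or "reject".
    """
    # Level 1: no/partial authentication, or bounce rate above 5%
    if not (spf and dkim) or bounce_rate > 0.05:
        return 1
    # Level 2: DMARC missing or unenforced (p=none), or bounce 2-5%
    if dmarc_policy in (None, "none") or bounce_rate > 0.02:
        return 2
    # Level 3: DMARC at quarantine, or bounce not yet under 0.5%
    if dmarc_policy == "quarantine" or bounce_rate >= 0.005:
        return 3
    # p=reject and bounce < 0.5%: Level 5 only with 98%+ inbox placement
    if inbox_placement is not None and inbox_placement >= 0.98:
        return 5
    return 4
```

Note that Levels 4-5 also require qualitative evidence (BIMI, real-time ISP monitoring, automated optimization) that a threshold check cannot capture, so treat this as a floor, not a verdict.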
### Dimension 2: List Hygiene & Growth

What this measures: The health, growth rate, and maintenance practices of the subscriber list.
| Score | Level | Description | Evidence |
|---|---|---|---|
| 1 | Ad hoc | List never cleaned; purchased lists used; no opt-in verification | Hard bounces accumulate; addresses from 3+ years ago with no engagement |
| 2 | Emerging | Hard bounces removed after sends; single opt-in only; no proactive cleaning | Reactive cleaning; inactive subscribers identified but not acted upon |
| 3 | Defined | Quarterly hygiene; double opt-in; sunset policy for 90-180 day inactives | Verification service used; inactive subscribers suppressed; organic growth positive |
| 4 | Managed | Real-time validation at capture; automated sunset flows; engagement segmentation | Every signup validated; 90-day sunset workflow; frequency optimization active |
| 5 | Optimized | Predictive churn scoring; growth exceeds churn 2x+; zero purchased lists | ML predicts disengagement 30 days ahead; preference centers optimize frequency |
Red flags: Any list purchases in past 12 months; hard bounce rate above 2%; list not cleaned in 6+ months. [src4]
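The Level 3 "sunset policy" above is straightforward to implement: flag any subscriber whose last engagement falls outside the inactivity window. A minimal sketch (function name and input shape are illustrative assumptions):

```python
from datetime import datetime, timedelta

def sunset_candidates(subscribers, today, inactive_days=90):
    """Return addresses whose last open/click is older than the sunset
    window, so they can be suppressed or routed to a win-back flow.

    subscribers: iterable of (email, last_engaged_datetime) pairs.
    inactive_days: sunset window; the rubric suggests 90-180 days.
    """
    cutoff = today - timedelta(days=inactive_days)
    return [email for email, last_engaged in subscribers
            if last_engaged < cutoff]
```

In practice this feeds the Level 4 "automated sunset flows": candidates get a win-back sequence first, then suppression if they stay inactive.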
### Dimension 3: Engagement & Performance

What this measures: How well email content resonates, measured through click rates, conversions, and revenue.
| Score | Level | Description | Evidence |
|---|---|---|---|
| 1 | Ad hoc | No tracking beyond opens; click rates <1%; no revenue attribution | Same content to entire list; no testing; unsubscribe >1% per send |
| 2 | Emerging | Click rates 1-3%; basic segmentation; occasional A/B testing | Sporadic testing; conversion tracking incomplete |
| 3 | Defined | Click rates 3-6%; systematic A/B testing; 5-10 behavioral segments | Regular testing cadence; email revenue measured monthly; unsub <0.3% |
| 4 | Managed | Click rates 5-8%; dynamic content personalization; AI send-time optimization | Personalized recommendations; email contributes measurable revenue |
| 5 | Optimized | Click rates 6%+ campaigns, 10%+ flows; 1:1 personalization; 20-30% of digital revenue | AI content variants; real-time personalization; email is primary revenue channel |
Red flags: Click rate below 1%; unsubscribe above 0.5% per send; no A/B testing in 6+ months; no revenue attribution. [src2]
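The four red flags above are all checkable from campaign metrics. A minimal sketch, assuming the thresholds stated (rates are fractions; the function name is hypothetical):

```python
def engagement_red_flags(clicks, unsubscribes, delivered,
                         days_since_last_ab_test, has_revenue_attribution):
    """Return the engagement red flags triggered by a send's metrics."""
    flags = []
    if delivered > 0:
        if clicks / delivered < 0.01:          # click rate below 1%
            flags.append("click rate below 1%")
        if unsubscribes / delivered > 0.005:   # unsub above 0.5% per send
            flags.append("unsubscribe above 0.5% per send")
    if days_since_last_ab_test > 180:          # no testing in 6+ months
        flags.append("no A/B testing in 6+ months")
    if not has_revenue_attribution:
        flags.append("no revenue attribution")
    return flags
```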
### Dimension 4: Automation & Lifecycle

What this measures: Sophistication of automated email workflows responding to subscriber behavior and lifecycle stages.
| Score | Level | Description | Evidence |
|---|---|---|---|
| 1 | Ad hoc | No automated emails; all manual campaigns; no welcome series | Every email manually sent; no cart abandonment; no triggers |
| 2 | Emerging | Basic welcome email; transactional emails; 1-2 flows total | Simple welcome; order confirmation; automation <10% of sends |
| 3 | Defined | Welcome series; cart abandonment; win-back; post-purchase; 20-30% of email revenue from flows | 4-6 lifecycle flows; triggered emails outperform campaigns |
| 4 | Managed | 10+ flows; browse abandonment, re-engagement, cross-sell; 30-50% of revenue from flows | Comprehensive flow library; A/B tested quarterly; conditional logic |
| 5 | Optimized | AI journey orchestration; predictive triggers; 40-60% of revenue from flows | AI determines next email per subscriber; flows self-optimizing |
Red flags: No welcome email; no cart abandonment flow; 100% of revenue from manual campaigns; flows not updated in 12+ months. [src3]
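Two of the automation rubric's inputs are directly measurable: flow count and the share of email revenue from flows. A minimal sketch using only those (the function is an assumption; it deliberately tops out at 4, since Level 5 requires AI orchestration that these inputs cannot capture):

```python
def automation_score(n_flows, flow_revenue_share):
    """Approximate the automation dimension from measurable inputs.

    flow_revenue_share: fraction of email revenue from triggered flows.
    """
    if n_flows == 0:                                 # no automation at all
        return 1
    if n_flows <= 2 or flow_revenue_share < 0.20:    # 1-2 basic flows
        return 2
    if n_flows < 10 or flow_revenue_share < 0.30:    # core lifecycle flows
        return 3
    return 4                                         # 10+ flows, 30%+ revenue
```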
### Dimension 5: Compliance & Privacy

What this measures: Adherence to global email regulations and respect for subscriber privacy preferences.
| Score | Level | Description | Evidence |
|---|---|---|---|
| 1 | Ad hoc | No compliance awareness; broken unsubscribe; no consent records | Missing unsubscribe links; CAN-SPAM address missing; sending to purchased lists |
| 2 | Emerging | Basic CAN-SPAM compliance; GDPR/CASL gaps; consent records incomplete | Unsubscribe works but slow (>10 days); no preference center |
| 3 | Defined | CAN-SPAM, GDPR, CASL compliant; consent records with timestamps; one-click unsub | Consent tracked per subscriber; unsubscribe within 24 hours; preference center live |
| 4 | Managed | Automated compliance monitoring; jurisdiction-aware rules; regular audits | Automated prevention of sending to non-consented; quarterly audit |
| 5 | Optimized | Privacy-first strategy; real-time consent management; zero-party data collection | Cross-channel consent sync; ahead of regulations; privacy as differentiator |
Red flags: Missing or hidden unsubscribe; no GDPR consent records; unsubscribe takes >10 days; purchased lists used. [src5]
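The Level 4 control "automated prevention of sending to non-consented" reduces to a gate on the send list: only addresses with a recorded consent timestamp pass. A minimal sketch (names and data shape are illustrative assumptions):

```python
def sendable(recipients, consent_log):
    """Filter a send list down to addresses with recorded consent.

    consent_log: dict mapping email -> consent timestamp string,
    or None if consent was never captured (GDPR requires the record).
    """
    return [email for email in recipients if consent_log.get(email)]
```

Running every campaign through a filter like this, rather than trusting the segment builder, is what separates Level 4 from Level 3.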
### Composite Scoring

Formula: Overall Score = (Deliverability + List Hygiene + Engagement + Automation + Compliance) / 5
| Overall Score | Maturity Level | Interpretation | Next Step |
|---|---|---|---|
| 1.0 - 1.9 | Critical | Email program damaging sender reputation; regulatory penalties possible | Fix authentication, clean list, implement unsubscribe; stop sending until stable |
| 2.0 - 2.9 | Developing | Basic ops functional but significant value leakage | Quarterly list hygiene, welcome series, testing cadence, GDPR compliance |
| 3.0 - 3.9 | Competent | Healthy foundations; ready for automation and personalization optimization | Scale to 6-10 flows, behavioral segmentation, revenue attribution |
| 4.0 - 4.5 | Advanced | High-performing program generating measurable revenue | AI personalization, predictive send timing, advanced lifecycle automation |
| 4.6 - 5.0 | Best-in-class | Email is primary revenue driver with best-in-class deliverability | AI content generation, cross-channel orchestration, predictive churn prevention |
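The formula and interpretation bands above can be sketched directly (function names are illustrative; the band boundaries come from the table):

```python
def overall_score(deliverability, list_hygiene, engagement,
                  automation, compliance):
    """Unweighted mean of the five dimension scores (each 1-5)."""
    return (deliverability + list_hygiene + engagement
            + automation + compliance) / 5

def maturity_level(score):
    """Map a composite score to its interpretation band."""
    if score < 2.0:
        return "Critical"
    if score < 3.0:
        return "Developing"
    if score < 4.0:
        return "Competent"
    if score <= 4.5:
        return "Advanced"
    return "Best-in-class"
```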
### Segment Benchmarks

| Segment | Expected Average | "Good" Threshold | "Alarm" Threshold |
|---|---|---|---|
| Startup (<$5M ARR) | 2.0 | 2.8 | 1.3 |
| Growth ($5M-$50M ARR) | 2.8 | 3.5 | 2.0 |
| Scale ($50M-$200M ARR) | 3.4 | 4.0 | 2.5 |
| Enterprise ($200M+ ARR) | 3.8 | 4.3 | 3.0 |
[src6]
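Interpreting a composite score against the segment benchmarks above is a simple lookup and comparison. A minimal sketch (the dict and verdict labels are assumptions; thresholds come from the table):

```python
# (expected average, "good" threshold, "alarm" threshold) per segment
BENCHMARKS = {
    "startup":    (2.0, 2.8, 1.3),
    "growth":     (2.8, 3.5, 2.0),
    "scale":      (3.4, 4.0, 2.5),
    "enterprise": (3.8, 4.3, 3.0),
}

def benchmark_verdict(segment, score):
    """Compare a composite score to its segment's benchmark bands."""
    expected, good, alarm = BENCHMARKS[segment]
    if score <= alarm:
        return "alarm"
    if score >= good:
        return "good"
    return "above expected" if score >= expected else "below expected"
```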
Fetch when a user reports declining email performance, suspects deliverability issues, is preparing for an ESP migration, needs to audit email compliance, or wants a baseline assessment of email marketing program health.