Product metrics benchmarks provide empirical reference points for evaluating product health across four dimensions: engagement (DAU/MAU), activation, feature adoption, and satisfaction (NPS/CSAT). These benchmarks are derived from Q3-Q4 2025 data spanning 2,000+ products across B2B SaaS, B2C, mobile, and marketplace segments. The most significant shift is the emergence of AI-native products as a distinct segment with substantially higher activation rates but lower long-term retention, creating a two-tier benchmark landscape. [src1, src3]
Data vintage: Based on Q3-Q4 2025 data from analytics platforms and industry surveys covering 2,000+ products globally.
Key shift: AI-native products now show a 55% median activation rate, pulling the overall SaaS median up from 30% to 36%. Non-AI products should benchmark against pre-AI medians.
Definition: Daily Active Users divided by Monthly Active Users, expressed as a percentage. Measures what fraction of your monthly user base engages with the product on any given day. "Active" must be defined consistently -- typically a meaningful action (not just login) within a 24-hour window. [src2]
| Segment | Median | 25th Pctl | 75th Pctl | Top Decile |
|---|---|---|---|---|
| B2B SaaS (all) | 13% | 8% | 20% | 32% |
| B2B SaaS (mission-critical) | 25% | 15% | 35% | 50% |
| B2C SaaS / Utility | 18% | 10% | 30% | 45% |
| Social / Content apps | 40% | 25% | 55% | 65% |
| Gaming | 30% | 20% | 40% | 55% |
| Marketplace / E-commerce | 10% | 6% | 15% | 25% |
| Fintech | 15% | 8% | 22% | 35% |
Trend: B2B SaaS DAU/MAU stable at 13% median; AI-native collaboration tools pushing the mission-critical segment up from 20% to 25%. [src1, src2]
Red flag threshold: DAU/MAU below 8% for B2B SaaS indicates the product is not part of daily workflows. Below 5% signals a retention crisis.
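The arithmetic can be sketched as below; the helper names and example figures are illustrative, with the classification thresholds taken from the B2B SaaS red-flag note above:

```python
def dau_mau(daily_active: int, monthly_active: int) -> float:
    """DAU/MAU as a percentage; use one consistent definition of 'active' for both."""
    if monthly_active == 0:
        return 0.0
    return 100.0 * daily_active / monthly_active

def b2b_dau_mau_health(ratio_pct: float) -> str:
    """Classify a B2B SaaS DAU/MAU ratio against the thresholds above."""
    if ratio_pct < 5:
        return "retention crisis"
    if ratio_pct < 8:
        return "red flag"
    if ratio_pct >= 20:
        return "daily-workflow product"
    return "typical"

# Hypothetical product: 1,300 daily actives out of 10,000 monthly actives.
ratio = dau_mau(1300, 10000)  # 13.0 -- exactly the B2B SaaS median
```

In practice `daily_active` is averaged over the month rather than taken from a single day, to smooth out weekday/weekend swings.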
Definition: Average number of distinct days per week a user opens and interacts with the product. Complements DAU/MAU by showing usage cadence. [src1]
| Segment | Median | 25th Pctl | 75th Pctl | Top Decile |
|---|---|---|---|---|
| B2B SaaS (daily-use tools) | 4.2 days | 3.0 | 4.8 | 5.5 |
| B2B SaaS (weekly-use tools) | 2.1 days | 1.5 | 3.0 | 4.0 |
| B2C / Consumer apps | 3.5 days | 2.0 | 5.0 | 6.2 |
| Social / Messaging | 5.5 days | 4.0 | 6.5 | 7.0 |
Trend: Session frequency up 8% YoY for products with AI-assisted features. [src1]
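Computing this from raw event data means counting distinct active days per user, then averaging; a minimal sketch (function name and sample events are hypothetical):

```python
from collections import defaultdict
from datetime import date

def avg_active_days_per_week(events: list[tuple[str, date]], weeks: int) -> float:
    """Mean distinct active days per user per week over an observation window."""
    days_by_user: dict[str, set[date]] = defaultdict(set)
    for user, day in events:
        days_by_user[user].add(day)  # set dedupes multiple sessions per day
    if not days_by_user or weeks == 0:
        return 0.0
    per_user = [len(days) / weeks for days in days_by_user.values()]
    return sum(per_user) / len(per_user)

# One-week window: user "a" active on 4 distinct days, user "b" on 2.
events = [("a", date(2025, 11, d)) for d in (3, 4, 5, 6)]
events += [("b", date(2025, 11, d)) for d in (3, 6)]
```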
Definition: Percentage of new signups who complete a predefined activation milestone (first value moment) within a set window (typically 7-14 days). The milestone should represent experiencing core product value, not merely logging in. [src3]
| Segment | Median | 25th Pctl | 75th Pctl | Top Decile |
|---|---|---|---|---|
| B2B SaaS (overall) | 36% | 22% | 50% | 65% |
| AI / ML products | 55% | 40% | 65% | 78% |
| CRM / Sales tools | 43% | 30% | 55% | 68% |
| DevTools / Infrastructure | 38% | 25% | 48% | 62% |
| HR / People software | 31% | 20% | 42% | 55% |
| Fintech / Insurance | 15% | 5% | 25% | 40% |
| PLG SaaS (self-serve) | 40% | 28% | 52% | 68% |
Trend: AI-native products lead at 55% median activation. Overall SaaS median rose from 30% to 36% driven by AI-assisted onboarding. Fintech lags at 15% due to KYC friction. [src3]
Red flag threshold: Below 20% activation for B2B SaaS indicates a broken onboarding flow. Below 10% is a critical failure.
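A sketch of the calculation, assuming per-user signup and first-value-moment timestamps are available (the helper and cohort data are illustrative):

```python
from datetime import datetime, timedelta

def activation_rate(signups: dict, milestones: dict, window_days: int = 14) -> float:
    """% of new signups whose first value moment falls within the window.

    signups: user -> signup timestamp; milestones: user -> first value moment.
    Users with no milestone, or one outside the window, count as not activated.
    """
    if not signups:
        return 0.0
    activated = sum(
        1 for user, signed_up in signups.items()
        if user in milestones
        and milestones[user] - signed_up <= timedelta(days=window_days)
    )
    return 100.0 * activated / len(signups)

# Hypothetical cohort: one activates in-window, one too late, one never.
signups = {u: datetime(2025, 11, 1) for u in ("a", "b", "c")}
milestones = {"a": datetime(2025, 11, 3), "b": datetime(2025, 11, 25)}
```

The denominator is all signups in the cohort, not just users who returned, which is what keeps the metric honest about onboarding drop-off.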
Definition: Elapsed time from first signup to completing first value-generating action. Shorter TTV correlates with higher activation and lower early churn. [src3]
| Segment | Median | Healthy Range | Alarm Threshold |
|---|---|---|---|
| PLG / Self-serve SaaS | 1 day, 2 hours | < 3 days | > 7 days |
| Sales-assisted B2B SaaS | 5 days | < 14 days | > 30 days |
| AI-native tools | 15 minutes | < 1 hour | > 1 day |
| Enterprise (complex onboarding) | 14 days | < 30 days | > 60 days |
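Because TTV distributions are heavily right-skewed, the median (not the mean) is the figure to benchmark; a small sketch with hypothetical cohort values:

```python
import statistics
from datetime import timedelta

def median_ttv(ttvs: list) -> timedelta:
    """Median elapsed time from signup to first value-generating action."""
    return statistics.median(ttvs)

# Hypothetical cohort mixing fast and slow onboarding paths; one slow
# outlier barely moves the median, unlike the mean.
cohort = [timedelta(minutes=15), timedelta(hours=26), timedelta(days=5)]
```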
Definition: Percentage of active users who use a specific feature within a given period (typically 30 days). Calculated as: (users who used feature / total active users) × 100. [src4]
| Segment | Median | 25th Pctl | 75th Pctl | Top Decile |
|---|---|---|---|---|
| Core features (entire base) | 28% | 18% | 40% | 55% |
| Major releases (first 90 days) | 24% | 12% | 36% | 48% |
| Minor features (all users) | 6.4% | 3% | 12% | 22% |
| Revenue $5M-$10M | 30% | 20% | 42% | 55% |
| Revenue $1M-$5M | 22% | 14% | 32% | 45% |
| Revenue < $1M | 15% | 8% | 24% | 35% |
Trend: Median minor feature adoption at 6.4% -- the median minor feature is used by only ~6 in 100 active users. In-app feature announcements see 2-3x higher adoption than email/changelog. [src4]
Red flag threshold: Core feature adoption below 20% suggests discoverability or usability problems.
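The formula above translates directly into a set intersection; a minimal sketch (helper name and example users are illustrative):

```python
def feature_adoption(feature_users: set, active_users: set) -> float:
    """(users who used the feature / total active users) x 100, same 30-day period."""
    if not active_users:
        return 0.0
    # Intersect so the numerator only counts users who were active in the period.
    return 100.0 * len(feature_users & active_users) / len(active_users)

# Hypothetical: 2 of 4 active users touched the feature this period.
rate = feature_adoption({"a", "b"}, {"a", "b", "c", "d"})  # 50.0
```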
Definition: Percentage of users who continue using a feature month-over-month after initial adoption. Measures whether features create lasting habits. [src4]
| Feature Type | Median Stickiness | Healthy Range |
|---|---|---|
| Core workflow features | 65% | 55%-80% |
| Collaboration features | 55% | 40%-70% |
| Reporting / analytics features | 45% | 30%-60% |
| Admin / settings features | 25% | 15%-40% |
| AI-assisted features (new) | 50% | 35%-65% |
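Stickiness is month-over-month retention scoped to one feature; a sketch under the assumption that you can produce the set of feature users for each month:

```python
def feature_stickiness(prev_month_users: set, this_month_users: set) -> float:
    """% of last month's feature users who used the feature again this month."""
    if not prev_month_users:
        return 0.0
    returning = prev_month_users & this_month_users
    return 100.0 * len(returning) / len(prev_month_users)

# Hypothetical: 2 of 4 October users of the feature came back in November.
stickiness = feature_stickiness({"a", "b", "c", "d"}, {"a", "b", "x"})  # 50.0
```

Note the denominator is prior-month adopters only; new adopters this month (like "x" above) do not inflate the number.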
Definition: Percentage of Promoters (score 9-10) minus Detractors (score 0-6) on an 11-point scale. Passives (7-8) excluded. Range: -100 to +100. [src5]
| Segment | Median | 25th Pctl | 75th Pctl | Top Decile |
|---|---|---|---|---|
| B2B SaaS | 29 | 15 | 45 | 62 |
| B2C SaaS / Consumer software | 47 | 30 | 58 | 72 |
| Consulting / Professional services | 59 | 42 | 68 | 78 |
| Healthcare | 61 | 45 | 72 | 82 |
| E-commerce / Retail | 45 | 30 | 55 | 68 |
| Banking / Financial services | 41 | 25 | 52 | 65 |
| Insurance | 23 | 10 | 38 | 52 |
| Manufacturing | 65 | 48 | 75 | 85 |
Trend: Median NPS across all industries stable at 42 in 2025. B2C SaaS outperforms B2B SaaS by 18 points (47 vs. 29). B2B SaaS median dropped from 33 to 29 as market saturation increases customer expectations. [src5]
Red flag threshold: NPS below 0 means more detractors than promoters. Below 15 for B2B SaaS signals competitive vulnerability.
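The standard NPS calculation from raw 0-10 survey responses can be sketched as:

```python
def nps(scores: list) -> float:
    """Promoters (9-10) minus Detractors (0-6), as % of all respondents.

    Passives (7-8) sit in the denominator but cancel out of the numerator,
    so the result ranges from -100 to +100.
    """
    if not scores:
        return 0.0
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)

# Hypothetical survey: 3 promoters, 1 passive, 1 detractor -> NPS 40.
score = nps([10, 10, 9, 8, 6])  # 40.0
```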
Definition: Percentage of customers who rate their experience as satisfactory (4 or 5 on a 5-point scale). Typically measured post-interaction. More transactional than NPS. [src6]
| Segment | Median | 25th Pctl | 75th Pctl | Top Decile |
|---|---|---|---|---|
| B2B SaaS / Software | 78% | 70% | 84% | 90% |
| B2C Software | 82% | 74% | 88% | 93% |
| Consulting | 84% | 76% | 89% | 94% |
| E-commerce | 80% | 72% | 86% | 92% |
| Banking / Financial | 79% | 70% | 85% | 91% |
| Healthcare | 78% | 68% | 84% | 90% |
Trend: SaaS CSAT stable at 78% median. Companies with AI-powered support chatbots show 5-8 points higher CSAT. [src6]
Red flag threshold: CSAT below 70% for software indicates systematic dissatisfaction. Below 65% correlates with >5% monthly churn.
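CSAT from raw 1-5 ratings is a simple top-two-box percentage; a minimal sketch:

```python
def csat(ratings: list) -> float:
    """% of respondents rating the experience 4 or 5 on a 5-point scale."""
    if not ratings:
        return 0.0
    satisfied = sum(1 for r in ratings if r >= 4)
    return 100.0 * satisfied / len(ratings)

# Hypothetical post-interaction survey: 3 of 5 respondents satisfied.
score = csat([5, 4, 4, 3, 1])  # 60.0
```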
| Rule | Formula / Threshold | Interpretation |
|---|---|---|
| Product Engagement Score (PES) | (Adoption + Stickiness + Growth) / 3, each 0-100 | Above 50 = healthy, above 70 = best-in-class |
| DAU/MAU > 20% (B2B) | DAU / MAU > 0.20 | Product is part of daily workflows -- strong retention predictor |
| Activation > 40% (PLG) | Activated users / signups > 0.40 | Onboarding funnel is efficient -- sustainable PLG growth |
| Feature breadth ratio | Features with >10% adoption / total features | Healthy range: 30-50%. Below 20% = feature bloat |
| NPS-CSAT alignment | NPS > 30 AND CSAT > 78% | Both above median = balanced product and support experience |
| Activation-to-Retention bridge | Activation rate × (1 - month-1 churn) > 25% | Below 25% = growth math does not work |
Constraint: PES is meaningful only when all three sub-scores are measured consistently. The composite obscures problems if one sub-score is excellent but another is critical. Always decompose PES before acting on it. [src1]
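The two composite formulas in the table can be sketched as below (helper names are illustrative; the formulas follow the table):

```python
def product_engagement_score(adoption: float, stickiness: float, growth: float) -> float:
    """PES: simple mean of three 0-100 sub-scores.

    Per the constraint above, always inspect the sub-scores too -- the mean
    hides one critical sub-score behind two strong ones.
    """
    return (adoption + stickiness + growth) / 3

def activation_retention_bridge(activation_pct: float, month1_churn_pct: float) -> float:
    """Activation x (1 - month-1 churn), in percent; below 25 the growth math fails."""
    return activation_pct * (1 - month1_churn_pct / 100)

# Hypothetical PLG product: 40% activation, 30% month-1 churn -> bridge 28 (healthy).
bridge = activation_retention_bridge(40, 30)
```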
| Segment | Definition | Typical Characteristics |
|---|---|---|
| B2B SaaS (daily-use) | Workflow tools used 4+ days/week (PM, communication, CRM) | ACV $5K-$50K, DAU/MAU 15-30%, seat-based pricing |
| B2B SaaS (periodic-use) | Tools used 1-3 days/week (analytics, reporting, HR) | ACV $10K-$100K, DAU/MAU 8-15% |
| PLG SaaS | Self-serve signup with freemium or free trial | ACV < $15K, 40%+ activation expected, virality matters |
| B2C / Consumer | Direct-to-consumer apps with individual users | ARPU < $50/month, DAU/MAU > 20% expected |
| Social / Content | Platforms centered on UGC and social interaction | DAU/MAU > 35%, network effects, ad-supported |
| Marketplace / E-commerce | Platforms connecting buyers and sellers | Transaction-driven, DAU/MAU 10-15% |
| AI-native | Products with AI as the core value (not AI-enhanced) | 55% activation, lower long-term retention, usage-based pricing |
| Metric | 2023 | 2024 | 2025 | Direction |
|---|---|---|---|---|
| B2B SaaS DAU/MAU | 12% | 13% | 13% | → Stable |
| SaaS Activation Rate (median) | 28% | 30% | 36% | ↑ +29% (AI-driven) |
| Minor Feature Adoption (median) | 7.1% | 6.8% | 6.4% | ↓ -10% (feature bloat) |
| NPS B2B SaaS (median) | 35 | 33 | 29 | ↓ -17% (rising expectations) |
| NPS B2C (median) | 48 | 49 | 47 | → Stable |
| CSAT SaaS (median) | 76% | 77% | 78% | ↑ +3% (AI support) |
| AI-native Activation Rate | N/A | 42% | 55% | ↑ +31% |
Treating DAU/MAU as a universal benchmark: A 13% DAU/MAU is median for B2B SaaS but would be alarming for a social app (median 40%). Always use segment-specific benchmarks. [src2]
Confusing activation rate with signup-to-login rate: Many companies report activation as "percentage who logged in at least once." True activation measures first value moment. Inflated definitions mask broken onboarding. [src3]
Assuming high feature adoption means product-market fit: High adoption from lack of alternatives (high switching cost) masks underlying dissatisfaction. Cross-reference adoption with NPS and feature-level CSAT. [src4]
Comparing PLG activation rates to sales-led: PLG targets 40%+ self-serve activation. Sales-led may have 25% self-serve but 70% sales-assisted. Segment by GTM motion before comparing. [src3]
Using NPS as the sole satisfaction metric: NPS captures loyalty intent but not immediate experience quality. A product with NPS 50 and CSAT 65% has loyal fans who tolerate frequent friction. Track both. [src5]
Fetch this when a user asks about product engagement benchmarks, wants to evaluate DAU/MAU or activation rates against industry peers, is setting product KPI targets, or needs to diagnose whether their product metrics are healthy for their segment. Also relevant when building investor decks, prioritizing product roadmaps based on feature adoption data, or assessing product-market fit.