Product Operations Dashboard
Purpose
This recipe produces a product operations dashboard that tracks feature adoption, visualizes user engagement funnels, aggregates feedback, monitors sprint velocity and release health, and surfaces key metrics (DAU/MAU, retention, activation rate). The output gives the product team a unified view connecting user behavior with feedback signals and development progress. [src6]
Prerequisites
- Product analytics platform with event tracking active — PostHog, Mixpanel, or Amplitude
- Core events implemented — user_signup, feature_used, session_start, page_view (14+ days of data)
- Project management tool with API access — Linear, Jira, or GitHub Projects
- PostgreSQL database — free at Supabase or Neon
- Dashboard tool account — Retool or Metabase
- Feedback collection channel — in-app widget, support inbox, or structured Slack channel
Constraints
- Analytics API rate limits: Mixpanel export = 60 req/hour, Amplitude = 360 req/hour. Cache and batch queries. [src1]
- Minimum 14 days of event data required. Feature adoption curves need 30+ days to stabilize.
- User-level analytics require GDPR consent. Anonymous aggregates are safe for operational dashboards. [src2]
- Feature flag status must sync with deployment state to produce correct adoption metrics.
- Feedback volume under 50 items/month is insufficient for trend analysis.
Tool Selection Decision
Which path?
├── User is non-technical AND budget = free
│   └── PATH A: No-Code Free — PostHog dashboards + Google Sheets
├── User is non-technical AND budget > $0
│   └── PATH B: No-Code Paid — Mixpanel/Amplitude native + Retool
├── User is semi-technical or developer AND budget = free
│   └── PATH C: Code + Free — PostHog + Metabase + PostgreSQL + cron
└── User is developer AND budget > $0
    └── PATH D: Code + Paid — PostHog/Mixpanel + Retool + PostgreSQL + n8n
| Path | Tools | Cost | Speed | Output Quality |
|---|---|---|---|---|
| A: No-Code Free | PostHog + Google Sheets | $0 | 2-3 hours | Basic — limited cross-source views |
| B: No-Code Paid | Mixpanel/Amplitude + Retool | $0-50/mo | 4-6 hours | Good — native analytics + custom views |
| C: Code + Free | PostHog + Metabase + PostgreSQL | $0 | 6-8 hours | Good — full SQL, custom metrics |
| D: Code + Paid | PostHog/Mixpanel + Retool + PG | $25-75/mo | 5-7 hours | Excellent — unified, real-time |
Execution Flow
Step 1: Design the Product Data Model
Duration: 30-60 minutes · Tool: SQL client
Create 6 tables: product_events, feature_adoption, user_engagement, user_feedback, sprint_metrics, releases. Add indexes for date-based queries. [src2]
Verify: All 6 tables created. · If failed: Check database permissions.
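A minimal PostgreSQL sketch of two of the six tables plus a date index; the column names here are illustrative assumptions, not fixed by the recipe:

```sql
-- Raw events synced from the analytics platform.
CREATE TABLE IF NOT EXISTS product_events (
    event_id    BIGSERIAL PRIMARY KEY,
    user_id     TEXT NOT NULL,
    event_name  TEXT NOT NULL,   -- e.g. 'user_signup', 'feature_used'
    feature_key TEXT,            -- populated for feature_used events
    occurred_at TIMESTAMPTZ NOT NULL
);

-- Daily engagement aggregates loaded by the sync job.
CREATE TABLE IF NOT EXISTS user_engagement (
    day             DATE PRIMARY KEY,
    dau             INTEGER NOT NULL,
    wau             INTEGER NOT NULL,
    mau             INTEGER NOT NULL,
    avg_session_sec INTEGER
);

-- Date-based index so trend queries stay fast as events accumulate.
CREATE INDEX IF NOT EXISTS idx_product_events_occurred_at
    ON product_events (occurred_at);
```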
Step 2: Build Analytics Data Sync
Duration: 1-2 hours · Tool: n8n, custom script, or PostHog export
Configure ETL from analytics platform (PostHog/Mixpanel trend and funnel APIs) and project management tool (Linear GraphQL API for sprint data). Schedule daily sync at 8:00 AM. [src1]
Verify: user_engagement has 14+ days of data. · If failed: Check API key permissions and scopes.
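On the load side, an upsert keyed on the day keeps the daily sync idempotent, so a re-run never duplicates rows. A sketch against the assumed user_engagement table from Step 1 (the values are placeholders):

```sql
-- Upsert one day's engagement row; re-running the job overwrites
-- rather than duplicates.
INSERT INTO user_engagement (day, dau, wau, mau, avg_session_sec)
VALUES ('2024-01-15', 1250, 4800, 12400, 312)
ON CONFLICT (day) DO UPDATE
SET dau             = EXCLUDED.dau,
    wau             = EXCLUDED.wau,
    mau             = EXCLUDED.mau,
    avg_session_sec = EXCLUDED.avg_session_sec;
```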
Step 3: Build Product Analytics Queries
Duration: 1-2 hours · Tool: SQL
Create five core queries: daily engagement (DAU/WAU/MAU ratios), feature adoption funnel, feedback category breakdown, sprint velocity trend, and release health status. [src6]
Verify: All queries return data. · If failed: Check feature and event naming consistency.
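As an example of the daily engagement query, a sketch computing the DAU/MAU stickiness ratio against the assumed user_engagement table:

```sql
-- 30-day engagement trend with stickiness ratio.
SELECT
    day,
    dau,
    mau,
    ROUND(dau::numeric / NULLIF(mau, 0), 3) AS dau_mau_ratio
FROM user_engagement
WHERE day >= CURRENT_DATE - 30
ORDER BY day;
```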
Step 4: Assemble the Dashboard UI
Duration: 1-2 hours · Tool: Retool or Metabase
Layout: KPI row (DAU, DAU/MAU ratio, D7 retention, avg session, feedback score), engagement trend, feature adoption funnel, feedback breakdown, sprint velocity, release health timeline, recent feedback feed. [src3]
Verify: All sections render. DAU matches analytics platform within 5%. · If failed: Check query bindings.
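One way to run the 5% verification is a cross-check that recomputes DAU from raw events and flags divergent days; this assumes the Step 1 schema:

```sql
-- Flag days where the synced DAU and the DAU recomputed from raw
-- events diverge by more than 5%.
WITH raw_dau AS (
    SELECT occurred_at::date AS day,
           COUNT(DISTINCT user_id) AS dau_from_events
    FROM product_events
    GROUP BY occurred_at::date
)
SELECT e.day, e.dau, r.dau_from_events
FROM user_engagement e
JOIN raw_dau r USING (day)
WHERE ABS(e.dau - r.dau_from_events) > 0.05 * NULLIF(e.dau, 0);
```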
Step 5: Configure Product Alerts
Duration: 30-60 minutes · Tool: n8n + Slack
Alert conditions: DAU drop >20% vs 7-day avg, feature adoption below 10% after 7 days, negative feedback spike >2x average, release error rate increase >50%.
Verify: Test alert fires in Slack. · If failed: Check webhook URL.
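The DAU-drop condition translates directly into SQL that the alert workflow (n8n here) can run on a schedule, posting to Slack whenever a row comes back. A sketch against the assumed user_engagement table:

```sql
-- Fires when today's DAU is more than 20% below the trailing
-- 7-day average.
WITH trailing AS (
    SELECT AVG(dau) AS avg_dau_7d
    FROM user_engagement
    WHERE day BETWEEN CURRENT_DATE - 7 AND CURRENT_DATE - 1
)
SELECT e.day, e.dau, t.avg_dau_7d
FROM user_engagement e
CROSS JOIN trailing t
WHERE e.day = CURRENT_DATE
  AND e.dau < 0.8 * t.avg_dau_7d;
```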
Step 6: Deploy and Share Access
Duration: 30 minutes · Tool: Dashboard settings
Share with product team (PM, design, engineering leads). Create separate Release Health view for on-call engineers.
Verify: PM and engineering lead can access dashboard.
Output Schema
```json
{
  "output_type": "product_operations_dashboard",
  "format": "deployed web application",
  "components": [
    {"name": "engagement_trend", "type": "chart", "description": "DAU/WAU/MAU multi-line trend with ratio indicators"},
    {"name": "feature_adoption", "type": "chart", "description": "Feature adoption funnel: exposed to activated to retained"},
    {"name": "feedback_analysis", "type": "chart", "description": "Feedback by category and sentiment"},
    {"name": "sprint_velocity", "type": "chart", "description": "Sprint completion trend (10 sprints)"},
    {"name": "release_health", "type": "table", "description": "Releases with error rate delta and health status"},
    {"name": "kpi_cards", "type": "metrics", "description": "DAU, DAU/MAU, D7 retention, avg session, feedback score"}
  ],
  "refresh_interval": "30 minutes (analytics), 6 hours (sprint/release)",
  "data_source": "PostgreSQL synced from analytics and PM tool"
}
```
Quality Benchmarks
| Quality Metric | Minimum Acceptable | Good | Excellent |
|---|---|---|---|
| Data freshness | < 24 hour lag | < 6 hour lag | < 1 hour lag |
| Event coverage | 5+ core events | 10+ events | Full event taxonomy |
| Feature tracking | Manual list | Auto-detected from flags | Real-time flag sync |
| Feedback integration | 1 source | 2-3 sources | All channels unified |
| Sprint data accuracy | Manual entry | API-synced weekly | Real-time from PM tool |
If below minimum: Check the analytics SDK implementation. Missing events usually mean the SDK is not loaded on every page or event names are misspelled.
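A quick diagnostic is to count occurrences of each expected core event over the last 14 days; zero counts point at the missing instrumentation. A sketch against the assumed product_events table:

```sql
-- Event coverage check: every core event should have nonzero counts.
SELECT expected.event_name,
       COUNT(pe.event_id) AS occurrences
FROM (VALUES ('user_signup'), ('feature_used'),
             ('session_start'), ('page_view')) AS expected(event_name)
LEFT JOIN product_events pe
       ON pe.event_name = expected.event_name
      AND pe.occurred_at >= now() - INTERVAL '14 days'
GROUP BY expected.event_name
ORDER BY occurrences;
```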
Error Handling
| Error | Likely Cause | Recovery Action |
|---|---|---|
| Analytics API 429 | Too many export requests | Reduce frequency, batch date ranges |
| DAU shows 0 | SDK not initialized or events blocked | Verify SDK, check ad blocker interference |
| Feature adoption 0% | Flag not synced or event name mismatch | Verify flag status, check event naming |
| Sprint data missing | PM tool API token expired | Regenerate token in Linear/Jira settings |
| Feedback not appearing | Webhook or polling not configured | Check webhook URL, verify polling schedule |
| All releases "degraded" | Error rate baseline not calibrated | Recalculate from 7-day pre-release average |
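For the baseline recalibration, a sketch of the update, assuming a daily error_rates(day, error_rate) series alongside the releases table; neither column set is fixed by the recipe:

```sql
-- Reset each release's baseline to the average error rate over the
-- 7 days before it shipped.
UPDATE releases r
SET baseline_error_rate = sub.avg_rate
FROM (
    SELECT r2.release_id, AVG(er.error_rate) AS avg_rate
    FROM releases r2
    JOIN error_rates er
      ON er.day BETWEEN r2.released_at::date - 7
                    AND r2.released_at::date - 1
    GROUP BY r2.release_id
) AS sub
WHERE r.release_id = sub.release_id;
```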
Cost Breakdown
| Component | Free Tier | Paid Tier | At Scale (50K+ MAU) |
|---|---|---|---|
| Analytics (PostHog) | $0 (1M events/mo) | $450/mo | Custom pricing |
| Dashboard (Retool) | $0 (5 users) | $10/user/mo | $100/mo |
| Database (Supabase) | $0 (500MB) | $25/mo | $25/mo |
| ETL (n8n) | $0 (self-hosted) | $20/mo | $50/mo |
| Total | $0 | $505/mo | $625+/mo |
Anti-Patterns
Wrong: Tracking vanity metrics without activation context
Showing DAU growth without connecting to feature activation or retention creates false confidence. DAU can grow from marketing while the product fails to retain users. [src6]
Correct: Pair usage metrics with retention cohorts
Display DAU alongside D7 and D30 retention. Healthy products show DAU growth AND stable retention. If DAU grows but retention drops, growth is unsustainable.
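A cohort query along these lines pairs each signup day with its D7 retention ("active on day 7" is one common definition); this assumes the Step 1 product_events schema:

```sql
-- D7 retention per signup cohort.
WITH cohorts AS (
    SELECT user_id, MIN(occurred_at::date) AS signup_day
    FROM product_events
    WHERE event_name = 'user_signup'
    GROUP BY user_id
)
SELECT c.signup_day,
       COUNT(DISTINCT c.user_id)  AS cohort_size,
       COUNT(DISTINCT pe.user_id) AS retained_d7,
       ROUND(COUNT(DISTINCT pe.user_id)::numeric
             / NULLIF(COUNT(DISTINCT c.user_id), 0), 3) AS d7_retention
FROM cohorts c
LEFT JOIN product_events pe
       ON pe.user_id = c.user_id
      AND pe.occurred_at::date = c.signup_day + 7
GROUP BY c.signup_day
ORDER BY c.signup_day;
```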
Wrong: Aggregating all feedback into a single sentiment score
A "neutral" average could mean every user is mildly satisfied, or that half love the product and half hate it. A single score hides that distinction. [src5]
Correct: Segment feedback by category and feature area
Break down by category (bug, feature request, UX issue) and feature area. Track trends within each segment.
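A segmented rollup might look like the following, assuming user_feedback carries category, feature_area, created_at, and a numeric sentiment column:

```sql
-- Feedback by category and feature area instead of one average.
SELECT category,
       feature_area,
       COUNT(*) AS items,
       ROUND(AVG(sentiment)::numeric, 2) AS avg_sentiment
FROM user_feedback
WHERE created_at >= now() - INTERVAL '30 days'
GROUP BY category, feature_area
ORDER BY items DESC;
```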
When This Matters
Use when a startup product team has active users and needs unified visibility into user behavior, feedback, development velocity, and release health. Requires at least one analytics platform with 14+ days of events. This recipe builds the dashboard — for product strategy, use a playbook card.