Notification Automation Rules for Startup Dashboards
Purpose
This recipe configures a complete notification automation system for startup dashboards — threshold-based alerts when KPIs breach limits, scheduled weekly digests, automated status updates, and monthly investor report data pulls. The output keeps founding teams informed of critical changes without requiring daily dashboard visits. [src1]
Prerequisites
- Configured data warehouse with metrics views — Data Integration Architecture
- KPI definitions with thresholds — Startup KPI Framework by Stage
- Slack workspace with #metrics-alerts and #weekly-digest channels
- Slack incoming webhook URL — Create webhook
- SendGrid API key — SendGrid (for email notifications)
Constraints
- Alert fatigue: limit to 3-5 alerts/day per channel — more than 5 daily alerts leads the team to ignore all alerts within 2 weeks [src3]
- Slack free plan: 90-day message history limit — critical alerts should also be logged to database
- Retool Workflows free: 500 runs/month (~16/day) — sufficient for daily alerts + weekly digest
- Webhook reliability ~99.5% — critical financial alerts need email fallback
- Alert detection speed limited by data sync frequency (6-24 hours)
- SendGrid free: 100 emails/day — sufficient for digests, not mass notifications
Tool Selection Decision
Which path?
├── Non-technical AND budget = free
│   └── PATH A: Zapier + Slack — basic threshold alerts
├── Semi-technical AND budget = free
│   └── PATH B: Supabase Edge Functions + Slack webhooks — SQL-based alerts
├── Developer AND budget = $0-100/mo
│   └── PATH C: Retool Workflows + Slack + SendGrid — full automation suite
└── Developer AND budget = $100+/mo
    └── PATH D: n8n (self-hosted) + Slack + SendGrid — unlimited workflows
| Path | Tools | Cost | Setup Time | Alert Complexity |
|---|---|---|---|---|
| A: No-Code | Zapier + Slack | $0-20/mo | 1-2 hours | Basic |
| B: SQL-Based | Supabase Functions + Slack | $0 | 3-5 hours | Medium |
| C: Full Suite | Retool + Slack + SendGrid | $0-25/mo | 4-8 hours | High |
| D: Self-Hosted | n8n + Slack + SendGrid | $5-20/mo | 6-10 hours | Very High |
Execution Flow
Step 1: Define Alert Rules and Thresholds
Duration: 1-2 hours · Tool: Spreadsheet or Notion
Map KPIs to alert levels: Critical (MRR drop > 10%, churn > 8%, runway < 6 months), Warning (growth < 5%, CAC payback > 18 months), Info (weekly digest only).
Verify: Every KPI has an associated alert level and threshold · If failed: Start with the critical alerts only
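The Step 1 mapping can be captured as a small rule table. A minimal sketch in JavaScript — the metric names (`mrr_wow_drop_pct`, `runway_months`, etc.) are illustrative placeholders; the thresholds are the ones listed above:

```javascript
// Illustrative alert-rule map for Step 1. Metric keys are assumptions;
// thresholds come from the Critical/Warning levels defined in this step.
const ALERT_RULES = {
  mrr_wow_drop_pct:   { level: "critical", op: "gt", threshold: 10 }, // MRR drop > 10%
  monthly_churn_pct:  { level: "critical", op: "gt", threshold: 8 },  // churn > 8%
  runway_months:      { level: "critical", op: "lt", threshold: 6 },  // runway < 6 months
  mom_growth_pct:     { level: "warning",  op: "lt", threshold: 5 },  // growth < 5%
  cac_payback_months: { level: "warning",  op: "gt", threshold: 18 }, // payback > 18 months
};

// Returns the alert level for a metric value, or null when healthy.
function evaluate(metric, value) {
  const rule = ALERT_RULES[metric];
  if (!rule) return null; // info-only metrics go to the weekly digest
  const breached = rule.op === "gt" ? value > rule.threshold : value < rule.threshold;
  return breached ? rule.level : null;
}
```

Keeping the rules in one table makes the Step 6 schedulers trivial to wire: each run iterates the table instead of hard-coding thresholds per query.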
Step 2: Build Alert Query Functions
Duration: 2-3 hours · Tool: SQL + JavaScript
Create SQL queries for each alert: MRR week-over-week change, monthly churn rate, CAC payback period, cash runway. Queries return rows only when thresholds are breached.
Verify: Queries return zero rows when metrics are healthy · If failed: Check analytics views have recent data
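A sketch of one such query (the MRR week-over-week check), assuming a hypothetical `mrr_daily(day, mrr)` metrics view — substitute your own warehouse's view and column names. The threshold logic is also shown as a plain JavaScript function so it can be unit-tested without a warehouse connection:

```javascript
// Step 2 sketch: MRR WoW alert. Table/column names are placeholders.
// The WHERE clause means the query returns rows ONLY on a >10% drop.
const MRR_WOW_ALERT_SQL = `
  WITH latest AS (
    SELECT mrr FROM mrr_daily ORDER BY day DESC LIMIT 1
  ), prior AS (
    SELECT mrr FROM mrr_daily
    WHERE day <= current_date - INTERVAL '7 days'
    ORDER BY day DESC LIMIT 1
  )
  SELECT latest.mrr AS current_mrr,
         prior.mrr  AS prior_mrr,
         round(100.0 * (prior.mrr - latest.mrr) / prior.mrr, 1) AS drop_pct
  FROM latest, prior
  WHERE (prior.mrr - latest.mrr) / prior.mrr > 0.10;
`;

// Same rule in plain JavaScript, for testing the threshold in isolation.
function mrrDropBreached(currentMrr, priorMrr, thresholdPct = 10) {
  const dropPct = (100 * (priorMrr - currentMrr)) / priorMrr;
  return dropPct > thresholdPct;
}
```

The "zero rows when healthy" pattern matters because it lets the delivery layer treat any non-empty result as "send an alert" without re-checking thresholds.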
Step 3: Configure Slack Notification Delivery
Duration: 1-2 hours · Tool: Slack webhooks + JavaScript
Build notification delivery with color-coded severity (red critical, orange warning, blue info), structured fields, and email fallback for critical alerts.
Verify: Test alert renders correctly in each Slack channel · If failed: Regenerate webhook URLs if over 1 year old
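A minimal payload builder for Step 3, using Slack's attachment `color` field for the severity coding. The hex values are illustrative stand-ins for red/orange/blue; pick your own palette:

```javascript
// Step 3 sketch: color-coded Slack payload. Hex colors are examples.
const SEVERITY_COLORS = { critical: "#e01e1e", warning: "#e8912d", info: "#3aa3e3" };

function buildSlackPayload(severity, title, fields) {
  return {
    attachments: [{
      color: SEVERITY_COLORS[severity] || SEVERITY_COLORS.info,
      title,
      fields: Object.entries(fields).map(([name, value]) => ({
        title: name, value: String(value), short: true,
      })),
    }],
  };
}

// Delivery (requires a live webhook URL; fetch is built into Node 18+):
// await fetch(process.env.SLACK_WEBHOOK_URL, {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(
//     buildSlackPayload("critical", "MRR dropped 12% WoW", { MRR: "$41,200" })),
// });
```

For critical severities, the same payload data would also feed the SendGrid email fallback so both channels render identical numbers.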
Step 4: Build Weekly Metrics Digest
Duration: 1-2 hours · Tool: Scheduled function (Monday 8 AM cron)
Send a comprehensive weekly summary: MRR + WoW change, new customers, churned customers, new trials, ad spend, net new MRR.
Verify: Trigger manually, confirm all metrics match dashboard · If failed: Check date ranges in analytics views
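A sketch of the Step 4 digest formatter. The input field names are assumptions about what your analytics views expose; the metric list mirrors the one above:

```javascript
// Step 4 sketch: format the weekly digest as Slack mrkdwn text.
// Field names on the input object are illustrative.
function formatWeeklyDigest(m) {
  const sign = m.mrrWowChangePct >= 0 ? "+" : "";
  return [
    `*Weekly Metrics Digest — week of ${m.weekOf}*`,
    `MRR: $${m.mrr.toLocaleString("en-US")} (${sign}${m.mrrWowChangePct}% WoW)`,
    `New customers: ${m.newCustomers} · Churned: ${m.churnedCustomers}`,
    `New trials: ${m.newTrials} · Ad spend: $${m.adSpend.toLocaleString("en-US")}`,
    `Net new MRR: $${m.netNewMrr.toLocaleString("en-US")}`,
  ].join("\n");
}
```

Running this by hand against last week's data is a quick way to perform the "confirm all metrics match dashboard" verification before enabling the Monday cron.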
Step 5: Set Up Investor Report Data Automation
Duration: 1-2 hours · Tool: Scheduled function (1st of month cron)
Automate a monthly pull of ending MRR, ARR, total customers, LTV:CAC ratio, and cash balance, emailed to founders.
Verify: Output values match manual calculations within 2% · If failed: Check date range off-by-one errors
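The derived figures in the Step 5 pull can be computed from a handful of base metrics. A sketch using standard definitions (ARR = MRR × 12; LTV = ARPA × gross margin ÷ monthly churn) — input field names are assumptions:

```javascript
// Step 5 sketch: derive investor-report figures from base metrics.
// Formulas are the standard definitions; input names are illustrative.
function investorMetrics({ endingMrr, totalCustomers, avgMonthlyChurnPct,
                           cac, grossMarginPct, cashBalance }) {
  const arr = endingMrr * 12;
  const arpa = endingMrr / totalCustomers; // avg revenue per account
  const ltv = (arpa * (grossMarginPct / 100)) / (avgMonthlyChurnPct / 100);
  return {
    endingMrr, arr, totalCustomers, cashBalance,
    ltvToCac: Number((ltv / cac).toFixed(2)),
  };
}
```

Comparing this output against a hand calculation is exactly the "within 2%" verification above; larger gaps usually point to date-range off-by-ones in the underlying views.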
Step 6: Schedule All Automations
Duration: 30-60 minutes · Tool: Cron scheduler
Configure: critical checks every 6 hours, warnings daily 9 AM, digest Monday 8 AM, investor pull 1st of month, freshness check every 12 hours, system health daily 6 AM.
Verify: Scheduler shows all 6 automations with next run times · If failed: Check timezone settings
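The six schedules map directly onto standard 5-field cron expressions (minute, hour, day-of-month, month, day-of-week). A sketch — the 7 AM time on the investor pull is an assumption, since Step 5 specifies only the day:

```javascript
// Step 6 sketch: the six automations as cron expressions. Assumes the
// scheduler's timezone is set explicitly (see the timezone failure mode).
const SCHEDULES = {
  critical_checks: "0 */6 * * *",  // every 6 hours
  warning_checks:  "0 9 * * *",    // daily 9 AM
  weekly_digest:   "0 8 * * 1",    // Monday 8 AM
  investor_pull:   "0 7 1 * *",    // 1st of month (7 AM assumed)
  freshness_check: "0 */12 * * *", // every 12 hours
  system_health:   "0 6 * * *",    // daily 6 AM
};
```

At this cadence the monthly run count is roughly 120 critical checks + 60 freshness checks + 30 warning checks + 30 health checks + 4-5 digests + 1 pull ≈ 245 runs, comfortably inside Retool's 500-run free tier.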
Output Schema
{
  "output_type": "notification_automation_system",
  "format": "configured automation rules + delivery channels",
  "components": [
    {"name": "critical_alerts", "type": "SQL queries + threshold rules", "required": true},
    {"name": "warning_alerts", "type": "SQL queries + threshold rules", "required": true},
    {"name": "weekly_digest", "type": "scheduled report", "required": true},
    {"name": "investor_data_pull", "type": "scheduled export", "required": true},
    {"name": "slack_delivery", "type": "webhook integrations", "required": true},
    {"name": "email_fallback", "type": "SendGrid integration", "required": true}
  ],
  "expected_alert_count": "8-12 active rules"
}
Quality Benchmarks
| Quality Metric | Minimum Acceptable | Good | Excellent |
|---|---|---|---|
| Alert delivery rate | > 95% | > 99% | > 99.9% |
| False positive rate | < 20% | < 10% | < 5% |
| Alert-to-action time (critical) | < 4 hours | < 1 hour | < 15 minutes |
| Weekly digest completeness | > 80% metrics | > 95% | 100% |
| Investor report accuracy | Within 5% | Within 2% | Within 0.5% |
If below minimum: Review thresholds for over-sensitivity, check data freshness, verify webhook delivery.
Error Handling
| Error | Likely Cause | Recovery Action |
|---|---|---|
| Alert not firing | Data sync delayed | Check freshness; add canary freshness alert |
| False positive alerts | Threshold too sensitive | Widen by 20%; add moving average smoothing |
| Slack webhook 404 | URL expired or channel deleted | Regenerate webhook in Slack API settings |
| Digest missing metrics | Analytics view returned null | Add COALESCE defaults; log null values |
| Wrong month in investor report | Date calculation off-by-one | Use date_trunc consistently |
| Alert fatigue | Too many notifications | Reduce to critical-only for 1 week |
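The "moving average smoothing" recovery for false positives can be as simple as alerting on a trailing average instead of the latest raw reading. A minimal sketch:

```javascript
// Alert on the trailing average so a single noisy reading cannot
// trip the threshold on its own. Window size is a tuning choice.
function trailingAverage(values, window = 7) {
  const tail = values.slice(-window);
  return tail.reduce((sum, v) => sum + v, 0) / tail.length;
}
```

With a 7-day window, one anomalous data point moves the alerting signal by at most one-seventh of its magnitude, which pairs well with the "widen thresholds by 20%" step.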
Cost Breakdown
| Component | Free Tier | Growth | Scale |
|---|---|---|---|
| Automation engine | $0 (500 runs/mo) | $10-25/mo | $50-100/mo |
| Slack webhooks | $0 | $0 | $0 |
| Email (SendGrid) | $0 (100/day) | $15/mo | $50/mo |
| SMS (Twilio) | N/A | $0.01/msg | $0.01/msg |
| Total | $0 | $25-40/mo | $100-150/mo (plus SMS usage) |
Anti-Patterns
Wrong: Alerting on Every Metric Change
Setting alerts for every KPI. A 2% MRR fluctuation is normal variance. Teams receiving 10+ alerts/day stop reading them within 2 weeks. [src3]
Correct: Alert Only on Actionable Thresholds
Use the “would you wake someone at 2 AM for this?” test. Non-critical items go to the weekly digest.
Wrong: No Email Fallback for Critical Alerts
Slack webhooks have ~99.5% delivery. Over a year, expect 1-2 missed critical alerts without email backup.
Correct: Multi-Channel Critical Alerts
Critical alerts fire to Slack AND email simultaneously. Email is the guaranteed delivery backup.
When This Matters
Use when the founding team needs to stop checking dashboards multiple times daily and instead receive proactive notifications when metrics need attention. Needed after data integration pipeline setup and 2-4 weeks of manual dashboard monitoring.