| Layer | Purpose | Startup Stack (Free) | Growth Stack (Paid) | Enterprise Stack |
|---|---|---|---|---|
| Ingestion | Pull data from sources | Supabase webhooks + cron | Airbyte / Fivetran | Fivetran + custom connectors |
| Storage | Raw data warehouse | PostgreSQL (Supabase) | BigQuery / Snowflake | Snowflake + S3 data lake |
| Transformation | Clean, aggregate, model | SQL views + pg_cron | dbt Core | dbt Cloud + Great Expectations |
| Serving | Expose to dashboards | Supabase API + Metabase | Metabase Cloud / Preset | Looker / Tableau / custom API |
| Access Control | RBAC + data isolation | PostgreSQL RLS | RLS + application RBAC | RLS + SSO + audit logging |
| Orchestration | Schedule and monitor | pg_cron / GitHub Actions | Dagster / Prefect | Airflow / Dagster Cloud |
| Monitoring | Pipeline health | Supabase logs + Slack | Metabase alerts | DataDog + PagerDuty |
START: Choose your architecture tier
├── Data volume < 1M rows AND team has basic SQL?
│ ├── YES → TIER 1: PostgreSQL-native (Supabase + Metabase)
│ └── NO ↓
├── Data volume 1M-100M rows OR need multi-source integration?
│ ├── YES → TIER 2: Warehouse-based (BigQuery/Snowflake + dbt + Metabase)
│ └── NO ↓
├── Data volume > 100M rows OR need real-time streaming?
│ ├── YES → TIER 3: Full data platform (warehouse + streaming + orchestration)
│ └── NO ↓
└── No existing data infrastructure?
└── Start with TIER 1, migrate when performance degrades
Data freshness:
├── Weekly metrics → Batch: nightly/weekly SQL transforms
├── Daily metrics → Scheduled: hourly/daily dbt runs
└── Real-time metrics → Streaming: Supabase Realtime / Kafka
Separate operational data (transactional) from analytical data (aggregated). Create materialized views for pre-aggregated dashboard metrics with scheduled refresh via pg_cron. [src1]
CREATE MATERIALIZED VIEW daily_metrics AS
SELECT date_trunc('day', created_at) AS day,
       event_type, COUNT(*) AS event_count,
       COUNT(DISTINCT user_id) AS unique_users
FROM events GROUP BY 1, 2;
-- REFRESH ... CONCURRENTLY requires a unique index on the view
CREATE UNIQUE INDEX ON daily_metrics (day, event_type);
SELECT cron.schedule('refresh-daily-metrics',
  '0 2 * * *', 'REFRESH MATERIALIZED VIEW CONCURRENTLY daily_metrics');
Verify: Schema supports all command center KPIs. Materialized views cover frequent queries.
Connect external data sources using ELT: load raw data first, transform after. Use webhooks for real-time sources (Stripe), REST API polling for CRM/analytics, and client SDKs for product events. [src3] [src5]
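A minimal sketch of the ELT landing pattern: store each webhook payload untouched in a jsonb column and extract fields later. The schema, table, and column names here are illustrative, not prescribed above.

```sql
-- Raw landing table: keep the full payload, transform downstream (ELT)
CREATE SCHEMA IF NOT EXISTS staging;
CREATE TABLE IF NOT EXISTS staging.stripe_events (
  id BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  received_at TIMESTAMPTZ NOT NULL DEFAULT now(),
  source TEXT NOT NULL DEFAULT 'stripe',
  payload JSONB NOT NULL  -- untouched webhook body
);

-- A later transform reads fields out of the raw JSON, e.g.:
-- SELECT payload->>'type',
--        (payload->'data'->'object'->>'amount')::numeric / 100
-- FROM staging.stripe_events;
```

Because the raw payload is preserved, new metrics can be backfilled from history without re-pulling from the source.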
Verify: All data sources connected, raw data landing in staging tables at expected frequency.
Transform raw data into dashboard-ready metrics. Use simple views for real-time low-complexity queries, materialized views for expensive aggregations, and dbt models for complex multi-source business logic. [src5]
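As a sketch of the "simple view" tier: a plain view is computed at query time, so it is always fresh and needs no refresh schedule, which suits cheap real-time metrics. The view and status names are illustrative.

```sql
-- Plain view: evaluated per query, always current, no refresh needed.
-- Reserve materialized views for aggregations too expensive to run live.
CREATE VIEW active_trials AS
SELECT COUNT(*) AS trial_count
FROM subscriptions
WHERE status = 'trialing';
```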
Verify: All KPIs computable from transformation layer. Query time < 3 seconds.
Enforce data security using PostgreSQL Row Level Security (RLS) with role-based policies. Create separate roles for CEO, department heads, managers, board, and all-hands with appropriate data visibility. [src4]
ALTER TABLE revenue_details ENABLE ROW LEVEL SECURITY;
CREATE POLICY dept_isolation ON revenue_details
-- second arg (missing_ok) returns NULL instead of erroring when unset
FOR SELECT USING (department = current_setting('app.user_department', true));
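For the policy to match anything, the application must set the session variable on each connection (typically right after checkout from the pool). The department value 'sales' below is a placeholder.

```sql
-- Set the per-session context the RLS policy reads;
-- third arg false = applies for the whole session, not just the transaction
SELECT set_config('app.user_department', 'sales', false);

-- Subsequent SELECTs on revenue_details return only that department's rows
SELECT * FROM revenue_details;
```

Note that RLS does not constrain superusers or the table owner by default; use `ALTER TABLE ... FORCE ROW LEVEL SECURITY` if the owner role also runs queries.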
Verify: Each access tier tested, RLS policies enforced, no data leakage.
Connect transformation layer to dashboard tools. For free: Supabase + Metabase with a dedicated read-only user. For custom: API endpoints with caching. [src6]
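A minimal sketch of the dedicated read-only user, assuming the dashboard only needs the analytics schema; the role name and password are placeholders.

```sql
-- Read-only role for the BI tool: no writes, no access outside analytics
CREATE ROLE metabase_reader LOGIN PASSWORD 'change-me';
GRANT USAGE ON SCHEMA analytics TO metabase_reader;
GRANT SELECT ON ALL TABLES IN SCHEMA analytics TO metabase_reader;
-- Cover tables created later, too
ALTER DEFAULT PRIVILEGES IN SCHEMA analytics
  GRANT SELECT ON TABLES TO metabase_reader;
```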
Verify: Dashboard tool connected, all metrics rendering, query performance < 3 seconds.
Monitor data freshness, row count anomalies, materialized view refresh status, query performance, and storage growth. Alert on stale data (> 2x expected interval), count drops (> 50%), and refresh failures. [src3]
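The stale-data check can itself be a scheduled SQL query against the landing tables. This sketch assumes an hourly expected interval for the `events` table; the threshold is a placeholder for your own.

```sql
-- Alert when the newest event is older than 2x the expected hourly interval
SELECT max(created_at) AS last_event,
       now() - max(created_at) > interval '2 hours' AS is_stale
FROM events;
```

Run it via cron.schedule and route `is_stale = true` results to the Slack alert channel.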
Verify: Monitoring queries running, alert channels configured, test alert sent.
-- Pre-aggregated KPI snapshot for sub-3s queries
CREATE SCHEMA IF NOT EXISTS analytics;
CREATE MATERIALIZED VIEW analytics.kpi_snapshot AS
SELECT CURRENT_DATE AS snapshot_date,
  (SELECT SUM(amount)/100 FROM subscriptions
   WHERE status='active') AS mrr,
  (SELECT COUNT(*) FROM users
   WHERE created_at >= CURRENT_DATE - 30) AS new_users_30d,
  (SELECT COUNT(DISTINCT user_id) FROM events
   WHERE created_at >= CURRENT_DATE - 1) AS dau;
-- REFRESH ... CONCURRENTLY requires a unique index on the view
CREATE UNIQUE INDEX ON analytics.kpi_snapshot (snapshot_date);
SELECT cron.schedule('kpi-refresh', '0 * * * *',
  'REFRESH MATERIALIZED VIEW CONCURRENTLY analytics.kpi_snapshot');
// Subscribe to real-time metric changes via Supabase
import { createClient } from '@supabase/supabase-js';
const supabase = createClient(SUPABASE_URL, SUPABASE_KEY);
const channel = supabase
.channel('dashboard-metrics')
.on('postgres_changes',
{ event: 'INSERT', schema: 'public', table: 'events' },
(payload) => updateDashboard(payload.new))
.subscribe();
Running analytical queries against the production transactional database. Dashboard queries compete with application queries, degrading both. [src6]
Use read replicas, a dedicated analytics schema, or materialized views. Production handles transactions; analytics handles dashboard queries.
Transforming data during extraction before loading. When requirements change, source data is gone and the pipeline must be rebuilt.
Load raw data first. Transform using SQL views or dbt models. Raw data is always available for new analyses. [src5]
Implementing RBAC only in frontend or API code. Anyone with a direct database connection bypasses all restrictions.
Enforce RLS at the PostgreSQL level. The database prevents unauthorized access even when the application layer is bypassed. [src4]
| Use When | Don't Use When | Use Instead |
|---|---|---|
| Designing data infrastructure for a dashboard | Need dashboard UI/UX design | executive-command-center-design |
| Setting up data pipelines from SaaS tools | Need pre-built dashboard templates | dashboard-template-library |
| Implementing multi-role access control | Need alert routing rules | notification-automation-rules |