Data Alerting
Data alerting is the automated monitoring of metrics against defined conditions, with notifications sent when those conditions are met. Instead of a human checking a dashboard at regular intervals, the system does the checking and pushes a message when something requires attention.
The basic mechanism is simple: a query runs on a schedule – every hour, every day, every five minutes – and the result is evaluated against a threshold or condition. If revenue drops below a target, if error rates spike above a baseline, if a pipeline stage has zero records for the first time in 30 days, the system fires a notification to a configured channel.
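The mechanism above can be sketched in a few lines. This is a minimal illustration, not any particular tool's API – the `AlertRule` shape and field names are invented for the example; a real scheduler (cron, Airflow, the BI tool itself) would run the query and pass the result to the evaluation step.

```python
from dataclasses import dataclass

@dataclass
class AlertRule:
    """One scheduled check: a metric value evaluated against a condition."""
    name: str
    threshold: float
    direction: str  # "below" or "above"

def evaluate(rule: AlertRule, value: float) -> bool:
    """Return True when the condition is met and a notification should fire."""
    if rule.direction == "below":
        return value < rule.threshold
    return value > rule.threshold

# The scheduler runs the query, then hands the result to evaluate()
rule = AlertRule(name="daily_revenue_floor", threshold=100_000, direction="below")
print(evaluate(rule, 87_500))   # revenue dropped below target -> fire
print(evaluate(rule, 120_000))  # within range -> stay quiet
```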
Types of alerts
Threshold alerts. The most common type. A metric is compared against a fixed value. "Alert me when daily active users fall below 10,000." "Alert me when cost per acquisition exceeds $50." These are simple to configure, but the user must already know the metric's normal range in order to set a sensible threshold.
Anomaly-based alerts. The system learns a metric's typical patterns – daily seasonality, weekly cycles, growth trends – and flags deviations from the expected range. A 15% revenue drop on a random Tuesday might be alarming, but the same drop on Christmas Day is expected. Anomaly detection accounts for context. The tradeoff is that these systems require historical data and tuning to avoid false positives.
Trend-based alerts. Rather than checking a single data point against a threshold, trend alerts evaluate the direction and rate of change over a window. "Alert me when 7-day rolling churn rate has increased for three consecutive weeks." These catch slow-moving problems that threshold alerts miss – gradual degradation that never crosses an absolute line but represents a meaningful shift.
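A consecutive-increase check like the churn example above might look like this (a sketch; the function name and window convention are invented for illustration):

```python
def increasing_for(values: list[float], periods: int = 3) -> bool:
    """True when the last `periods` points are each strictly higher than
    the one before - e.g. rolling churn rising three weeks in a row."""
    if len(values) < periods + 1:
        return False
    tail = values[-(periods + 1):]
    return all(b > a for a, b in zip(tail, tail[1:]))

weekly_churn = [2.1, 2.0, 2.2, 2.4, 2.7]        # three consecutive increases
print(increasing_for(weekly_churn, periods=3))   # True

flat_churn = [2.1, 2.3, 2.2, 2.4, 2.3]           # no sustained direction
print(increasing_for(flat_churn, periods=3))     # False
```

Note that no single point in `weekly_churn` would look alarming to a threshold alert – the signal is entirely in the direction of change.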
Delivery channels
Where alerts land matters as much as what they detect. Common channels include:
- Email – universally available, easy to ignore, poor for urgent issues.
- Slack or Teams – visible to groups, supports threading and discussion around the alert, but can get lost in channel noise.
- Webhook – sends a payload to an API endpoint, enabling custom workflows. An alert can trigger a PagerDuty incident, create a Jira ticket, or kick off an automated remediation script.
- SMS or push notification – reserved for critical alerts that require immediate human response.
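A webhook delivery, for instance, is just an HTTP POST with a JSON body. The payload fields below are illustrative, not any specific tool's schema, and the endpoint URL is hypothetical:

```python
import json
import urllib.request

def build_payload(metric: str, value: float, threshold: float, severity: str) -> dict:
    """Shape of a webhook alert body; field names are illustrative only."""
    return {
        "metric": metric,
        "observed_value": value,
        "threshold": threshold,
        "severity": severity,
        "message": f"{metric} breached {threshold}: observed {value}",
    }

def send_webhook(url: str, payload: dict) -> None:
    """POST the alert as JSON; the receiving endpoint can open a PagerDuty
    incident, create a Jira ticket, or trigger a remediation script."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        resp.read()

payload = build_payload("error_rate", 0.081, 0.05, "critical")
# send_webhook("https://example.com/hooks/alerts", payload)  # hypothetical endpoint
```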
The best setups route alerts by severity. Informational trends go to a Slack channel. Threshold breaches go to email. Critical anomalies page someone.
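Severity routing can be as simple as a lookup table. The channel identifiers below are made-up placeholders for whatever integrations the alerting tool actually supports:

```python
ROUTES = {
    "info": ["slack:#metrics-feed"],
    "warning": ["email:data-team@example.com", "slack:#metrics-feed"],
    "critical": ["pagerduty:on-call", "sms:+1-555-0100"],
}

def channels_for(severity: str) -> list[str]:
    """Pick delivery channels by severity; unknown severities fall back
    to the quietest route rather than paging anyone."""
    return ROUTES.get(severity, ROUTES["info"])

print(channels_for("critical"))  # ['pagerduty:on-call', 'sms:+1-555-0100']
print(channels_for("unknown"))   # ['slack:#metrics-feed']
```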
Why alerts should query the semantic layer
A common failure mode: the alert checks a metric using one SQL query, while the dashboard displays the same metric using a different query. The definitions drift, and the alert fires (or doesn't fire) based on a number that doesn't match what users see on the dashboard.
This is a specific case of metric inconsistency, and the fix is the same as for any other consumption layer – the alert should query the semantic layer rather than running independent SQL. When the alert and the dashboard both resolve "revenue" through the same governed definition, the number the alert evaluates is the same number users see. No drift.
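The "one governed definition" idea reduces to this: every consumer resolves the metric through a single lookup instead of embedding its own SQL. A toy sketch (the registry and its contents are invented; a real semantic layer also handles joins, filters, and dimensions):

```python
# One governed definition of each metric, used by every consumer.
METRICS = {
    "revenue": "SELECT SUM(amount) FROM orders WHERE status = 'completed'",
}

def metric_sql(name: str) -> str:
    """Dashboards and alerts both resolve metrics through this single
    lookup, so their definitions cannot drift apart."""
    return METRICS[name]

dashboard_query = metric_sql("revenue")
alert_query = metric_sql("revenue")
assert dashboard_query == alert_query  # same definition, no drift
```

The failure mode in the paragraph above is precisely the absence of this shared lookup: the alert and the dashboard each carry a private copy of the query, and the copies diverge.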
Alert fatigue
The most common operational failure with data alerting is alert fatigue – too many notifications, most of which don't require action. Fatigue sets in fast. After a week of false positives, recipients start ignoring the channel entirely. The alert system becomes background noise, and real issues get missed.
Fatigue typically stems from poorly calibrated thresholds, alerts on volatile metrics without smoothing, or alerting on symptoms rather than causes. The fix involves regular review: which alerts fired in the past 30 days, and how many led to actual action? Any alert that hasn't triggered a meaningful response in three months is a candidate for removal or recalibration.
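The review described above can be automated. A sketch of a staleness audit, assuming a hypothetical fire log that records whether each firing led to action:

```python
from datetime import date, timedelta

def stale_alerts(fire_log: dict[str, list[tuple[date, bool]]],
                 today: date, lookback_days: int = 90) -> list[str]:
    """Return alerts with no actioned firing inside the lookback window -
    candidates for removal or recalibration. `fire_log` maps alert name
    to (fired_on, led_to_action) records."""
    cutoff = today - timedelta(days=lookback_days)
    return [
        name for name, firings in fire_log.items()
        if not any(fired >= cutoff and actioned for fired, actioned in firings)
    ]

log = {
    "revenue_floor": [(date(2024, 5, 20), True)],                       # actioned
    "noisy_cpu_check": [(date(2024, 5, 1), False),
                        (date(2024, 5, 15), False)],                    # ignored
}
print(stale_alerts(log, today=date(2024, 6, 1)))  # ['noisy_cpu_check']
```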
Ad-hoc reporting and alerting are complementary. Alerts tell you something changed. Ad-hoc analysis tells you why. A mature setup connects the two – an alert links to a relevant report or query that helps the recipient investigate immediately, rather than starting from scratch. The time-to-insight scorecard captures this handoff under its Operationalize stage.
The Holistics Perspective
Holistics supports scheduled report delivery via email and Slack, with threshold-based alerts that notify stakeholders when metrics move outside expected ranges. Alerts query the same governed semantic layer as dashboards, ensuring consistency between what users see and what triggers a notification.
See how Holistics approaches this →