Predicting Health Dips Before They Happen
Your wearable data contains early warning signals that precede readiness dips by 1-5 days. We built a system that learns which signals matter for you personally — and warns you before the dip arrives.
You’ve seen it before: a readiness score in the green for days, then a sudden drop into the red. By the time your wearable tells you to rest, you already feel it. The information arrives too late to act on.
What if the warning came two days earlier?
Your body actually does signal these drops in advance — through subtle shifts in autonomic markers, sleep patterns, and temperature. The problem isn’t missing data; it’s that nobody is watching the right signals at the right time. We built a system that does.
The signals hiding in your data
Most health apps show you a readiness score — a single number derived from last night’s sleep and recent activity. It tells you how you feel today. That’s useful, but it’s reactive.
Beneath that score, your wearable captures signals that move before readiness drops:
HRV coefficient of variation — not your average HRV, but how stable it is day to day. When HRV becomes erratic, it often precedes a readiness dip by 1-3 days, even if the average hasn’t changed yet.
DFA α1 — a fractal scaling exponent from heart rate analysis. Values below 0.75 have been associated in the research literature with the parasympathetic dominance seen in overtraining. It moves before you feel overtrained.
Temperature deviation — a 0.3°C rise in skin temperature can signal the onset of illness 24-48 hours before symptoms appear.
Sleep regularity — not how long you sleep, but how consistent your timing is. Circadian disruption accumulates silently over days before manifesting as a readiness drop.
Cardiac vagal index — a measure of parasympathetic nervous system activity. When it withdraws, your body is diverting resources from recovery to stress response.
These aren’t exotic research metrics. If you wear an Oura Ring, Garmin, or WHOOP, this data is already being collected. It’s just not being used for prediction.
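To make the first of these signals concrete, here is a minimal sketch of the HRV coefficient of variation. The readings are invented for illustration; a real pipeline would use daily rMSSD values from the wearable.

```python
import statistics

def hrv_cv(daily_hrv: list[float]) -> float:
    """Coefficient of variation of daily HRV (e.g. rMSSD) over a window.

    A high CV means HRV is erratic day to day, even if the mean is stable.
    """
    return statistics.stdev(daily_hrv) / statistics.fmean(daily_hrv)

# Two weeks with the same average HRV but very different stability:
stable  = [52, 50, 51, 53, 49, 50, 52]
erratic = [40, 63, 45, 60, 38, 65, 49]
print(round(hrv_cv(stable), 3))   # low: day-to-day HRV is steady
print(round(hrv_cv(erratic), 3))  # high: same mean, erratic pattern
```

The averages of the two series are nearly identical, which is exactly why mean HRV alone misses this signal.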
Why one formula doesn’t fit everyone
The obvious approach is to build a model: “if HRV CV rises by X and temperature deviates by Y, predict a dip.” The problem is that the weights are personal.
For one person, sleep regularity might be the strongest leading indicator — their readiness is resilient to training load variation but collapses when their sleep timing shifts. For another person, HRV variability is the dominant signal and sleep regularity barely matters.
Population-level models average over these differences. They produce predictions that are statistically reasonable and personally mediocre.
We took a different approach: let the model learn which signals matter for each individual.
Bayesian learning from your own data
Omnio’s prediction model starts with priors from sports science research — population-level estimates of how each signal relates to readiness changes. These are reasonable starting points, but they’re just starting points.
Every day, the model observes what actually happened: what it predicted yesterday, what readiness score arrived today, and what each signal was doing in the days before. It updates its beliefs about which signals matter for you and by how much.
This is Bayesian conjugate updating — mathematically principled, computationally lightweight, and naturally cautious. The model doesn’t overreact to a single surprising day. It gradually shifts its weights as evidence accumulates.
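The update step can be sketched with the textbook Normal-Normal conjugate pair. The priors, noise level, and effect sizes below are invented for illustration, not Omnio's actual parameters:

```python
def update_weight(prior_mean, prior_var, observed_effect, obs_var):
    """One conjugate Normal-Normal update of a single feature weight.

    prior_mean / prior_var: current belief about how strongly this signal
    predicts a readiness change. observed_effect: one day's noisy evidence.
    obs_var: how noisy a single day's evidence is (known, for conjugacy).
    """
    precision = 1 / prior_var + 1 / obs_var
    post_var = 1 / precision
    post_mean = post_var * (prior_mean / prior_var + observed_effect / obs_var)
    return post_mean, post_var

# Start from a (hypothetical) population prior, then feed 30 days of evidence
# from a user whose HRV-CV effect runs stronger than the population average.
mean, var = -2.0, 4.0                        # prior: HRV-CV spike ~ -2 points
for day_effect in [-3.1, -2.8, -3.4] * 10:   # 30 noisy daily observations
    mean, var = update_weight(mean, var, day_effect, obs_var=9.0)
print(round(mean, 2), round(var, 3))  # mean drifts toward -3, variance shrinks
```

Because the daily observation variance is large relative to the prior, no single surprising day moves the weight much — the caution described above falls out of the math rather than being bolted on.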
After 30 days, the feature weights start diverging from population defaults. After 90 days, you have a genuinely personalized model. The practical effect: predictions get tighter, early warnings get more precise, and the forecast extends further into the future as the model becomes more confident in its understanding of your patterns.
Early warnings with learned confidence
When the model detects an anomalous pattern in your leading indicators, it surfaces an early warning on your dashboard — before your readiness score reflects the change.
The key design decision: how many signals need to agree before firing a warning?
Too sensitive and you get false alarms. Too conservative and the warning arrives too late to be useful.
Our approach: let the model decide, based on how much it has learned.
With fewer than 30 days of data, the system is conservative — it requires three or more signals to agree before alerting. As the model matures and learns which signals are reliable for you personally, it lowers the threshold. A mature model might fire on a single strong signal that it has learned is a consistent predictor.
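A maturity-gated threshold like the one described could look like this. The sub-30-day cutoff comes from the description above; the 90-day step is an invented intermediate stage:

```python
def signals_required(days_of_data: int) -> int:
    """How many anomalous signals must agree before firing a warning.

    Young models are conservative; mature models trust single strong
    signals they have learned are reliable. Cutoffs are illustrative.
    """
    if days_of_data < 30:
        return 3
    if days_of_data < 90:   # hypothetical intermediate stage
        return 2
    return 1

def should_warn(anomalous_signals: list[str], days_of_data: int) -> bool:
    return len(anomalous_signals) >= signals_required(days_of_data)

print(should_warn(["hrv_cv"], days_of_data=20))   # False: model too young
print(should_warn(["hrv_cv"], days_of_data=120))  # True: a learned canary
```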
This means early warnings improve over time. The first few weeks, you might get a warning only when multiple things look off simultaneously. After a few months, the system catches subtler patterns — an HRV CV spike alone might be enough, if that’s historically been your canary.
Dip fingerprints: learning from what went wrong
When a readiness dip does happen, the system looks backward: which signals were anomalous in the 1-5 days before?
It records these as a dip fingerprint — a snapshot of what was unusual before the drop. Over time, fingerprints cluster into recognizable types:
- Training overreach — elevated load metrics + rising HRV variability
- Sleep disruption — circadian irregularity + fragmented sleep architecture
- Illness onset — temperature elevation + respiratory rate increase
- Accumulated stress — rising allostatic load + declining resilience score
When the early warning system fires, it compares the current signal pattern against your library of past fingerprints. If there’s a match, it tells you: “This looks like the pattern before your March 14 dip — HRV variability and temperature deviation preceded a 12-point readiness drop.”
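Matching the current pattern against past fingerprints can be as simple as a nearest-neighbour search over z-score vectors. The signal names and stored dips below are hypothetical:

```python
import math

def closest_fingerprint(current: dict, library: dict) -> tuple[str, float]:
    """Find the past dip whose pre-dip signal pattern (z-scores per signal)
    is nearest to today's pattern, by Euclidean distance."""
    def dist(a, b):
        keys = a.keys() & b.keys()
        return math.sqrt(sum((a[k] - b[k]) ** 2 for k in keys))
    name = min(library, key=lambda n: dist(current, library[n]))
    return name, dist(current, library[name])

library = {  # z-scores of leading signals in the days before each past dip
    "2025-03-14 (12-pt dip)": {"hrv_cv": 1.8, "temp_dev": 1.5, "sleep_reg": 0.2},
    "2025-01-30 (7-pt dip)":  {"hrv_cv": 0.3, "temp_dev": 0.1, "sleep_reg": 2.1},
}
today = {"hrv_cv": 1.9, "temp_dev": 1.2, "sleep_reg": 0.4}
match, d = closest_fingerprint(today, library)
print(match)  # the March pattern: HRV CV + temperature led that dip too
```

A production system would likely weight dimensions by each signal's learned reliability rather than treating all axes equally, but the shape of the comparison is the same.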
Pattern recognition, not just anomaly detection.
Adaptive forecast horizon
The traditional approach to forecasting is to pick a fixed window — “here’s your 5-day forecast.” But a 5-day forecast with ±20 points of uncertainty isn’t useful. A 3-day forecast with ±4 points is.
Omnio’s forecast extends as far as the model can predict with useful confidence. The confidence cone widens over time — each additional day compounds uncertainty. The chart ends where the cone gets too wide to be actionable.
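One way to truncate the forecast when the cone gets too wide, sketched under a random-walk assumption where uncertainty compounds as the square root of days ahead. The growth model and the actionability cutoff are invented for illustration:

```python
import math

def forecast_horizon(daily_sigma: float, cutoff: float = 8.0,
                     max_days: int = 14) -> int:
    """Days ahead we can forecast before the confidence cone is too wide
    to act on. daily_sigma is the model's one-day prediction error;
    uncertainty grows as sqrt(days) under a random-walk assumption."""
    days = 0
    while days < max_days and daily_sigma * math.sqrt(days + 1) < cutoff:
        days += 1
    return days

print(forecast_horizon(daily_sigma=2.5))  # mature, consistent user: long horizon
print(forecast_horizon(daily_sigma=5.0))  # volatile data: short horizon
```

As the model's one-day error shrinks with learning, the horizon extends on its own — which is what makes the lengthening forecast a visible signal of progress.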
For a user with consistent patterns and a mature model, this might extend to 7-10 days. For someone with volatile data or a young model, it might be 2-3 days. Watching the forecast extend over weeks of use is a visible signal that the model is learning.
This also communicates honestly about what the system knows and doesn’t know. A confidence cone that visibly widens is more trustworthy than a single-number prediction that hides its uncertainty.
What this means in practice
Here’s what a typical interaction looks like:
Monday evening. You check your dashboard. The forecast card shows green for the next 4 days with a narrow confidence cone. No early warning. Your model’s accuracy over the last 30 days is ±3.8 points.
Wednesday morning. An amber warning appears: “HRV variability rising — readiness may dip in ~2 days.” You tap through to the detail view. The signal breakdown shows your HRV CV is 1.9σ above your baseline. Your model has learned (from 84 observations) that HRV CV is your second-strongest predictor. The pattern matches a previous fingerprint from February where HRV CV preceded an 8-point drop.
You adjust. You swap Thursday’s planned heavy session for a recovery day. Friday’s readiness comes in 4 points higher than the model’s original trajectory.
The system didn’t just tell you your readiness was low — it told you before it dropped, told you why, and gave you enough lead time to change the outcome.
The technical foundation
The prediction engine is built on Omnio’s existing Bayesian model registry — 11 models that already learn personal recovery curves, training tolerances, and nutrition responses. This work extends the readiness prediction model from 6 features to 18, wires it into the forecast (closing a gap where trained models weren’t being used), and adds the early warning and fingerprint systems on top.
All features are normalized to z-scores against each user’s 30-day rolling baseline before feeding to the model. This means the system learns “how many standard deviations of your HRV CV predicts a dip” — not absolute values that vary between people.
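A minimal sketch of that rolling-baseline normalization (class and window handling are illustrative, not Omnio's implementation):

```python
import statistics
from collections import deque

class RollingBaseline:
    """Z-score a signal against its own 30-day rolling baseline, so the
    model sees 'how unusual is this for you' rather than raw values."""

    def __init__(self, window: int = 30):
        self.values = deque(maxlen=window)  # oldest days fall off the back

    def z_score(self, x: float) -> float:
        if len(self.values) < 2:
            self.values.append(x)
            return 0.0  # not enough history to define a baseline yet
        mu = statistics.fmean(self.values)
        sigma = statistics.stdev(self.values) or 1.0  # guard flat baselines
        self.values.append(x)
        return (x - mu) / sigma

baseline = RollingBaseline()
for day in [50, 51, 49, 52, 50, 48, 51]:  # a steady week of readings
    baseline.z_score(day)
print(round(baseline.z_score(60), 1))  # a 60 reads as several sigmas high
```

Two users with very different absolute HRV values produce comparable z-scores, which is what lets the model learn deviation-based rules instead of person-specific absolutes.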
The model tracks its own accuracy and automatically widens confidence intervals or resets to population priors if predictions drift. It won’t confidently give you bad forecasts.
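The drift guard could be as simple as a check on recent absolute prediction error. The thresholds and action names here are invented to show the shape of the logic:

```python
def check_drift(errors: list[float], widen_at: float = 5.0,
                reset_at: float = 10.0) -> str:
    """Guard against drifting predictions using mean absolute error over
    recent days (in readiness points). Thresholds are illustrative."""
    mae = sum(abs(e) for e in errors) / len(errors)
    if mae >= reset_at:
        return "reset_to_population_priors"
    if mae >= widen_at:
        return "widen_confidence_intervals"
    return "ok"

print(check_drift([2.1, -3.0, 1.5, -2.2]))   # ok: errors within tolerance
print(check_drift([6.0, -7.5, 8.1, -5.9]))   # widen: predictions drifting
```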
Shipping in phases
We’re rolling this out in three stages:
- Enhanced predictions — the expanded model and accuracy tracking, improving forecast quality immediately
- Early warnings and detail view — the dashboard card redesign, forecast chart with confidence cone, and signal breakdown
- Dip fingerprints — retrospective pattern matching and the “past patterns” history view
Each phase delivers standalone value. You don’t need to wait for fingerprints to benefit from better predictions.
Omnio is a health analytics platform that unifies wearable and health data with AI-powered insights. Learn more at getomn.io.
Related reading
- What Is a Composite Health Score and Why Does It Matter? Single metrics lie by omission. A composite score synthesizes HRV, sleep, training load, and recovery into one number — but only if you can see how it's built.
- Your Body Already Knows When to Focus. Your wearable data contains a hidden signal: the ultradian rhythm. We're building a system that reads it — predicting your best focus windows from sleep data, then refining in real-time with a heart rate monitor.
- Most Nutrition Trackers Count Calories. Ours Understands Your Diet. Calorie counting is table stakes. Omnio validates your logs against government nutrition databases, scores meal quality, tracks 35 micronutrients and polyphenols, classifies your dietary pattern, and connects what you eat to how you sleep, recover, and train.