Your AI Health Assistant Doesn't Know Who You Are

Your AI health assistant analyzes your sleep, recovery, and training data — but it never sees your name, email, or account ID. Here's how we built privacy into the architecture itself.

Mac DeCourcy · Updated April 3, 2026

Your AI health assistant analyzes your sleep, recovery, and training data — but it doesn’t know your name. It doesn’t know your email address. It doesn’t even know you’re you.

When we built Omnio’s AI chat assistant, we made a deliberate choice: the large language model powering your conversations never receives any personally identifying information. Not your name, not your email, not your account ID. To the AI, you’re just a collection of health patterns — sleep scores, heart rate trends, training loads. It can reason about your data brilliantly without ever knowing who it belongs to.

Here’s how that works.


The AI doesn’t know who you are

When you ask the assistant a question, we assemble context about your health to help it give a useful answer. That context includes things like your age, biological sex, height, health goals, and connected data sources. It includes your active protocols, recent check-ins, and training plan.

What it never includes: your name, your email address, or your account identifier. These fields are simply never passed to the AI. To the language model, you exist only as a set of health patterns and goals — enough to give you personalised advice, but nothing that could identify you as a person.

We go further with sensitive medical information. Fields like medications, medical conditions, and allergies are stripped from the AI’s context by default. Even though you’ve entrusted that information to Omnio, the AI doesn’t get to see it unless you explicitly allow it.
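The filtering described above can be sketched in a few lines. This is a hypothetical illustration, not Omnio's actual code — the field names (`name`, `medications`, and so on) are assumptions for the example:

```python
# Hypothetical sketch of assembling the AI's context from a user profile.
# Field names are illustrative, not Omnio's real schema.

IDENTIFYING_FIELDS = {"name", "email", "account_id"}  # never sent to the AI
SENSITIVE_FIELDS = {"medications", "medical_conditions", "allergies"}  # opt-in only

def build_ai_context(profile: dict, allow_sensitive: bool = False) -> dict:
    """Return only the profile fields the language model is allowed to see."""
    context = {}
    for field, value in profile.items():
        if field in IDENTIFYING_FIELDS:
            continue  # identity never reaches the model
        if field in SENSITIVE_FIELDS and not allow_sensitive:
            continue  # medical details stripped unless the user opted in
        context[field] = value
    return context

profile = {
    "name": "Jane Doe",
    "email": "jane@example.com",
    "account_id": "acct_123",
    "age": 34,
    "medications": ["levothyroxine"],
    "health_goals": ["improve sleep"],
}
print(build_ai_context(profile))
# {'age': 34, 'health_goals': ['improve sleep']}
```

The key property: exclusion is the default. A field reaches the model only if it survives both checks, so a new sensitive field added to the profile without updating the allowlist fails closed rather than leaking.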

Your data stays in its own lane

Behind the scenes, Omnio uses an architecture called MCP (Model Context Protocol) to let the AI query your health metrics. Here’s the important part: every single data query is automatically scoped to your account on the server side, before the AI ever sees the results.

The AI doesn’t choose whose data to look at — it can’t. When it asks “what was this user’s sleep like last week?”, the system injects your account scope into the query automatically. The AI doesn’t even know this scoping exists. It simply asks a question and receives only your data in response.

This isn’t a filter applied after the fact. It’s the architecture itself. There is no query the AI can construct that would return another user’s data, because the isolation happens at a layer the AI has no access to or awareness of.
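One way to picture that isolation layer is a wrapper that binds every data-query tool to the authenticated account before the AI ever gets to call it. A minimal sketch, assuming hypothetical handler names — not Omnio's real MCP implementation:

```python
# Hypothetical sketch of server-side query scoping. Function names and
# signatures are illustrative assumptions, not Omnio's actual code.

def make_scoped_handler(handler, user_id: str):
    """Wrap a data-query handler so every call is pinned to one account.

    The AI supplies only query parameters; the account scope is injected
    here, on the server, at a layer the model never sees."""
    def scoped(**params):
        params.pop("user_id", None)  # the AI cannot override the scope
        return handler(user_id=user_id, **params)
    return scoped

# An illustrative raw data-access function:
def query_sleep(user_id: str, days: int = 7):
    return f"sleep rows for {user_id}, last {days} days"

# At session start, the server binds the tool to the authenticated account:
sleep_tool = make_scoped_handler(query_sleep, user_id="acct_123")

# Even a hostile user_id parameter is silently discarded:
print(sleep_tool(days=7, user_id="acct_999"))
# sleep rows for acct_123, last 7 days
```

Because the scope is captured in the closure at session start, there is no parameter the model can pass that changes whose data comes back.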

The AI works on a need-to-know basis

The assistant has access to a curated set of around 30 health-related tools — things like looking up your sleep trends, checking your training load, or reviewing your recovery scores. That’s it. It can’t access your login credentials, your account settings, or any administrative functions.

The tool list is a strict whitelist. The AI can only call tools we’ve explicitly approved, and it can only pass parameters we’ve explicitly allowed. Sensitive parameters like your account scope are injected server-side — the AI can’t see or influence them.
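A whitelist like that can be enforced with a simple registry that checks both the tool name and its parameters. The tool names below are invented for the sketch:

```python
# Hypothetical sketch of a tool whitelist with per-tool parameter allowlists.
# Tool and parameter names are illustrative, not Omnio's real tool set.

ALLOWED_TOOLS = {
    "get_sleep_trends": {"days"},
    "get_training_load": {"weeks"},
    "get_recovery_scores": {"days"},
}

def validate_tool_call(tool: str, params: dict) -> None:
    """Reject any tool or parameter the AI was not explicitly granted."""
    if tool not in ALLOWED_TOOLS:
        raise PermissionError(f"tool not whitelisted: {tool}")
    extra = set(params) - ALLOWED_TOOLS[tool]
    if extra:
        raise PermissionError(f"disallowed parameters: {sorted(extra)}")

validate_tool_call("get_sleep_trends", {"days": 7})   # passes silently
# validate_tool_call("delete_account", {})            # raises PermissionError
```

Anything not on the list — an unknown tool, or a known tool with an unexpected parameter — is rejected before it touches any data.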

On the output side, every response from the AI passes through filters that scan for accidentally included personal data patterns before the message reaches you. And all conversations are automatically deleted after 30 days.
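An output filter of that kind typically pattern-matches the response text before delivery. A minimal sketch with two illustrative patterns — the real filter set is not described here:

```python
import re

# Hypothetical sketch of an output filter that scans model responses for
# personal-data patterns before they reach the user. Patterns are illustrative.

PII_PATTERNS = [
    re.compile(r"[\w.+-]+@[\w-]+\.\w+"),   # email addresses
    re.compile(r"\bacct_[A-Za-z0-9]+\b"),  # internal-style account IDs
]

def redact_pii(message: str) -> str:
    """Replace anything matching a known personal-data pattern."""
    for pattern in PII_PATTERNS:
        message = pattern.sub("[redacted]", message)
    return message

print(redact_pii("Contact jane@example.com about acct_123."))
# Contact [redacted] about [redacted].
```

This is defense in depth: the model should never hold identifying data in the first place, and the filter exists for the case where something slips in anyway — via a pasted message from the user, for example.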

Privacy by architecture, not by policy

It would have been easier to build a chatbot that passes your full account profile to the AI and asks it nicely not to repeat sensitive details. Plenty of products work this way — privacy enforced by prompt instructions and good intentions.

We didn’t want that. Health data is deeply personal, and “please don’t share this” isn’t a security model. Instead, we designed the system so the AI structurally cannot access identifying information. It’s not a matter of trust — the data simply isn’t there.

One last thing: the AI chat assistant requires your explicit consent before it activates. No one gets opted in by default. You choose whether to use it, and you can revoke that choice at any time.

Your health data helps you understand your body. It shouldn’t cost you your privacy to get that understanding.