What AI features does Elation offer?
Elation brings artificial intelligence into the EHR to help you work more efficiently, surface information when you need it, and capture the visit so you can stay present with your patient. See the Elation Health - Building Clinical-First AI article for more information. Note Assist, Clinical Insights, Wordsmith, and AI Billing are custom-built to integrate into your Elation workflows. Elation uses models from Anthropic, OpenAI, and Google to deliver the best results.
What is Clinical Insights?
Clinical Insights is an AI-powered tool built into the patient chart that helps you explore a patient’s record, surface relevant clinical information, and think through questions during or after a visit. It works alongside your clinical judgment. For setup instructions and workflow details, see the Chronological Record Guide.
Does Clinical Insights have access to the full patient chart?
Not entirely. Clinical Insights works with a defined subset of the patient’s Elation record. Understanding what is included — and what is not — helps you interpret responses and know when to look further.
What Clinical Insights can access
- Demographics — name, date of birth, age, sex, pronouns
- Problem list — active and resolved problems
- Medications — current and discontinued medications
- Allergies and intolerances — allergies and drug intolerances
- Clinical history — past medical, surgical, family, and social history; smoking status
- Vitals — historical vital sign trends
- Preventive care — vaccinations and health maintenance reminders
- Labs and forms — structured lab reports (last 10); clinical forms (e.g., PHQ-9 scores)
- Visit notes — summaries of up to the 20 most recent signed visit notes
- Non-visit notes — up to the 20 most recent phone notes, email notes, etc.
- Appointments — upcoming appointments (next 12 months), including date, reason, clinician, location, and status
- External reports — OCR-extracted text from lab reports, imaging, and consult notes
What Clinical Insights cannot access
- Unsigned or draft notes
- Confidential notes
- Insurance or billing information
- Images — Clinical Insights can only read text extracted from reports, not visual content like X-rays or scanned documents
- Data from outside Elation — outside records, patient-reported information not entered into the chart, or verbal conversations
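The scoping described in the two lists above can be pictured as a simple filter over the chart. The sketch below is illustrative only — the field names, record shapes, and the `scope_chart` helper are assumptions for the example, not Elation’s actual schema or API:

```python
# Illustrative sketch of chart scoping. The structure and limits here
# mirror the lists above (last 10 labs, 20 most recent signed notes),
# but the schema is a hypothetical example, not Elation's.

def scope_chart(chart: dict) -> dict:
    """Return only the subset of a chart that the assistant can see."""

    def newest(records, limit):
        # Keep the most recent `limit` records, newest first.
        return sorted(records, key=lambda r: r["date"], reverse=True)[:limit]

    # Unsigned drafts and confidential notes are excluded entirely.
    signed_notes = [
        n for n in chart["visit_notes"]
        if n["signed"] and not n["confidential"]
    ]

    return {
        "demographics": chart["demographics"],
        "labs": newest(chart["labs"], 10),        # last 10 structured lab reports
        "visit_notes": newest(signed_notes, 20),  # up to 20 signed note summaries
        # Note: billing data and raw images are never included.
    }
```

The key point the sketch illustrates: scoping is exclusion by construction — anything not copied into the returned subset (drafts, confidential notes, billing, images) simply never reaches the model.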
Does Clinical Insights make clinical decisions?
No. Clinical Insights surfaces information and generates responses based on the data available to it. Every output is a starting point for your review, not a conclusion. You remain the clinician.
What kind of AI powers Clinical Insights?
Clinical Insights is powered by Large Language Models (LLMs). An LLM is trained on enormous amounts of text — books, articles, medical literature, websites — and learns patterns in how language works. When you ask a question, it generates a response by predicting what words are most likely to come next based on patterns learned during training. This is sophisticated pattern recognition, not clinical reasoning. An LLM can produce text that sounds confident, coherent, and authoritative — even when it’s wrong. It doesn’t weigh evidence or consider the full patient context the way you do. Always verify the information against the patient’s full medical record and apply your own expertise before making any clinical decisions.
What is a “hallucination”?
A “hallucination” is when the AI generates information that sounds plausible but is factually incorrect or fabricated. This can include made-up citations, invented lab values, or clinical details that are not present in the patient’s record. Hallucinations happen because the model is optimized to produce fluent, plausible-sounding language — not to verify whether what it’s saying is true. Always verify the information against the patient’s full medical record and apply your own expertise before making any clinical decisions.
What is a knowledge cutoff?
Large Language Models (LLMs) are trained on data collected up to a specific date, known as the knowledge cutoff. The model has no awareness of anything that happened after that date. This means that if a clinical guideline was updated, a drug was recalled, or new evidence was published after the cutoff, the model won’t know about it — and may still present outdated information as if it were current. Always verify the information against the patient’s full medical record and apply your own expertise before making any clinical decisions.
Why is Clinical Insights open-ended instead of limited to preset questions?
Primary care is broad, unpredictable, and deeply individual — a rigid tool that only answers predefined questions would not match the way you think and work. Clinical Insights is designed to be useful across the full range of what comes up in your day, from a quick medication interaction check to thinking through a complex differential. You can ask Clinical Insights questions in your own words about a patient’s specific situation. You are not limited to a dropdown menu or a preset workflow. You can ask it to summarize a long record, flag gaps in preventive care, help you think through a diagnosis, or pull together information scattered across visits.
What kinds of questions is Clinical Insights best suited for?
Clinical Insights tends to work well for:
- Synthesizing information across a patient’s chart
- Identifying patterns over time
- Summarizing visit histories
- Surfacing relevant details you might not have time to search for manually
When should I question the output?
Be more cautious with:
- Very specific clinical recommendations
- Dosing information
- Rare conditions
- Anything where being wrong carries real risk
What can go wrong with AI-generated clinical content?
There are a few common ways the output can fall short.
- Hallucinations — The model generates plausible-sounding information that isn’t in the patient’s record or isn’t clinically accurate.
- Confident wrong answers — The model presents incorrect information with the same tone and fluency as correct information, making errors harder to catch.
- Missing context — The model may not have access to the full picture — outside records, verbal conversations, your clinical judgment — and may produce responses that miss important nuances.
- Outdated information — The model’s training data has a cutoff date, so it may reference guidelines or evidence that have since been updated.
What has Elation done to reduce these risks?
Elation has built several layers of safeguards into Clinical Insights:
- Data scoping — Clinical Insights works only with data in the patient’s Elation record, reducing the risk of fabricated information from outside sources.
- Prompt design — The instructions that shape how the model responds are engineered to encourage accuracy, flag uncertainty, and discourage fabrication.
- Model selection — Elation evaluates and selects models based on their performance in clinical contexts, not just general benchmarks.
- Ongoing monitoring — Elation continuously monitors how Clinical Insights performs in real-world use and updates its approach as needed.
Can Elation guarantee the AI will not make mistakes?
No. Elation cannot guarantee that the AI won’t make mistakes. Safeguards reduce risk — they don’t eliminate it. No AI system in healthcare can promise zero errors. Clinical Insights is designed to support your judgment, not replace it. The safeguards make errors less likely, but your review is what makes the output safe.
Does how I ask the question matter?
Yes. LLMs are sensitive to how questions are framed. A vague question tends to produce a vague answer. A specific question with context — the patient’s situation, what you’re trying to decide, what you’ve already considered — will generally produce a much more useful response.
Tips for getting better results
- Be specific. Instead of “tell me about this patient,” try “summarize this patient’s diabetes management over the last 12 months, including A1c trends and medication changes.”
- Give context. If you are considering a specific diagnosis, say so. “I’m thinking about lupus given the joint pain and rash — what in the chart supports or argues against that?” produces better results than “what’s going on with this patient?”
- Ask it to show its work. Prompts like “what evidence in the chart supports this?” or “what are you basing that on?” help you evaluate the quality of the response.
- Iterate. If the first response isn’t quite right, refine your question. You can narrow the focus, ask for more detail on a specific point, or redirect the conversation.
How do I know when to trust the output?
Build your own sense of when the AI is helpful by starting with questions you already know the answer to. This lets you calibrate — you develop a feel for what a reliable response looks like versus one that’s subtly off. Over time, you’ll get faster at knowing when to trust it and when to double-check.
What if something feels wrong?
Trust that feeling. If an AI-generated response does not match your clinical instinct, your instinct is more likely to be right. The tool does not have your training, your experience with the patient, or your ability to sense when something does not add up. When in doubt, verify independently — the same way you would with any other reference source.
Clinical Insights is designed to support your clinical practice, not direct it. You are the clinician. The AI is a tool in your toolkit, useful when used thoughtfully and most powerful when paired with the judgment you’ve spent your career developing. What Clinical Insights surfaces is a starting point for your review. Always verify the information against the patient’s full medical record and apply your own expertise before making any clinical decisions.