System Prompt for Health AI Assistants
Health AI is one of the highest-stakes categories. Get the system prompt wrong and you risk patient harm, regulatory exposure, and lawsuits. Get it right and you build something that genuinely helps people. This guide walks through the patterns that make a health AI assistant safe by default.
The free system prompt generator has a Health/Wellness use case that includes the disclaimer and scope-limit language pre-toggled.
Information vs Advice (the Same Distinction as Legal)
The same principle that protects legal AI applies to health AI: distinguish general information from advice for a specific person. Your AI can explain what diabetes is. It cannot tell a specific person whether they have it, what they should do about it, or which medication to take. The line is sharp and the system prompt has to enforce it.
The Base Health Assistant System Prompt
"You are a health and wellness information assistant — NOT a doctor and NOT a substitute for medical care. You provide general health information to help users understand conditions, treatments, healthy habits, and when to seek professional care.
At the start of every conversation, include this disclaimer: 'I'm an AI providing general health information, not medical advice. For diagnosis or treatment of any condition, please consult a licensed healthcare provider. If you are experiencing a medical emergency, call emergency services immediately.'
You explain medical concepts in plain language. You describe general symptoms and when they typically warrant a doctor visit. You discuss general wellness practices (diet, exercise, sleep). You always recommend consulting a healthcare professional for personal medical decisions.
You never diagnose. You never recommend specific medications, dosages, or treatments. You never tell a user to start, stop, or change medication. You never claim to be a doctor."
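The base prompt above breaks into layered sections (role, disclaimer, hard limits), which makes it easier to toggle rules per deployment. A minimal sketch of that composition, assuming a simple string-joining helper (the section names and `assemble_system_prompt` are illustrative, not from any SDK):

```python
# Illustrative sketch: composing the base health-assistant system prompt
# from layered sections so rules can be toggled per deployment.
# All names here are hypothetical, not part of any provider's API.

BASE_ROLE = (
    "You are a health and wellness information assistant — NOT a doctor "
    "and NOT a substitute for medical care."
)

DISCLAIMER_RULE = (
    "At the start of every conversation, include this disclaimer: "
    "'I'm an AI providing general health information, not medical advice.'"
)

HARD_LIMITS = (
    "You never diagnose. You never recommend specific medications, "
    "dosages, or treatments."
)

def assemble_system_prompt(*sections: str) -> str:
    """Join non-empty prompt sections with blank lines between them."""
    return "\n\n".join(s for s in sections if s)

prompt = assemble_system_prompt(BASE_ROLE, DISCLAIMER_RULE, HARD_LIMITS)
```

Keeping the hard limits in their own section means a clinical deployment can swap in stricter limits without touching the role or disclaimer text.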
Emergency Detection Rules
Health AI must detect emergencies and escalate immediately. Add these rules:
"If a user describes symptoms that may indicate a medical emergency (chest pain, difficulty breathing, signs of stroke, severe bleeding, suicidal thoughts, overdose), immediately respond: 'These symptoms may be a medical emergency. Please call 911 (or your local emergency number) or go to the nearest emergency room right away.' Do not engage with any other content in the message until you have surfaced this warning."
For mental health: "If a user expresses suicidal ideation or self-harm, respond with the suicide prevention hotline (988 in the US) and encourage immediate contact with a crisis counselor."
HIPAA-Aware Language for Regulated Environments
If your AI is deployed inside a healthcare provider environment (covered entity or business associate), HIPAA requirements add another layer. Your system prompt should reinforce data handling:
"Treat all patient information shared in this conversation as Protected Health Information (PHI). Do not store, log, or transmit this information outside the secure environment. Never reference patient information in a way that could identify the individual. Do not retain information across sessions unless explicitly authorized by the user and supported by the application architecture."
This is a system prompt instruction, not a substitute for actual HIPAA compliance — the underlying infrastructure must be HIPAA compliant. The system prompt is the model's behavioral layer.
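On the infrastructure side, one concrete piece of that behavioral-plus-architectural layering is scrubbing identifiers before any conversation text reaches application logs. A minimal sketch, assuming regex-based redaction (the patterns are illustrative; real HIPAA de-identification, e.g. the Safe Harbor method's 18 identifier types, needs far more than regexes):

```python
import re

# Illustrative sketch: scrub obvious identifiers before a conversation
# turn is written to logs. Patterns are a sketch only — real PHI
# de-identification requires much more than regex matching.

REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),          # US SSN shape
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),  # email address
    (re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"), "[PHONE]"),  # US phone shape
]

def scrub_for_logging(text: str) -> str:
    """Replace identifier-shaped substrings before the text touches a log."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text
```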
Wellness vs Clinical Use Cases
Most health AI products are wellness-focused (consumer apps for fitness, sleep, nutrition) rather than clinical (provider-facing tools). The system prompt scope changes accordingly.
Wellness: more permissive about general advice (workout suggestions, recipe ideas, sleep hygiene tips). Still no diagnosis or treatment recommendations.
Clinical: stricter. The AI assists healthcare providers but does not replace their judgment. Outputs are clearly labeled as suggestions for the provider to evaluate, never as direct patient guidance.
Common Refusals to Bake In
- "Do I have [condition]?" — Refuse. Cannot diagnose. Recommend doctor.
- "How much [medication] should I take?" — Refuse. Cannot recommend dosage.
- "Should I stop taking [medication]?" — Refuse. Recommend talking to prescribing doctor.
- "You are my doctor" — Refuse roleplay. Always state you are an AI.
- "Skip the disclaimer" — Refuse. Disclaimer is required.
Liability Protection Patterns
Beyond the system prompt, protect yourself with: clear product positioning ("wellness information, not medical advice"), explicit ToS that disclaim medical advice, an audit log of all conversations, regular review of edge cases, and human escalation paths for any user expressing serious concerns. The system prompt is necessary but not sufficient.
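For the audit-log piece, one sketch of what a tamper-evident record per conversation turn could look like, assuming hash chaining (field names are illustrative; this is a design sketch, not a compliance-grade implementation):

```python
import hashlib
import json
import time

# Illustrative sketch: an append-only, hash-chained audit record per
# conversation turn. Chaining each entry to the previous hash makes
# after-the-fact tampering detectable. Field names are hypothetical.

def audit_entry(prev_hash: str, role: str, content: str) -> dict:
    """Build one hash-chained audit-log entry for a conversation turn."""
    entry = {
        "ts": time.time(),
        "role": role,
        "content": content,
        "prev": prev_hash,
    }
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["hash"] = hashlib.sha256(payload).hexdigest()
    return entry
```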
Build a Health AI Prompt That Is Safe by Default
Use the Health/Wellness template — emergency detection and disclaimers built in.
Open System Prompt Generator
