The patient hands you three printed pages. Symptoms typed into ChatGPT, a detailed response about possible causes, specific questions to ask their GP. They apologise. “I know I probably shouldn’t have...”
This is already happening. Patients are arriving in GP surgeries with AI-generated health information. Some have printed pages from ChatGPT. Some have screenshots on their phone. Some have spent hours in conversation with an AI chatbot, refining their questions, building a detailed understanding of their symptoms, and arriving with a level of preparation that would have been unusual even five years ago.
This is not fundamentally different from the patient who arrives with pages printed from NHS Choices or a list of conditions they found on Google. But there are important differences. AI chatbots give responses that feel personalised. They answer follow-up questions. They can sound authoritative and confident even when they are wrong. And patients often form a relationship with the AI, trusting it in a way they might not trust a static webpage.
Your response to these patients sets the tone for the entire consultation. Get it right, and you have a well-prepared patient who is engaged in their own care. Get it wrong, and you have a patient who feels dismissed, embarrassed, or defensive.
The patient with the printout
The most common scenario, and the one that will become increasingly frequent, is the patient who has researched their symptoms with an AI chatbot before their appointment. They arrive with questions, possible diagnoses, and sometimes a detailed understanding of their condition that is genuinely impressive. They often apologise for bringing it up, as if they have done something wrong.
Your first task is to normalise this behaviour. Patients researching their health is not a problem. It is a sign of engagement. The patient who has spent time thinking about their symptoms, formulating questions, and trying to understand their body is a patient who is invested in their own care. That is exactly what we want.
The second task is to use what they have brought as a starting point, not an obstacle. “That’s helpful background. Tell me what you’re most worried about” is a much more productive opening than “You shouldn’t believe everything you read online.” The first invites the patient into a collaborative conversation. The second shuts it down.
When a patient brings AI-generated health information, start with their concerns, not the AI’s output. “What are you most worried about?” or “What made you look this up?” gets to the heart of why they are in your consulting room far more effectively than reviewing their printout line by line.
That said, you do need to address the content. If the AI has suggested a rare diagnosis that is causing the patient significant anxiety, you need to explain why that diagnosis is unlikely in their case. If the AI has provided genuinely useful information, acknowledge it. If the AI has got something wrong, correct it — but correct the information, not the patient’s decision to look it up.
The over-researched parent
Parents of young children are a group for whom AI health research creates both opportunities and challenges. A parent who has researched their child’s symptoms may arrive with ten possible diagnoses, detailed knowledge of red flag features, and significant anxiety. They may have been told by the AI to “seek urgent medical attention” for something that, in your clinical assessment, does not require it.
The challenge is that dismissing their research dismisses their anxiety — and their anxiety is real and valid, regardless of whether the AI’s suggestions are correct. The parent who has been up since 3am with a febrile child and has been told by ChatGPT that the rash “could indicate meningitis” is frightened. They need reassurance based on your clinical assessment, not a lecture about the limitations of AI.
A useful approach is to acknowledge what the AI got right, correct what it got wrong, and explain the clinical reasoning that leads to your assessment. “The AI was right that a non-blanching rash needs checking urgently — you did the right thing coming in. What I can see is that your child is alert, feeding well, and the rash is actually blanching when I press on it. That’s reassuring.” This validates the parent’s concern, credits their action, and provides clinical context the AI could not.
Where this becomes more difficult is when the parent’s AI research has identified something you were not planning to investigate. “I read that recurrent ear infections can be linked to immune deficiency — should we test for that?” In most cases, the answer is no, but you need to explain why in a way that respects their research while applying your clinical judgement. The AI does not know this is the child’s second ear infection, not their fifteenth.
A practical risk framework
Back in Module 1, we discussed a risk ladder for AI use. This is directly relevant when patients ask you whether they should use AI for health questions — and they will ask. You need a practical framework that is honest, balanced, and avoids both uncritical enthusiasm and blanket disapproval.
Generally safe uses of AI for health information:
• Understanding a condition you have already been diagnosed with
• Preparing questions before a GP appointment
• Learning medical terminology to understand letters and results
• Getting general lifestyle and wellbeing information
• Understanding how a medication works (not whether to take it)
Use with caution:
• Symptom checking — AI may over-diagnose or miss important context
• Interpreting blood test results — reference ranges vary, and context matters
• Researching a condition you think you might have but have not been diagnosed with
• Comparing treatment options — AI does not know your full medical history
Avoid entirely:
• Making treatment decisions based on AI advice
• Adjusting medication doses based on AI suggestions
• Using AI as a substitute for seeking medical attention
• Self-diagnosing serious or complex conditions
• Delaying seeking help because the AI said it was probably nothing
This framework gives patients practical guidance without being paternalistic. Most patients will instinctively understand the logic: AI can help you learn and prepare, but it cannot replace someone who knows you, can examine you, and has access to your full medical history.
Practical response phrases
Having a few well-practised phrases makes these consultations much smoother. You do not need a script, but having comfortable language to draw on helps, particularly when you are tired or under time pressure.
When a patient brings AI research that is broadly accurate: “Some of what you’ve read is right. Let me explain how it applies to your specific situation.” This validates their effort without endorsing the AI as a clinical authority.
When the AI has caused unnecessary anxiety: “I can see why that information would be worrying. The important thing is that I can examine you and put this into context. Let me tell you what I’m finding.” This shifts the focus from the AI’s output to your clinical assessment.
When the AI has got something wrong: “AI can give general information, but it doesn’t know you. In your case, what’s actually going on is...” This corrects the misinformation without criticising the patient for seeking it.
When a patient asks if they should use AI for health questions: “It can be useful for understanding conditions and preparing for appointments. But for symptoms, diagnoses, and treatment decisions, that’s what I’m here for. The AI doesn’t know your history, and it can’t examine you.”
The goal is never to make patients feel foolish for using AI. It is to help them use it well — and to understand what it cannot do. A patient who feels respected is a patient who will keep coming back when they need to.
The patient who trusts AI more than you
This is uncommon, but it happens. A patient has spent hours with an AI chatbot. The AI has been patient, thorough, available 24/7, and never made them feel rushed. They arrive convinced they have a specific diagnosis, and when your assessment differs, they push back. “But ChatGPT said...”
This is not really about AI. It is about trust, health anxiety, and the therapeutic relationship. The AI has not caused the problem — it has given the patient a framework for their anxiety that feels authoritative and certain. Your clinical uncertainty (“It could be several things, let’s investigate”) feels less reassuring than the AI’s confident diagnosis.
The most effective approach is to avoid a direct confrontation between your opinion and the AI’s. Instead, acknowledge their concern and explain your clinical process. “I understand that the information you’ve found points in that direction. What I’d like to do is examine you properly, run some appropriate tests, and give you an answer based on your specific situation. If it turns out to be what you’re concerned about, we’ll manage it. If it’s something else, we’ll manage that too.”
In rare cases, you may encounter a patient who genuinely refuses to accept your clinical assessment because it contradicts the AI. This requires the same skills you would use with any patient who disagrees with your management plan: clear communication, documentation of the discussion, shared decision-making where possible, and safety-netting. The AI is a new variable, but the clinical and communication skills are the same.
The patient who fears AI
At the other end of the spectrum, some patients are anxious about AI in healthcare. They may ask about the AI scribe in your consulting room. They may express concern about their data. They may worry that AI is replacing doctors. These concerns are valid and deserve a thoughtful response.
“What’s recording us, doctor?” is a question you should be able to answer clearly and honestly. Explain what the system does, what happens to the data, and that they can opt out. Do not be defensive or dismissive. A patient asking about the technology in their consultation is exercising exactly the kind of informed engagement we should encourage.
For patients who worry about AI replacing doctors, honest reassurance is appropriate: “AI helps me with some of the administrative tasks, like note-taking, so I can focus more on you. It doesn’t make clinical decisions. That’s my job, and it’s not changing.” This is truthful, reassuring, and positions the technology as a tool rather than a replacement.
Some patients will have read alarming headlines about AI in healthcare. They may conflate different types of AI — the scribe in your room, the diagnostic AI in radiology, the chatbot they used at home. Helping them understand the distinction is useful. The AI scribe is a note-taking tool. It is not diagnosing them, deciding their treatment, or making any clinical judgements. You are.
Above all, remember that a patient who expresses concern about AI is a patient who cares about their healthcare. That concern is a foundation for trust, not a barrier to it.
Key Takeaway
Patients using AI for health questions is not a problem to solve. It is a reality to navigate. Your job is not to police their research. It is to help them apply it sensibly — and to fill the gaps that AI cannot.