Module 3: AI in the Consultation
Lesson 8 of 8 · ~8 min read

AI as a Clinical Thinking Partner

Differentials, guidelines, polypharmacy, and where the line sits

You have a patient with an unusual rash. Not typical eczema. Not clearly psoriasis. Photosensitive, maybe. You have a working differential, but you want to make sure you are not anchoring. Could you ask AI?

Throughout this course, we have mostly discussed AI as something that happens to you — a scribe in your consulting room, a tool your patients are using, a technology your practice has adopted. In this final lesson, we shift perspective. Here, we talk about AI as something you might actively choose to use as part of your clinical thinking.

This is the most nuanced territory in the course. Using AI as a clinical aide-memoire sits firmly in the amber zone we discussed in Module 2 — neither clearly safe nor clearly dangerous, but requiring careful judgement about when, how, and within what boundaries. The difference between a useful thinking tool and an inappropriate delegation of clinical responsibility is not always obvious, and it is your professional judgement that determines which side of that line you are on.

Let us be clear from the outset: nothing in this lesson is a recommendation to use any specific AI tool for clinical decision-making. What follows is a framework for thinking about where AI might assist your clinical reasoning, and where the boundaries of safe use sit.

Expanding your differential

One of the most promising uses of AI in clinical practice is as a check against anchoring bias. Anchoring — the tendency to fixate on an early diagnosis and interpret subsequent information through that lens — is one of the most common cognitive biases in medicine. It is also one of the hardest to overcome, because by definition you do not know you are doing it.

Imagine you have a patient with joint pain, fatigue, and a photosensitive rash. Your working diagnosis is drug-induced photosensitivity — the patient recently started a new medication. But you want to make sure you are not missing something. You could ask an AI tool: “What are the differential diagnoses for joint pain, fatigue, and photosensitive rash in a 35-year-old woman?”

The AI will likely generate a list that includes lupus, dermatomyositis, porphyria, and several other conditions alongside drug-induced photosensitivity. You already know most of these. But seeing them listed out — particularly the ones you had not consciously considered — can prompt you to think more broadly. Did you check an ANA? Is there a malar distribution? Any oral ulcers?

This is the AI functioning as a thinking partner, not a decision-maker. It is not telling you the diagnosis. It is prompting you to consider possibilities you might have deprioritised. The clinical assessment, the examination, the decision about what to investigate — all of that remains entirely yours.

When using AI to explore differentials, frame your query with anonymised clinical features only. Never include patient names, dates of birth, NHS numbers, or any identifying information. “35-year-old woman with joint pain and photosensitive rash” is sufficient. The AI does not need to know who the patient is to help you think.
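To make that rule concrete, here is a purely illustrative sketch (not a feature of any clinical system or AI tool) of how a crude automated screen for obvious identifiers might work before a query is sent. The patterns are assumptions for illustration only: they catch only the most obvious formats and are no substitute for the habit of never typing identifiable details in the first place.

```python
import re

# Illustrative patterns only: these catch obvious formats and are NOT
# an exhaustive or validated PII screen.
PII_PATTERNS = {
    "possible NHS number": re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{4}\b"),
    "possible date": re.compile(r"\b\d{1,2}[/.-]\d{1,2}[/.-]\d{2,4}\b"),
    "possible UK postcode": re.compile(
        r"\b[A-Z]{1,2}\d[A-Z\d]?\s*\d[A-Z]{2}\b", re.IGNORECASE
    ),
}

def screen_query(query: str) -> list[str]:
    """Return the identifier types apparently present in a draft query."""
    return [label for label, pattern in PII_PATTERNS.items() if pattern.search(query)]

draft = "35-year-old woman with joint pain and photosensitive rash"
hits = screen_query(draft)
if hits:
    print("Do not send; possible identifiers found:", ", ".join(hits))
else:
    print("No obvious identifiers detected")
```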

There are important caveats. AI tools can generate plausible-sounding differentials that include very rare conditions, creating a risk of over-investigation. They may also miss conditions that are common in UK general practice but less prominent in the training data. And they cannot integrate the clinical gestalt — the overall impression you form from seeing and examining the patient — that is often the most important diagnostic tool you have.

Guideline checks and quick references

NICE guidelines are comprehensive, evidence-based, and essential to good clinical practice. They are also long, frequently updated, and sometimes difficult to navigate quickly during a busy surgery. Asking an AI tool, “What does NICE recommend for the initial management of type 2 diabetes in adults?” can give you a rapid summary that helps you check your own knowledge.

This is useful. But it comes with a critical caveat: you must verify the AI’s summary against the actual guideline. AI tools can hallucinate guideline content just as easily as they hallucinate clinical findings. They may cite outdated recommendations, conflate guidance from different conditions, or present a simplified version that misses important nuances or exceptions.

A practical approach is to use AI for the initial query and then confirm the key points against the actual NICE guideline, the BNF, or your clinical system’s decision support. Think of it as a quick-reference shortcut, not a definitive source. If the AI says NICE recommends metformin as first-line for type 2 diabetes, you probably already knew that — but it is still worth confirming, especially for less familiar conditions or recently updated guidance.

AI tools can confidently cite guidelines that do not exist, quote recommendations that have been superseded, or merge guidance from different conditions. Always verify guideline information against the primary source before acting on it.

Polypharmacy checks are a related use case. “Is there a significant interaction between amlodipine and simvastatin?” is a reasonable question to put to an AI tool as a quick sense-check. But the answer must be verified against the BNF or your clinical system’s interaction checker. AI tools sometimes flag interactions that are clinically insignificant while missing ones that matter. They also lack the context of your patient’s renal function, hepatic function, and other medications that affect the clinical significance of any given interaction.

Drafting and administrative tasks

Where AI may add the most value with the least clinical risk is in non-clinical text generation. Drafting a referral letter framework, composing a patient information leaflet, or creating a template for a clinical audit are all tasks where AI can save significant time without directly affecting patient care — as long as you review and edit the output.

For referral letters, you might ask an AI tool to draft a structure for a two-week-wait referral for a suspicious skin lesion, then populate it with the specific clinical details from your consultation. The AI provides the framework; you provide the clinical content. This can be significantly faster than writing from scratch, particularly for referral types you make infrequently.

The absolute rule here is: no patient-identifiable information goes into the AI tool. Draft the framework with placeholder text, then add the patient’s details in your clinical system. If you are using a general-purpose AI tool (as opposed to one integrated into your clinical system with appropriate data processing agreements), this is not just good practice — it is a legal requirement under UK GDPR.
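As a minimal sketch of that placeholder workflow (hypothetical throughout: the framework wording and placeholder names are invented for illustration), the AI-drafted letter contains only bracketed placeholders, and the substitution with real details happens locally, inside your clinical system:

```python
# Hypothetical AI-drafted framework: placeholders only, so no patient
# data was ever sent to the AI tool.
framework = (
    "Dear Colleague,\n\n"
    "Re: [PATIENT NAME], DOB [DOB], NHS No. [NHS NUMBER]\n\n"
    "I would be grateful for an urgent two-week-wait assessment of a "
    "suspicious skin lesion. [CLINICAL FINDINGS]\n\n"
    "Yours faithfully,\n[GP NAME]\n"
)

# Substitution happens here, inside the clinical system, after the
# draft has left the AI tool. Values are placeholders for illustration.
details = {
    "[PATIENT NAME]": "<from the clinical record>",
    "[DOB]": "<from the clinical record>",
    "[NHS NUMBER]": "<from the clinical record>",
    "[CLINICAL FINDINGS]": "<your consultation findings>",
    "[GP NAME]": "<your name>",
}

letter = framework
for placeholder, value in details.items():
    letter = letter.replace(placeholder, value)

print(letter)
```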

For patient information materials, AI can be genuinely helpful. Asking it to “explain what an eGFR blood test measures, in plain English, at a reading age of 11-12” can produce a useful first draft that you then review for clinical accuracy. This is significantly faster than writing from scratch and often produces more accessible language than clinicians naturally use.
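If you want a rough, automated sanity check on reading level before your clinical review, one option is a Flesch-Kincaid grade calculation. The sketch below assumes the open-source textstat Python package (pip install textstat); a grade of roughly 6 to 7 corresponds to a reading age of about 11 to 12. The score measures only sentence and word length; it says nothing about clinical accuracy.

```python
# Sketch assuming the third-party textstat package (pip install textstat).
import textstat

draft = (
    "An eGFR blood test estimates how well your kidneys are filtering "
    "waste products out of your blood. A higher number usually means "
    "your kidneys are working well."
)

grade = textstat.flesch_kincaid_grade(draft)
# US school grade 6-7 corresponds roughly to a UK reading age of 11-12.
print(f"Flesch-Kincaid grade: {grade:.1f}")
if grade > 7:
    print("Consider asking the AI to simplify further before your review.")
```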

The key principle across all these administrative uses is the same: AI generates a draft, you verify and finalise. The AI output is never the finished product. It is always a starting point for your professional review.

Where aide-memoire becomes delegation

This is the most important distinction in this lesson, and it is worth being precise about. There is a line between using AI as a thinking tool and delegating clinical decisions to it. That line is not always obvious, and it can shift depending on context. But knowing where it sits is a core professional responsibility.

Aide-memoire (appropriate):
• “Remind me of the CHA₂DS₂-VASc scoring criteria”
• “What are the diagnostic criteria for polycythaemia vera?”
• “What does NICE recommend for step 2 of the asthma pathway?”
• “List the common side effects of sertraline”
• “What bloods should I consider for unexplained weight loss?”

Delegation (too far):
• “Tell me if this patient needs anticoagulation”
• “What is the diagnosis based on these symptoms?”
• “Should I refer this patient urgently?”
• “Is this medication safe for this patient?”
• “Write this patient’s management plan”

The difference is clear when you look at it on paper. Aide-memoire questions ask the AI for factual information that you then apply to the clinical situation using your own judgement. Delegation questions ask the AI to make or recommend a clinical decision. The first treats the AI as a reference tool. The second treats it as a clinician — and AI is not a clinician.

In practice, the line can be blurrier. “What are the red flags for cauda equina syndrome?” is an aide-memoire. “Does this patient’s presentation sound like cauda equina?” is delegation. The difference might be just a few words, but the professional implications are profound. In the first case, you are refreshing your knowledge and applying it. In the second, you are asking the AI to exercise clinical judgement it does not have.

If you would not be comfortable explaining to the GMC that you used an AI tool in this way, you have probably crossed the line from aide-memoire to delegation. That discomfort is a useful professional instinct. Trust it.

The GMC test

A practical way to check whether your use of AI is appropriate is to apply what we might call the GMC test. This is not a formal framework — the GMC has not yet published comprehensive guidance on AI use in clinical practice — but it is a useful mental model based on the principles that already govern your practice.

Question 1: Would I be comfortable explaining this to the GMC if questioned? If a clinical decision you made was later scrutinised, would you be comfortable saying, “I used an AI tool to check the diagnostic criteria for condition X, then applied those criteria to my clinical findings”? Probably yes. Would you be comfortable saying, “I asked an AI tool whether this patient needed referral, and it said no, so I didn’t refer”? Absolutely not.

Question 2: Did I verify the AI output independently? Whatever the AI told you, did you check it against an authoritative source — the BNF, NICE, a textbook, your own knowledge? If the AI said there was no interaction between two medications, did you confirm that before prescribing? Verification is not optional. It is the minimum standard.

Question 3: Did I apply my own clinical judgement to the patient in front of me? AI can provide general information. It cannot assess the specific patient in your consulting room. Did you take the AI’s general output and apply it to this patient’s specific circumstances, history, examination findings, and preferences? If yes, you used AI as a tool. If no, you delegated to it.

These three questions will not cover every scenario. But they provide a robust framework for the vast majority of situations where you might consider using AI in your clinical work. When in doubt, err on the side of caution. The technology will become more sophisticated and the guidance more specific over time. Your professional standards remain constant.

What you have learned in Module 3

This module has been about the practical, day-to-day reality of working with AI in general practice. We have covered a lot of ground, and it is worth stepping back to see how it fits together.

You learned that AI documentation is now mainstream in UK general practice — 28% of GPs using it, 98% with access. You learned how to evaluate any AI tool using five questions that protect you and your patients. You explored how ambient scribing works from activation to sign-off, and why understanding the mechanics matters.

You developed the most critical practical skill in the course: reviewing AI-generated notes. You know the four-point check, you know what hallucinated negatives look like, and you know what to do when you catch an error. You explored the particular challenges of sensitive consultations — mental health, domestic abuse, safeguarding — and why these situations often require you to pause the scribe and document manually.

You considered how to respond when patients bring AI-generated health information to the consultation, and how to guide them towards safe and productive use of these tools. And in this final lesson, you explored the potential for AI as a clinical thinking partner — where it can genuinely help, where the boundaries sit, and how to apply professional judgement to a rapidly evolving technology.

The common thread across all eight lessons is this: AI is a tool, and you are the clinician. The technology changes. Your professional responsibility does not. Every AI output — whether a clinical note, a differential diagnosis, or a guideline summary — requires your review, your judgement, and your accountability.

In Module 4, we move from individual practice to team-wide implementation. How do you introduce AI tools across a whole practice? How do you train your team, manage governance, and build workflows that are safe, efficient, and sustainable? The skills you have developed in Modules 1 to 3 are the foundation. Module 4 is about putting them into practice at scale.

Key Takeaway

AI can be a valuable thinking partner for exploring differentials, checking guidelines, and drafting non-clinical text — as long as you verify independently, apply your own clinical judgement, and never delegate the decision to the algorithm. Knowing where aide-memoire ends and delegation begins is your professional responsibility.

Reflect on Your Learning

These questions are designed for your CPD appraisal portfolio. Use them to reflect on what you have learned in this module and how it applies to your practice. You can copy or screenshot your answers as evidence of self-certified CPD.

  1. Does your practice have an AI documentation tool? Can you answer all five evaluation questions about it?
  2. Think about your last complex consultation. What would an AI scribe have captured, and what would it have missed?
  3. How would you handle a sensitive consultation where the AI scribe is active? When would you pause it?

Approximate CPD time for Module 3: 2.5 hours (including listening, reading, and reflection).