Module 3: AI in the Consultation
Lesson 1 of 8 · 7 min read

The Consultation Has Changed

From “should we?” to “how do we use it well?”


The colleague who never mentioned AI is now asking how to turn it off. The registrar who started last month already has a scribe on their phone. The practice manager forwarded an email about Accurx Scribe being available to all practices on the NHS contract.

Six months ago, the conversation at the coffee machine was whether AI had any place in general practice. That conversation is over. It has been overtaken by something much more practical: how do we actually use this thing?

If you are reading this module, you already understand what AI is, how it works, and where the boundaries are. You know the green zone and the red zone. You know why the way you ask matters and what the errors look like. Module 2 gave you the skills to use AI safely for tasks outside the consultation room.

Now we are going inside the consultation room. And what you find there has already changed.

The numbers you need to know

Let me give you the headline figures as of early 2026.

28% of GPs in England are now using some form of AI in their clinical work. That number has doubled in the past twelve months. Over 1.5 million GP appointments per month are being documented with the assistance of AI scribing tools. And through Accurx’s integration into NHS systems, 98% of practices now have access to an AI scribe as part of their existing contract.

These are not pilot figures from a handful of tech-forward practices in London. This is mainstream adoption across the country: rural practices, inner-city surgeries, training practices, dispensing practices. The technology has arrived everywhere, whether or not the training has followed.

If your practice has not yet had a conversation about AI documentation tools, you are not behind — but you do need to start. The tools are already available to you. The question is whether you use them deliberately or they arrive by default.

Three scenarios you might recognise

Let me describe three situations I have seen in real practices over the past year. See if any of them feel familiar.

Scenario one: the overnight rollout. A practice wakes up to an email from their clinical system supplier announcing that AI scribing is now integrated into the consultation workflow. No training session. No practice meeting. No discussion about consent, governance, or how to review AI-generated notes. It is just there, with a microphone icon in the corner of the screen. Two partners start using it immediately. Three others pretend it does not exist. Nobody is sure who decided this was happening.

Scenario two: the enthusiastic registrar. A new registrar arrives and mentions on day one that they have been using Heidi Health on their phone throughout their hospital jobs. They find it indispensable. They are already recording consultations, getting structured notes, and pasting them into the clinical record. When the trainer asks about governance, the registrar says the app is GDPR compliant and shows them the privacy policy on the website. The trainer is not sure whether that is enough.

Scenario three: the worried partner. A senior partner reads an article about AI in primary care and sends it to the partnership WhatsApp group with the message: “Are we behind? Should we be doing this?” This starts a thread that runs for three days without reaching a conclusion. Someone suggests putting it on the agenda for the next partners’ meeting. Someone else says they already trialled something six months ago and it was not very good. A third partner asks whether the indemnity covers AI-generated notes.

All three scenarios are happening right now, in practices across the country. And they all share the same underlying problem: the technology has arrived faster than the preparation.

From “should we?” to “how do we?”

If you are a GP who has been cautious about AI — and there are very good reasons for caution — I want to acknowledge something. The pace of change has been extraordinary. It is completely reasonable to feel that this has moved too fast.

But the question has shifted. It is no longer “should general practice use AI documentation tools?” NHS England has answered that question by publishing guidance, creating an approved vendor registry, and integrating these tools into NHS contracts. The professional bodies have started issuing guidance. The indemnity organisations are updating their advice.

The question now is: how do we use these tools well? How do we maintain clinical standards? How do we protect patients? How do we make sure that a technology designed to save time does not create new risks?

That is what this module is about.

What this module covers

We are going to work through eight lessons that take you from evaluation to competent daily use.

In the next lesson, I will give you a five-question framework for evaluating any AI tool before you trust it with your clinical work. Not all tools are equal, and knowing how to tell the good from the risky is an essential skill.

Then we will look at how ambient scribing actually works — what happens between the moment you turn on the microphone and the moment a structured note appears on your screen. Understanding the mechanics helps you understand the limitations.

After that, we will spend two lessons on reviewing AI-generated notes — what to check, how to check it, what the common errors look like, and what to do when you find one. This is the core clinical skill of AI-assisted documentation, and we will treat it with the depth it deserves.

We will then tackle sensitive consultations — mental health, safeguarding, domestic abuse — and how the presence of an AI scribe changes the dynamics. We will explore what happens when patients bring AI to the consultation, from ChatGPT printouts to anxious questions about what is recording them. And we will finish with AI as a clinical thinking partner — how to use it for differentials, guideline checks, and polypharmacy, and where the line between aide-memoire and delegation sits.

You do not need to adopt AI documentation tools tomorrow. But you do need to understand them. Because even if you choose not to use them yourself, your colleagues will. Your registrars will. Your locums will. And as a responsible clinician, you need to know what good practice looks like.

The professional standard

Before we go any further, let me be clear about one thing that does not change.

You are responsible for your clinical documentation. Whether you type it yourself, dictate it, use a template, or use an AI scribe — the note in the patient’s record is yours. Your name. Your accountability. Your professional responsibility.

AI documentation tools are assistants, not authors. They produce drafts, not finished records. And the standard to which those records are held is the same standard that has always applied: they must be accurate, contemporaneous, and sufficient to support the clinical decisions made.

The GMC has not changed its expectations because the method of documentation has changed. Neither has your indemnity provider. Neither should you.

With that foundation in place, let us start with the most practical question: is this tool safe?

Key Takeaway

AI documentation in UK general practice is no longer experimental. The question is not whether it arrives, but how to use it competently, safely, and in line with professional standards. Your responsibility for clinical documentation does not change because the method of documentation has changed.