Module 4: Making AI Work in Your Practice
Lesson 1 of 7 · 7 min read

Why Your Practice Needs a Plan

The difference between AI arriving and AI working


There is a practice twenty miles from mine where three GPs are using three different AI scribes, none of them approved, with no shared governance, no training, and no discussion about what happens when something goes wrong. They are not bad doctors. They are busy doctors who adopted technology faster than their practice could prepare for it.

If you have worked through the first three modules, you now understand what AI is, how to use it safely, and what it looks like inside the consultation room. You know how to evaluate tools, review AI-generated notes, and handle sensitive situations.

But knowing how to use AI yourself is not the same as making it work across a practice. That is where most practices are struggling right now.

The technology has arrived. The preparation has not.

What unplanned adoption looks like

Let me describe a pattern I have seen repeated across multiple practices.

One enthusiastic partner starts using an AI scribe. They find it saves them twenty minutes a day. They mention it at a partners’ meeting. Two other clinicians download the same tool. A registrar is already using a different one. A salaried GP tries a third option. Nobody has checked whether any of these tools are on the AVT Registry. Nobody has updated the practice’s Data Protection Impact Assessment. Nobody has told the patients.

Unplanned AI adoption is not just inefficient — it is a governance risk. If a patient complaint arises from an AI-generated note, your practice needs to be able to demonstrate that the tool was approved, the clinician was trained, and the processes were documented. Without a plan, you cannot demonstrate any of those things.

This is not about being slow or bureaucratic. It is about being professional. We do not let locums prescribe without checking their registration. We do not install a new clinical system without a migration plan. AI tools deserve the same level of organisational preparation.

The three things that go wrong

In practices where AI adoption has been unplanned, I consistently see three problems.

1. Inconsistent standards. Different clinicians use different tools with different settings. Some review AI notes thoroughly. Some barely glance at them. Some dictate corrections. Some do not. The practice has no shared standard for what a reviewed AI-generated note looks like, and no way to audit whether reviews are actually happening.

2. Governance gaps. The practice cannot answer basic questions. Which tools are in use? Where does the data go? Has a DPIA been completed? Has the ICO registration been updated? What is the Caldicott Guardian’s view? These are not hypothetical questions — they are the first things that will be asked if something goes wrong.

3. Team division. Without a shared approach, AI becomes a source of friction rather than improvement. The enthusiasts think the cautious colleagues are holding things back. The cautious colleagues think the enthusiasts are being reckless. Neither group is wrong. They are just operating without a common framework.

Why a plan matters

A plan does not mean a hundred-page strategy document. It does not mean hiring a consultant or setting up a committee. It means the practice has made a deliberate, shared decision about how AI tools will be adopted, and has documented that decision.

The plan answers four questions:

What are we using? Which specific tools, from which suppliers, for which purposes. One tool for ambient scribing. One approach to AI-assisted correspondence. Not a free-for-all of whatever each clinician happens to have downloaded.

Who has been trained? Which clinicians have completed training on the approved tool. What that training covered. How competence was assessed. When refresher training is due.

What are the rules? When to use AI and when not to. How to handle consent. How to review AI-generated documentation. What to do when errors are found. When to pause or stop the tool.

Who is responsible? Who leads AI implementation. Who handles governance. Who reviews incidents. Who communicates with patients. Who reports to the partnership or board.

If your practice can answer those four questions clearly, you are in a strong position. If you cannot, this module will help you get there.

The practices that get it right

I have also seen practices where AI adoption has gone well. They share some common characteristics.

They started with a conversation, not a tool. They discussed AI at a partners’ or team meeting before anyone started using anything. They appointed someone to lead the process — not necessarily the most tech-savvy person, but someone organised, respected, and willing to do the work.

They chose one tool and committed to it. They completed the governance paperwork before going live. They trained the whole team, not just the enthusiasts. They started with a pilot — one or two clinicians, for a defined period — and evaluated the results before rolling out more widely.

They did not move fast. They moved deliberately. And because they did, they avoided the inconsistency, governance gaps, and friction that come with unplanned adoption.

The goal of this module is to give you a practical, step-by-step approach to AI implementation that any practice can follow. Whether you are starting from scratch or tidying up an adoption that has already begun, the process is the same: Decide, Prepare, Pilot, Embed.

What is coming in this module

Over the next six lessons, we will work through the practical steps of making AI work in your practice.

We start with building your implementation plan — the concrete steps from partners’ meeting to first pilot. Then we tackle training your team, because technology is only as good as the people using it. We look at integrating AI into your workflows — where it fits in the working day and where it does not.

After that, we focus on measuring what matters — how to tell if AI is actually helping, not just how to count time saved. We cover when things go wrong — the common problems every practice hits and how to troubleshoot them. And we finish with sustaining and scaling — how to move from pilot project to practice standard.

This is the practical module. Less theory, more checklists. Less “what is AI?” and more “what do we do on Monday morning?”

Key Takeaway

AI tools arriving in your practice is not the same as AI working in your practice. Without a deliberate plan covering tool selection, training, governance, and accountability, you create inconsistency, governance gaps, and team friction. The framework is simple: Decide, Prepare, Pilot, Embed.