Module 4: Making AI Work in Your Practice
Lesson 2 of 7 · ~8 min read

Building Your Implementation Plan

From partners’ meeting to first pilot


You have decided your practice needs a plan. Good. Now what? This lesson gives you a step-by-step process that any practice can follow, whether you are a two-partner surgery or a twelve-doctor training practice.

The biggest mistake practices make is jumping straight to the technology. They hear about a tool, download it, try it in a few consultations, and then wonder why the rest of the team is not on board.

Implementation is not a technology problem. It is a people and process problem. The tool is the easy part. Getting ten clinicians, six administrators, and three practice managers to change how they work — that is the hard part.

Let me walk you through the four phases of a successful implementation.

Phase 1: Decide

Before you evaluate a single tool, the practice needs to make a collective decision: are we doing this?

This should happen at a partners’ meeting or a whole-team meeting. Not a corridor conversation. Not an email chain. A proper discussion where everyone has the opportunity to raise concerns, ask questions, and contribute to the decision.

The agenda for this meeting should cover three things:

The case for AI. What problem are we trying to solve? Is it documentation burden? Is it time pressure? Is it the fact that clinicians are already using unapproved tools and we need to bring this under governance? Be specific. "AI is the future" is not a business case. "Our GPs are spending 45 minutes per day on documentation that could be reduced to 20 minutes" is.

The concerns. Give people space to voice their worries. Some will be about patient safety. Some about data protection. Some about workload during implementation. Some about cost. Some about whether AI will change the nature of the consultation. All of these are legitimate. Document them. You will need to address each one.

The decision. At the end of the meeting, you need a clear outcome. Are we proceeding? If yes, who is leading? What is the timeline? If no, what would need to change? If we need more information, who is getting it and by when? Record this in the minutes.

In my experience, the biggest barrier is not opposition to AI. It is uncertainty. Most clinicians are not against AI — they just do not know enough to feel confident. Sharing what you have learned in Modules 1 to 3 can be remarkably effective at moving the conversation forward.

Phase 2: Prepare

Once the practice has decided to proceed, there is preparation work before anyone touches a tool.

Appoint an AI lead. This person does not need to be a technology expert. They need to be organised, trusted by the team, and willing to invest time. Their role is to coordinate the implementation, not to become the practice’s AI helpdesk. Ideally, pair a clinical lead (for safety and workflow questions) with an administrative lead (for governance and logistics).

Choose your tool. Use the five-question framework from Module 3, Lesson 2. Is it on the AVT Registry? Is it MHRA registered if it needs to be? Where does the data go? Can you complete a DPIA? Does your team have the skills to use it? Narrow to one tool. Do not trial three tools simultaneously — it creates confusion and makes evaluation impossible.

Complete the governance. This means:

Data Protection Impact Assessment (DPIA) — Your practice is legally required to complete a DPIA before introducing any technology that processes patient data in a new way. Your ICB may have a template, and suppliers on the AVT Registry often provide supporting documentation. This is not optional.

ICO registration update — Check whether your practice’s ICO registration needs to be updated to reflect the use of AI processing of patient data.

Caldicott review — Your Caldicott Guardian should review and approve the use of the tool. If your practice does not have a named Caldicott Guardian, that needs to be addressed first.

Patient communication — Patients need to know that AI tools are being used in their consultations. This can be done through waiting room notices, website updates, and a verbal explanation with consent at the start of consultations. Draft these materials now, before the pilot starts.

Do not skip the governance steps because they feel bureaucratic. They exist to protect patients and to protect you. If a complaint reaches the GMC and you cannot show that a DPIA was completed and patients were informed, the fact that the tool saved you twenty minutes a day will not be a defence.

Phase 3: Pilot

A pilot is not everyone trying the tool at once. It is a structured, time-limited test with clear objectives.

Who pilots? Two to three clinicians. Choose a mix: one enthusiast who will push the tool’s capabilities, one pragmatist who will give you an honest view of whether it works in reality, and ideally one person who was initially cautious. Their experience will be the most persuasive when you present results to the wider team.

How long? Four to six weeks. Less than four weeks does not give you enough data. More than six weeks and you lose momentum.

What do you measure? We will cover measurement in detail in Lesson 5, but at minimum during the pilot you should track:

Time. How long are clinicians spending on documentation before and after? Use actual measurements, not estimates. Ask pilot clinicians to time themselves for a week before the pilot starts, then during the pilot.

Quality. Are the AI-generated notes accurate? How many corrections are needed per consultation? What types of errors are occurring? Keep a simple log.

Experience. How do the pilot clinicians feel about the tool? How do patients respond? Are there consultations where the tool is helpful and consultations where it gets in the way?

What about patients? During the pilot, ensure every patient is told that AI documentation is being used. Offer them the choice to opt out. Record their response. Track how many opt out and why.

A well-run pilot gives you evidence. Evidence is what turns a good idea into a practice decision. Without evidence, you are asking the team to trust your enthusiasm. With evidence, you are asking them to look at the data.

Phase 4: Embed

If the pilot is successful — and success means the evidence supports wider adoption, not just that the enthusiasts liked it — you move to embedding the tool across the practice.

This means:

Training everyone. Not just showing them which button to press. Training them on how to review AI-generated notes, when to pause the tool, how to handle patient questions, and what to do when errors occur. This is covered in detail in the next lesson.

Updating your protocols. Your documentation policy, your new patient information, your consent processes, your clinical governance reporting — all need to reflect the use of AI tools.

Setting a review date. Plan a formal review at three months and six months. Is the tool still working? Have new problems emerged? Does the training need refreshing? Is the governance still current?

Embedding is not a one-time event. It is an ongoing process of monitoring, adjusting, and improving. The practices that do this well treat AI implementation like any other quality improvement project — with regular review cycles and a willingness to change course if the data tells them to.

A realistic timeline

Here is what a realistic timeline looks like for a typical practice:

Weeks 1–2: Initial discussion and decision meeting. Appoint AI lead.

Weeks 3–4: Tool evaluation and selection. Begin DPIA.

Weeks 5–6: Complete governance (DPIA, ICO, Caldicott). Prepare patient communications. Set up pilot protocol.

Weeks 7–12: Run the pilot. Collect data.

Week 13: Evaluate pilot results. Present to team. Decide on wider rollout.

Weeks 14–16: Train remaining team members. Update protocols. Go live.

That is about four months from decision to full implementation. It can be done faster, but rushing usually means cutting corners on governance or training. Both will cost you more time later.

If your practice has already adopted AI tools without this process, that is okay. You are not starting from scratch — you are starting from where you are. Go through the preparation and governance steps now, formalise what is already happening, and fill the gaps. It is always better to regularise late than to leave gaps open.

Key Takeaway

Successful AI implementation follows four phases: Decide (collective decision at a proper meeting), Prepare (appoint a lead, choose one tool, complete governance), Pilot (structured test with two to three clinicians over four to six weeks), and Embed (train everyone, update protocols, set review dates). Expect about four months from decision to full implementation.