Module 1: Getting Started
Lesson 5 of 5 · ~8 min read

Cutting Through the Hype

How to tell what is real from what is marketing

“AI will replace GPs within five years.” I’ve been seeing that headline since 2017. I’m still here. You’re still here.

The pattern of hype

Healthcare technology hype follows a predictable pattern. And once you see it, you can’t unsee it.

First comes the bold claim. “97% accuracy.” “Better than doctors.” “Will save the NHS billions.”

Then comes the pilot study. Promising results in a controlled environment. Usually at a well-resourced hospital. Usually with carefully selected patients.

Then comes the rollout. Where the messy reality of actual clinical practice meets the clean assumptions of the research team. Diverse patient populations, complex comorbidities, poor data quality, staff resistance.

And then — silence. The headline moves on. The startup pivots. The pilot ends.

If you’ve been in the NHS long enough, you remember the National Programme for IT. £12 billion. The largest civilian IT project in the world. It was going to transform everything. It didn’t.

AI is genuinely different and genuinely powerful. But the pattern of over-promising and under-delivering? That pattern is very much alive.

Five questions to ask any AI claim

Whenever you hear a claim about AI in healthcare, run it through these five questions:

1. Has it been tested on real NHS patients? A model trained on American hospital data may perform brilliantly on American hospital data. That tells you little about how it will work in a UK GP surgery with different patient populations, coding systems, and clinical pathways.

2. Was it built by people who understand general practice? If the developers have never sat in a ten-minute appointment — if they don’t understand that your patient has four problems and you have time for one — their tool isn’t designed for your world.

3. What happens when it gets things wrong? Every AI system will make mistakes. Is there a clear escalation pathway? Is the clinician informed? If the marketing doesn’t mention errors, the team hasn’t thought seriously about failure.

4. Is it solving a real problem? AI solutions frequently start with the technology and look for a clinical problem to attach it to. Start with your pain points. If it solves a problem you didn’t know you had, be sceptical.

5. Is there evidence from a practice like mine? A tool that works in a large urban teaching practice may not work in a small rural surgery. Context matters. Always ask whether the evidence comes from a setting that resembles yours.

What is genuinely happening right now

AI-assisted documentation is working. Tools that help with clinical notes, referral letters, and correspondence are saving real time in real practices.

AI for medical imaging is promising in specific, narrow tasks — chest X-rays, retinal scans, skin lesions. Real, validated use cases, but narrow and requiring clinical oversight.

AI for triage is mixed. Works reasonably well for straightforward presentations. Struggles with complexity. Not yet reliable enough without human review.

AI replacing clinicians is not happening. Despite the headlines, the technology is not close to replacing the clinical judgement, physical examination, and human connection that define general practice.

What you’ve learned across this module

Let’s recap the five lessons:

In Lesson 1, you learned that AI is fundamentally different from traditional software. It generates rather than retrieves.

In Lesson 2, you learned how language models work — word prediction at massive scale, with no clinical experience and no ability to say “I don’t know.”

In Lesson 3, you saw the genuine opportunities — for patients, clinicians, and practices.

In Lesson 4, you confronted the honest limitations — hallucinations, delayed care, context confusion, and missed red flags.

And in this lesson, you’ve learned to ask the five questions that protect you from hype.

You now understand more about AI than most clinicians in the NHS. And you understand more than enough to make informed decisions in your own practice.

What comes next

You don’t need to become an AI expert. That’s not your job. Your job is to be an excellent clinician.

But you do need to be an informed clinician. One who understands what these tools can and can’t do. One who can guide patients. One who can evaluate the claims and products that will increasingly land on your desk.

That’s what this module has given you. Not expertise. Informed confidence.

In Module 2, we’re going to get practical: how to actually use these tools safely in your day-to-day work. Data protection. Prompt writing. Spotting errors. Real skills for real practice.

And later in this course, we’ll cover the clinical safety standards — DCB0129, DCB0160 — that govern how health IT systems are assessed and deployed in the NHS. Because if you’re going to evaluate AI tools for your practice, you should understand the safety framework they’re supposed to meet.

Key Takeaway

Ask five questions of any AI claim: tested on NHS patients? Built by people who understand general practice? What happens when it's wrong? Solving a real problem? Evidence from a practice like mine?

Reflect on Your Learning

These questions are designed for your CPD appraisal portfolio. Use them to reflect on what you have learned in this module and how it applies to your practice. You can copy or screenshot your answers as evidence of self-certified CPD.

  1. What surprised you most about how AI generates its responses?
  2. Which of the AI tools discussed could you see yourself using first, and for what task?
  3. What would you want your practice team to understand about AI before adopting it?

Approximate CPD time for Module 1: 2 hours (including listening, reading, and reflection).