AI Safety & Governance · 15 min read

AI in UK General Practice: A Complete Framework

Patient safety, practice governance, and harm reduction strategies for ChatGPT, Claude, and Gemini in NHS settings.


Dr Krishnan Pasupathi

MBBS MBA MRCGP

21 January 2026

Your patients are already using AI. The question isn't whether to engage — it's how to do so safely.

The Reality

40% of UK adults have used generative AI.

Many are asking ChatGPT about symptoms, medications, and test results — often before (or instead of) consulting their GP. This framework provides practical guidance for UK general practice — not to promote AI use, but to reduce harm when patients inevitably use these tools.

The Core Problem

Patients using AI for health information face real risks:

  • Hallucinations: AI confidently states incorrect medical information
  • Delayed Care: reassurance from AI delays seeking professional help
  • Wrong Context: US-centric advice (dosages, drug names, reference ranges)
  • Missing Red Flags: AI may not recognise urgent symptoms requiring immediate care

Key insight: Telling patients "don't use AI" is ineffective. A harm reduction approach is more realistic.

The Harm Reduction Approach

Rather than prohibition, we can guide safer use:

Lower Risk Uses

  • Understanding medical terminology
  • Preparing questions for GP appointments
  • Learning about conditions after diagnosis
  • General health education

Higher Risk Uses

  • Self-diagnosing symptoms
  • Medication dosage decisions
  • Interpreting test results alone
  • Emergency symptom assessment

Risk Ladder: Patient AI Use

From dangerous to safer — and what to do instead

Danger

"ChatGPT says I should stop my blood pressure tablets"

Changing or stopping prescribed medication based on AI advice

Potential harm: Severe

Instead: Never change medications without speaking to your GP or pharmacist first.

High Risk

"I've had chest pain but Claude says it's probably just anxiety"

Using AI to decide whether symptoms need urgent attention

Potential harm: High

Instead: For chest pain, breathing difficulty, or sudden severe symptoms, call 999; if you're unsure, call 111. AI cannot examine you.

Caution

"I asked Gemini what my blood test results mean"

Interpreting test results without professional context

Potential harm: Moderate

Instead: Use AI to understand what tests measure, but discuss your specific results with your GP, who knows your history.

Be Aware

"I described my symptoms and ChatGPT thinks it might be X"

Using AI to generate possible diagnoses

Potential harm: Low-Moderate

Instead: Write down your symptoms to share with your GP, but don't anchor on AI's guess — it could be completely wrong.

Lower Risk

"My GP diagnosed me with X — I asked Claude to explain it simply"

Learning about a condition you've already been diagnosed with

Potential harm: Low

Tip: Cross-check with NHS.uk. Ask AI to use UK terminology and guidelines.

Safer Use

"I used AI to help me write down questions for my GP appointment"

Preparing for consultations, understanding medical terms

Potential harm: Minimal

This works well: AI can help you articulate concerns and make the most of limited appointment time.

The principle is simple:

The higher the stakes, the more you need a human professional. AI can help you learn and prepare — but it cannot examine you, know your history, or take responsibility for your care.
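For practices that want to reuse the ladder in their own materials, such as a patient-facing web page or a handout generator, the tiers translate naturally into plain data. The Python sketch below is purely illustrative: the tier names, harm levels, and advice mirror the ladder above, but the data structure and the advice_for helper are hypothetical scaffolding of our own, not part of the framework itself.

```python
# Illustrative only: the risk ladder above encoded as data that a practice
# website or handout generator could reuse. Tier names, harm levels, and
# advice strings mirror the article; the structure is a hypothetical example.

RISK_LADDER = [
    {"tier": "Danger", "harm": "Severe",
     "use": "Changing or stopping prescribed medication based on AI advice",
     "instead": "Never change medications without speaking to your GP or pharmacist first."},
    {"tier": "High Risk", "harm": "High",
     "use": "Using AI to decide whether symptoms need urgent attention",
     "instead": "For chest pain, breathing difficulty, or sudden severe symptoms, call 999; if unsure, call 111."},
    {"tier": "Caution", "harm": "Moderate",
     "use": "Interpreting test results without professional context",
     "instead": "Discuss your specific results with your GP, who knows your history."},
    {"tier": "Be Aware", "harm": "Low-Moderate",
     "use": "Using AI to generate possible diagnoses",
     "instead": "Share your symptoms with your GP; don't anchor on AI's guess."},
    {"tier": "Lower Risk", "harm": "Low",
     "use": "Learning about a condition you've already been diagnosed with",
     "instead": "Cross-check with NHS.uk and ask the AI to use UK terminology."},
    {"tier": "Safer Use", "harm": "Minimal",
     "use": "Preparing for consultations, understanding medical terms",
     "instead": "This works well and makes the most of limited appointment time."},
]


def advice_for(tier: str) -> str:
    """Return the 'instead' guidance for a named tier, e.g. for a handout."""
    for entry in RISK_LADDER:
        if entry["tier"].lower() == tier.lower():
            return entry["instead"]
    raise ValueError(f"Unknown tier: {tier}")


print(advice_for("Caution"))
```

Keeping the ladder as data rather than logic means the wording can be updated by non-developers, and the same source can feed both a printed handout and a web page.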

What Practices Can Do

1. Acknowledge the Reality

Include AI in consultations: "Have you looked this up online or asked an AI chatbot?" This opens dialogue without judgement and lets you correct any misinformation.

2. Provide Patient Guidance

Create or share resources that help patients use AI more safely — what to check, what to avoid, when to always seek professional help.

3. Develop Practice Protocols

Consider how your practice handles AI-related queries. Document your approach for consistency across the team.

4. Staff Training

Ensure clinical and reception staff understand common AI tools and their limitations, so they can respond appropriately to patient questions.

Key Statistics

  • 40% of UK adults have used generative AI
  • 70% find it useful for health information
  • 25% have used AI for symptom checking
  • 60% want NHS guidance on AI use

The Framework Includes

  • Risk assessment matrices — Categorising AI use by risk level
  • Patient handouts — Ready-to-use guidance materials
  • Staff protocols — How to handle AI-related consultations
  • Governance templates — Documentation for practice policies
  • Red flag recognition — When AI use becomes dangerous

Download the Complete Framework

The full 30-page document includes detailed protocols, patient resources, and implementation guidance.


A note on how this was made

This framework was developed collaboratively using multiple AI tools. Claude (Anthropic) drafted the initial content, ChatGPT (OpenAI) provided critical analysis and refinements, and the final document was reviewed by Dr Pasupathi to ensure clinical accuracy and practical applicability for UK general practice. We believe transparency about AI involvement reflects the same principles we advocate in this document.

The goal is not to make patients dependent on AI, nor to make them afraid of it. It's to help them — and us — navigate this new reality safely.


Dr Krishnan Pasupathi

MBBS MBA MRCGP

NHS GP Partner with 29 years in medicine. GP Trainer since 2016. Building free patient education tools and resources for healthcare professionals.