Two GPs. Same Wednesday evening. Both staying late to catch up on admin.
Dr Akhtar needs to write a practice protocol for managing annual asthma reviews. She opens ChatGPT and types: “I am a UK GP. Write a protocol for annual asthma reviews based on NICE guidelines. Include what to check at each review, when to refer, and how to structure the recall system.” Three minutes later, she has a structured first draft. She checks it against NICE, adjusts the wording, and saves it.
Dr Brennan has a pile of discharge summaries to action. He copies the first one, pastes it into ChatGPT, and types: “Summarise this with medication changes and follow-up actions.” Thirty seconds later, he has a clean summary. He does the same for the next five letters.
Dr Akhtar used AI safely. Dr Brennan did not. And the difference comes down to one thing: patient data.
The four questions
In the last lesson, we talked about where your data goes when you type it into a commercial AI tool. Now I want to give you a practical framework for deciding whether a particular task is safe.
Four questions. Ask them every time.
Question 1: Does what I am about to type contain any patient-identifiable information? This includes names, dates of birth, NHS numbers, and addresses. But it also includes clinical details that could identify someone — rare diagnoses, specific hospital references, named clinicians, or unusual combinations of information. If the answer is yes, or even maybe, do not use a commercial AI tool. Dr Brennan’s discharge summary contained all of these. That is why it was not safe.
Question 2: Is this tool approved for use with NHS data? Commercial tools like ChatGPT, Claude, and Gemini are not approved for NHS patient data unless your organisation has a specific data processing agreement in place. If you are not sure whether a tool is approved, the safe assumption is that it is not. Dr Akhtar did not need an approved tool because she was not using patient data. Her protocol was based on publicly available NICE guidelines.
Question 3: Am I on a personal device or an NHS device? Using personal devices with patient data adds another layer of risk. Your personal laptop, your phone, your home Wi-Fi — none of these have the same security controls as NHS infrastructure. Even if the task itself is appropriate, doing it on a personal device may not be.
Question 4: Would I be comfortable if my Caldicott Guardian could see exactly what I just did? This is the gut-check question. If the answer is yes, you are probably fine. If you hesitate, stop and think.
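For readers who think in code, the four questions can be sketched as a simple gate. This is an illustrative decision aid only — the function name and inputs are hypothetical, not part of any NHS system or approved tool:

```python
def safe_to_use_commercial_ai(
    contains_patient_data: bool,   # Q1: any patient-identifiable information?
    tool_is_approved: bool,        # Q2: approved for NHS data?
    on_nhs_device: bool,           # Q3: NHS device rather than personal?
    guardian_would_approve: bool,  # Q4: comfortable if the Caldicott Guardian saw it?
) -> bool:
    """Return True only if every question gives a safe answer.

    Q1 is the gatekeeper: if patient data is involved, the tool must be
    approved AND the device must be NHS infrastructure. If in doubt on
    any question, treat the answer as unsafe.
    """
    if contains_patient_data and not (tool_is_approved and on_nhs_device):
        return False
    return guardian_would_approve

# Dr Akhtar: no patient data, gut-check passes -> safe
print(safe_to_use_commercial_ai(False, False, True, True))   # True

# Dr Brennan: patient data in an unapproved tool -> not safe
print(safe_to_use_commercial_ai(True, False, True, True))    # False
```

Note the asymmetry: a "no" to question 1 does not make everything safe on its own — the gut-check question still applies at the end.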
What you can do right now
Let me be clear about this. The data protection rules do not mean you cannot use AI. They mean you need to use it for the right tasks. And there are plenty of right tasks.
Draft practice protocols and policies. Ask AI to help you write a structured protocol for any clinical process — chronic disease reviews, medication monitoring, referral pathways. The input is your clinical knowledge and published guidelines. No patient data required.
Ask AI about published guidelines. What does NICE recommend for managing gout? What are the key points from the latest hypertension guideline? You are asking AI about publicly available information, not pasting the copyrighted guideline text into the tool. And you verify the output against the original guideline before relying on it.
Create generic patient education materials. Leaflets about common conditions, explanations of common procedures, information sheets about lifestyle changes. As long as they are generic and not personalised to a specific patient, these are safe and valuable.
Explore clinical questions. What are the common side effects of dapagliflozin? What is the mechanism of action of empagliflozin? How does the two-week-wait pathway work for suspected colorectal cancer? These are the kinds of questions you might type into a search engine. AI gives you a more structured answer.
Write non-clinical correspondence. Practice newsletters, staff communications, job adverts, meeting agendas, complaint response templates. None of these involve patient data.
All of these are safe because they do not involve patient-identifiable information and because you review the output before using it.
What is coming
The tasks that many clinicians actually want AI for — summarising discharge letters, drafting referral letters, documenting consultations — those do involve patient data. And the current position is that you should not use commercial tools for these.
But this is changing. Quickly.
NHS England has published guidance on AI-enabled ambient scribing products. These are tools that listen to consultations, with patient consent, and produce clinical documentation within a secure, governed framework. The data stays within the NHS. It is not used for training. It is processed under proper legal agreements.
Microsoft Copilot is now available across the NHS at no additional cost. Trials involving 30,000 NHS staff showed it saved an average of 43 minutes per person per day on administrative tasks.
The technology works. The governance is catching up. And when your practice has access to a properly approved tool for handling patient data, you will be able to use it safely and confidently.
How to find out what your practice has approved
Here is a practical step you can take today. Ask your Caldicott Guardian or your information governance lead what AI tools, if any, your practice or integrated care board has approved for clinical use.
Every practice has a Caldicott Guardian. Every practice has someone responsible for information governance. And this is exactly the kind of question they are there to answer.
If your practice has approved a specific tool, use that one for tasks involving patient information. If nothing has been approved yet, stick to the non-patient-data tasks we described. There are enough of them to make a real difference to your workload.
Data protection is not the enemy of innovation. It is the framework that makes innovation safe. And as clinicians, safe is where we start.
In the next lesson, we are going to get practical — which tasks are in the green zone, with worked examples you can try today.
Key Takeaway
Ask four questions before every AI task: does it contain patient data, is the tool approved, am I on an NHS device, and would my Caldicott Guardian be comfortable? If in doubt, stick to the many valuable tasks that do not require patient data.