It is half past six. You are running late. There is a four-page discharge summary on your screen, full of medication changes, follow-up actions, and consultant recommendations. You need to action it before you go home.
You know that AI can summarise a document like this in thirty seconds. You have seen colleagues do it. The technology works. It is genuinely impressive.
But before you paste that letter into ChatGPT, I need you to understand exactly where that text goes. Because once you know, you will never look at this the same way again.
The journey of a sentence
Let me walk you through what happens, step by step, when you type something into ChatGPT.
You type a sentence. Let’s say you type: “67-year-old male, admitted with chest pain, troponin elevated, discharged on dual antiplatelet therapy.”
The moment you press enter, that sentence leaves your computer. It travels across the internet, encrypted, to a data centre. If you are using ChatGPT, that data centre is most likely in the United States. If you are using Claude, most likely the United States. If you are using Gemini, most likely the United States.
Your sentence is now on a server owned by a private company, in another country, subject to different laws.
Depending on which tool you are using and which version, several things may happen next. The company may store your input. They may log it for safety monitoring. Their employees may review it as part of quality assurance. And in some versions, your input may be used to train future versions of the model — which means your words become part of the system that generates answers for millions of other users.
That is the journey. Your words leave your computer, cross the Atlantic, and land on a commercial server where they may be stored, reviewed, and reused.
Why this matters for patient data
Now think about what a discharge summary contains. The patient’s full name. Their date of birth. Their NHS number. Their home address. Their diagnoses — including mental health conditions, substance use, and safeguarding concerns. Their medications, including controlled drugs. The names of their consultants and their hospital.
This is what the law calls special category data. Under the UK General Data Protection Regulation, health data has the highest level of legal protection. It cannot be processed without a specific lawful basis. And sending it to a commercial server in the United States — without a data processing agreement, without a data protection impact assessment, and without the patient’s knowledge — does not meet that standard.
Imagine you printed that discharge summary, walked outside, and handed it to a stranger on the street. You would never do that. But from a data protection perspective, pasting it into a commercial AI tool is not fundamentally different. You are giving patient data to a third party who has no clinical relationship with the patient and no legal obligation to protect it under NHS information governance rules.
But I would remove the name
This is the most common response I hear: “What if I take the name out first?”
Let me give you an example of why that is not enough.
Imagine a discharge summary for a 31-year-old female with Ehlers-Danlos syndrome, admitted to the Royal Devon and Exeter Hospital following a dislocated shoulder, referred by Dr Patel from a named GP practice, currently on pregabalin and duloxetine, with a note about safeguarding concerns related to a domestic situation.
I have not given you a name. I have not given you an NHS number. But if you live in that area, if you work in that practice, if you know a young woman with Ehlers-Danlos syndrome — you might already know who this is.
That is the problem with anonymisation. Removing the obvious identifiers — name, date of birth, NHS number — is straightforward. But the remaining clinical details can still identify someone, especially for rare conditions, small communities, or unusual combinations of circumstances.
The Information Commissioner’s Office has been clear on this. Anonymisation is not simply removing names. It requires a considered assessment of whether the person could be identified from the remaining information, on its own or combined with other available data. In a busy surgery, doing that assessment properly for every document you want to paste into AI is simply not practical.
The simpler rule
The current NHS guidance takes a straightforward position: do not enter patient-identifiable information into commercial AI tools. Not with the name. Not without the name. Not even if you think you have removed all the identifying details.
It is not that anonymisation can never work. In research settings, with proper processes and expert review, anonymisation is used effectively. But in the middle of a busy surgery, quickly stripping details from a discharge summary before pasting it into ChatGPT? That is not the same level of rigour. And the consequences of getting it wrong are real.
A data breach. A complaint. An ICO investigation. A fitness-to-practise concern with the GMC. These are not theoretical. They are the documented consequences of mishandling patient data — and AI tools do not come with a special exemption.
What this means for you
I do not want you to walk away from this lesson thinking AI is too dangerous to touch. It is not. There is an enormous amount you can do with AI that does not go anywhere near patient data — and we will cover that in the coming lessons.
But I do want you to understand the mechanics. When you type into a commercial AI tool, your words leave your computer. They cross borders. They land on someone else’s server. And they may stay there.
For general clinical questions, for practice protocols, for educational materials — that is fine. For patient data, it is not.
In the next lesson, we are going to look at the practical rules — what you can do, what you should avoid, and how to find out what your practice has approved for safe use.
Key Takeaway
When you type into a commercial AI tool, your words leave your computer, cross borders, and land on a commercial server. For general clinical questions and protocols, that is fine. For patient data, it is not — regardless of whether you remove the name.