When you type a patient's NHS number into your clinical system, do you ever worry it might bring up the wrong patient? No. And there's a reason for that.
Whichever system you use, EMIS or SystmOne, it does exactly what it was programmed to do. Every single time. You enter a number, it finds the matching record. You print a prescription, it checks the formulary. You run a QOF search, it counts the patients who meet the criteria.
This is traditional software. And you already understand it, even if you’ve never thought about it in these terms.
What your computer has always done
Think about your day. You log in. You open the clinical system. You check blood results. You print a prescription. You dictate a letter and the system sends it.
Every one of those steps follows a rule. If this, then that. If the patient is allergic to penicillin, flag it. If the eGFR is below 30, alert the prescriber. If the QOF indicator is due, add it to the recall list.
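If you were to sketch those rules in code, they would look something like this. This is a minimal illustration, not how EMIS or SystmOne is actually written; the function name, patient fields, and thresholds are all invented for the example. What matters is the shape: fixed conditions, fixed outcomes.

```python
# A minimal sketch of rule-based clinical logic. The patient fields and
# thresholds are illustrative, not taken from any real clinical system.

def safety_alerts(patient: dict) -> list[str]:
    """Apply fixed if-this-then-that rules. Same patient in, same alerts out."""
    alerts = []
    if "penicillin" in patient.get("allergies", []):
        alerts.append("ALLERGY: penicillin recorded")
    if patient.get("egfr") is not None and patient["egfr"] < 30:
        alerts.append("RENAL: eGFR below 30, review dosing")
    if patient.get("qof_review_due"):
        alerts.append("RECALL: QOF indicator due")
    return alerts

patient = {"allergies": ["penicillin"], "egfr": 24, "qof_review_due": True}
print(safety_alerts(patient))  # Identical output every time this runs
```

Run it a thousand times on the same record and you get the same alerts, in the same order. That predictability is the whole design.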
This is how computers have worked since they were invented. They follow instructions. They don’t think. They don’t guess. They don’t improvise.
And honestly, that’s exactly what you want from a clinical system. You want predictability. You want the same input to produce the same output every time. 7 × 8 = 56. Always. Your prescribing system works the same way.
For decades, that’s all computers did. They were very fast calculators following very detailed rules. Then something changed.
The moment everything shifted
In late 2022, a company called OpenAI released ChatGPT. Within two months, an estimated 100 million people were using it, making it the fastest-growing consumer application in history at the time.
And for good reason. You could type a question — any question — and get a fluent, detailed, conversational answer. Not a list of search results. Not a link to a website. An actual answer, written in natural language.
Colleagues started talking about it. Patients started using it. The media couldn’t stop writing about it.
But here’s what most people missed in all that excitement: ChatGPT is not a better search engine. It’s not a smarter database. It’s something fundamentally different from every piece of software you’ve ever used.
And that difference matters enormously for healthcare.
The calculator versus the writer
Open the calculator on your phone. Type 7 × 8. You get 56. Do it again. 56. Do it a thousand times. 56. Every single time.
Now open ChatGPT. Ask it to explain atrial fibrillation to a patient. You’ll get a clear, well-written explanation. Ask the same question again. You’ll get a different explanation. Different words, different structure, different emphasis. Still clear. Still well-written. But different.
That’s not a bug. That’s the fundamental nature of what this technology is.
Your calculator follows rules. ChatGPT generates text. Your clinical system looks things up. ChatGPT creates things that didn’t exist before.
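Here is the contrast in miniature, as a toy sketch written purely for illustration. The record, the NHS number, and the stock phrases below are all made up, and a real language model is vastly more sophisticated, but the retrieve-versus-generate distinction is the same.

```python
import random

# Retrieval: the answer exists before you ask. Toy data, invented for the example.
records = {"999 999 9999": "SMITH, John (born 04/07/1962)"}

def look_up(nhs_number: str) -> str:
    return records[nhs_number]      # Same number, same record, every time

# Generation: the answer is assembled on the fly from plausible pieces.
openings = [
    "AF means the heart beats irregularly.",
    "In atrial fibrillation, the heart's rhythm becomes erratic.",
    "Atrial fibrillation is an irregular, often fast heartbeat.",
]

def generate_explanation() -> str:
    return random.choice(openings)  # Same question, different answer each run

print(look_up("999 999 9999"))
print(generate_explanation())
```

Run the script twice. The lookup is identical both times. The explanation need not be.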
Why this distinction matters
Imagine, just for a moment, that your clinical system worked like ChatGPT.
You type in an NHS number. It doesn’t look up the patient. It generates a patient. A plausible-sounding patient with a reasonable medical history — but not necessarily the right one.
Terrifying, isn’t it?
That thought experiment isn’t meant to frighten you. It’s meant to give you an instinct — a gut feeling — for why AI requires a completely different kind of trust than the software you’re used to.
When your clinical system gives you a blood result, you trust it because it’s retrieving a fact. When ChatGPT gives you an answer about a blood result, it’s generating a response based on patterns. Those are two very different things.
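"Generating a response based on patterns" can itself be sketched. The toy model below, invented for this example, learns which word tends to follow which from a scrap of text, then produces sentences by repeatedly sampling a likely next word. Large language models are enormously more capable, but they sit in the same family: prediction from patterns, not retrieval of stored facts.

```python
import random
from collections import defaultdict

# A toy pattern-based text generator (a bigram model). The "training text"
# is invented for the example; real models learn from vastly more data.
text = ("the heart beats irregularly in atrial fibrillation "
        "the heart may beat fast in atrial fibrillation "
        "the pulse feels irregular in atrial fibrillation")

words = text.split()
follows = defaultdict(list)
for current, nxt in zip(words, words[1:]):
    follows[current].append(nxt)            # Record which words follow which

def generate(start: str, length: int = 8) -> str:
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))  # Sample a plausible next word
    return " ".join(out)

print(generate("the"))  # Fluent-looking, pattern-driven, different each run
```

Notice that nothing in this model is an answer waiting to be retrieved. Every sentence is assembled fresh, and it can assemble sentences that were never in the training text at all.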
What this means for you right now
You don’t need to understand the technical details yet. We’ll get to those in the next lesson. For now, hold onto one idea.
Every piece of software you’ve used in your career — EMIS, SystmOne, electronic prescribing, pathology systems, NHS Spine — all of it follows rules. Deterministic. Predictable. Same input, same output.
AI is different. It generates. It creates. It predicts. And sometimes it gets things spectacularly wrong while sounding completely confident.
This isn’t to say AI is bad. It’s genuinely remarkable and it has real potential to help us and our patients. But it’s a different kind of tool. And you can’t use it safely if you treat it like the software you already know.
A stethoscope and an ultrasound machine both help you examine a patient. But you wouldn’t use them in the same way. You wouldn’t trust them to answer the same questions. And you wouldn’t assume that being good with one makes you competent with the other.
The same is true for traditional software and AI.
Key Takeaway
AI is not better software — it’s a different kind of thing entirely. Traditional software retrieves and calculates. AI generates and predicts. Understanding that distinction is the foundation for everything that follows.