Module 2: Using AI Safely
Lesson 5 of 8 · 6 min read

Why How You Ask Matters

The difference between a useless answer and a useful one


I want to show you two prompts. Same person. Same AI tool. Same topic. Completely different results.

Prompt one: “Tell me about blood pressure.”

The AI produces a generic essay. It defines blood pressure. It explains systolic and diastolic. It mentions risk factors. It is the kind of thing you might find on page one of a medical textbook. Accurate, but useless for any practical purpose.

Prompt two: “I am a UK GP. Write a patient information leaflet about high blood pressure for an elderly patient with limited health literacy. Use plain English at a reading age of 11. Include what blood pressure numbers mean, lifestyle changes that help, when to contact their GP, and what medications they might be offered. Use NHS terminology and UK units.”

The AI produces a focused, practical leaflet. It explains that a reading of 140/90 is considered high. It lists specific lifestyle changes. It mentions when to call the surgery. It uses words like “GP” instead of “physician,” “paracetamol” instead of “acetaminophen,” and “millimetres of mercury” instead of unclear abbreviations.

The difference is not the AI. It is the prompt. And learning to write good prompts is the single most practical skill in this entire module.

What a prompt actually is

A prompt is simply the instruction you give to the AI. It is the text you type in the box. And it works exactly like giving instructions to a person.

Think about it in clinical terms. A hospital registrar phones you and says: “I need advice about a patient.” What do you say? You say, “Tell me more.” You need context. You need to know what they are asking. You need specifics.

If the registrar says: “I have a 72-year-old man with a three-day history of worsening breathlessness, bilateral ankle oedema, raised JVP, and he is already on furosemide 40mg” — you can help. You have enough information to give a useful answer.

AI is the same. “Tell me about blood pressure” is the equivalent of “I need advice about a patient.” It is too vague to produce anything useful.

The four elements

I want to give you a framework. Four elements. You do not need all four every time, but the more you include, the better your results.

Element 1: Role. Tell the AI who you are and who the output is for. This matters because AI defaults to a generic, often American, perspective. Without role: “Explain hypertension management.” The AI gives you an American-style overview. It mentions lisinopril by brand name. It uses mg/dL. It references the American Heart Association. With role: “I am a UK GP writing for my practice nurses.” Now the AI knows to use NICE guidelines, NHS terminology, and UK-appropriate medications. The output is immediately more relevant.

Element 2: Task. Tell the AI exactly what you want it to produce. Be specific about the format, the length, and the type of output. Vague task: “Help with hypertension.” The AI does not know if you want a patient leaflet, a clinical protocol, a presentation slide, or a research summary. It guesses — and it usually guesses wrong. Specific task: “Write a one-page protocol for managing newly diagnosed hypertension in primary care, formatted as a numbered checklist.” Now the AI knows the format, the length, the audience, and the clinical context.

Element 3: Context. Give the AI the background information it needs to produce relevant output. For a patient leaflet, this might be the reading level, the patient’s age group, or specific concerns to address. For a protocol, this might be the relevant NICE guideline, your practice’s recall system, or your team structure. For a clinical question, this might be the UK guidelines you want it to reference. Without context: “Write about diabetes sick day rules.” With context: “Write about sick day rules specifically for patients on SGLT2 inhibitors, focusing on the risk of euglycaemic diabetic ketoacidosis, for patients who may not realise their blood sugar can be normal while they are dangerously unwell.” The second prompt produces something far more specific and clinically useful.

Element 4: Constraints. Tell the AI what to avoid. This is where you prevent the most common problems. Do not use American terminology. Do not exceed 500 words. Do not use medical jargon without explaining it. Use UK units only. Do not make up references or cite specific studies. That last constraint matters more than you might think. AI will happily invent journal references that do not exist. Telling it not to does not guarantee it will comply, but it reduces the frequency significantly.
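If you reuse prompts often, the four elements lend themselves to a simple template. This is an illustrative sketch only, not part of the lesson's workflow; the function name and structure are my own, and any prompt-building tool could organise the elements differently.

```python
def build_prompt(role, task, context=None, constraints=None):
    """Assemble a prompt from the four elements: role, task, context, constraints.

    Any element may be omitted; the more you include, the better the result.
    """
    parts = [role, task, context]
    if constraints:
        # Phrase constraints as explicit "do not" / "use only" instructions.
        parts.append(" ".join(constraints))
    # Drop any missing elements and join the rest into one prompt string.
    return " ".join(p for p in parts if p)

prompt = build_prompt(
    role="I am a UK GP.",
    task="Write a patient information leaflet about gout for my practice patients.",
    context="Use plain English at a reading age of 11. Cover what gout is, "
            "what causes flare-ups, dietary triggers to avoid, when to see "
            "their GP, and what treatments are available.",
    constraints=[
        "Use NHS terminology.",
        "Do not use American drug names.",
        "Do not exceed 400 words.",
    ],
)
```

Filling in the same four slots each time is what keeps your prompts consistent, whether you do it in your head, in a saved text file, or in a helper like this.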

Putting it together

Let me show you what a complete prompt looks like with all four elements:

“I am a UK GP. Write a patient information leaflet about gout for my practice patients. Use plain English at a reading age of 11. Cover what gout is, what causes flare-ups, dietary triggers to avoid, when to see their GP, and what treatments are available. Use NHS terminology. Do not use American drug names. Do not exceed 400 words.”

Role: UK GP, output for practice patients. Task: patient information leaflet about gout. Context: reading age of 11, specific topics to cover. Constraints: NHS terminology, no American drug names, word limit.

That prompt will produce a usable leaflet in about thirty seconds. You review it, check the clinical accuracy, and make any adjustments. The whole process takes less than ten minutes.

Compare that to the output from “tell me about gout.” No comparison.

In the next lesson, I am going to give you three complete worked examples — prompts you can copy and use today — along with practical tips for building your own prompt library.

Key Takeaway

The quality of AI output depends on the quality of your prompt. Use four elements — role, task, context, and constraints — to transform vague queries into precise, clinically useful results.