It is week three of your AI rollout. Two clinicians have stopped using the tool. A patient has complained. The error rate is higher than expected. And someone has just asked whether the practice should abandon the whole thing. Do not panic. Every practice hits bumps. Here is how to handle them.
No AI implementation goes perfectly. The technology is good but imperfect. People adapt at different speeds. Workflows take time to settle. Problems are not a sign of failure — they are a normal part of any change process.
What matters is how you respond. The practices that succeed are not the ones that avoid every difficulty. They are the ones that identify issues early, address them systematically, and learn from each one.
Problem 1: Clinicians stop using the tool
This is the most common problem, and it usually happens in the first month. A clinician tries the tool for a week, finds it frustrating or unhelpful, and goes back to typing their own notes.
Why it happens: The review workflow feels slower than expected. The AI makes errors that feel unacceptable. The clinician is not comfortable talking to patients about AI. The tool does not fit their consultation style.
How to fix it: Have a one-to-one conversation, not a group discussion. Ask what specifically is not working. Often it is a technical issue (microphone placement, room acoustics, software settings) that has a simple fix. Sometimes it is a skills issue — the clinician has not yet developed an efficient review workflow. Pair them with someone who has.
The most common reason clinicians abandon AI tools is that they expected instant time savings and instead found the first two weeks slower. Set expectations upfront: "The first fortnight may actually be slower while you develop your review workflow. The time savings come from week three onwards." This single piece of expectation management prevents most early dropouts.
Problem 2: AI errors causing concern
Every AI tool makes errors. The question is not whether errors occur, but whether they are caught during review and whether the error rate is acceptable.
Omission errors (the AI leaves something out) are the most common. They are also the easiest to catch if the clinician has a good review habit. The fix is to reinforce the four-point review process from Module 3.
Hallucinated negatives (the AI records the opposite of what was said) are rarer but more dangerous. If you see a pattern — the same type of error recurring across multiple consultations — document it and report it to the supplier. This may be a model-level issue that needs fixing.
Confabulated details (the AI adds information that was never discussed) can be alarming but are usually caught on review. If a clinician finds a medication or diagnosis in the AI note that was not part of the consultation, this needs immediate correction and logging.
Any AI error that reaches the patient record uncorrected is a clinical incident. Treat it with the same seriousness as any other documentation error. Report it through your significant event process. Use it as a learning opportunity for the whole team.
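If you want a consistent way to capture these errors for supplier reports and significant event reviews, a shared spreadsheet works perfectly well; the sketch below shows the same idea as a small Python script. The field names are illustrative, not from any supplier's system, so adapt them to your own governance process.

```python
# Minimal AI documentation error log (illustrative field names, not a supplier format).
# Each caught or escaped error gets one row; the CSV doubles as audit evidence.
import csv
from datetime import date
from pathlib import Path

LOG_FILE = Path("ai_error_log.csv")
FIELDS = ["date", "clinician_id", "error_type", "caught_at_review",
          "reached_record", "description", "action_taken"]

def log_error(clinician_id: str, error_type: str, caught_at_review: bool,
              description: str, action_taken: str) -> None:
    """Append one error to the practice log, writing the header on first use."""
    new_file = not LOG_FILE.exists()
    with LOG_FILE.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow({
            "date": date.today().isoformat(),
            "clinician_id": clinician_id,            # use an anonymised code, not a name
            "error_type": error_type,                # omission / hallucinated_negative / confabulation
            "caught_at_review": caught_at_review,
            "reached_record": not caught_at_review,  # any escape is a clinical incident
            "description": description,
            "action_taken": action_taken,
        })

log_error("GP-03", "omission", True,
          "Safety-netting advice missing from plan",
          "Added on review; fed back at team meeting")
```

A log like this makes patterns visible: if the same error type keeps appearing, you have the evidence you need for the supplier conversation described above.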
Problem 3: Patient complaints or concerns
Patient complaints about AI documentation generally fall into three categories.
"I was not told." This is a consent and communication failure. Review how clinicians are informing patients. Ensure the wording is clear and consistent. Check that waiting room and website notices are in place. This is preventable with good process.
"I do not want it." A patient has the right to decline AI documentation. Ensure every clinician has a clear, practised workflow for this: pause the tool, document manually, note the patient’s preference in their record so it applies to future consultations. This should never feel like an inconvenience to the patient.
"The note is wrong." A patient has seen their AI-generated note (perhaps through the NHS App or online access) and found an error. This is a clinical records complaint. Handle it through your standard complaints process, correct the record, and investigate how the error passed review. This is actually a quality improvement opportunity — patients spotting errors is an additional safety layer.
In most cases, patient concerns resolve quickly with honest communication. "Thank you for raising this. The AI tool produces a draft note that your doctor reviews before it goes into your record. We take accuracy very seriously and I will make sure this is corrected."
Problem 4: Automation complacency
This is the problem that worries me most, because it develops slowly and invisibly.
Automation complacency is what happens when clinicians trust the AI output too much. They stop reviewing notes carefully. They skim rather than read. They assume the AI has captured everything because it usually does. And then they miss the one time it has not.
How to spot it: Review times get shorter over time without a corresponding improvement in AI accuracy. Clinicians sign off notes within seconds of them being generated. Error rates in audits increase. When you ask a clinician to walk you through their review process, they cannot describe a systematic approach.
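If your tool can export an audit log with generation and sign-off timestamps, you can screen for seconds-to-sign-off automatically rather than relying on impressions. Here is a minimal Python sketch; the file name and column names are assumptions, so adjust them to whatever your tool actually exports.

```python
# Rough screen for automation complacency: flag notes signed off suspiciously fast.
# Assumes a CSV export with 'clinician_id', 'generated_at' and 'signed_off_at'
# columns in ISO 8601 format; rename to match your tool's actual export.
import csv
from datetime import datetime
from statistics import median

THRESHOLD_SECONDS = 20  # sign-offs faster than this are unlikely to be real reviews

def review_seconds(row: dict) -> float:
    start = datetime.fromisoformat(row["generated_at"])
    end = datetime.fromisoformat(row["signed_off_at"])
    return (end - start).total_seconds()

with open("avt_audit_export.csv", newline="") as f:
    rows = list(csv.DictReader(f))

by_clinician: dict[str, list[float]] = {}
for row in rows:
    by_clinician.setdefault(row["clinician_id"], []).append(review_seconds(row))

for clinician, times in sorted(by_clinician.items()):
    fast = sum(t < THRESHOLD_SECONDS for t in times)
    print(f"{clinician}: median review {median(times):.0f}s, "
          f"{fast}/{len(times)} notes signed off in under {THRESHOLD_SECONDS}s")
```

The output is a conversation starter, not a disciplinary tool: a clinician whose median review time has halved deserves a supportive chat about their review process, not an accusation.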
How to prevent it: Regular audit and feedback. Share anonymised examples of errors that were caught — and errors that were not. Remind the team that the AI is a drafting tool, not an author. Build review into the workflow so that it is a structured step, not an afterthought.
Automation complacency is not laziness. It is a well-documented psychological phenomenon that affects experienced users of any automated system. Pilots, radiologists, and factory workers all experience it. The solution is not to blame individuals but to build systems that maintain vigilance — regular audits, error sharing, and structured review processes.
Problem 5: Technical issues
Some problems are simply technical.
Poor audio quality. The AI cannot transcribe what it cannot hear. Check microphone placement, room acoustics, and background noise. A directional USB microphone on the desk often performs better than a laptop’s built-in microphone.
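If you want to test this objectively rather than by ear, record a short test consultation at the desk and measure the signal level. This Python sketch computes the average level of a 16-bit WAV recording; the -30 dBFS threshold is a rough rule of thumb, not a supplier specification.

```python
# Quick check of a test recording's signal level before blaming the AI.
# Records nothing itself: point it at a short WAV captured at the consulting desk.
import wave, array, math

def rms_dbfs(path: str) -> float:
    """Return the RMS level of a 16-bit WAV in dB relative to full scale."""
    with wave.open(path, "rb") as wav:
        assert wav.getsampwidth() == 2, "expects 16-bit PCM"
        samples = array.array("h", wav.readframes(wav.getnframes()))
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(max(rms, 1.0) / 32768)  # 32768 = full scale for 16-bit

level = rms_dbfs("desk_test_recording.wav")
print(f"Average level: {level:.1f} dBFS")
if level < -30:
    print("Speech is quiet at this distance; try moving the microphone closer.")
```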
System integration issues. If the AI tool does not integrate smoothly with EMIS or SystmOne, clinicians waste time copying and pasting notes between systems. Work with your supplier and clinical system provider to resolve integration issues. If they cannot be resolved, this is a legitimate reason to reconsider the tool.
Network problems. Cloud-based AI tools need a reliable internet connection. If your surgery has connectivity issues, the AI tool will be unreliable, and clinicians will lose trust in it. Address the infrastructure before blaming the tool.
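A quick way to gather evidence is to time repeated requests to the supplier's service from a consulting room machine. The sketch below uses only Python's standard library; the endpoint URL is a placeholder, so substitute your supplier's actual status page or API endpoint.

```python
# Crude connectivity probe: time repeated requests to the AI service endpoint.
import time
import urllib.request

ENDPOINT = "https://example-avt-supplier.com/status"  # placeholder URL
ATTEMPTS = 10

latencies = []
for _ in range(ATTEMPTS):
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(ENDPOINT, timeout=5) as resp:
            resp.read(64)
        latencies.append(time.perf_counter() - start)
    except OSError:
        latencies.append(None)  # count timeouts and failures as drops
    time.sleep(1)

drops = latencies.count(None)
ok = [t for t in latencies if t is not None]
if ok:
    print(f"{drops}/{ATTEMPTS} failures, worst round trip {max(ok)*1000:.0f} ms")
else:
    print("No successful requests: the problem is the connection, not the tool")
```

Repeated drops or round trips measured in seconds point to the practice's connection, which is evidence you can take to your network provider rather than your AI supplier.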
Technical problems are frustrating but fixable. The important thing is to fix them quickly — every day a clinician has a bad experience with the tool is a day they become less likely to persevere with it.
When to escalate
Most problems can be resolved at practice level with good management and support. But some situations require escalation.
Safety concern: An AI error reaches the patient record, causes harm, or leads to a near-miss. Report through your clinical governance process and to the tool supplier.
Data breach: Patient data is exposed, lost, or used inappropriately. Report it to your Data Protection Officer and your ICB immediately; notifiable breaches must reach the ICO within 72 hours of your becoming aware of them.
Systemic tool failure: The AI consistently produces a specific type of error across multiple clinicians and consultations. Report to the supplier and to NHS England’s AVT team.
Having an escalation process written down and shared with the team before problems occur is much better than trying to create one in the middle of a crisis.
Key Takeaway
Problems are normal in any AI implementation. The five most common issues are clinician dropout (set realistic expectations), AI errors (reinforce review skills), patient complaints (improve communication), automation complacency (maintain audit and feedback), and technical issues (fix infrastructure). Have an escalation process ready for safety concerns, data breaches, and systemic tool failures.