Who's Liable When AI Gets It Wrong?
New EU regulations suggest it won't be just physicians on the hook
Many docs have asked me about their liability when using AI tools - they feel "damned if they do, damned if they don't." Follow a wrong AI suggestion? You're liable. Ignore a correct one? Also liable. That might be changing with Europe’s new regulations.
The EU has been busy passing several laws that will significantly affect healthcare AI tools over the next few years. While you might think European regulations don't concern you, the legal liability frameworks being established there often become global standards, just like what happened with GDPR.
The GDPR is best known for making many, many “Accept All or Decline” buttons pop up on your phone when you visit a new website.
Three Key Regulations Coming Into Effect
The EU AI Act (entered into force August 2024) creates the world's first comprehensive AI legal framework. High-risk healthcare AI systems must meet strict requirements for risk mitigation, data quality, transparency, and human oversight. Full compliance is required by August 2026, with medical devices getting until August 2027.
The European Health Data Space (EHDS) (entered into force March 2025) mandates data sharing for research and AI development by March 2029. Healthcare institutions face fines of up to €20 million or 4% of global revenue for non-compliance. This kind of data sharing is ambitious, but it could be incredibly useful for research if it can be done safely.
https://www.european-health-data-space.com/
The EU Product Liability Directive treats software developers as device manufacturers. Here's the critical part:
"the product's defectiveness shall be presumed when the claimant demonstrates that the product does not comply with mandatory product safety requirements."
This last point represents a fundamental change in how AI liability works. Instead of a harmed patient having to prove an AI system was defective, defectiveness is presumed once non-compliance with mandatory safety requirements is shown, and the burden shifts to the company to demonstrate that its system met those requirements. If an AI tool doesn't meet EU standards and causes harm, its maker is presumed liable until proven otherwise.
For physicians worried about being caught in the middle, this could be significant. Rather than wondering whether following or ignoring AI advice will get you sued, the question becomes: did the AI company meet its obligation to prove the system was safe before you used it? If it didn't meet those safety standards, the liability burden might shift to the company.
This accountability-first approach is distinct from the US model, where liability typically requires proving negligence or defects after the fact. Currently, if an AI system gives bad advice, physicians often find themselves having to justify their clinical decision-making. Under the European model, the question might become whether the AI company properly validated their system in the first place.
The UK Also Increases Regulation
This move toward increased regulation is also happening in the UK, where NHS England recently classified ambient scribes as medical devices. Any ambient scribing product that uses AI for "further processing, such as summarisation" now requires registration with the Medicines and Healthcare products Regulatory Agency (MHRA) and must meet medical device safety standards.
This decision surprised many companies who viewed basic transcription tools as low-risk administrative software. The UK's position that AI summarization crosses into medical device territory suggests regulators are taking an expansive view of what constitutes healthcare AI—and what level of pre-market validation should be required.
For physicians using these tools, this could mean more confidence in the systems they're working with, knowing they've undergone medical device-level scrutiny rather than being treated as general software tools. It may also mean a longer time to market for these tools, and possibly that they’ll become more expensive as they pass the costs of testing and compliance onto hospitals.
Why This Matters for US Healthcare
Even if you're not directly using EU-regulated tools, these norms tend to spread, and they may signal where US regulation is headed as well. Several indicators suggest this shift could influence American policy:
Precedent for global adoption: GDPR became the de facto global privacy standard despite being European law. Companies found it easier to build one compliant system than maintain separate standards.
Vendor influence: Major healthcare AI vendors will need to meet EU standards to access those markets. These compliance requirements often become baseline features in their global products.
Regulatory momentum: The UK's classification of ambient scribes as medical devices has many commentators suggesting the EU and FDA might follow similar approaches.
Physician protection: If European-style liability frameworks spread to the US, they could provide clearer legal protection for physicians by requiring AI companies to prove safety upfront rather than leaving doctors to defend their clinical judgment after bad outcomes.
The Practical Takeaway
The European approach represents a shift from "move fast and break things" to "prove safety first." While US regulation remains largely voluntary and guidance-based, the EU is creating binding legal frameworks with significant financial penalties.
For healthcare leaders and AI developers, this suggests:
Documentation standards are rising: The EU requires extensive technical documentation and risk management systems that may become global expectations.
Liability models are evolving: The presumption of defectiveness for non-compliant systems could shift liability from physicians to AI companies that fail to meet safety standards.
Regulatory classification is expanding: The UK's broad interpretation of what constitutes a medical device suggests regulators worldwide may take more expansive views of AI oversight.
Physician liability could decrease: If AI companies must prove their systems are safe before deployment, physicians might have stronger legal ground when using properly validated tools.
The Europeans are essentially beta-testing comprehensive AI governance for healthcare. Whether these approaches prove effective or overly burdensome, they're likely to inform regulatory development globally.
For physicians currently worried about AI liability, these developments suggest the burden of proof might shift toward the companies building these tools. Instead of wondering whether you'll be held responsible for following or ignoring AI advice, the question might become: did the AI company fulfill their safety obligations before putting this tool in your hands?
For now, it's worth following along to see how this process plays out in Europe, not for compliance reasons, but to understand where the regulatory conversation is headed. I know physicians are very invested in watching whether the liability framework being established in Europe becomes the template for AI accountability worldwide.
Are you seeing changes in how your organization thinks about AI liability and documentation? I'm tracking these regulatory developments and their practical implications for US healthcare.