The most recent Mission Impossible movie pits Tom Cruise against an all-knowing, autonomous AI that can (somewhat bizarrely) only be shut down via a specific key inserted into a submarine. The subtext is: Imagine how powerful these machines could become, that they could control what seems true in our increasingly digital age! And yes, there are some terrifying thought-experiment scenarios out there about existential AI risk.
In the short term, though, there are the more mundane and likely types of AI risk that have to be managed, which are particularly hazardous in healthcare.
Categories of AI risk for all businesses
Most AI healthcare risk overlaps with the same risks most businesses using AI would encounter. They generally fall into the categories of:
Privacy
Most businesses have to follow some kind of legal framework governing how they obtain the data for their models and where it comes from. The GDPR and the new AI Act in Europe are examples of privacy laws companies must comply with.
Privacy is much more a part of the culture of medicine than it is of most businesses. Rather than pushing the limits of the law, medicine treats privacy as a fundamental tenet, enshrined in law in 1996 with HIPAA.
Security
Cybersecurity has been a concern since the introduction of computerized systems. The usual issues with unauthorized access and viruses still apply. AI introduces additional risks such as “data poisoning,” in which corrupted data are inserted into the training set, and model extraction, in which the model itself is stolen without the company’s knowledge (a toy example of poisoning follows below).
In medicine, the growing reliance on third-party vendors and the threat of systems being hacked are real concerns.
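To make data poisoning concrete, here is a minimal sketch (using an invented, non-clinical dataset and scikit-learn, purely for illustration) of how flipping a fraction of training labels can quietly degrade an otherwise identical model:

```python
# Minimal illustration of label-flipping "data poisoning" (hypothetical data,
# not a real clinical dataset): a fraction of training labels is corrupted
# and the held-out accuracy of an otherwise identical model drops.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def train_and_score(labels):
    model = LogisticRegression(max_iter=1000).fit(X_train, labels)
    return accuracy_score(y_test, model.predict(X_test))

# Poison 15% of the training labels by flipping them.
poisoned = y_train.copy()
flip_idx = rng.choice(len(poisoned), size=int(0.15 * len(poisoned)), replace=False)
poisoned[flip_idx] = 1 - poisoned[flip_idx]

print("clean accuracy:   ", round(train_and_score(y_train), 3))
print("poisoned accuracy:", round(train_and_score(poisoned), 3))
```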
Fairness/bias
In business, biased models could potentially harm users (by assigning certain groups higher-interest mortgages, for example), and businesses may be liable for those actions.
In healthcare, the imperative to eliminate data, algorithmic, and systemic bias is even more crucial and is held to a higher ethical standard than in many fields. There is legal liability for this kind of bias as well.
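As a rough illustration of what checking for this might look like, the sketch below (with made-up data and column names, not a recommended fairness standard) compares how often a model assigns an unfavorable outcome across groups:

```python
# Hypothetical illustration of a basic bias check: compare how often a model
# assigns an unfavorable outcome (e.g., a higher-rate offer) across groups.
import pandas as pd

# Toy predictions; "group" and "high_rate_offer" are invented column names.
df = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B", "B", "A"],
    "high_rate_offer": [1, 0, 0, 1, 1, 1, 0, 0],
})

rates = df.groupby("group")["high_rate_offer"].mean()
print(rates)

# Demographic-parity-style gap: a large difference warrants a closer look,
# though no single metric settles whether a model is fair.
print("gap:", round(rates.max() - rates.min(), 3))
```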
Transparency/explainability
If a customer complains to a regulatory body that her data was used improperly, the business must have sufficient transparency into its own AI model to see and explain where the data came from and how it was used.
Transparency in medicine mirrors the business concern, but also has specific related issues:
Does a patient need to know that AI was used during the visit, even if it was used similarly to how a risk calculator would be used currently?
Can the model explain how it developed its recommendations, thus giving clinicians assurance that the model’s suggestion is clinically reasonable? (A small feature-attribution sketch follows this list.)
Can the physician explain the role of AI in a patient’s clinical course to the patient and to other clinicians?
If there are AI-related billing errors, who is responsible for fraud?
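On the explainability question above, one common approach is to report which inputs most influenced a model’s output. The sketch below uses scikit-learn’s built-in feature importances on synthetic data with illustrative feature names; per-patient attribution methods such as SHAP values are a frequent alternative, and none of this is specific to any particular clinical product:

```python
# Hypothetical sketch of one explainability approach: report which inputs
# most influenced a model's recommendations (global feature importances here;
# per-patient attributions such as SHAP values are a common alternative).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

feature_names = ["age", "heart_rate", "wbc_count", "lactate", "temp"]  # illustrative only
X, y = make_classification(n_samples=500, n_features=5, n_informative=3, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X, y)
for name, importance in sorted(zip(feature_names, model.feature_importances_),
                               key=lambda pair: -pair[1]):
    print(f"{name:12s} {importance:.2f}")
```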
Safety/performance
Most businesses have some kind of minimum performance requirement to meet to maintain their contracts. AI errors, or misinterpretation of AI outputs, could lead to claims of negligence from customers.
Clearly, the bar for safety and performance is much higher for healthcare AI than it is for most businesses. A failure could literally have life-and-death consequences, rather than the loss of income it might mean for a business. Two factors make understanding healthcare AI safety risk more complex:
Its output may not be the same every time, even with the same set of inputs
Its accuracy and validity may change over time without notice (a simple drift-monitoring sketch follows this list)
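Because accuracy can shift without notice, one common mitigation is routine monitoring of the deployed model against a baseline. The sketch below is a minimal illustration; the baseline, tolerance, and metric are assumptions, not a regulatory standard:

```python
# Minimal sketch of performance-drift monitoring: recompute a model's accuracy
# on recent labeled cases and flag it for review when it falls below a baseline
# by more than an agreed tolerance. Threshold and names are assumptions.
from sklearn.metrics import roc_auc_score

BASELINE_AUC = 0.85      # measured at validation/deployment (hypothetical)
TOLERANCE = 0.05         # how much degradation triggers review (hypothetical)

def check_for_drift(y_true, y_scores):
    """Return (current_auc, needs_review) for the latest monitoring window."""
    current_auc = roc_auc_score(y_true, y_scores)
    needs_review = current_auc < BASELINE_AUC - TOLERANCE
    return current_auc, needs_review

# Example monitoring window (toy labels and model scores)
auc, flag = check_for_drift([0, 1, 1, 0, 1, 0, 1, 1],
                            [0.2, 0.9, 0.4, 0.3, 0.8, 0.6, 0.7, 0.9])
print(f"current AUC={auc:.2f}, needs review: {flag}")
```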
The bar for clinical practice has long been the human “standard of care”, but AI is not currently part of that standard.
The standard of care usually doesn’t have metrics or accuracy thresholds associated with it; there’s no clear percentage of missed appendicitis cases that would lead to legal action, for example, or percentage of incorrect chart information that would lead to the firing of a clinician.
The “nirvana fallacy”
When is AI risk acceptable? Is it zero risk? Is it when the risk is less than that of the current state? Can we measure AI risk against the current state directly? The nirvana fallacy describes the tendency to require AI to be perfect before it can be used, holding it to a higher standard than other interventions.
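One way to keep the “compared to what?” question honest is to compare error rates against the current state, with some uncertainty attached, rather than against perfection. The numbers below are invented purely to show the shape of such a comparison:

```python
# Toy comparison of a hypothetical AI miss rate vs. the current human baseline,
# with a rough 95% confidence interval on the difference. Numbers are invented;
# the point is to compare against the status quo, not against perfection.
import math

def miss_rate_diff_ci(misses_a, n_a, misses_b, n_b, z=1.96):
    p_a, p_b = misses_a / n_a, misses_b / n_b
    diff = p_a - p_b
    se = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    return diff, (diff - z * se, diff + z * se)

# Hypothetical: AI missed 30 of 2000 cases; clinicians missed 50 of 2000.
diff, (lo, hi) = miss_rate_diff_ci(30, 2000, 50, 2000)
print(f"AI minus human miss rate: {diff:.3%} (95% CI {lo:.3%} to {hi:.3%})")
```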
AI Risk and the Law
In the vast majority of circumstances, the law is not settled in crucial areas related to AI in healthcare. There will likely be multiple lawsuits and cases that guide future decisions. Until then, my personal view is that physicians will likely continue to be held responsible for any decision made in their purview, whether AI is involved or not. This graphic from a paper in Milbank summarizes the current state:
Next week, we’ll look at how the National Institute of Standards and Technology (NIST) frames AI risks and AI risk mitigation, then at the difference between SaaS and SaMD and their risk levels per the FDA.