Can Governance Speed Up AI in Healthcare?
Why smarter oversight might be the key to faster, safer healthcare AI adoption.
Have you ever dragged four children through a grocery store, then realized you could just… stop doing that? That you could pull into a parking space, pop the trunk, and not tell anyone to stop licking random bananas? The first time I used grocery pickup, I felt a surprising amount of joy—like I’d been handed back an hour of my life I didn’t even know I’d lost.
[Image caption: This is a random kid licking a random banana, not my kid]
That’s exactly how I felt when we stopped having to click through seven useless boxes to “demonstrate quality” after every anesthesia case to meet a CMS reporting rule. Sometimes that meant 140+ clicks per day. It wasn’t improving care. It wasn’t even capturing new information—it was already in the chart. It was just reporting for reporting’s sake. And once it was gone? Again, hours of my life back.
So yes, I understand why people groan when they hear the word “governance”. It brings to mind click boxes and TPS reports and endless forms. Pointless forms are bad for business, bad for healthcare workers, and ultimately bad for patients.
But what if it didn’t have to be that way?
What if AI governance in healthcare actually helped?
To be clear, governance and regulation are distinct. Regulation usually comes from the outside—laws, mandates, or reporting requirements imposed by government agencies or payers. Governance, on the other hand, is internal. It’s the system an organization sets up to make decisions, manage risk, and ensure accountability. In the context of healthcare AI, governance is what determines how a hospital evaluates a tool, who has decision-making authority, and what standards count as “safe enough.”
I think we can all agree that box-clicking for its own sake is pointless. But it’s also unlikely that healthcare will suddenly morph into an industry with zero regulations or governance. We’re never going to be in an environment where a new AI tool just gets thrown into the hospital without someone doing some kind of check. The problem comes when those checks, the governance system, are different at every institution and for every tool.
Because you know what else is bad for business?
Uncertainty.
For vendors, not knowing what questions they’re going to be asked, how long the process will be, or even what good answers look like increases uncertainty. For the chairs of AI governance committees, uncertainty about standards, institutional norms, and what constitutes “safe” AI makes the decision to adopt technology much more dependent on the risk tolerance of both the institution and the individual committee members.
They might feel the need to gather more “proof” than is really necessary just to demonstrate due diligence, or require many extra people to sign off on any decision.
Without shared governance standards, vendors and AI governance committees are left to puzzle out decisions together, often with wildly divergent approaches and outcomes for the same AI tools.
Once you accept that there will be a healthcare AI governance process, the question shifts from “Will the healthcare system have (internal or external) guidelines in place for new healthcare AI products, particularly those that are clinically risky and/or do things we currently rely on humans for?” (the answer is yes) to “How can we make the governance process as fast and efficient as possible, so that good new technology gets in place to help as many patients and clinicians as possible?”
And what does the government say about AI governance?
It turns out, “Let’s use governance to make AI go faster” is the subject of a new memorandum to the heads of government agencies. It states:
“...agencies will be charged to lessen the burden of bureaucratic restrictions and to build effective policies and processes for the timely deployment of AI. Agencies are directed to accelerate the Federal use of AI by focusing on three key priorities: innovation, governance, and public trust.”
“Effective AI governance is key to accelerated innovation as it empowers professionals at all levels to align processes, establish clear policies, and foster accountability while reducing unnecessary barriers to AI adoption”
A few concrete approaches they encourage include:
Giving more people “at the lowest appropriate level” decision-making power about risk acceptance. Notably, they mention actually training people in these roles to “identify, assess, mitigate, and accept risk for AI use cases”. Having this kind of training for our front-line hospital staff would be incredibly helpful. It would empower them to weigh in on AI decisions in ways that actually reflect clinical reality.
Pilot programs are also encouraged, though the guidelines around scope and risk level are unclear.
And health systems are already figuring out ways to use evaluation and governance processes to decrease implementation time. A recent Kaiser paper describes how they accelerated AI implementation by creating a team to evaluate tools in parallel with deployment. Instead of waiting for every checkbox to be ticked before getting started, they figured out which safeguards needed to be in place up front and which could be addressed in real time. It’s an innovative approach that heavily depends on context and risk level, but it shows how governance can be both thoughtful and nimble.
A novel thought: developing a plan to stop using tools that don’t work
And possibly my favorite line from the White House memorandum:
“When the high-impact AI is not performing at an appropriate level, agencies must have a plan to discontinue its use”
Let’s all take a moment to think about what might have changed if health systems had a plan for discontinuing previous technology that was not performing at an appropriate level. EHRs, for example. How would that have impacted EHR vendors’ behavior?
Even the mental exercise of figuring out what constitutes an appropriate level of performance for health technology is kind of mind-blowing. And the idea that we could just get rid of a tool if it didn’t work, instead of contorting ourselves around poorly designed tech? We’ve somehow created a system in which that’s hard to imagine.
Conclusion
Healthcare AI governance is going to happen. Instead of fighting that idea—or assuming it will become a tedious slog of checkboxes and compliance—we have a chance to build something better.
We can create systems that make it easier to try promising tools, faster to say yes (or no), and clearer about what “good enough” really means. We can reduce uncertainty for the people building tools and the people evaluating them. And we can set real expectations for performance—so that when something isn’t working, we stop using it.
It won’t eliminate all checkboxes. But it could be the difference between the pain of dragging four kids through the store and the joy of popping your trunk at pickup.