Building AI That Patients Can Trust
Chandan Kaur on lessons from medical device development
Chandan Kaur embodies the intersection of perpetual curiosity and uncompromising dedication to healthcare innovation. From her foundational years as a biomedical engineer, she advanced rapidly into designing and launching Class II and III surgical tools, implants, and drug-device combination products. At companies like Medtronic, BD, and Abbott, she navigated some of the most scrutinized regulatory landscapes, always prioritizing patient safety and uncompromising quality standards.
Never content to follow the expected path, Chandan pursued an MBA from The Wharton School and then boldly pivoted from traditional hardware into the realm of healthcare AI. As founder of Acucare, she leverages her ability to see market shifts before they happen, bridging the gap between the rigor of medical device standards and the dynamic potential of digital health.
Her uncommon perspective, rooted in lived experience as both a patient and a seasoned medtech innovator, fuses operational rigor, regulatory insight, and business growth strategy, ensuring that innovation remains both impactful and responsible. What truly distinguishes Chandan is her willingness to push boundaries in a thoughtful, informed way. She has seen the value of decades of proven safety protocols, rigorous risk management systems, and comprehensive post-market surveillance in medical devices, practices that AI developers are only beginning to adopt. Her unique experience bridges these worlds, offering invaluable lessons on how the time-tested rigor and accountability that patients expect from traditional medical technology can carry over into AI.
She is dedicated to advancing healthcare technologies in ways that are daring, disciplined, and above all, meaningful for the people they serve.
Background
You’ve built a career across medtech, engineering, and now health AI. How do you describe your career arc, and what drew you into this intersection of technology and patient care?
Some of the toughest moments in my life ended up shaping the direction of my career. When I was 17, I had an unexpected autoimmune flare that affected my vision. One day I came home from school with holes in my sight, and the next day, I was sitting in a clinic having images taken of my retina. That experience stuck with me—not just because it was scary, but because I was fascinated. I remember being more curious about how that imaging device worked than worried about the vision loss itself.
That mix of fear and fascination sparked a deep interest in the human body and the tools we use to understand it. When I discovered biomedical engineering—still a relatively new field at the time—it felt like the perfect intersection of my love for math and physics and my growing interest in health.
I started out in medtech, working on surgical tools, implants, and drug-delivery devices. I’ve always been drawn to hardware—things you can build, take apart, and improve. But over time, I started seeing how much of modern medicine also depends on information—how we collect it, interpret it, and act on it. That led me to pivot into health AI and clinical informatics a couple of years ago, building tools that support physicians in decision-making.
Throughout my career, I've been driven by a need to solve real problems—starting from personal experience, then asking: what’s not working, why, and how can we do it better? We’ve extended human life dramatically over the last couple of centuries. Now, I think the next frontier is improving the quality of that extended life. That’s what keeps me going.
Looking back, what experiences most shaped your perspective on how to build safe, effective, and meaningful health technologies?
For me, this one’s personal. Everyone in my household is a patient in some way, and I lost my aunt at a young age to an autoimmune disease that’s much more manageable today. That experience keeps me grounded. It reminds me that behind every drug or device is someone’s parent, sibling, or child—just like mine. We owe it to them to show up with care, responsibility, and respect. Safety and effectiveness aren't optional—they’re the baseline.
I’ve also learned that we can’t always wait for the perfect solution. When my aunt got sick, the right treatments didn’t exist yet. Now they do, but they’re still not perfect. That loss taught me a hard truth: waiting for perfection can cost lives. As engineers and builders, we have to focus on what’s needed now, make the best tradeoffs we can, and keep moving forward—always transparently.
In the end, meaningful health tech is about informed choices. Patients deserve safe, effective tools—and the information they need to decide what’s right for them. That’s the balance we aim for: progress without compromising trust.
Lessons from Medtech
You’ve worked across multiple device classes—Class II, Class III, and programs requiring PMA approval. How did those experiences shape your approach to risk, regulation, and product development?
I see regulations as solutions—they’re designed to control risk and protect patients. But like any solution, they come with side effects. One of the most common is the documentation burden, which is often blamed for delays. From a business standpoint, I understand the frustration—small delays can have big financial and even life-altering impacts.
Still, from a safety perspective, that documentation is critical. It ensures every decision is backed by evidence and creates the accountability patients deserve. I have deep respect for what regulation is meant to do: keep unsafe or unproven technologies out of clinical use.
What I don’t find helpful is when regulation gets treated as the root cause of delays. In my experience, it’s rarely the rules themselves—it’s poor planning, unclear communication, or decision paralysis. And those are all solvable problems.
That’s why I’m excited about the potential of AI and ML to streamline product development—not just clinically, but operationally. Many tasks, especially rule-based documentation and compliance checks, are ideal for automation. Of course, we still need human oversight—but we can reduce time and cost without compromising safety.
As a patient, I value strong regulation. As an innovator, I see constraints as a catalyst for better solutions. The goal isn't just to meet the standard—we now have the tools to exceed it, and to do so more efficiently than ever before.
You also led through a Class I recall. How did that influence the way you think about safety, reliability, and patient trust?
Going through a Class I recall really puts things in perspective. It’s a stark reminder that we don’t operate in a perfect world—and that innovation always comes with risk.
As both a patient and an innovator, it all comes down to risk versus benefit. I’d rather have an imperfect solution with clearly communicated risks than no solution at all. The key is transparency. Patients deserve to know what they’re being exposed to so they can make informed choices—not have that choice taken from them.
As builders, we’re constantly juggling those tradeoffs. I’ve worked with brilliant, deeply compassionate engineers who want to get it exactly right. But the reality is, if we chase perfection, we may never launch—and that delay can mean real harm for people waiting for a solution today.
The human body is complex. We can’t predict everything. What we can do is identify and mitigate known risks, maximize benefit, and be honest about what we don’t know. No technology comes with zero risk—often, the most impactful innovations carry the most serious tradeoffs. And our responsibility is to manage those with care, integrity, and respect for the people who are trusting us.
Transition to AI and Informatics
After more than a decade in medical devices, what led you to begin working on AI/ML-driven data platforms?
It started with a very personal problem—one I’ve seen worsen over the last two decades. I once heard a doctor say, “We’re drowning in information, but starving for knowledge.” And that couldn’t be more true.
We’ve developed some of the best diagnostics and assays in history, and we’re generating billions of terabytes of health data globally. But in clinical practice, most of that data still goes unused. Not because people don’t care—but because it’s humanly impossible to process that much information, dozens of times a day, and extract meaningful insights.
I’ve lived that burden as both a patient and a caregiver. I found myself manually organizing lab results, reading papers, tracking symptoms—just to show up prepared for a doctor’s appointment. Doing that across multiple family members with complex, chronic conditions? It was exhausting, and frankly, unsustainable.
That’s when it clicked for me: this is exactly where automation belongs. So I pivoted toward building AI and ML-based data tools to support clinical care. When validated and applied thoughtfully, these technologies can help translate raw data into real insight—something clinicians and caregivers can actually use. And to me, that’s how we get closer to delivering truly personalized, value-based care.
How do you see your engineering and regulatory background informing the way you approach AI in healthcare?
Honestly, I don’t see it as all that different from medical device development. Take valve technology—it’s based on fluid mechanics, a field that’s been evolving for thousands of years. And yet, putting a valve in the human heart is still a 10-year development effort. Why? Because it’s high risk. Even a well-understood technology has to be carefully adapted, tested, and qualified for a very specific use case—like preventing backflow in the heart.
It’s the same with AI. Machine learning and natural language processing are powerful, established fields that are continuing to evolve. But when we apply them to clinical care, we take on a new level of responsibility. Just like devices, AI tools must be qualified for each specific use case, and we have to demonstrate that they’re safe, effective, and appropriate for that context.
The higher the potential benefit, the greater the risk—and with that comes more regulatory responsibility. That’s not a barrier; it’s the framework that ensures we’re building tools people can trust.
So to me, med device engineering and AI tool development are fundamentally aligned. The technologies may differ, but the mindset is the same: understand the risks, design for safety, and do the work to prove that the solution is fit (safe and effective) for its designed purpose. We have more tools and knowledge than ever, and we can move faster—but only if we manage risk as rigorously as we pursue innovation.
What excites you most about the potential of AI/ML to transform healthcare compared to traditional devices? What do you think the AI/ML community can learn from the rigor of medical device development?
What excites me most is the scale of insight AI/ML can unlock. It’s like going from solving math problems by hand to using Excel—then leaping to systems that can process billions of data points in real time. Humans simply can’t do that manually, at least not consistently or at scale.
In the short term, I’m excited about how AI can help offload repetitive, high-burden tasks—things that don’t require clinical expertise. But in the long term, I’m even more excited about predictive AI. These systems can identify patterns, anticipate outcomes, and help personalize care in ways we’ve only dreamed of. That’s where the real transformation happens—moving from reactive to proactive, from generalized to truly individualized care.
But with that potential comes real responsibility. The AI/ML community has a lot to learn from the rigor of medical device development. That rigor exists for a reason—because in healthcare, even small mistakes can have serious consequences. Medical device regulations evolved from hard lessons, and if we don’t apply that same level of discipline to AI, we risk repeating them.
That said, I’m optimistic. We’ve already seen the early frameworks for Software as a Medical Device (SaMD) take shape, and they’re helping guide responsible innovation. The challenge now is to move fast and stay grounded—building systems that are not only powerful, but safe, transparent, and truly beneficial to the people they’re meant to serve.
Leadership and Innovation
You’ve led global cross-functional teams and worked across clinical, regulatory, and business priorities. What lessons stand out about aligning diverse stakeholders around innovation?
This is a tough one—because at the end of the day, it all comes down to people. The more fluent you become in speaking other people’s “languages,” the easier it is to align teams with very different priorities—whether those are technical, regulatory, clinical, or business.
In my 20s, I saw everything as a math or physics problem. But over time, I realized that the technical challenges are often the easiest part. The harder part is aligning humans—each with their own goals, constraints, and ways of thinking. That’s where the real work lies.
Early in my career as an R&D engineer, I didn’t always get that. I thought showing lab results was enough. But when you're talking to legal or regulatory teams, they don't need test data—they need clarity on implications. When you're talking to business leaders, they need to understand risks, trade-offs, and timing—not test protocols.
Looking back, I didn’t always have the skills to translate across those functions. But I quickly learned that communication is everything, and that to be effective I had to learn other people’s languages: regulatory, clinical, business, manufacturing, operations, legal, and so on. You have to approach every team member like they’re your customer: understand what they need from you to do their job well, and deliver it in a way that makes sense to them.
That, to me, is how high-performing teams work—not by aligning everyone to one way of thinking, but by unblocking and enabling each other so everyone can do what they do best. That’s what turns a group of experts into a winning team.
How do you balance the creativity needed for innovation with the discipline required for safety and compliance?
That’s a great question—because honestly, there’s no one-size-fits-all formula. The balance really depends on the stage of development.
In the early phases—say phases 0 to 2—there’s a lot more room for creativity. That’s when you explore ideas, test assumptions, and get scrappy. But even then, creative thinking needs to be grounded in good business judgment. Without that, you risk investing in ideas that won’t hold up later.
As you move into phases 3 and 4, the focus shifts more heavily toward compliance. You’re closer to commercialization, the stakes are higher, and the tolerance for risk is much lower. But interestingly, that’s often when I’ve seen some of the most creative solutions emerge. Constraints—whether technical, regulatory, or operational—can actually fuel innovation by forcing us to think differently.
So for me, creativity and compliance aren’t opposites. They go hand in hand. Regulations set the guardrails, but the challenge of working within them is often what sparks the most meaningful and practical innovations.
What advice would you give to engineers and product leaders who want to cross over into health AI?
The same advice I have to remind myself of: learn to appreciate the process. It’s easy to fixate on outcomes—especially if you come from fast-paced industries. But in health AI, where the stakes are people’s health, the journey to ensure safety truly matters.
Innovation here requires patience, persistence, and a deep respect for the way things are done. The process might sometimes feel like a burden, but it’s there for a reason—to minimize harm. When you understand the real risks involved, you start to appreciate why those guardrails exist and what they enable us to do safely.
That said, we desperately need fresh thinking in this space. The process isn’t perfect, and it won’t improve unless more people bring new ideas. So yes—bring your creativity, your drive, and your urgency.
Because we need new ideas, fresh energy, and a real sense of urgency to help make this process better, faster, and safer—for everyone.
Looking Ahead
As you think about the next chapter in your work, where do you see the greatest opportunities for AI/ML in healthcare?
Honestly, I see the greatest opportunities for AI/ML in tackling the boring, repetitive tasks that we simply don’t have enough humans to do, especially when it comes to delivering the level of high-quality care that everyone deserves.
What risks or blind spots worry you most about how AI is being deployed in clinical settings today?
What worries me most is the lack of true collaboration and how some large enterprise players have essentially created monopolies due to their access to patient data. When just a few companies dominate the space, it stifles competition and innovation—the very forces that drive better products and better care.
It frustrates me that so many patients’ well-being depends on a small handful of companies. If we had more competitive dynamics, I’m confident we’d see not only superior AI tools but a healthcare system more focused on value and outcomes.
Honestly, this is a very serious topic of its own—one we could spend an entire episode on. But at its core, that concentration of power through data is the biggest risk I see today, and I want to see it change.
If you imagine the healthcare landscape five years from now, what role do you hope AI and data platforms will be playing in patient care?
Looking five years ahead, I hope AI and data platforms become central to scaling care while improving quality. The reality is, we simply don’t have enough healthcare professionals to meet growing patient demand. In some specialties, one provider cares for 10,000 patients—that’s not sustainable.
AI’s role, I believe, will be to automate the daily, repetitive tasks that weigh down doctors, nurses, and care teams, freeing them to focus on what only humans can do: deliver expert, compassionate care.
With data platforms, the biggest opportunity lies in connecting the dots. Today, health data is fragmented and siloed across countless systems. If we can unify and coordinate this data into dynamic, integrated platforms, with measures to build and commercialize tools safely, the possibilities are endless.
Even with the data we already have, better connection and smarter use could propel us beyond managing diseases to truly preventing them. That’s the future I want to help build.