Is it Tom Cruise? Or Your Internist?
What Mission Impossible can teach us about digital biomarkers and biometrics
Cue the Mission Impossible theme song
In every Mission Impossible movie, there’s always a key biometric detail that lets Tom Cruise get past the bad guys. Whether it’s a chopped-off finger or a 3D-printed face mask, biometrics are portrayed as the highest-tech approach to securing diamonds, cash, or a nuclear arsenal. But with the advent of more advanced AI models, that technology has arrived in clinics and hospitals, with far more common, if less suspenseful, results.
Imagine being alerted to a possible Parkinson’s diagnosis from your patient’s voice when you use ambient dictation, or getting a readout of their daily METs instead of asking them how far they could walk without stopping. This information is already available; researchers have started to identify voice markers that lead to new diagnoses, and activity trackers are already recording activity information. The future is here, and it's filled with AI-powered health monitors, chatbots, and biometric recognition systems that promise to revolutionize healthcare.
The Double-Edged Sword of Biometric Data and Digital Biomarkers
Biometric data is becoming an integral part of our daily lives, from unlocking your phone with a glance to verifying your identity with a fingerprint. However, as these technologies find their way into healthcare, they bring along a host of privacy concerns. The voice and facial recognition technologies that allow for personalized care also open the door to potential misuse and surveillance. The thought of law enforcement or advertisers accessing our most intimate health details without consent is unsettling, to say the least.
A recent Nature Digital Medicine article makes the case for defining digital biomarker ‘fingerprints’: unique signatures of a defined healthcare characteristic, each “made by the integration of various digital biomarkers”. This focus on healthcare characteristics rather than purely physical markers like facial geometry differentiates digital biomarkers from biometric data, though the distinction is still new and the overlap substantial. The graphic below from that article lists the many ways health data can be captured, from gait analysis to speech to eye movements. Many of these are being captured now but not saved, or saved but not analyzed, or analyzed but not integrated into caring for actual patients.
The FTC's Stand on Biometrics: A Beacon of Hope
However, the more data we collect about ourselves and our patients, the more data is potentially up for sale and at risk of data breaches. The Federal Trade Commission (FTC) recently warned companies of their obligations related to the misuse of biometric data. Two of the main points of the FTC’s new policy statement include:
“Failing to assess foreseeable harms to consumers before collecting biometric information;
Failing to promptly address known or foreseeable risks and identify and implement tools for reducing or eliminating those risks”
These statements are meant to encourage companies to tread carefully, ensuring compliance with privacy laws and considering the ethical implications of their technologies. They apply to all companies using biometric data, including healthcare AI companies, which navigate an especially complex list of “foreseeable harms”, such as an inadvertent diagnosis, or the release of a diagnosis, when voice recognition software is being used for other purposes. And as we discussed last week, the likelihood that this data will be sold is fairly high.
Global Biometric Data Collection: No Consensus
Other countries handle biometric data with a range of practices, with some embracing extensive surveillance and others adopting more restrictive approaches. Biometric data is almost always collected at airports, and I’m sure many countries have biometric data on my family and me.
China stands out for its pervasive use of facial recognition on 626 million cameras nationwide. China also allows biometric data monitoring while at work. Some Chinese companies have monitored employees with EEGs to determine how productive they were, and monitoring of employee emotions via wristbands has also been tried. Multiple countries use facial recognition to persecute minority groups, and Iran uses it to identify women who are defying its new hijab law.
Meanwhile, the EU’s GDPR regulations offer some safeguards against unchecked biometric surveillance. The GDPR includes provisions for individuals to access their biometric data and the “right to be forgotten”, meaning they can request their biometric data be erased. Yet, even in these regulated environments, the collection of biometric data marches on, raising questions about the balance between security and privacy.
https://www.comparitech.com/blog/vpn-privacy/biometric-data-study/
Audiomics: A New Field in Health Biometrics
With the ability to capture and analyze these new biomarkers, I expect we will start to see new fields of science emerge to understand how to integrate this information into healthcare. One early example is the realm of "audiomics," which the authors of a new paper describe as “the interdisciplinary field of audio analysis applied to biomedicine to identify unique audio biomarkers of health and disease”. This emerging field offers a noninvasive way to screen for health conditions, from laryngeal cancer to heart failure, using just your voice.
The technology picks up not only differences in how the vocal cords vibrate, but also shifts in tone of voice and affect that can help identify mental health disorders. Similarly, the sounds of a cough may be able to distinguish pneumonia from a COPD exacerbation.
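To make the idea of a vocal biomarker concrete, here is a toy sketch of the kind of low-level acoustic feature such systems start from: estimating the fundamental frequency (the rate of vocal-cord vibration) via autocorrelation. This is an illustration of the general technique, not the method used in the paper; the signal is synthetic, and real audiomics pipelines extract far richer feature sets.

```python
import numpy as np

def estimate_f0_autocorr(frame, sr, fmin=75.0, fmax=400.0):
    """Estimate fundamental frequency (Hz) of a short audio frame
    via autocorrelation. fmin/fmax bound the search to a typical
    speaking range so spurious lags are ignored."""
    frame = frame - frame.mean()
    # Autocorrelation at non-negative lags only
    corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)
    lag = lo + np.argmax(corr[lo:hi])  # lag of the strongest periodicity
    return sr / lag

# Synthetic "voice": a 220 Hz tone standing in for vocal-cord vibration,
# one 2048-sample analysis frame at a 16 kHz sampling rate.
sr = 16000
t = np.arange(2048) / sr
voice = np.sin(2 * np.pi * 220 * t)

f0 = estimate_f0_autocorr(voice, sr)
print(f0)  # close to 220 Hz
```

In a real system, features like this (plus jitter, shimmer, spectral measures, and learned embeddings) would be computed over many frames of recorded speech and fed to a model, which is exactly why a voice recording is both clinically rich and hard to de-identify.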
The implications are staggering, yet the science is still early: there are not yet any FDA-approved audiomics algorithms due to concerns about generalizability. For example, most audiomics studies of laryngeal cancer “reported training datasets of fewer than 300 patients, and only 2 studies used more than 1 data modality to train their models”.
One major hurdle is a lack of clarity about how a voice can be de-identified and to what extent HIPAA applies to voice data. Without the ability to share data easily, researchers have (and will continue to have) difficulty providing AI models with enough training data to truly make use of the technology. Making this even more difficult is the fundamental relationship between identifiability and technological improvement: as the possibilities for medical uses increase, so do the possibilities for re-identification.
The authors call for three conditions to be met for the field to move forward, including clarification of the “unique ethical and legal challenges in audiomics”:
[HIPAA] applicability
Potential reidentification of patients by their voice
Voice hacking
Data ownership
Biomarkers and the Future
I expect that many new fields will face similar challenges in balancing privacy, patient rights, and scientific discoveries. HIPAA is an outdated and inadequate way to address many of these concerns, and I hope regulators will find a way to encourage progress while protecting patients.
The integration of biometric data and digital biomarkers into our clinical practices is unfolding before our eyes, even as the practical applications may feel as far off as a Tom Cruise plot. In the future, these tools may mean that diagnosis and monitoring are not just about what we see in the clinic but about harnessing a spectrum of data that speaks to the unique biological narratives of our patients. Yet, as we embrace this future, we must remember that our commitment to patient care extends beyond the exam room—it encompasses safeguarding the very data that define their identities. As a profession, we will have to navigate this new reality with the dexterity of a Mission Impossible protagonist, ensuring that in our quest to outsmart diseases, we never compromise on the core values of medical ethics and patient privacy.