US House Members Eye Potential Regulations in Healthcare AI
Industry Experts Share Concerns Over Misuse, Privacy and Security With Committee

As Congress weighs potential legislative and regulatory guardrails for the use of AI in healthcare, issues such as human oversight, privacy and security risks need close attention, said healthcare industry experts who testified during a hearing of the Health Subcommittee of the House Energy and Commerce Committee on Wednesday.
Experts also testified that the Centers for Medicare & Medicaid Services should change its reimbursement policies to incentivize more physicians and other clinicians to invest in AI decision support tools, which could ultimately help improve patient care and outcomes.
Without financial incentives to invest in these tools, the playing field ultimately won't be level: not all patients will have equitable access to the most beneficial AI-enabled tools that can improve diagnosis and treatment - and help avoid medical errors.
"If these tools are shown to decrease mortality, increase survivorship, then they will become a best practice that should be used in every case," testified Dr. Michael Schlosser, senior vice president of care transformation and innovation of HCA Healthcare.
"We have a great opportunity with AI to take a business-case-minded approach in how it's deployed. Not just another technology that adds cost and then we try to drive up reimbursement and therefore drive up costs to the healthcare system," he said. "But instead: How can these technologies make us more efficient and effective and decrease the overall costs of the healthcare system as we deploy them?" Schlosser said.
Medical mistakes that could have been prevented by using AI technology - or that are potentially caused by it - could eventually become a liability consideration. Ultimately, it is crucial for the clinician to have the final say over a patient care decision recommended by AI, testified Dr. Christopher Longhurst, chief medical officer and chief digital officer of UC San Diego Health.
"Both transparency and keeping a human in the loop at all times" are critically important, he said.
Consumer Health AI Concerns
Other areas ripe for potential congressional or regulatory action include the privacy and security of AI tools that fall outside the HIPAA umbrella.
"There are greater risks with consumer apps and other such mechanisms. There's a lot of healthcare data floating around that is not covered under HIPAA," Schlosser said. That data includes consumer health information not covered by HIPAA that is collected, stored and shared, including for advertising or to train AI algorithms.
"This committee needs to consider a federal data privacy law to set the foundation of protections of such data. We should do this before regulating AI in healthcare," Rep. Greg Pence, R-Ind., told the witnesses.
But regulatory gaps related to the safety of AI health tools used by consumers also pose risks to patients, especially when it comes to diagnosis, some witnesses testified.
"If we think about direct-to-consumer symptom checkers for diagnosis, there's a legal disclaimer that says, 'This is not medical advice,' but patients are taking it as medical advice," testified Dr. David Newman-Toker of Johns Hopkins University School of Medicine.
For instance, if a symptom checker tells a consumer that his acute bout of dizziness is not a serious concern and the individual forgoes seeking immediate medical attention but is actually having a stroke, that's a dangerous situation, Newman-Toker said.
"It's really incumbent to us to pay more attention to that consumer health space. There should be accountability - and there is no accountability outside the confines of a hospital or clinic setting," he testified.
The Good and the Bad
Rep. Morgan Griffith, R-Va., expressed concern over recent research showing that AI could be used by malicious actors with "wet lab skills" to "revive" deadly pathogens, such as the Spanish flu. "Do we need a test ban treaty like we do with nuclear?" he asked.
Dr. Benjamin Nguyen, a senior product manager at Transcarent, a provider of AI-driven virtual care applications, suggested paying more attention to "alignment research" that studies malicious use of AI and "how to defend against those increasing vectors."
Peter Shen, head of digital and automation for North America at medical equipment maker Siemens Healthineers, explained to the subcommittee that AI has been used for years in some areas of healthcare - including medical devices - and already is subject to important regulatory guardrails.
"Our algorithms go through a regulatory approval process with the Food and Drug Administration. We follow all AI/machine learning-enabled medical device regulatory requirements for premarket review and postmarket surveillance to ensure the safety and efficacy of our devices," he said.
"We also engage with the FDA regularly on AI/ML and provide feedback on ways to ensure the continued safe and effective application of these technologies. In this regard, our AI is distinct from unregulated AI products."
Ultimately, any action Congress decides to take to regulate AI in healthcare must be carefully weighed, said some of the committee members.
"Without any congressional statutes yet, how do we guarantee all the positives you presented?" Anna Eshoo, D-Calif., asked the witnesses.
"I believe in AI's potential, but we need to take a long hard look at how this will work," she said.
Ultimately, "we want to protect patient safety without stifling innovation," said Brett Guthrie, R-Ky., the chair of the Health Subcommittee.