Harvard Law School researchers call for more regulation of medical AI

Existing safeguards from the FDA and other regulatory bodies will require significant updates to maintain safety and ethics as healthcare adopts adaptive artificial intelligence algorithms that continuously “learn,” according to an article published Dec. 6 in Science.

In the article, researchers from Cambridge, Mass.-based Harvard Law School’s Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics and Fontainebleau, France-based INSEAD described the dangers of leaving medical machine learning algorithms unregulated after their initial approval.

“Our goal is to emphasize the risks that can arise from unanticipated changes in how medical AI/ML systems react or adapt to their environments,” they wrote. “Subtle, often unrecognized parametric updates or new types of data can cause large and costly mistakes.”

To avoid those mistakes, the authors argue, medical AI should undergo regular, thorough review as its algorithms self-update, rather than relying on regulators to predict every adaptation during the initial assessment.

“To manage the risks, regulators should focus particularly on continuous monitoring and risk assessment, and less on planning for future algorithm changes,” the authors wrote.
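
To make the authors’ point concrete, the sketch below shows one way such continuous monitoring could look in code. It is a hypothetical illustration, not anything described in the Science article: the monitor_window function, the AUROC-based drift check, and the DRIFT_THRESHOLD value are all illustrative assumptions.

```python
from sklearn.metrics import roc_auc_score

# Illustrative threshold -- in practice this would be set with regulators.
DRIFT_THRESHOLD = 0.05  # maximum tolerated AUROC drop vs. the approved baseline

def monitor_window(y_true, y_score, baseline_auroc):
    """Compare a recent window of model predictions against the performance
    level approved at initial review, and flag drift for risk assessment."""
    current_auroc = roc_auc_score(y_true, y_score)
    drift = baseline_auroc - current_auroc
    if drift > DRIFT_THRESHOLD:
        # A real deployment would open a formal risk review here, not just print.
        print(f"ALERT: AUROC fell from {baseline_auroc:.3f} to {current_auroc:.3f}")
        return False
    return True

# Example: a window of recent cases where a silent model update has
# degraded performance, so the check trips the alert.
labels = [1, 0, 1, 1, 0, 0]               # ground-truth outcomes
scores = [0.6, 0.7, 0.4, 0.5, 0.8, 0.3]   # model's predicted probabilities
monitor_window(labels, scores, baseline_auroc=0.90)
```

The point of anchoring the check to the performance demonstrated at approval, rather than to the model’s own recent history, is that a continuously “learning” system cannot quietly drift away from its validated baseline without triggering a review.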
