Researchers from The Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics
at Harvard Law School and INSEAD (Singapore and Fontainebleau) have urged significant updates to existing safety and ethics measures to prevent a regulatory lockdown in healthcare as AI-driven adaptive algorithms become increasingly influential.
In an article published in Science, the researchers ask whether, once a machine-learning-driven medical solution has been shown to be low-risk and effective, regulatory bodies such as the US Food and Drug Administration (FDA) should authorize only the submitted version for release onto the market, or permit one that can continue to adapt over time.
They asserted that their goal is to highlight the challenges that could arise from “unanticipated changes in how medical AI/ML systems react or adapt to their environments,” a problem unique to adaptive algorithms that does not arise in the review processes for drugs or traditional medical devices.
They believe regulators should focus more on “continuous monitoring and risk assessment and less on planning for future algorithm changes” to avoid such mishaps.