The AI Revolution and the FDA: Becky Wood and Deeona Gaskin’s Fireside Chat with Former FDA Commissioner Dr. Scott Gottlieb
AI tools are already advancing patient care by opening new avenues for drug and product discovery, evaluation, and pharmacovigilance. But they also raise regulatory challenges, as our panelists discuss.
In this ‘fireside chat’, Sidley’s Becky Wood and Deeona Gaskin speak to physician and public policy expert Dr. Scott Gottlieb, former Commissioner of the U.S. Food and Drug Administration (FDA), about how AI is transforming the delivery of healthcare and how FDA is using and regulating AI. Four key topics emerged from the wide-ranging discussion:
- Areas where AI has the greatest potential: These include drug and biologic development, where AI could shorten the process of predicting which agents might be good drug candidates. AI also has the potential to increase the diversity of clinical trial populations by enabling better clinical trial representation and more targeted enrollment. AI tools are proving useful in pharmacovigilance as well, where they can help identify and process safety information by sifting and correlating vast amounts of data.
- The direction of investment: Venture capital is targeting life sciences companies that are purposefully building well-validated data sets for use in healthcare. The investment flowing into AI technologies aimed at drug discovery, for example, is substantial. Explainability is important for all AI tools used in healthcare: an AI tool may be able to provide a sound biological rationale for why it predicts a drug ought to work, along with a cogent explanation of how it arrived at that conclusion. This may support a strong public health narrative during the drug-development process.
- FDA’s use of AI: FDA reviewers and compliance assessors are beginning to use AI for data analysis, regulatory assessments, and postmarket surveillance. FDA and the life sciences industry could benefit from additional dialogue about where the tools FDA may use for regulatory purposes overlap with those industry wishes to deploy. Such discussions could result in AI tools that both satisfy FDA’s regulatory purposes and can be deployed by industry players to help them evaluate information and make decisions. Development of common platforms could be particularly beneficial in pharmacovigilance, where both companies and FDA need to cross-analyze large data sets, and in manufacturing and inspections, where the tools companies might use to analyze the information they generate about their manufacturing processes and quality control measures may also be of interest to FDA.
- The regulatory challenge of large language models: Recent advances in generative AI have sparked great interest among consumers, investors, life sciences companies, and FDA. To date, however, FDA has cleared, authorized, or approved around 1,000 AI medical devices, none of which uses generative AI. AI medical devices that are generative (i.e., have the ability to create outputs) are more challenging to validate and regulate, and this technology may be hard to fit into the existing medical device paradigm. Because of uncertainties around the regulatory path, many entrepreneurs are currently choosing to position their AI tools as the types of clinical decision support software, used with physician oversight, that do not need FDA approval, authorization, or clearance.
Click here to view a video of the full discussion.
This post is as of the posting date stated above. Sidley Austin LLP assumes no duty to update this post or post about any subsequent developments having a bearing on this post.