Key Steps Toward Using Artificial Intelligence in Pharmacovigilance – Sidley Insights on the Recent CIOMS Draft Report
The Council for International Organizations of Medical Sciences (CIOMS) Working Group XIV has produced a draft report (the Draft Report) on the use of artificial intelligence (AI) in pharmacovigilance (PV). Torrey Cope, Anna Melin, and Andrew James provide insights on how life sciences companies can take steps to implement key concepts from the Draft Report.
While AI continues to be a topic of interest for industry and regulators alike, there are many open questions related to how it can be applied in a reliable and appropriate manner. This is especially true in the U.S., where no overarching AI legislation currently exists. There, the Draft Report can be helpful to lawmakers, regulators, and other stakeholders as they develop approaches to using AI in PV. In regions like the EU, where the AI Act provides a comprehensive legal framework, the Draft Report can still be useful because it helps translate requirements into actionable steps for PV.
To that end, here are several key steps life sciences companies should consider as they seek to deploy AI for use in PV activities and how the Draft Report can help inform that process:
- Translate Risk Management Principles Into PV Practice. The Draft Report contextualizes risk management considerations within the unique workflows, data categories, and risk profiles of PV. Companies should look to the use cases presented in the Draft Report as guideposts for interpreting risk in light of regulatory principles and should leverage its guidance on PV-specific risk assessments.
- Operationalize Human Oversight. The Draft Report expands on conceptualizations of human oversight by defining practical oversight models and mapping them to specific PV tasks. Companies should review and consider its concrete examples of how to implement, monitor, and adapt these oversight models over time, ensuring that human agency and accountability are maintained in line with both regulatory and ethical expectations.
- Ensure Validity, Robustness, and Continuous Monitoring. The Draft Report details how to establish reference standards, validate AI models both qualitatively and quantitatively using real-world PV data, and set up continuous performance monitoring to detect model drift or emerging risks. It also addresses the challenges of data quality and representativeness in PV and provides strategies that companies can leverage to mitigate biases that could be introduced into AI systems.
- Build in Transparency and Explainability. The Draft Report provides a PV-specific roadmap for achieving transparency objectives: it outlines what information should be disclosed to stakeholders, how to document performance evaluations, and how to implement explainable AI techniques to support regulatory audits, user trust, and error investigation. The Draft Report also highlights the importance of communicating the provenance of data and the role of AI in generating or processing safety information. As they move through the development cycle, companies should consider what information they may need to disclose, how to track it, and how they may approach explaining the relevant AI system.
- Address Data Privacy and Cross-Border Compliance Issues. The Draft Report reinforces the need for strict data privacy controls in PV. It discusses the heightened risks posed by Generative AI and large language models, including potential re-identification and linkage of previously anonymized data, and it provides practical recommendations for de-identification, data minimization, and secure data handling that life sciences companies should seek to implement.
- Promote Nondiscrimination. The Draft Report operationalizes the goal of nondiscrimination by advising on the selection and evaluation of training and test datasets to ensure representativeness and on the implementation of mitigation strategies for identified biases. Companies should be mindful of identifying and accounting for any biases that may be present in data used to train their AI systems.
- Establish Governance and Accountability Structures. The Draft Report recommends the establishment of cross-functional governance bodies, assignment of roles and responsibilities throughout the AI life cycle, and regular review of compliance with guiding principles. It provides tools, such as a governance framework grid, to help organizations document actions, manage change, and ensure traceability, facilitating both internal oversight and external regulatory inspection.
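To make the continuous-monitoring concept above more concrete, the following is a minimal, hypothetical sketch of how a company might flag potential model drift by comparing an AI system's performance on a recent batch of human-reviewed cases against its validated baseline. It is illustrative only, not a method prescribed by the Draft Report; all names, metrics, and thresholds are assumptions.

```python
# Hypothetical drift check: alert when precision or recall on a recent
# review window falls more than a tolerance below the validated baseline.

def precision_recall(predictions, labels):
    """Compute precision and recall for binary case classifications."""
    tp = sum(1 for p, y in zip(predictions, labels) if p and y)
    fp = sum(1 for p, y in zip(predictions, labels) if p and not y)
    fn = sum(1 for p, y in zip(predictions, labels) if not p and y)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

def drift_alert(window_preds, window_labels, baseline, tolerance=0.05):
    """Return True if performance on the latest window has degraded
    more than `tolerance` below the baseline metrics."""
    precision, recall = precision_recall(window_preds, window_labels)
    return (precision < baseline["precision"] - tolerance
            or recall < baseline["recall"] - tolerance)

# Hypothetical usage: baseline from validation, then a weekly sample
# of AI predictions checked against human reviewer determinations.
baseline = {"precision": 0.95, "recall": 0.90}
preds = [1, 1, 0, 1, 0, 0, 1, 0]   # model output on sampled cases
labels = [1, 0, 0, 1, 0, 1, 1, 0]  # human reviewer ground truth
print(drift_alert(preds, labels, baseline))
```

In practice, an alert like this would trigger the kind of human review and model reassessment the Draft Report contemplates, rather than any automated corrective action.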
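The de-identification and data-minimization recommendations can likewise be illustrated with a simplified sketch. The patterns below mask a few obvious direct identifiers in a free-text adverse event narrative; they are hypothetical examples only, and real de-identification would require validated tooling far beyond simple pattern matching.

```python
# Hypothetical de-identification pass over a case narrative: replace
# matched direct identifiers with typed placeholders before the text
# is processed by an AI system. Patterns are deliberately simplified.
import re

PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "DOB":   re.compile(r"\b\d{4}-\d{2}-\d{2}\b"),
}

def deidentify(narrative):
    """Replace matched identifiers with placeholders, e.g. [EMAIL]."""
    for label, pattern in PATTERNS.items():
        narrative = pattern.sub(f"[{label}]", narrative)
    return narrative

text = "Patient (DOB 1958-03-04, jdoe@example.com) reported dizziness."
print(deidentify(text))
# → Patient (DOB [DOB], [EMAIL]) reported dizziness.
```

A production system would also need to address indirect identifiers and the re-identification risks the Draft Report associates with generative AI, which simple masking does not solve.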
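Finally, the representativeness and bias points can be sketched as a basic dataset check: compare the composition of a training dataset against a reference population and flag under-represented groups. Again, the groups, shares, and tolerance below are assumptions for illustration, not figures from the Draft Report.

```python
# Hypothetical representativeness check: flag groups whose share of the
# training data falls more than `tolerance` below their share in a
# reference population.
from collections import Counter

def underrepresented_groups(training_values, reference_shares, tolerance=0.10):
    """Return the sorted list of under-represented groups."""
    counts = Counter(training_values)
    total = sum(counts.values())
    flagged = []
    for group, expected in reference_shares.items():
        observed = counts.get(group, 0) / total
        if observed < expected - tolerance:
            flagged.append(group)
    return sorted(flagged)

# Hypothetical usage: age bands in case reports used for training,
# compared against the expected treated-population mix.
training = ["18-40"] * 70 + ["41-64"] * 25 + ["65+"] * 5
reference = {"18-40": 0.40, "41-64": 0.35, "65+": 0.25}
print(underrepresented_groups(training, reference))  # → ['65+']
```

A flag like this would feed into the mitigation strategies the Draft Report describes, such as rebalancing training data or documenting known limitations of the model.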
Increasingly, companies need to ensure AI literacy within their organizations. Some regulations, such as the EU AI Act, even require it. Accordingly, as AI becomes an expanding part of day-to-day operations, companies should ensure they have appropriate resources, whether internal or external, commensurate with their adoption of AI. While the comment period for the Draft Report has closed, companies, particularly those looking to be early adopters of AI solutions, will still find its extensive analysis of AI topics through a PV-focused lens helpful in navigating the evolving landscape and in distilling high-level concepts into practical approaches.
Additional Sidley Resources
- The Trump Administration’s 2025 AI Action Plan – Winning the Race: America’s AI Action Plan – and Related Executive Orders
- EU Commission Publishes AI Continent Action Plan and Seeks Input
- Meeting EU Data, Cybersecurity, and Artificial Intelligence Law Obligations: A Checklist for Swiss Life Sciences Companies
- Pharmacovigilance Is at a Crossroads
This post is as of the posting date stated above. Sidley Austin LLP assumes no duty to update this post or post about any subsequent developments having a bearing on this post.