European Regulator Clarifies Guidance on the Use of AI in the Medicinal Product Lifecycle

The European Medicines Agency (EMA) has published a final reflection paper on the use of AI in the drug lifecycle, which provides important insights into the EMA's expectations of clinical trial sponsors, as well as marketing authorization (MA) applicants and holders, who use AI systems. Josefine Sommer and Zina Chatzidimitriadou explain.

AI and machine learning (ML) are transforming almost all aspects of the drug lifecycle, from early drug discovery and R&D to manufacturing and post-marketing obligations. In Europe, as elsewhere, the regulatory landscape for AI is developing fast, including via the new EU AI Act, which entered into force on August 1, 2024. However, the EU AI Act is sector-agnostic and regulates only a small subset of the AI/ML systems used in the life sciences industry.

The EMA addresses this regulatory gap in its Reflection paper on the use of Artificial Intelligence (AI) in the drug lifecycle (the AI Reflection Paper or the Paper), which was published in September. The Paper sets out principles for the integration and use of AI/ML technologies throughout the drug lifecycle, together with guidance on how sponsors and MA applicants and holders can ensure – and document – their safe and effective use.

The AI Reflection Paper is intended to align with the EU AI Act, and to be a building block in the EMA and the Heads of Medicines Agencies (HMA) AI Workplan for 2023‒2028 (AI Workplan), which aims to facilitate the development and use of responsible and beneficial AI by industry and regulators. The AI Reflection Paper builds on the same key objectives as the EU AI Act and the AI Workplan, both of which set out strategies for maximizing the benefits of AI to stakeholders while managing the related risks in pragmatic, principles-based ways.

The AI Reflection Paper has already attracted a great deal of industry interest, with the EMA receiving over 1,300 responses to the draft which it put out for consultation last year. The final version adapts wording to better align with the EU AI Act, takes into account the specificities and risks of the application of AI in the drug lifecycle, including the way in which it impacts GxP, and provides specific considerations for the use of AI and ML in innovative areas such as precision medicine. It sets out the following main principles:

  • Risk-based approach: Like the EU AI Act, the AI Reflection Paper underlines the importance of an ethical approach to the use of AI in the drug lifecycle. To distinguish its terminology from that of the EU AI Act, it introduces terms such as “high patient risk” for systems affecting patient safety and “high regulatory impact” for cases where the impact on regulatory decision-making is substantial. It recommends a risk-based approach to the development, deployment, and performance monitoring of AI/ML tools, one which encourages developers to proactively define the risks to be managed throughout the lifecycle of an AI system.
  • Transparency and explainability: The concept of the “explainability” of an AI model is generally hailed as the “gold standard” for ensuring fairness, accountability, and the prevention of bias. However, the AI Reflection Paper accepts that some types of modeling architecture, and the use of “black box” models, mean it will not always be possible to fully explain how an AI system works. In such instances, users of AI systems may instead be expected to show “interpretability,” i.e., to substantiate that there is human oversight of, and transparency over, questions that arise when a model does not perform as expected or is not sufficiently robust. Such aspects should be discussed in detail within the EMA’s qualification of innovative development methods or scientific advice procedures.
  • Data integrity: The core of AI accuracy and trust lies in the principles of data governance and data integrity. The AI Reflection Paper stresses that the data used in model development, training, and deployment are of paramount significance for ensuring the integrity of the generated results, and that such data must be free from bias. For example, if a third-party AI system is to be used within the drug lifecycle with high regulatory impact or high patient risk, it is expected that the software developer has provided necessary details through a methodology qualification process covering the specific context of use. Organizations seeking to implement these principles may, for example, wish to ensure that SOPs implementing GxP principles on data and algorithm governance are appropriately extended to include all data and models used for high regulatory impact or high patient risk AI.
  • Regulatory interaction: Industry is expected to perform regulatory impact and risk analysis of all AI/ML applications, and is recommended to seek regulatory interactions when no clearly applicable written guidance is available. The Paper encourages industry to engage early with regulators if it wishes to use AI systems in the drug lifecycle that may have a high impact on regulatory decision-making.

Overall, the AI Reflection Paper encourages continuous improvement and adaptation of AI/ML in the drug space, including in the development of more sophisticated AI algorithms with higher accuracy, the improvement of the quality of data used to train AI systems, and the development of a better ability to address bias in order to ensure fair outcomes.

Going forward, we expect to see new EMA and HMA scientific guidelines and related support on AI/ML materializing soon, including more application-specific guidance, such as on the use of AI in pharmacovigilance, and general guidance to support implementation of the EU AI Act for uses that fall within the scope of the Act but may also impact regulatory decision-making.

The EMA and the HMA are jointly hosting an online workshop on the safe and responsible use of AI on November 5, 2024.

Sidley partners Rebecca Wood and Deeona Gaskin and former Commissioner of the FDA Dr. Scott Gottlieb recently discussed the unique challenges and opportunities presented by the use of AI in the life sciences industry; the outlook on novel regulatory approaches and considerations faced by regulators; and future trends in FDA’s regulation of AI, in particular generative AI. The webinar, “The AI Revolution at the FDA: Fireside Chat With Former FDA Commissioner Dr. Scott Gottlieb,” is available for viewing here.

This post is as of the posting date stated above. Sidley Austin LLP assumes no duty to update this post or post about any subsequent developments having a bearing on this post.