European Parliament’s Position on the AI Act Moots Significant Changes

The European Parliament’s provisional agreement on the EU’s AI Act brings generative AI within scope and extends the requirements applicable to high-risk AI systems to general purpose AI. As negotiations continue, Josefine Sommer, Eva von Mühlenen and Zina Chatzidimitriadou explain.

Last week, the European Parliament (the Parliament) reached a provisional agreement on its position on the draft Artificial Intelligence Regulation (the AI Act), the first major piece of legislation anywhere in the world to regulate artificial intelligence (AI) horizontally. This is a significant step in a legislative process that began when the draft AI Act was published in April 2021.

The AI Act is not expected to become law before the second half of 2025. Once adopted, it will provide the EU life sciences and med-tech industries with solid guidelines on how to navigate AI-related risks, creating an opportunity for European businesses to take the lead globally by developing AI systems within a defined regulatory framework.

The text adopted last week is the result of intense political discussions on several key issues, notably the novel questions raised by generative AI and general purpose AI. The proposal includes the following developments:

  • Definition of AI includes generative AI: The definition of an AI system is critical because it determines which systems fall within the scope of the AI Act. The Parliament’s definition aligns with that of the OECD, making the wording clearer and consistent with international standards. The revised wording also makes clear that generative AI, meaning AI capable of creating content, will fall within the scope of the AI Act.
  • New categories of general purpose AI and foundation models: The Parliament has retained the proposal of the Council of the European Union (the Council) to create a new category of AI systems, so-called general purpose AI (GPAI). The Parliament has proposed that certain requirements applicable to “high-risk AI systems” shall also apply to GPAIs, including risk management and sustainability requirements. The Parliament has also introduced an external audit requirement throughout the life cycle of a GPAI that is deemed to have the potential to cause “significant harm” to health, safety, or fundamental rights. It further proposes that if a GPAI generates human-like text without a natural person being legally responsible for that text, the same data governance and transparency obligations as those applicable to high-risk systems shall apply. Crucially, it proposes that “foundation models” (AI models trained on broad data that may be used downstream for either specific or general purposes) should be subject to extensive new ex ante obligations, with significant fines for breaches.
  • Scope of the AI Act limited to intended use in the EU: The Parliament has proposed that the AI Act should apply to systems whose output is intended by providers or deployers to be used in the EU. This narrows the Council’s proposal, which contained no intent requirement and instead covered any system used within the EU. As a result, non-EU-based life sciences companies could consider declaring the intended territorial scope of their AI systems in order to remain outside the AI Act’s scope.
  • High-risk AI systems and exemptions: The Parliament has published a revised list of high-risk systems, which may now include certain GPAIs. It has also maintained the Council’s definition of high-risk AI systems, meaning that many AI systems used in the life sciences sector will be classified as high-risk. It proposes that a standalone AI system not listed in the draft legislation’s Annex III shall also be classified as high-risk if it poses a significant risk to health, safety, or fundamental rights and, if used as a safety component of a critical infrastructure, to the environment. However, the Parliament has also proposed that providers who believe their AI system does not pose a significant risk of harm to people’s health, safety, or fundamental rights will be able to submit a request for exemption from the high-risk requirements.
  • Sandboxes: The AI Act introduces regulatory sandboxes, which allow companies to explore and experiment with innovation under a regulator’s supervision. According to the Parliament’s draft text, sandboxes will also serve to clarify compliance questions under the AI Act.

Negotiations on the text of the AI Act will continue, as trilogue negotiations between the Parliament, the Council, and the European Commission will now begin. However, with harmonized standards for AI risk management already emerging, life sciences and med-tech companies would be well advised to start thinking now about how to implement these emerging standards.

