Europe’s Regulatory Approach to Continuous Learning AI
Regulating continuous learning models means regulating a moving target. The EU is striving to stay ahead of the game with its forthcoming AI Act and harmonised standards. Eva von Mühlenen, Oliver Haase of Validate, and Leon Doorn of Aidance discuss.
Technology that uses continuous learning has the potential to transform the healthcare sector. Continuous learning models are, for example, revolutionizing precision medicine by continually adapting to new data, enabling more accurate and personalized patient treatments. However, there is a very real need for guardrails on the use of continuous learning to prevent it from creating unnecessary risk.
The regulation of such systems involves walking a fine line between overregulation, which can hinder innovation, and a laissez-faire policy that accepts the risks posed by this powerful new technology. In Europe, the EU’s draft AI Act attempts to walk this line by simultaneously safeguarding the fundamental rights of individuals in the EU and promoting technological innovation. The legislation will also affect North American and Asian life sciences companies, as it will apply to developers and deployers of AI systems anywhere in the world if the output produced by their systems is intended for use in the EU.
The new AI Act could even pave the way for a regulatory pathway specifically tailored to continuous learning AI systems. Currently, European life sciences regulations such as the Medical Device Regulation (MDR) and the In Vitro Diagnostic Medical Devices Regulation (IVDR) lack specific provisions for AI systems that continuously learn. Under these regulations, AI systems that operate in a fixed state are certified as software. AI systems that are retrained after being placed on the market must currently undergo a fresh review by a notified body whenever they are substantially changed.
However, the AI Act departs from this state of affairs and could reduce the need for multiple reviews, providing a streamlined regulatory approach to continuous learning AI systems. The current draft of the AI Act assumes that AI systems which continue to learn after being placed on the market should not be considered “substantially changed” for certification purposes if the changes to the algorithm and its performance were (i) predetermined by the manufacturer and (ii) pre-assessed at the time of the relevant conformity assessment. This is an interesting development which may pave the way to an EU certification framework for continuous learning systems.
Trilogue negotiations on the AI Act are currently ongoing. The European Parliament’s position in these negotiations is that the AI Act should also provide for AI-specific requirements to be implemented within vertical regulations where applicable. If adopted, this position would mean that the MDR and the IVDR could be amended to incorporate the AI-specific requirements introduced by the AI Act.
However, the real rulemaking for AI systems is likely to occur within the so-called “European harmonised standards”, which may incorporate global standards. The AI Act’s remit is restricted to essential requirements, such as the need to have a risk assessment. Meanwhile, technical details, such as the specifications for a risk management system, will to a large extent be left to these European harmonised standards. This means that the harmonised standards will play a key role in the implementation of the AI Act.
Proposals for the European harmonised standards are currently being drafted by standardisation organisations such as the International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) as well as the European Committee for Standardization (CEN) and the European Committee for Electrotechnical Standardization (CENELEC). The European Commission has requested that these be finalised by April 2025.
Until definitive standards and regulations are in place, some have called for an industry AI code of conduct to bridge the gap, including by providing clarity on how to deal with continuous learning systems. Transatlantic talks on such industry guidance are currently underway, and guidance has also emerged at a national level in Germany and the Netherlands. All of this guidance is welcome, as it may help prevent continuous learning systems from being subjected to overly stringent regulatory requirements simply because regulators are uncertain how to handle them.