How the EU’s new AI laws will impact your medical device
The EU is currently negotiating a new framework for regulating artificial intelligence (AI), having published its draft AI Act in April 2021, followed by the draft AI Liability Directive last month. The proposed legal framework is intended to promote “trustworthy AI” and will apply to providers, users, importers and distributors of AI systems.[1]
For many medical device manufacturers, these laws will introduce new requirements that must be met before bringing a product to market. In this post, we set out the key challenges for the medical device industry under the new AI laws and how you can prepare for them.
Do the new AI laws apply to my device?
The draft AI Act will cover ‘artificial intelligence systems’, which broadly means software developed using machine learning, logic- or knowledge-based, or statistical approaches that can generate outputs for a given set of human-defined objectives.[2]
In the medical device context, this may include systems for detecting diseases, providing patient-specific prognoses or determining medicine dosages. It does not matter whether the AI system is placed on the market as an incorporated component of a medical device (e.g. it is integrated into a CT machine) or as a medical device in and of itself (e.g. it is cloud-based technology for analysing CT scans). Any device which involves an element of AI will be covered by the new laws.
What new requirements will I have to meet?
The draft AI Act adopts a risk-based approach for determining the applicable requirements. For ‘high risk’ AI systems, the following obligations will apply:
- adequate risk assessment and mitigation systems;
- high-quality datasets feeding the system to minimise risks and discriminatory outcomes;
- logging of activity to ensure traceability of results;
- detailed documentation providing all information necessary on the system and its purpose for authorities to assess its compliance;
- clear and adequate information to the user;
- appropriate human oversight measures to minimise risk;
- a high level of robustness, security and accuracy.[3]
According to the AI Act, an AI system will be considered ‘high risk’ where a third-party conformity assessment is required under specified EU legislation.[4] The list of specified EU legislation is set out in Annex II of the Act and includes both the Medical Devices Regulation and the In Vitro Diagnostic Medical Devices Regulation.[5] Therefore, any medical device which is subject to a notified body conformity assessment (i.e. a Class IIa/IIb/III medical device or a Class B/C/D IVD device) will be deemed ‘high risk’ for the purposes of the AI Act.
By comparison, Class I medical devices and Class A IVD devices (which are not required to go through a notified body conformity assessment) will generally be deemed ‘low or minimal risk’. The obligations that apply in these cases are far less strict: limited transparency requirements for certain AI systems (chatbots, emotion recognition/biometric categorisation systems and deep fakes) and a general encouragement of voluntary codes of conduct.
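The classification rule described above can be summarised as a simple mapping from device class to AI Act risk tier. The sketch below is purely illustrative: the function name, the string labels and the hard-coded class sets are our own shorthand for the rule in the draft Act (notified body conformity assessment required → ‘high risk’), not an official or exhaustive implementation.

```python
# Illustrative sketch of the draft AI Act's risk rule for medical devices.
# Assumption: a device class requiring a notified body conformity assessment
# (MDR Class IIa/IIb/III, IVDR Class B/C/D) is deemed 'high risk'.

# Device classes that require a notified body conformity assessment
NOTIFIED_BODY_CLASSES = {
    "mdr": {"IIa", "IIb", "III"},   # Class I is generally self-certified
    "ivdr": {"B", "C", "D"},        # Class A is generally self-certified
}

def ai_act_risk_tier(regulation: str, device_class: str) -> str:
    """Return the indicative AI Act risk tier for an AI-based device.

    regulation: "mdr" (medical devices) or "ivdr" (in vitro diagnostics)
    device_class: e.g. "I", "IIa", "IIb", "III" or "A", "B", "C", "D"
    """
    if regulation not in NOTIFIED_BODY_CLASSES:
        raise ValueError(f"unknown regulation: {regulation}")
    if device_class in NOTIFIED_BODY_CLASSES[regulation]:
        return "high risk"
    return "low or minimal risk"

# Example: a Class IIa diagnostic-support tool falls in the 'high risk' tier
print(ai_act_risk_tier("mdr", "IIa"))   # high risk
print(ai_act_risk_tier("ivdr", "A"))    # low or minimal risk
```

In practice the classification analysis is more nuanced (borderline products, software qualification rules, accessories), which is exactly why knowing your device classification is the necessary first step.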
To determine which rules will apply to you under the AI Act, it is therefore crucial that you first know your classification under the medical device regulations. Our Regtik platform can help you figure this out, providing a guided assessment that takes your device through the regulatory maze, followed by a fully reasoned report outlining key information (including device classification and pathways to market). If you are interested in learning more about Regtik or would like to request a demo, please contact any member of our team or register your interest below.
[1] European Commission Press Release, Europe fit for the Digital Age: Artificial Intelligence
[2] EU Draft Artificial Intelligence Act, Article 3(1) and Annex I
[3] European Commission, Regulatory framework proposal on artificial intelligence
[4] EU Draft Artificial Intelligence Act, Article 6
[5] Regulation (EU) 2017/745 and Regulation (EU) 2017/746 respectively