The new EU AI regulation proposal, medical devices and IVDs

Now this is fun: at a time just before the date of application of the MDR, when we do not even have harmonised standards for the new software requirements in Annex I, section 17 MDR and Annex I, section 16 IVDR, the Commission proposes new mandatory regulation to supplement the MDR and the IVDR that largely overlaps with the MDR and IVDR.

Ladies and gentlemen, I give you the proposal for a regulation laying down harmonised rules on artificial intelligence (the Artificial Intelligence Act for short, or the AIA for even shorter).

This post is a first quick impression of the proposal, especially with respect to its effects in the medical devices and IVD space. It’s fairly long and winding, because I did not have time to write something more concise.

Background

Nobody wants to be left behind when it comes to AI and its regulation, and everybody still wants their jurisdiction to be attractive to innovation. The AIA is part of the EU’s artificial intelligence strategy that covers many aspects of AI, such as legal personality for artificial beings, liability, copyright and ethics for AI deployment and functioning.

The AIA is intended to underpin the risk and benefit aspects of AI when deployed in the world, so as to ensure that AI remains trustworthy and in service of humans (‘human-centric’, as the proposal calls it), and operates within the boundaries of the law. In healthcare this would look like the proverbial holodoc AI from Star Trek Voyager (a show that has, apart from the worst character ever developed in the Star Trek universe (Neelix), also epic characters such as Captain Janeway).

We knew that this proposal was coming, because the Commission had announced in its White Paper on AI that EU product legislation would be impacted, and mentioned medical devices regulation specifically in that context. The proposal itself summarises its approach as follows:

“The proposal sets harmonised rules for the development, placement on the market and use of AI systems in the Union following a proportionate risk-based approach. It proposes a single future-proof definition of AI. Certain particularly harmful AI practices are prohibited as contravening Union values, while specific restrictions and safeguards are proposed in relation to certain uses of remote biometric identification systems for the purpose of law enforcement. The proposal lays down a solid risk methodology to define “high-risk” AI systems that pose significant risks to the health and safety or fundamental rights of persons. Those AI systems will have to comply with a set of horizontal mandatory requirements for trustworthy AI and follow conformity assessment procedures before those systems can be placed on the Union market. Predictable, proportionate and clear obligations are also placed on providers and users of those systems to ensure safety and respect of existing legislation protecting fundamental rights throughout the whole AI systems’ lifecycle. For some specific AI systems, only minimum transparency obligations are proposed, in particular when chatbots or ‘deep fakes’ are used.” (P. 3 proposal)

Keywords of the proposal are ‘trust’, ‘safety’ and ‘human-centric’: 

“If AI is to be a tool for genuine public good, then the public must understand it. As well as promoting transparency and explainability, national governments and international bodies like the EU should be investing in skills. Not so we know the minutiae of how every AI application works but so we can trust, based on evidence, that its impact is positive.”

From EURACTIV: https://www.euractiv.com/section/digital/opinion/ai-rules-must-help-increase-public-trust/

As we have seen with other tools for genuine public good like vaccines, this will be a very hard sell to the public. The public is showing time and again that it is not very good at understanding even well-understood technology for good, often assisted in utter misguidedness by the very companies that stand to be regulated by this proposal (yes, you, Facebook, for example).

Also, as long as we have large data breaches in the news every day and companies hoarding data are evidently not doing this for the public good, this will be an even harder sell. 

All the sci-fi movies about evil AI that decides that its first post-singularity to-do item is to rid the world of humanity have probably not helped very much either.

Finally, member states are generally totally lacking in teaching the skills humans need to understand the IT services that they consume (positive exception: the Finns, who left the EU the Elements of AI course as a gift after the last Finnish presidency – that was a cool course to do and I enjoyed it, thanks very much!).

So, before I start picking this proposal apart to discuss how it could be improved, let me say that it is very difficult to put together a piece of horizontal legislation that is supposed to regulate very complex technology that few people really understand, and then regulate it in a way that its use produces net benefits for society. This regulation could become a project much like the GDPR (as you will read below, it has a lot of overlaps with it), which was supposed to curb companies (like Facebook) that were not planning to do anything good for society at all, but rather to use personal data as raw material for their own purposes. The difference is that the GDPR is very much principle-based regulation (requiring a lot of clarification in guidance and notoriously imprecise), while the AIA is essentially technical goods legislation, relying more on standards and conformity assessment (and probably still requiring a lot of guidance).

CE squared – two integrated overlapping conformity assessments

The AIA sets up a system of conformity assessment for artificial intelligence systems which, given the definition of AI system, will almost always double as medical devices under the MDR if deployed for a medical intended purpose. The conformity assessment will also involve notified bodies, as under the MDR.

The ‘provider’ of an artificial intelligence system will also have post-market monitoring obligations: to proactively collect and review experience gained from the use of AI systems they place on the market or put into service, in order to identify any need to immediately apply any necessary corrective or preventive actions – very much like PMS under the MDR.

The AIA prohibits certain AI practices that mainly have to do with transparency, but these do not seem to interfere with deployment in healthcare.

All software that qualifies as a medical device under the MDR, and any medical device running software with an AI component, will be classified as a high-risk AI system under the AIA because it is

“the product whose safety component is the AI system, or the AI system itself as a product”

covered by the MDR or the IVDR (article 6 (1) AIA). This definition seems to have been chosen with a concept of direct actuation in the human world in mind (bad decision -> human interacting with product dead), but it ignores the indirect and more insidious effects of harm that we know from, for example, IVDs, which do not interact directly with a human. The definition of ‘safety component’ (“a component of a product or of a system which fulfils a safety function for that product or system or the failure or malfunctioning of which endangers the health and safety of persons or property”) does not seem to have been written with diagnostics firmly in mind, although you could say that a failure of an AI system interpreting IVD instrument data could endanger the health of persons by generating false positives or false negatives, which would mean that this is covered. But what about AI systems deployed in drug discovery, leading to ‘discovery’ of a medicinal compound with far more side effects than necessary? That is an entirely different degree of causality.

Anyway, medical devices / IVDs in scope of the MDR and IVDR will in basically all cases constitute high-risk AI systems within the meaning of the AIA. This means that they will be subject to, among other things, the following requirements:

  • Risk management system (similar to MDR and IVDR) (article 9)
  • Data governance and data management practices (similar to MDR/IVDR and GDPR) (article 10)
  • Technical documentation (similar to MDR/IVDR) (article 11)
  • Logging capabilities (similar to GDPR) (article 12) – see the sketch after this list
  • Transparency and information to users (similar to GDPR) (article 13)
  • Human oversight requirements (similar to MDR/IVDR) (article 14)
  • Accuracy, robustness and cybersecurity (similar to MDR/IVDR and GDPR) (article 15)
  • Obligations very much like article 10 MDR/IVDR (device manufacturer obligations plus QMS) (articles 16 and 17)
  • Economic operator requirements (similar to MDR/IVDR) (articles 25 to 28)
  • MDR and IVDR PMS systems must integrate AIA PMS elements (Article 61 (4))
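
To make at least one of these requirements a bit more tangible: below is a minimal sketch (in Python, with hypothetical names – the AIA does not prescribe any particular implementation) of what article 12-style automatic logging around an AI inference call could look like, recording a traceable event for every use of the system.

```python
import json
import logging
from datetime import datetime, timezone

# Hypothetical example: wrap an AI inference call so that every use of the
# system leaves a traceable record (model version, input reference, output,
# timestamp), in the spirit of the article 12 AIA logging capabilities.
logger = logging.getLogger("ai_system_audit")
logging.basicConfig(filename="ai_system_audit.log", level=logging.INFO)

def logged_inference(model, model_version: str, sample_id: str, features):
    """Run an inference and record an audit log entry for it."""
    prediction = model.predict([features])[0]  # scikit-learn style model, assumed for illustration
    logger.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "sample_id": sample_id,  # a reference, not the raw personal data itself
        "prediction": str(prediction),
    }))
    return prediction
```

Whether such logs count as technical documentation, PMS input or personal data processing (or all three) is exactly the kind of overlap question discussed below.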

The proposal mentions in the discussion of stakeholder input that

“several stakeholders warn the Commission to avoid duplication, conflicting obligations and overregulation”.

I think these stakeholders are right. While the AIA says in recital 85 that it is supposed to amend the MDR and the IVDR, it is quite hard to see where the actual amendments are in the avalanche of overlap indicated in the list above. Some measures are taken to avoid the worst of the overlap, such as the option to provide a single set of technical documentation for AI systems that are also devices within the meaning of the MDR and IVDR (article 11 (2)). Otherwise this remains quite the puzzle.

Article 24 AIA contains a strange overlap provision:

“Where a high-risk AI system related to products to which the [MDR or IVDR ] apply, is placed on the market or put into service together with the product manufactured in accordance with those legal acts and under the name of the product manufacturer, the manufacturer of the product shall take the responsibility of the compliance of the AI system with this Regulation and, as far as the AI system is concerned, have the same obligations imposed by the present Regulation on the provider.”

This seems to be a kind of system / kit provision intended to manage exactly the cases caught under … systems and kit provisions already provided for under the MDR and IVDR. So more overlap, not necessarily consistent.

Article 43 (3) AIA manages the overlap in conformity assessment, providing that AI systems that are devices or are part of a device can be assessed under the MDR or IVDR conformity assessment procedure, with some AIA extras. This begs the question of how that will work with notified bodies. Do they need accreditation under the AIA as well to do a full MDR or IVDR AI system / device assessment? Under what MDR / IVDR code would that notified body competence be covered? Would it be possible to split the device / AI system by having the AI part evaluated by an AIA notified body and the device part by an MDR / IVDR notified body? Or what if (theoretically) an MDD class I software device also needs to obtain an AIA certification before May 2024 – would that be a significant change within the meaning of article 120 (3) MDR? Interesting puzzle to figure out.

The AIA is not very clear about the result of conformity assessment under overlapping assessment and how this will be reflected in a final declaration of conformity. The result under the AIA would be an EU technical documentation certificate (article 44 AIA) which seems to be complementary to an MDR / IVDR certificate and, according to the MDR / IVDR, might be accounted for in a single declaration of conformity for the AI system under both regulations (AIA and your choice of MDR or IVDR) – see article 48 (3) AIA.

Article 63 (3) AIA provides that the MDR and IVDR competent authorities shall be the market surveillance authorities for the AIA. This made me raise an eyebrow or two, as this solution is too ‘practical’ to be realistic. With the structural understaffing of Member States’ competent authorities for medical devices and IVDs as it is, it will be very interesting to see where they will get the expertise in AI needed for proper market surveillance and enforcement. This would require Member States to take medical devices more seriously as a policy area and invest in competent oversight with sufficient capacity, which will of course not happen if the MDR and IVDR are any measure for this. It seems that the ‘competent’ in competent authority will be a tenuous claim for AI in the EU, thus at least not ticking that box for the EU’s AI strategy. Good legislation is one thing, but actually competent authorities are another if you want to achieve the goals of legislation like the AIA.

Seriously, lobby and trust

There is also the category of ‘seriously?’, which contains provisions like the user obligations under article 29, which entail among other things that users of high-risk AI systems shall use such systems in accordance with the instructions of use accompanying the systems (article 29 (1) AIA). This provision is rather alien in the universe of CE marking legislation, where the user does not have direct obligations because the scope of deployment of the AI system would be limited by the scope of the CE marked intended use anyhow (as is the case under the MDR and IVDR). A separate obligation on the user to only use the system in accordance with the IFU creates an entirely extra layer of regulation for AI systems that are also a medical device.

The first traces of lobbying are also visible in the proposal, for example where it says that

“Users of high-risk AI systems shall keep the logs automatically generated by that high-risk AI system, to the extent such logs are under their control.” (article 29 (5) AIA)

If there is anything that would inspire trust, it would be that the generated logs are always under the control of the user, especially since that provision does not mention who must keep the logs if they are not under the user’s control, which is a major regulatory oversight in my opinion.

Some trust enhancing measures for medical AI systems can be found in article 7 MDR and IVDR, which pose limitations on the (marketing and advertising) claims that can be made for devices, and consequently for AI systems that are also devices.

And then there are the GDPR overlaps / dovetails as well. See below for more discussion of the resulting three-dimensional problems that this produces.

Better legal recourse against notified body decisions re AI system certification, but not if they’re medical devices

Genuinely new in the AIA is a first step towards better legal recourse against notified body decisions, which is good news for me as a lawyer. This is different from the MDR / IVDR. Article 45 AIA provides that

“Member States shall ensure that an appeal procedure against decisions of the notified bodies is available to parties having a legitimate interest in that decision.”

This kind of provision is sorely missing in the MDR and IVDR, and its absence was one of my criticisms of the MDR and IVDR, as these provide for legal recourse against notified body decisions only via the certification agreement and a requirement for the notified body to have an internal appeal process. Some member states provide for extra legal recourse pathways because they treat notified bodies as emanations of the state or similar entities. The provision in the AIA does not limit recourse to the parties to the certification agreement alone, which is also an interesting development. Just like under proper administrative law licensing procedures, interested third parties should be able to appeal a certification decision under the AIA.

Does this now mean that a decision concerning an MDR / IVDR certificate covering an AI system can be challenged under this provision? No, because it would be an MDR / IVDR certificate, which would seem to mean that an AI system provider is worse off for legal protection under the MDR / IVDR than under the AIA because the AI system is also a medical device. On the other hand, the AI system provider has nothing to worry about in terms of interested third parties appealing the certification decision, which could be a problem under the AIA.

Transparency

Article 52 provides for an ‘anti-holodoc’ transparency obligation:

“Providers shall ensure that AI systems intended to interact with natural persons are designed and developed in such a way that natural persons are informed that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use.“

AI systems posing as a doctor, for example, will need to sit clearly in the uncanny valley in order to be compliant, or must be very clear about their status. Fortunately the holodoc was never short of drama about what it was like to be an AI-powered hologram confined to the sick bay, but this will now almost need to be a feature.
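
Operationally the obligation itself is simple, but it has to be designed into the system. Here is a minimal sketch of what such a disclosure could look like in a hypothetical chatbot front-end (nothing here is prescribed by the AIA; the names and messages are made up for illustration):

```python
# Hypothetical article 52-style disclosure for a conversational medical AI system.
AI_DISCLOSURE = (
    "You are interacting with an AI system, not a human healthcare professional."
)

def start_session(context_makes_ai_obvious: bool) -> list[str]:
    """Open a conversation, disclosing the AI nature unless obvious from the context of use."""
    messages = []
    if not context_makes_ai_obvious:
        messages.append(AI_DISCLOSURE)
    messages.append("How can I help you today?")
    return messages
```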

An AIA-MDR/IVDR-GDPR Rubik’s cube

The proposal contains many implicit and explicit links with the GDPR, such as the provision that users of high-risk AI systems shall use the information provided under article 13 AIA to comply with their obligation to carry out a data protection impact assessment under the GDPR (article 29 (6) AIA). This provision seems to assume that a DPIA is always required in this context, which is not necessarily true. Another interesting link with the GDPR is the provision of a legal basis for bias monitoring (article 10 (5) AIA) for the functioning of the AIA. I would assume that this legal basis extends to the context of clinical data for devices, which often does not account for the lack of clinical data on women and children and is often biased towards adult men – a bias the AI system would inherit if trained on such data.
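
As a hedged illustration of what such bias monitoring might look like in practice – the AIA does not prescribe a method, and the column names and metric choice below are assumptions for the example – one could compute a performance metric per demographic subgroup and watch for gaps:

```python
import pandas as pd
from sklearn.metrics import recall_score

def sensitivity_by_subgroup(results: pd.DataFrame, group_col: str) -> pd.Series:
    """Compute sensitivity (recall) per demographic subgroup, e.g. sex or age band.

    `results` is assumed to have columns 'y_true' and 'y_pred' plus the subgroup
    column; a large gap between subgroups signals that the training data
    (or the model) is biased towards one group.
    """
    return results.groupby(group_col).apply(
        lambda g: recall_score(g["y_true"], g["y_pred"])
    )

# Hypothetical usage:
# results = pd.DataFrame({"y_true": [...], "y_pred": [...], "sex": [...]})
# print(sensitivity_by_subgroup(results, "sex"))
```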

Implicit links exist where we are talking about privacy by design and security by design under the GDPR, and risk management under the MDR / IVDR. The AIA adds a third dimension, which will make designing and deploying AI systems even more complex, as you will be dealing with three regulatory dimensions. I have described in a lot of detail how the MDR / IVDR risk management requirements and the GDPR design requirements interact (see for example here, specifically slide 34).

The result would now be that you have overlapping technical documentation for the AIA and MDR/IVDR requirements, plus a GDPR DPIA and system design documentation for the purposes of the GDPR, all of which need to be kept consistent with each other over time. Speaking of three dimensions, it will be like continuously working a regulatory Rubik’s cube between parts of the company that typically do not engage very much with each other.

This is just a very short summary specifically for medical devices and IVDs and there is a lot more to say about the AIA links with the GDPR than this.

Is this proposal one in the category of putting the NO in innovation?

As has happened with the MDR and IVDR, the software industry will be slow to catch on to this, wait too long to start preparations and end up in a last-minute scramble for compliance. Dear industry, now you know – time to start preparing. This regulation will happen, one way or the other, because politicians have decided it is needed, and it will look a lot like this proposal. You only have yourself to blame if you wait too long.

A big question is who will be the notified bodies that can certify AI systems that are also medical devices or IVDs. This will be done by MDR / IVDR notified bodies that have been assessed for an AI systems competence top-up (article 43 (3) juncto article 33 (4), (9) and (10) AIA). Given that it took the average notified body about two years to go through notification for the MDR and IVDR, and that the major bottleneck was approval of the NB CAPA plan after the joint audit, this will be interesting. Of course the MDR and IVDR notified bodies will have their QMS up and running, so hopefully this application could be handled swiftly. On the other hand, we do have enormous bottlenecks in MDR and IVDR notified body capacity, and that did nothing to move things along any faster. I cannot begin to emphasize how bad a situation we are in with notified body capacity for the IVDR, and now these few notified bodies have just gotten extra tasks for AI systems in the IVD space. Like I’ve said before: why would we need effective IVD market access policy in these pandemic times, right? It’s not like we need to do large-scale testing for any purpose, or would need AI systems to help identify new mutations of coronaviruses or something useful like that, any time soon – pardon the sarcasm.

Another interesting question is how the AIA relates to the in-house exemption under articles 5 (5) MDR and IVDR, as this is not addressed in the AIA. This would mean that health institutions developing their own home-brew AI systems for deployment in the health institution will need to certify them as high-risk AI systems with a notified body that is not necessarily competent in AI in healthcare, because there is no obligation to go to an MDR or IVDR notified body with an AIA top-up. This is very likely to limit innovation in health institutions, because the whole idea of the in-house exemption is to forego CE marking of the in-house device. It is also not clear if the existence of a commercially CE marked AI system would pre-empt health institutions from deploying their own AI system (article 5 (5) (d) MDR and IVDR).

Will this proposal make the EU a world leader in AI by setting clear guidelines? In its current form I think it will not.

Although the proposal can definitely be improved, my worry is the execution. We are right back at the glaring undercapacity of market access procedures and the lack of expertise at competent authority level, which in the end is caused by the Member States refusing on the one hand to cede competence to the Commission (and to properly fund the Commission for this), and on the other hand being unable to properly resource their own part of the regulatory system. While the Commission is probably doing what it can with the instruments that it has, this kind of thing is not something you can make work with a pretty technical regulation that follows conventional CE logic but relies on a market access system that is proven to be under-resourced for years to come.

You actually need a functioning regulatory infrastructure with a market access mechanism with sufficient capacity and predictability which is simply not there at the moment. We are utterly lacking that for the MDR and IVDR, and as a consequence for the AIA in the medical space. If this problem is not solved, the AIA will not live up to expectations in the health space and will become a disappointment.

