Defragmenting European Law on Medical AI

By Audrey Lebret

In the medical field, artificial intelligence (AI) is of great operational and clinical use. It eases the administrative burden on doctors, helps allocate healthcare resources, and improves the quality of diagnosis. It also raises numerous challenges and risks. Balancing competitiveness with the need for risk prevention, Europe aims to become a major digital player through its AI framework strategy, particularly in the field of digital health. The following provides a brief overview of the normative landscape of medical AI in Europe, looking beyond the borders of the EU and its 27 Member States to take into account the treaties in force or emerging at the level of the Council of Europe and its 46 Member States. The purpose is to illustrate the reasons for, and difficulties associated with, the legal fragmentation in the field, and to briefly outline a few key elements of the necessary defragmentation.

A fragmented legal landscape of Medical AI

Currently, there is no specific legal regulation of medical AI in Europe. Nevertheless, its development and use do not take place in a legal vacuum. The Council of Europe has developed several international human rights conventions of general scope that apply to medical AI. The first of these instruments is the European Convention on Human Rights (ECHR) and its additional protocols (such as Protocol 12, which has a broader scope than the ECHR but has been ratified by only 20 countries to date). Among other rights, these texts protect the right to be free from discrimination and oblige States to prevent discrimination, whether intentional or unintentional, the latter posing one of the greatest risks to rights in the algorithmic context. Under the European Court of Human Rights’ doctrine of positive obligations and the “horizontal effect” of rights, States must adopt a legislative framework that effectively prevents violations by private actors. Moreover, the ECHR (art. 8) and “Convention 108” protected the right to private life, including the protection of personal data, long before the adoption of the GDPR at the EU level. The European Social Charter (in its 1961 and 1996 versions) also contains the right to health, which provides an interesting framework for medical AI by imposing, for example, information accessibility requirements. These instruments are complemented by more specific conventions such as the Oviedo Convention on Human Rights and Biomedicine and its additional protocols.

At the EU level, the GDPR guarantees the right to the protection of personal data and allows the processing of sensitive health data under certain conditions. In addition, two regulations on medical devices partially cover medical AI, as they apply to health care “software,” a category that includes medical AI. They set out several pre-market compliance obligations.

Several further instruments are currently being developed on the European scene, such as the proposed AI Act at the EU level, which classifies AI systems according to risk, prohibits marketing when those risks are unacceptable, and establishes a number of obligations for providers and users/deployers when the risk is high or limited. In parallel, last year the Commission submitted a draft regulation on the European Health Data Space. This sectoral regulation will complement the forthcoming EU Data Act and form part of the overall EU data strategy. Meanwhile, the Council of Europe is working on the first legally binding international instrument on AI and fundamental rights. This Convention is conceived as a general and transversal instrument, and its capacity to cover the challenges of medical AI is only partial. Nevertheless, the initiative is to be welcomed: the draft encompasses the notion of unlawful harm, protects the individual right to be informed about the use of AI, and requires contracting States to provide a right of individual petition. It also recommends a common approach to civil liability and establishes a monitoring procedure.

The issue of consistency and legal clarity

The above-mentioned instruments have distinct foundations, which are partly explained by the different natures of the ‘Europe of the market’ (EU) and the ‘Europe of rights’ (Council of Europe). At the EU level, the texts mentioned above, whether general or sectoral, are market-driven (based on art. 114 TFEU; only the GDPR is based on art. 16 TFEU, i.e., the right to the protection of personal data). They aim to encourage innovation and regulate competition while preventing the most significant risks. Hence, they do not adopt a rights-based approach, which was a substantial gap until the Council of Europe’s initiative. Although the European Parliament subsequently gave a more prominent place to article 16 TFEU on data protection as an additional legal basis of the AI Act during the amending process (as with other regulations), this does not affect the foundational roots of the Act. By contrast, the Council of Europe, as the main human rights organization in Europe, is primarily concerned with ensuring the respect and protection of fundamental rights and human dignity within the jurisdiction of its 46 member States. Its Convention will therefore complement the EU approach with a transversal, rights-based approach.

Faced with this pluralism of norms resting on distinct foundations, the question arises of the overall coherence of the regulatory landscape of medical AI. Potential conflicts or inconsistencies may appear between norms that respond to distinct challenges, but also between general and sectoral norms. With regard to the former, the limited references to human rights in the AI Act, partially due to its legal basis, raise questions about how binding and interrelated human rights standards will be integrated into AI impact assessments. With regard to the latter, certain inconsistencies regarding consent requirements (and derogations from consent) for the secondary use of health data have, for example, been identified between the draft European Health Data Space Regulation and the GDPR.

Harmonizing and ordering European law on Medical AI

From the perspective of AI users, including doctors, and of citizens’ interests, the fragmentation of the law undermines legal clarity and certainty. While the Council of Europe aims to require States to create an individual right of petition for human rights violations, the very identification of risks caused by AI systems depends on the prior integration and ordering of legal requirements. This is a multi-scale challenge.

Defragmentation first requires decompartmentalizing approaches to allow constructive normative interactions. In fact, the risk-based approach of the EU interacts closely with the rights-based approach of the Council of Europe. Impacts on human rights are obviously among the major risks of AI. Further, the implementation of human rights conventions relies on balancing competing interests: limits on individual rights can be justified by proportionate interferences, i.e., those necessary to achieve legitimate aims, such as innovation in public health. Terminology aside, these instruments may complement and enrich each other.

Another challenge is to align definitions and standards. In this regard, members of the European Parliament have shown their concern for legal clarity by amending the definition of AI in the EU AI Act to bring it into line with other major documents, such as the definition provided by the OECD. Alongside this horizontal international alignment of definitions, European bodies should ensure complementarity between horizontal and sectoral approaches. That starts with avoiding inconsistent requirements, such as the aforementioned potential tension over consent requirements for the secondary use of health data.

Beyond eliminating inconsistencies, defragmenting European law calls for closing the legal loopholes that scattered standards may have facilitated. The most recent amendment to the AI Act addressed the uncertainty surrounding the regulation of large language models such as ChatGPT. While conversational AI can be used for medical purposes, such use is not its intended purpose, an element central to the Medical Device Regulation’s definition of a medical device, which raised doubts about that regulation’s applicability. Without the modification of the AI Act, such models could have fallen outside the scope of European regulation. This issue of internal consistency at the EU level could have been partially compensated for by the rather broad understanding of “AI systems” in the Council of Europe Convention. However, it is worth remembering that, unlike EU regulations, which are directly applicable in EU Member States, States of the Council of Europe will be able to modulate their commitments through interpretative declarations when they ratify the treaty.

Finally, the defragmentation of European law on medical AI raises broader institutional issues. Although the EU has international legal personality and is required to accede to the ECHR, it has still not done so, mainly out of concern for preserving its autonomy. The EU’s accession to this “constitutional instrument of European public order” would facilitate the ordering of European law. In addition, it would be a great step towards harmonization if the EU were to ratify the future Convention on AI and human rights. At this stage, the Council of the EU has adopted a decision authorizing the European Commission to negotiate the Council of Europe Convention on behalf of the Union.

Leaving aside the multiplication of legal norms in Europe, an important remaining issue is the adaptation of current standards, such as non-discrimination requirements, to the algorithmic context. Judges will have a crucial role to play in this interpretative work. In this regard, it would be constructive if the Court of Justice of the EU (CJEU) and the European Court of Human Rights engaged in a judicial dialogue to harmonize the interpretation of medical AI law in Europe.

Audrey Lebret is Associate Professor in Law at the University of Copenhagen, Centre for Advanced Studies in Biomedical Innovation Law (CeBIL), and Associate Researcher at the Paris Human Rights Institute (CRDH).
