Oct 2024

European Union

Law Over Borders Comparative Guide:

Artificial Intelligence

Introduction

The EU approach to regulating Artificial Intelligence (AI) is underpinned by an attempt to both reap the benefits of AI and mitigate its potential risks and harms. The policy aim has been to ensure that AI systems are designed to be “trustworthy” and remain so over time: in other words, that they are socially acceptable, such that businesses are encouraged to develop and deploy these technologies while citizens embrace and use them with confidence. This and other concerns first led the European Commission to endorse guidelines and policy recommendations from a High-Level Expert Group on AI (AI HLEG) back in 2019. The view was that the EU first needed a set of harmonised principles reflecting European values and fostering a best practice approach based on voluntary compliance. 

Soon after, though, the EU also started the process of adopting a binding regulatory framework for AI systems, with strong oversight and enforcement mechanisms. After heated debates, but in a relatively short timeframe considering the ambitions of the text, the EU eventually enacted the Artificial Intelligence Act (AI Act), officially the Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence. It was published in the Official Journal on 12 July 2024 (http://data.europa.eu/eli/reg/2024/1689/oj) and entered into force on 1 August 2024. The AI Act is designed to gradually become applicable through a staged series of implementation deadlines. The first of these is 2 February 2025 in relation to prohibited AI systems and practices and the obligation to train employees, while the other provisions become applicable between 2025 and 2027. 

AI is first and foremost a technical artifact that leverages the combination of data and software to produce outputs that help design products and make them work better, across a whole spectrum of sectors. Increasingly, AI is, and will be, embedded in toys, consumer electronics, radio and electrical equipment, medical devices, machines, cars, manufacturing lines, factories, etc. From a policy perspective, with AI being seen as an engineering technique and embedded in standalone products, the primary consideration of EU legislators was to regulate market access and ensure an acceptable level of health and safety for users of such products. This is something the EU has typically regulated in the past through product safety rules, under the so-called New Legislative Framework. That approach had a profound, structural impact on the AI Act, with such embedded AI being regulated in an aligned manner.

To introduce proportionate and effective binding rules, the EU purports to follow a risk-based approach with at least three different layers. Unacceptable AI practices are prohibited, limited-risk AI systems are allowed subject only to transparency obligations, and high-risk AI systems (the “intermediate” category defined by reference to existing product safety legislation and to certain sectors or activities in Annexes I and III of the AI Act) must comply with numerous risk management and self-certification obligations to obtain market access and legal certainty. A fourth category, so-called general-purpose AI, whose definition is somewhat blurred, sits somewhere between the limited-risk and the high-risk AI systems, as will be explained below. 

Next to the risk-based product safety approach followed by the AI Act, the EU has not let go of its bigger policy ambitions. Be it through the AI Act or other instruments (such as the GDPR) that continue to apply concurrently, the lawmaker is looking to combine many different goals: make sure that AI governance reflects key European values; ensure protection of public interests and fundamental rights and freedoms; and enable innovation and research whilst preserving fair competition in the relevant markets. As a result, it is important to read the AI Act in a wider context that includes the willingness to promote fair and equal access to data and markets, as well as to preserve fundamental rights. That is why we will first introduce the EU approach to data sharing and the markets for data and automation, then turn to the application of the GDPR to some aspects of AI systems and decision-making, before finally presenting the AI Act in more detail. 

1. EU data landscape

The EU’s ambition to regulate data is based on the recognition that data is a key factor of production in the digital economy. As a result, the EU wants to promote data sharing as much as possible, and as a policy orientation it demands that data remain findable, accessible, interoperable and reusable (often referred to by the FAIR acronym). On that basis, a significant batch of recently enacted legislation deeply influences the wider area of digital regulation in Europe. In this section, we briefly explain the framework for voluntary data sharing, and describe situations in which EU law can now impose specific data sharing obligations. 

1.1 Overview

The latest version of the Open Data Directive was adopted in 2019 (Directive (EU) 2019/1024 of the European Parliament and of the Council of 20 June 2019 on open data and the re-use of public sector information (recast), O.J., 26 June 2019, L 172, 56-83). Subject to its implementation in Member States’ national laws, it is designed to enhance transparency and fair competition in the access to, and re-use of, datasets held by public sector organisations and some public undertakings, as well as of research data. As a rule, that access must be free of charge and non-exclusive. Holders of such data must authorise the re-use thereof for both commercial and non-commercial purposes, subject only to transparent, fair and non-discriminatory conditions. Specific rules also apply to so-called high-value datasets that must be available free of charge and via APIs, given their high societal value, and subject to detailed requirements with respect to granularity, key attributes, frequency of updates, geographical coverage, etc. The relevant fields for these high-value datasets are geospatial, earth observation and environment, meteorological, statistics, companies and company ownership, and mobility. However, the Open Data Directive does not create sharing obligations for private entities, only for public bodies. 

The Open Data rules do not apply, notably, to categories of documents where there is a need to respect third parties’ intellectual property rights or commercial confidentiality, or where the rules on the protection of personal data of individuals prevail. However, the Data Governance Act (DGA — Regulation (EU) 2022/868 of the European Parliament and of the Council of 30 May 2022 on European data governance and amending Regulation (EU) 2018/1724 (Data Governance Act)) complements the Open Data Directive in that respect. For datasets that contain personal data, trade secrets or other proprietary information, the DGA specifies how data can be shared in spite of such limitations, ensuring an effective protection of third parties’ rights. The DGA also creates additional sources of data sharing, next to public sector bodies: data intermediaries and data altruism organisations are recognised and held accountable to specific rules of independence and transparency, in the hope that they will also contribute to building a stronger market for data exchange. It is now in force across the EU, although some Member States still need to appoint competent authorities. 

One particular area of concern for the EU lawmaker is the ability of tech companies to control massive amounts of data and leverage the same to influence market behaviour and the fairness of data exchanges in general. The EU’s efforts to rein in “big tech” and make digital markets more competitive culminated with the adoption of the Digital Markets Act (DMA — Regulation (EU) 2022/1925 of the European Parliament and of the Council of 14 September 2022 on contestable and fair markets in the digital sector and amending Directives (EU) 2019/1937 and (EU) 2020/1828 (Digital Markets Act), O.J., 12 October 2022, L 265, 1-66) and the Digital Services Act (DSA — Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market For Digital Services and amending Directive 2000/31/EC (Digital Services Act), O.J., 27 October 2022, L 277, 1-102). 

The Data Act (Regulation (EU) 2023/2854 of the European Parliament and of the Council of 13 December 2023 on harmonised rules on fair access to and use of data and amending Regulation (EU) 2017/2394 and Directive (EU) 2020/1828 (Data Act), O.J., 22 December 2023, L, 1-71) combines several policy concerns, and brings about some fundamental changes, across the fields of connected devices, cloud services and access to certain datasets. Next to the mandatory data sharing obligations that we discuss below, the Data Act creates a framework for the standardisation of technical rules regarding data exchanges and data spaces, authorises Member States to access data held by private actors in exceptional circumstances and for the common interest, and imposes on cloud service providers obligations to make it easier for customers to switch to a different provider. 

Lastly, it is noteworthy that the EU’s efforts to promote data sharing will continue to develop in specific sectors, with complementary legislation to be expected for health data (see the Commission proposal for a Regulation of the European Parliament and of the Council on the European Health Data Space, COM(2022) 197 final, 2022/0140(COD), and the final compromise text available at https://www.consilium.europa.eu/media/70909/st07553-en24.pdf) and financial data (see the Commission proposal for a Regulation of the European Parliament and of the Council on a framework for Financial Data Access, COM(2023) 360 final, 2023/0205 (COD)). 

For companies operating at a European scale or on a wider basis, it is important to be aware of two important dimensions of the regulatory landscape. First, for a number of the legal instruments, enforcement powers lie to a large extent in the hands of the European Commission, rather than being decentralised as is the case, for instance, with national supervisory authorities under the GDPR. In practice, in most cases, a combination of EU and national authorities will exercise oversight and can investigate alleged breaches, but with a significant shift of the effective powers to the European Commission. Second, the practical result of many of the new obligations is that companies must embed compliance in the very design of their systems, products and services. Businesses that tend to automate the delivery of products or services must also ensure that the software or AI systems underpinning such automation are designed in a way that lives up to the expectations of the regulators. This holds true for the AI Act as well, as we will discuss below.

1.2 Voluntary data sharing

When an obligation to enable re-use under the Open Data Directive does not apply, or when the source of the data is a private entity or a data altruism organisation, it is a legitimate concern for businesses that personal data be safeguarded and that third-party claims based on any kind of ownership do not hamper the contemplated data sharing. For these situations, the DGA sets down one simple fundamental rule: sharing datasets and safeguarding personal data or intellectual property rights and trade secrets must go hand in hand. In practice, achieving that ‘simple’ goal can be quite complex, as it requires compliance with several requirements: (i) data must be shared on a fair, transparent, non-discriminatory, and (as a rule) non-exclusive basis; (ii) data recipients are accountable and must commit to respecting IP and trade secrets as well as data protection laws, implement anonymisation or other protections against disclosure, pass on contractual obligations to their counterparts involved in the data sharing, facilitate the exercise of rights by data subjects, etc.; and (iii) a secure processing environment must be implemented to ensure these principles are abided by. Interestingly for AI, even the mere “calculation of derivative data through computational algorithms” qualifies as a use that requires such a secure processing environment to be put in place. The DGA refers to modern techniques for preserving privacy, such as anonymisation, differential privacy, randomisation, etc. Such techniques could be further defined or extended, or made explicitly mandatory, through implementing legislation. 
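
By way of purely technical illustration (the DGA does not prescribe any particular technique, let alone code), the following minimal Python sketch shows one of the privacy-preserving approaches mentioned above — differential privacy via the Laplace mechanism — where an aggregate statistic is shared instead of the underlying personal data. The parameter values and dataset are invented for the example.

```python
import numpy as np

def dp_mean(values, epsilon=1.0, lower=0.0, upper=100.0):
    """Differentially private mean via the Laplace mechanism.

    Clamping each value to [lower, upper] bounds the sensitivity of the mean,
    so calibrated noise can mask the contribution of any single individual.
    """
    values = np.clip(np.asarray(values, dtype=float), lower, upper)
    true_mean = values.mean()
    sensitivity = (upper - lower) / len(values)   # max change from one record
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_mean + noise

# Example: share a noisy aggregate instead of the raw personal data.
ages = [34, 51, 29, 62, 47]
print(dp_mean(ages, epsilon=0.5, lower=18, upper=90))
```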

As a result, the DGA can be seen as an additional layer to the GDPR, and a foundation for the set-up of future “European Data Spaces” as well as for private data sharing schemes that companies would consider on a voluntary basis. 

1.3 Mandatory data sharing for businesses

Typically, data collected or exchanged in the context of connected products is held by the manufacturers or users of such products. Those are at liberty to keep such data secret or to make it available for the purposes and upon the terms they deem fit (if, for such data sharing, they were to work together with a data intermediary qualifying under the DGA, businesses would need to take into account the conditions and contractual terms that the DGA imposes), although some limited exemptions can apply, such as for text and data mining (see Articles 3 and 4 of Directive (EU) 2019/790 of the European Parliament and of the Council of 17 April 2019 on copyright and related rights in the Digital Single Market and amending Directives 96/9/EC and 2001/29/EC). This traditional view that data is proprietary to whoever has it in its possession must now be reconsidered with the entry into force of the Data Act, which becomes applicable in separate stages as from 12 September 2025. 

The Data Act will oblige “data holders” of data generated by connected devices and ancillary services to make that data available to the device users or to third parties designated by them that provide a relevant service (e.g. maintenance). That includes an obligation to make sure that the design and engineering of the products enable the generated data to be made available in an accessible way. The Data Act also brings in several limitations to contractual freedom and intellectual property rights, in particular database rights, to avoid hindering the objective of opening up a market for data generated by connected devices. Although data holders can still make access to their products subject to contractual terms, these must be fair, reasonable, non-discriminatory and transparent, and the Data Act also prohibits a number of “unfair terms” that in practice would defeat the purpose of making the generated data accessible to the device user. The hope is that such increased access to datasets will foster innovation, and bring about shifts on the market, including for the development of AI systems. 

Obviously, to protect companies’ trade secrets and proprietary information, disclosure should be limited to the extent strictly necessary and subject to confidentiality measures and commitments. In practice, this will require businesses to be even more prepared to defend and prevent dissemination of their trade secrets, anticipating as much as possible the occurrence of an access request. At the time of writing, it is not possible to advise what the exact scope of the Data Act will be, what exact types of use cases it will regulate and how. Nor is it possible to forecast whether the text will improve accessibility of data in a manner that is useful or consistent with the specific needs of those making or developing AI systems. 

Next to the Data Act, both the DMA and the DSA impose limited obligations to make datasets available. In short, the DMA sets out detailed actions that entities with a certain market power (“gatekeepers”) must or must not take, it being understood that such gatekeepers are designated by the European Commission. Whilst the DMA applies to “tier 1” gatekeepers, the DSA takes a more horizontal approach and imposes content moderation obligations upon online platforms and other providers of digital services. The DMA and DSA shift from an ex post to an ex ante regulatory approach to create more competition, and they will result in a significant compliance burden for businesses. With respect to data, the DMA notably limits the possibilities to combine datasets and increases portability rights for end users. The DSA imposes increased transparency around algorithms used for recommendation or for profiling purposes. 

2. Automated decision-making and the GDPR

Initially, European data protection law was meant to deal with the challenges of public sector databases and the aggregation of information about citizens on computer mainframes. It then evolved into a right to self-determination and included more and more aspects to tackle the use of data by private businesses. Nowadays, fundamental rights to the protection of personal data are enshrined in the EU Charter, and the GDPR grants individuals significant rights and powers to control not only the collection and use of their personal data, but also further operations such as profiling, conducting analytics, combining with other datasets for new purposes, etc. According to recital 4, GDPR, “the processing of personal data should be designed to serve mankind”, and there are few, if any, aspects of the lifecycle of personal data that are left unaddressed by its provisions. In addition, the notion of “personal data” has an extremely broad definition and scope of application, such that individuals are and remain protected in respect of any information that not only directly relates to them, but also has the potential to impact various aspects of their lives.

Under the GDPR, personal data refers to any information that relates to an identified or identifiable individual. By “identifiable”, the GDPR means that the individual “can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person”. The standard of whether an individual is “identifiable” was set by the European Court of Justice in Breyer (ECJ, 19 October 2016, C-582/14), holding that, in order to ascertain whether a person is identifiable, “account should be taken of all the means likely reasonably to be used either by the controller or by any other person to identify the said person”. In recent cases, the European Court of Justice has not fundamentally deviated from that approach, but it may soon have an opportunity to clarify the practical application of this standard in respect of an entity that accesses pseudonymised data, a situation that often arises in the field of AI and data analytics: where that entity possesses neither the coded identifiers nor the additional data that could enable it to identify individuals, can it still be said to be processing personal data (relating to an identifiable person) in its own right? (See ECJ, pending case C-413/23, in which a decision could be issued in the course of 2025.) 
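
To make the pseudonymisation scenario concrete, here is a minimal, purely illustrative Python sketch (the field names and key handling are assumptions, not a description of what the Court requires): the original controller keeps the secret key, while the analytics recipient only ever receives keyed pseudonyms it cannot reverse on its own.

```python
import hmac, hashlib, secrets

SECRET_KEY = secrets.token_bytes(32)   # held only by the original controller

def pseudonymise(record, key=SECRET_KEY):
    """Replace the direct identifier with a keyed hash (a pseudonym).

    Without the key (or a lookup table), the recipient cannot reverse the
    pseudonym; the controller holding the key can still re-identify.
    """
    pseudonym = hmac.new(key, record["name"].encode(), hashlib.sha256).hexdigest()[:16]
    return {"id": pseudonym, **{k: v for k, v in record.items() if k != "name"}}

patients = [{"name": "Ada Example", "age": 42, "diagnosis": "J45"}]
shared = [pseudonymise(p) for p in patients]
print(shared)   # the analytics recipient only ever sees this view
```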

On that basis, it is only reasonable to state that the GDPR does provide a strong regulatory framework for AI systems that process personal data, in the sense that it regulates to a large extent decisions that are made, or outputs that are produced, as a result of computing or analysing such personal data. Some argue that the existing data protection framework must be improved as it cannot easily be applied in respect of so-called big data analytics. In particular, they argue that the re-use of massive sets of data for previously unknown purposes and for goals that are and remain partly undefined seems at odds with classical data protection principles such as purpose limitation and data minimisation (“Big data and data protection” by A. Mantelero, in “Research Handbook on Privacy and Data Protection Law. Values, Norms and Global Politics”, G. Gonzalez Fuster, R. Van Brakel, P. De Hert (eds.), Edward Elgar, 2022, pp. 335–357). However, in spite of these theoretical arguments, we see in practice that courts and data protection authorities currently use and apply the GDPR provisions to address algorithmic processes and the use of personal data to support (or replace) decision-making processes. This can be seen in respect of the fundamental rights approach that underpins the GDPR, both with respect to risk assessment and to individual rights to control one’s data, and of the specific provision on automated decision-making (Article 22). We look at each of those aspects in turn. 

2.1 Risk assessment under GDPR 

The GDPR essentially requires companies to anticipate, assess and manage risks and harms that the processing of personal data entails for rights and freedoms of individuals (“data subjects”). Given that AI systems have a clear ability to interfere with many fundamental rights and heavily rely on the processing of personal data, the GDPR clearly springs to mind as one of the key regulatory layers for the development, use and deployment of AI systems within the European Union, beyond the specific realm of automated decision-making that we analyse in the next section. It is useful to briefly highlight some aspects of its content that can impact developers and users of AI systems in practice.

First, any “high risk” data processing system must be subjected to an impact assessment that describes the potential harms for rights and freedoms of individuals, as well as the measures meant to address and mitigate those risks. This assessment exercise is iterative, and there are situations in which the results must be shared with regulators before any processing occurs.

Second, the risks that may result from the processing of personal data are varied in nature: according to recital 75 of the GDPR, they can materialise as physical, material or non-material damage, and may include situations as diverse as discrimination, identity theft or fraud, financial loss, reputational damage, loss of confidentiality, unauthorised reversal of pseudonymisation, but also “any other significant economic or social disadvantage” or the deprivation of rights and freedoms. One particular category of risk relates to profiling. Under the GDPR, profiling is defined as “any form of automated processing of personal data consisting of the use of personal data to evaluate certain personal aspects relating to a natural person, in particular to analyse or predict aspects concerning that natural person’s performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location or movements”. As one can see from that definition, any use of personal data to support decision-making is likely to fall within the notion of profiling.
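
As a purely hypothetical illustration of how easily automated processing falls within that definition, the short sketch below computes a “reliability” score from a handful of personal attributes; the attributes and weights are invented for illustration only and carry no legal significance.

```python
# Illustrative only: a trivial "reliability" score computed from personal data.
# Even this simple automated evaluation would likely qualify as profiling under
# the GDPR, because it evaluates personal aspects of a natural person.
def reliability_score(person: dict) -> float:
    score = 0.0
    score += 0.4 if person["payment_incidents"] == 0 else -0.4
    score += 0.3 if person["years_at_employer"] >= 2 else 0.0
    score += 0.3 if person["age"] >= 25 else 0.1
    return round(score, 2)

applicant = {"age": 31, "years_at_employer": 4, "payment_incidents": 0}
print(reliability_score(applicant))   # the output feeds a downstream decision
```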

Third, each of these specific risks must be assessed and factored into a management and risk mitigation exercise, coupled with the implementation of appropriate technical and organisational measures to ensure that the provisions of the GDPR are complied with. In addition to such measures, profiling and all other types of processing operations must comply with fundamental principles such as the need for a legal basis, the requirements of accuracy and data minimisation, fairness, general transparency and non-discrimination. It follows that every discrete data processing operation involved in an AI system must be tested or challenged on the basis of these rules and principles.

On that basis, several screening or risk assessment systems have already been found to qualify as profiling and to breach important GDPR provisions: for instance, an online screening application for gun licences to assess the psychological state of applicants, a tax fraud risk assessment system, an online tool for the automated evaluation of job seekers’ chances of finding employment, or even the creation of commercial profiles of customers. In those cases, courts or data protection authorities have prohibited the continuation of processing operations, imposed an increased burden of transparency or mandated the disclosure of explanations about the logic of the profiling or about the rationale for a decision, in order to enable verification of the accuracy and lawfulness of the personal data handled. In some of these cases, the issue at hand was essentially that the processing carried out by a public authority or a government agency was not sufficiently “prescribed by law”, meaning that the lawmaker should have provided a sound legal basis for it, with an appropriate democratic debate taking place to define with enough granularity the particulars of what was allowed and what was not, and what the possible means of redress could be. In other cases, however, the courts or data protection authorities went after practices of private businesses, in areas such as employment, workforce management or recruitment, creditworthiness, marketing and fraud detection, where algorithms and AI systems were developed or used to support the decision-making process.

2.2 Rights-based approach

The GDPR additionally creates a strong set of individual rights that give data subjects a significant degree of control over their personal data and the use thereof by businesses. These rights can be exercised ex post, i.e. when processing operations have been launched and are running. As a result, they can create substantial legal risks for companies that have not anticipated them. The line of cases from the European Court of Justice in recent years confirms that these rights have a broad scope and far-reaching consequences. 

Individuals have a right to find out whether their personal data are being processed and to receive detailed information about the processing as well as copies of the data referring to them. The ECJ clarified a number of aspects of this right of access. First, in respect of medical records, data subjects must be given a true and understandable copy of all the data undergoing processing, including excerpts from databases where necessary to enable the individual to effectively exercise their rights under the GDPR. Second, the individual must be informed about the date and the reasons for which others accessed or consulted their personal data. The data subject has no right to be given the identity of agents or employees who accessed the personal data, unless that would be necessary to fulfil their rights, but they have the right to be given the precise identity of all the organisations that received access to the data. Third, it does not make a difference whether the data subject submits an access request for reasons that go beyond the verification of the lawfulness of the processing (e.g. accessing medical records in order to support a claim for damages against the practitioner). There is little doubt that the application of the right of access to datasets used for training AI systems, for instance, will give rise to difficult practical questions, but the position of the EU regulators and courts in Europe seems to be that individuals have a wide range of access rights that must be respected. 

2.3 Automated decision-making 

As set out above, profiling can be useful to support decision-making, but in some situations a decision can be made purely on the basis of automated processing, in the sense that there is no human intervention in the process (“automated decision-making” or ADM). 

The GDPR, in its Article 22, devotes a specific provision to such situations, where a decision is taken “solely” on the basis of automated processing, including profiling, provided that it “produces legal effects” or “similarly significantly affects” individuals. Recently, the Court of Justice clarified that this provision entails a general prohibition of automated decision-making systems (ECJ, 7 December 2023, SCHUFA, C-634/21). The only situations where such automated decision-making is allowed are when it is: (a) necessary for the conclusion or the performance of a contract; (b) authorised by a law of the European Union or of a Member State; or (c) based on the explicit consent of the data subject. In any event, suitable measures must be taken to safeguard fundamental rights, including at least the right to obtain meaningful human intervention on the part of the data controller, to express one’s point of view and/or to contest the decision. Lastly, Article 15 GDPR provides that the data subject may request access to information as to whether automated decision-making is in place or not, and to obtain “meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject”. It is unclear whether this duty to disclose information about the logic and consequences of the decision-making applies exclusively to qualifying automated decision-making or extends to other cases where profiling or automated processing is part of the decision-making process.

The GDPR and its provisions on automated decision-making and profiling do, at least to some extent, regulate the deployment and use of AI systems to the extent they can have an impact on individuals’ lives. Although certainly imperfect, the GDPR has the potential to bring with it severe prohibitions and substantial fines, together with potential claims for damages. Therefore, in practice, businesses and companies that develop AI systems for use within the European Union must carefully think about how they can mitigate those potential adverse consequences. It appears that they must set for themselves: (a) a degree of internal preparedness and organisation to ensure substantial and meaningful human involvement, including organisational rules for decision preparation or review, and training for employees; (b) a degree of transparency towards end users and governments as concerns the constitutive elements of the decision-making process, including the specific factors and parameters that are utilised and how these could possibly be altered or adapted; and (c) a degree of reversibility as to the actual consequences of machine-based decisions for individuals, to ensure that the effects on fundamental rights can be mitigated, undone or at least explained and justified.
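
The following sketch is one hypothetical way (not a compliance recipe, and none of the names or fields are drawn from the GDPR) of reflecting points (a) to (c) in a decision workflow: the automated recommendation is recorded together with the factors used, routed to a human reviewer, and the final outcome remains reversible and explainable.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Decision:
    applicant_id: str
    model_recommendation: str
    factors: dict                       # parameters that can be disclosed on request
    human_reviewer: Optional[str] = None
    final_outcome: Optional[str] = None
    audit_log: list = field(default_factory=list)

def human_review(decision: Decision, reviewer: str, outcome: str, reason: str) -> Decision:
    """Record meaningful human intervention and keep the decision explainable."""
    decision.human_reviewer = reviewer
    decision.final_outcome = outcome
    decision.audit_log.append({"reviewer": reviewer, "reason": reason})
    return decision

d = Decision("A-123", model_recommendation="reject",
             factors={"income": 30_000, "debt_ratio": 0.45})
d = human_review(d, reviewer="case_officer_7", outcome="approve",
                 reason="Debt ratio inflated by a one-off medical expense")
print(d.final_outcome, d.audit_log)
```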

3. AI Act

3.1 General overview 

General

The newly adopted European regulation on artificial intelligence (hereinafter referred to as the AIA) was published in the Official Journal on 12 July 2024 (Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act), O.J., 12 July 2024, L., 1-144), entered into force on 1 August 2024, and enters into application in phased stages as from 2 February 2025 (see below). The AIA looks at artificial intelligence primarily from the point of view of safety and risk management, and it requires careful attention from compliance and regulatory professionals. It embodies a critical compliance and conformity process for AI systems and models in its own right, which, where relevant, must also be applied in combination with existing product safety legislation in specific verticals or with sector-specific legislation. But beyond that, the AIA does not operate in isolation from other pieces of legislation. In particular, because the AIA itself stresses the need to safeguard all fundamental rights and to assess the risks in the light of the specific context, purpose and use of each AI system and model, it will require legal teams to take a 360-degree view and ensure all potential impacts of the development and deployment of AI systems and models are duly identified, assessed and adequately addressed. 

To give just one example, the AIA does not approach automated decisions in the same way as consumer law, where individuals are given significant rights to transparency and control (see P. Hacker, “Manipulation by algorithms. Exploring the triangle of unfair commercial practice, data protection and privacy law”, Eur. Law J., 2023, 29, 142–175; K. Sein, “The Growing Interplay of Consumer and Data Protection Law”, in H.-W. Micklitz, C. Twigg-Flesner (eds.), “The Transformation of Consumer Policy in Europe”, Hart Publishing, 2023, 139–158), or data protection law (Article 22 of the GDPR, in particular; see, in French, L. Huttner, “La décision de l’algorithme. Etude de droit privé sur les relations entre l’humain et la machine”, Nouvelle Bibliothèque de Thèses, vol. 235, Dalloz, 2024). But the AIA does not exclude these other rules either, which will therefore apply within their respective fields of application together with some provisions of the AIA.  

Ethics

The adoption of the AIA was preceded in particular by the appointment by the Commission of a high-level group of independent experts on artificial intelligence (AI HLEG), tasked in particular with drawing up ethical guidelines. This group presented its guidelines on 8 April 2019, which the AIA incorporates as a kind of “moral compass” (see N. Smuha, “Beyond a Human Rights-Based Approach to AI Governance: Promise, Pitfalls, Plea”, Philos. Technol. 34 (Suppl 1), 91–104 (2021), https://doi.org/10.1007/s13347-020-00403-w). The AI HLEG guidelines not only inform compliance but also work as a benchmark for the practical impact of AI systems on end users, through seven key principles: (i) human agency and oversight; (ii) technical robustness and safety; (iii) privacy and data governance; (iv) transparency; (v) diversity, non-discrimination and fairness; (vi) societal and environmental well-being; and (vii) accountability. 

Legislative process

In parallel, the Commission continued its reflections with the publication of a White Paper on 19 February 2020 (Commission White Paper, “Artificial Intelligence — A European approach based on excellence and trust”, COM(2020) 65 final, 2020) and, on 21 April 2021, it tabled a proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts (COM(2021) 206 final, 2021). On 6 December 2022, the Council adopted its common position (general approach). On 14 June 2023, the European Parliament adopted its position and its (numerous) proposed amendments. Following this, intense negotiations finally led to the adoption of a compromise text on 2 February 2024, which was then subject to the necessary translations and tidying-up and was officially approved by the Parliament and then by the Council. 

Phased implementation

Although the AIA entered into force on 1 August 2024, it only becomes applicable, and thus binding, in phases: 

  • The key definitions and the prohibition of certain AI practices will enter into application as from 2 February 2025 (Chapters I and II). The identification of prohibited practices and the potential deprecation of already running AI products and services that qualify as prohibited practices, should therefore be started as a priority, prior to that date. 
  • Codes of practice from the European AI Office to guide providers in their compliance obligations are mandated to be ready by 2 May 2025.
  • All the rules for general-purpose AI models, the penalties under the AIA, as well as governance and professional secrecy rules for enforcement authorities, will become applicable on 2 August 2025 (Chapters V, XII, and section 4 of Chapter III as well as Chapter VII and Article 78). 
  • The obligations and requirements for high-risk AI systems under Annex III, the obligations for providers and deployers, the transparency requirements for limited-risk AI systems, the rules on sandboxing and real-life testing, and the database and post-market, oversight or enforcement rules, will become applicable as from 2 August 2026 (Article 6(2), Chapter III, Chapters IV, VI, VIII to XI and XIII). 
  • Finally, the classification as high-risk AI systems and corresponding obligations for (safety components of) regulated products listed in Annex I will become applicable as from 2 August 2027 (Article 6(1), and corresponding provisions across the AIA). 

Guidelines and standards to be adopted

The AIA introduces a comprehensive set of new obligations and compliance requirements for a broad category of economic operators. Certain concepts and rules are also drafted in somewhat vague and imprecise terms that lend themselves to diverging interpretation. However, many of the obligations the AIA lays down are likely to be translated or transposed in the form of standards or guidelines, which have yet to be adopted. Thus, requirements for high-risk AI systems should be expressed as standards or common specifications, whilst Commission guidelines are expected with respect to the definition of an AI system, the prohibited practices or the transparency obligations for low-risk AI systems. For businesses, this reliance on standards and implementing acts of the European Commission is expected to increase the level of legal certainty. In the short term, that also means that businesses may find it useful to engage with the AI Office (i.e. the European Commission) to express their concerns and possibly weigh in on the definition or phrasing of certain implementing regulations. 

That approach continues to feed criticism from many commentators, who argue that the AIA fails to truly embed fundamental rights and equates them to a set of technical requirements, as if defining standards were enough to ensure effective and meaningful protection of human rights. In other words, the choice of regulating AI systems through standardisation and conformity assessments could well lead to insufficient safeguards for human rights, a poor enforcement framework and a lack of democratic participation. (M. Ebers, “Standardizing AI. The Case of the European Commission’s Proposal for an ‘Artificial Intelligence Act’”, in L. A. DiMatteo, C. Poncibo, M. Cannarsa (eds.), “The Cambridge Handbook of Artificial Intelligence. Global Perspectives on Law and Ethics”, Cambridge University Press, 2022, 321–344; Smuha, N., Yeung, K., “The European Union’s AI Act: beyond motherhood and apple pie?”, in Smuha N. (ed.), “The Cambridge Handbook on the Law, Ethics and Policy of Artificial Intelligence”, Cambridge University Press, forthcoming.) That being said, as mentioned above, businesses will still need to comply with additional combined pieces of legislation that mandate fundamental rights assessments and grant end users significant individual, enforceable rights. In addition, the business risks and potential liabilities created by the AIA may act as a deterrent against overly loose or “creative” compliance.  

3.2 Key concepts 

Relevant AIA provisions

Articles 1 and 2 define the purpose and scope of the text. Recitals 1 to 5 clarify the economic and social challenges of AI that the EU took into account in formulating the measure. These underpin the essential objective, set out in recitals 6 and 7, of promoting “human-centric and trustworthy” AI through common rules to ensure a high level of protection of health, safety and fundamental rights within the internal market. Recitals 8 to 11 outline the existing regulatory framework in which the new harmonised rules are to be applied. Article 3 defines no fewer than 68 concepts that structure the scope and the logic of the AIA. Some of these concepts are commented on in recitals 12 to 19, which provide clarifications and specify the intention pursued by the legislator in practice. Since the AIA applies to certain categories of economic operators by virtue of them developing, marketing or using “AI systems” or certain categories of “AI models”, it is essential to define what is meant by these terms. 

AI system

The definition of an AI system is virtually identical to the one established by the OECD. On 3 May 2024, the OECD Council adopted an amended version of the Council Recommendation on Artificial Intelligence. This contains a definition of an AI system (available at https://legalinstruments.oecd.org/fr/instruments/OECD-LEGAL-0449). An explanatory memorandum provides useful explanations of the background to the adoption, ambitions and scope of this definition (available at https://www.oecd-ilibrary.org/science-and-technology/explanatory-memorandum-on-the-updated-oecd-definition-of-an-ai-system_623da898-en). Thus, an AI system is characterised by a set of elements, some of which are defined in a deliberately flexible manner, thereby encompassing a spectrum of computer science technologies (Article 3(1) of the AIA and recital 12): the AI system (i) is automated; (ii) has a variable degree of autonomy (i.e. it can operate without human intervention and with a certain independence of action); and (iii) may have the ability to continue learning and therefore to evolve after it has been put into service (for example, voice recognition software that improves as it is used by adapting to the user’s voice). Importantly, an AI system can be used on its own or as a component of a product, into which it may or may not be incorporated. (Similarly, the definition of high-risk AI systems by reference to regulated products applies irrespective of whether the AI system is placed on the market independently of the products in question.) As it appears, these elements of the definition are not highly distinctive. The AIA drafters also intended it to be sufficiently flexible to take account of rapid technological developments. 

The more characteristic element in the legal definition is essentially the ability of the AI system to infer, from the input it receives, how to generate predictions, content, recommendations, decisions or any other outputs. The AI system therefore uses inputs, which may be rules, simple data, or both, and which may be provided either by humans or by sensors or other machines (see the definition of input data in Article 3(33) of the AIA: “data provided to or directly acquired by an AI system on the basis of which the system produces an output”). On that basis, the AI system is able to infer how to generate outputs of various kinds. This ability to infer appears to be the most distinctive feature of the AI system under the AIA. It can be based on various techniques such as learning from data, encoded knowledge or the symbolic representation of a task to be solved. This seems to indicate a desire to describe a vast range of AI techniques as AI systems, without distinguishing according to the technical methods used to achieve the essential result, i.e. the tool’s ability to generate outputs (predictions, recommendations), and it can be assumed that this includes both supervised and unsupervised learning techniques. It is clear that the AI system must offer “something more” than traditional computer programming approaches. According to recital 12, such an AI system transcends basic data processing by “enabling learning, reasoning or modelling”, and is distinct from simpler traditional software systems or programming approaches, such as systems based on rules exclusively defined by natural persons to automatically perform operations. The AI system within the meaning of the AIA must also produce outputs of such a nature as to influence the context(s) in which it operates (referred to as “the physical or virtual environments”), or be capable of inferring models or algorithms, or both, from data. Finally, if the AI system infers outputs, it does so for “explicit or implicit objectives”, which may be different from the AI system’s intended purpose in a given context. In the case of autonomous driving systems, for example, or generative tools such as ChatGPT, the objectives are not explicitly programmed or defined, but rather identified through their own machine learning process. In this sense, they are implicit. (Of course, their programming obeys an explicit objective on the part of the person who designed them.) 

The key observation that emerges from these elements is that, in the AIA’s definition, the AI system is endowed with its own capacity to produce outputs in a way that is not predetermined by a human, at least not totally or in any case not in totally explicit terms. Basically, without explicitly presenting it as a criterion for legal qualification, the AIA posits that the learning capacities of an AI system (in the sense of machine learning) equate to a kind of unknown or uncharted territory for the human mind or reasoning. 
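
The contrast can be made tangible with a deliberately simplified Python sketch (the credit scenario, figures and thresholds are invented and carry no legal weight): the first function applies rules exclusively defined by a natural person, whereas the second fits a model that infers from example data how to generate its outputs.

```python
from sklearn.linear_model import LogisticRegression

def rule_based_credit_check(income_k, debts_k):
    # Every operation follows rules exclusively defined by natural persons.
    return "approve" if income_k - debts_k > 20 else "reject"

# A learned system: the mapping from inputs to outputs is fitted, not hand-written.
X = [[45, 5], [18, 9], [60, 30], [25, 1], [15, 12], [52, 8]]   # income, debts (kEUR)
y = ["approve", "reject", "approve", "approve", "reject", "approve"]
model = LogisticRegression().fit(X, y)

print(rule_based_credit_check(30, 4))            # output of a deterministic rule
print(model.predict([[30, 4]])[0])               # output inferred from training data
```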

AI model

Although it does not provide an explicit definition, the AIA regards the AI model as a component of an AI system, which needs to be supplemented by other components, such as a user interface, to become an AI system (see recital 97 of the AIA). According to the OECD definition, the model is also conceived as an essential component of any AI system that gives it the ability to infer how to generate outputs. These models may include statistical, logical, Bayesian or probabilistic representations or other types of functions. The OECD explanatory memorandum gives several examples of concrete tasks that can be carried out by an AI system, ranging from object or image recognition to event detection, prediction, personalisation, interpretation and creation of content such as text, sound or images, optimisation of methods according to a given objective, or a form of reasoning based on inference and deduction from simulations and modelling. Enabling a machine to perform such tasks involves an engineering and development process comprising, schematically, an initial design phase (“build”) and a deployment phase (“use”), which may follow a linear sequence or take the form of a repetitive cycle: once designed and trained on input data, one AI system will be able to perform its functions without any further need for this data, while another may require ongoing new training and learning phases using new data, or a third one may adapt and change its own outputs, depending on various parameters. As can be seen, the AI system’s ability to identify, on the basis of inputs, the way to generate outputs, can refer both to the system’s initial design phase (creating an analysis or classification model from data, for example) as well as to its actual deployment and use (detecting incidents in real-life situations, for example). 

General-purpose AI model

The AIA intends to address the risks of AI systems, but also of general-purpose AI models (Article 3(63) of the AIA, and recitals 97 to 99). This is a particular type of AI model that displays “significant generality” and is capable of competently performing a wide range of distinct tasks, and that can be integrated into a variety of downstream systems or applications. It is also apparent from the text that this type of model is typically created and trained with very large amounts of data, using a variety of self-supervised, unsupervised or reinforcement learning techniques. It can be brought to market in a variety of ways, including downloads, programming interfaces or libraries. It can also be modified or improved, in which case it can turn into a new, different, AI model. AI models used for research, development or prototyping activities before they are placed on the market are not considered to be general-purpose AI models. It is also clear from the recitals that generative models are a typical example of a general-purpose AI model, and that a model with at least one billion parameters, trained with a large amount of data using large-scale self-supervision, should be considered a general-purpose AI model (see recitals 98 and 99, AIA). As will be seen below, the notions of AI system and general-purpose AI model may partially overlap (see in particular recital 85, AIA). The AIA also takes account of the fact that one may be integrated with the other in a value chain. In this respect, the AIA also defines the “general-purpose AI system”, which is an AI system based on a general-purpose AI model and which has the capacity to serve various purposes, both for direct use and for integration into other AI systems (Article 3(66), AIA), as well as the “downstream provider”, which integrates an AI model into an AI system (possibly for general use), either into its own products or services or those of a third-party subcontractor or integrator (Article 3(68), AIA). 

Intended purpose and performance of an AI system

The AIA aims to prevent risks associated with certain AI systems, with regard to the practical use of such systems. That is why concepts like the “intended purpose” and the “performance” of AI systems play a central role in the identification and mitigation of these risks. The intended purpose “means the use for which an AI system is intended by the provider, including the specific context and conditions of use, as specified in the information supplied by the provider in the instructions for use, promotional or sales materials and statements, as well as in the technical documentation” (Article 3(12), AIA). Relatedly, the “performance” of the AI system is its ability to achieve its intended purpose (Article 3(18), AIA). Several critical aspects of the risk mitigation obligations under the AIA build upon these notions. This is the case for the “reasonably foreseeable misuse” (Article 3(13), AIA; the concept plays a role in defining the obligations of providers of high-risk AI systems, with regard to the risk management system (Article 9), the obligations of transparency and information to deployers (Article 13) and the requirement for human oversight (Article 14)), which is the use of the AI system in a way that does not conform with its intended purpose, but which may result from human behaviour or interaction with other systems if these are reasonably foreseeable. Similarly, the “substantial modification of an AI system” (Article 3(23), AIA; this notion plays a role in defining the level of requirements for high-risk AI systems with regard to the role of deployers (Article 25), the logging and record-keeping obligations (Article 12) and the obligation to carry out a conformity assessment (Article 43)) is a modification of the system after it has been placed on the market, which was neither foreseen nor planned in the initial conformity assessment and which may affect the compliance of the system, or may lead to a change in the purpose for which the AI system was assessed (see also recital 128, which states that changes to the algorithm and performance of an AI system that continues to learn after it has been put into service do not, as a rule, constitute a substantial modification, provided that they have been predetermined by the provider and assessed at the time of the conformity assessment). These concepts all help in defining and shaping the respective roles and responsibilities of businesses operating along the AI value chain (see below).

Definitions relating to players and placing on the market

Two other concepts are worth discussing as they trigger the application of the AIA with respect to an AI system or model, i.e. placing on the market and putting into service. “Placing on the market” is the first making available or supply of an AI system or general-purpose AI model, for distribution or use on the EU market in the course of a commercial activity but regardless of whether paid for or free of charge (Article 3(9) and (10), AIA). “Putting into service” means supplying an AI system for first use, directly to the deployer (i.e. user) or for own use in the EU, in accordance with its intended purpose (Article 3(11), AIA). For example, an employer using a high-risk AI system has certain obligations to inform workers: these apply before the system is put into service, but not necessarily before it is placed on the market (see Article 26(7), AIA). 

The “provider” is any person or entity (this may be a natural or legal person, a public authority, an agency or any other body: Article 3(3), AIA) that develops, or has developed, an AI system or general-purpose AI model and places it on the market or puts the AI system into service under its own name or trade mark, whether for a fee or free of charge. The “deployer” is a person or entity who uses an AI system under its authority, except in the course of a personal non-professional activity (Article 3(4), AIA). The most onerous obligations concerning high-risk AI systems fall on providers, but deployers are also bound by certain rules of prudence, risk anticipation and information for individuals. According to the AIA, they are best placed to understand the context in which an AI system will be used and, consequently, to identify potential risks that have not been foreseen yet (see recital 93, AIA). In addition, deployers may, in specific cases, be charged with the same obligations as providers. 

As will be seen below, the AIA addresses the responsibilities along the AI supply chain. It applies to the “importer”, being a natural or legal person established in the EU that places on the market an AI system bearing the name or trade mark of a person established outside the EU, and to the “distributor” as well, being a person that is neither the provider nor the importer but still plays a role in the supply chain to make an AI system available on the EU market (Article 3(6) and (7), AIA). In a similar fashion, the AIA also applies to a general-purpose AI model integrated into an AI system and then placed on the market, through the notion of downstream provider, as highlighted above. 

One may still question whether the AIA fully acknowledges the reality of the AI supply chain. As has been observed, because of, or thanks to, cloud computing technology, the various components of an AI system can be provided from several countries or places, the specific location of which can be difficult to determine. In addition, AI capabilities can be offered as a service to business users who do not want or cannot afford to build the entire chain of necessary components themselves and opt for using machine learning operations (MLOps) or AI-as-a-Service commercial offerings built by larger players. (See notably J. Cobbe, J. Singh, “Artificial Intelligence as a Service: Legal Responsibilities, Liabilities and Policy Challenges”, forthcoming in Computer Law & Security Review, available at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3824736.) In such cases, the delineation of responsibilities for providers and deployers needs to be thought through carefully. Customers of such offerings could qualify as providers or deployers given the broad definitions of the AIA. But because of their lack of knowledge or insight into the technology they could find themselves unable as a matter of fact to comply with requirements like risk management and quality management systems, technical documentation, transparency, human oversight, etc. 

Definitions relating to data

The definition of an AI system makes it obvious that data plays an essential role as one of the types of inputs that are used to infer and generate outputs. Several data-related definitions provide useful practical indications. First, “input data” may come from a human operator but may also be directly captured by the AI system through data acquisition capacities (Article 3(33), AIA). Providers must subject input data to several forms of verification and in some cases provide for a possibility of logging the same. Second, “training, validation and test data” is the data used for the design of an AI system or its specific training (Article 3(29) to (32), AIA; training data is data used to train an AI system through fitting its learnable parameters; validation data is data used to provide an evaluation of the trained AI system and to tune its non-learnable parameters and its learning process, in order, inter alia, to prevent underfitting or overfitting; and test data is data used to provide an independent evaluation of the AI system in order to confirm the expected performance of that system before it is placed on the market). Such training data must be subject to fairly advanced quality requirements in terms of its preparation and use to avoid the risks of errors, biases, gaps or deficiencies in a high-risk AI system. Third, the AIA contemplates different categories of data, in particular biometric data. These are also defined in Article 4(14) of the GDPR, which is explicitly recognised as a source of inspiration by the AIA. The latter prohibits certain forms of use of biometric data for remote identification purposes (see below). The AIA as a whole applies without prejudice to the GDPR, which it is not intended to amend (see recital 10, AIA). Moreover, the concepts of personal data and profiling are defined by direct reference to the GDPR (Article 3(50) and (52), AIA). The same parallelism can be observed with regard to the concept of special categories of personal data, which are defined by reference to Article 9(1) of the GDPR, and from which the AIA introduces an exception for processing for the purposes of detecting and correcting bias, subject to compliance with a number of specific conditions (Article 10(5), AIA). On the other hand, the AIA has not adopted the definition of data itself, which appears in the Data Governance Act (Regulation (EU) 2022/868 of the European Parliament and of the Council of 30 May 2022 on European data governance and amending Regulation (EU) 2018/1724 (Data Governance Act), O.J., 3 June 2022, L 152, 1-44) and the Data Act (Regulation (EU) 2023/2854 of the European Parliament and of the Council of 13 December 2023 on harmonised rules on fair access to and use of data and amending Regulation (EU) 2017/2394 and Directive (EU) 2020/1828 (Data Act), O.J., 22 December 2023, L., 1-71), where it is defined as “any digital representation of acts, facts or information and any compilation of such acts, facts or information, in particular in the form of sound, visual or audiovisual recordings”. 
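
For readers less familiar with these three roles, a minimal Python sketch follows (the synthetic dataset, split ratios and model choice are arbitrary assumptions, not AIA requirements): training data fits the learnable parameters, validation data tunes a non-learnable parameter, and test data provides the final, independent evaluation.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1_000, random_state=0)

# Training data fits the learnable parameters; validation data tunes the
# non-learnable ones (hyperparameters); test data gives an independent,
# final evaluation before the system is placed on the market.
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

best_model, best_score = None, 0.0
for depth in (3, 5, 8):                                  # hyperparameter search
    model = RandomForestClassifier(max_depth=depth, random_state=0).fit(X_train, y_train)
    score = accuracy_score(y_val, model.predict(X_val))
    if score > best_score:
        best_model, best_score = model, score

print("held-out test accuracy:", accuracy_score(y_test, best_model.predict(X_test)))
```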

Definitions relating to classification and use cases

To conclude, several concepts relate specifically to practices that are deemed to present an unacceptable level of risk. They are prohibited under Article 5 of the AIA, although in limited circumstances some of the prohibited use cases may be deployed subject to specific safeguards, preconditions and limitations. This is particularly the case, for example, with the prohibition on real-time remote biometric identification. Biometrics, in particular, form a key component of the AIA regulatory framework and it is important to distinguish how the concept is used, as each variation presents different risks:

  • Biometric verification is defined as automated, one-to-one verification, including authentication, of a natural person’s identity by comparing that person’s biometric data to previously provided biometric data (Article 3(36), AIA; see also recital 15, which states that the sole purpose of biometric verification is to “confirm that a person is who he or she claims to be” for the purpose of gaining access to a service or premises or to unlock a device). A typical example is the use of a stored fingerprint or retinal scan to unlock a telephone or computer. 
  • Biometric identification concerns the automated recognition of physical, physiological, behavioural and psychological human features (such as face, eye movements, body shape, voice, prosody, gait, posture, heart rate, blood pressure, smell and typing) for the purpose of establishing a person’s identity by comparing their biometric data to biometric data stored in a reference database, irrespective of whether or not the person has given their authorisation (Article 3(35), AIA, and recital 15). 
  • Biometric categorisation, finally, involves assigning people to certain categories on the basis of their biometric data. Such categories can be varied and include sex, age, hair or eye colour, tattoos, behavioural or personality traits, language, religion, membership of a national minority, and sexual or political orientation (Article 3(40), AIA and recital 16). 

For an analysis of how these concepts are managed generally on a risk basis by the AIA, please see later on in this chapter.

3.3 Scope of application: territorial

Introduction

It is no secret that the EU intended to play a pioneering, standard-setting global role, and to define rules applicable beyond the borders of the European Union. This is crystallised in Article 2 of the AIA, which defines its scope of application. We describe first the “classic” situations, where the AIA applies on the basis of a place of establishment within the EU, and then turn to various cases of extraterritorial application. 

Principle: application on the territory of the Union

There are four categories of organisations or people to whom the AIA applies by virtue of their establishment or presence within the EU. First, the AIA applies to providers and deployers established or located in the EU, though with some nuances: for deployers, the fact that they are established in the EU suffices, without the need to demonstrate that the AI system is used in the EU. For providers, the only decisive criterion is whether they place on the market, or put into service, AI systems or general-purpose AI models in the Union. Second, importers and distributors of AI systems fall under the AIA ratione loci because, by definition, they are either established in the EU (importers) or make AI systems available on the EU market (distributors). Third, authorised representatives of providers of high-risk systems: where providers are established outside the EU they must appoint a representative established in the EU that will act on their behalf and represent them, in particular with EU competent authorities (Article 22, AIA). And fourth, the AIA applies to “affected persons that are located in the Union”, although the exact meaning of this term is highly debatable. Whilst the English and Dutch texts seem to cover both legal and natural persons, the French and German texts refer to the equivalent of “data subjects” in English, which according to the GDPR includes individuals only. It is unclear whether Article 2(1)(g) of the AIA intends to make the whole Act applicable to any natural or legal person that is “affected” and resides in the Union, or whether it only means that the specific AIA provisions granting individual rights apply to data subjects if they are located in the Union. As partial corroboration of the latter position, the AIA does indeed enshrine a number of subjective rights, including the right to consent to tests in real-life conditions and the right to obtain from deployers an explanation of certain decisions (Article 86, AIA). 

Extraterritoriality

In addition to these hypotheses, the AIA applies in certain situations irrespective of any establishment in the territory of the Union. Firstly, any provider who places AI systems or general-purpose AI models on the market or puts them into service in the EU is subject to the AIA, regardless of the territory in which it is established (Article 2(1)(a), AIA). Secondly, the AIA applies to providers and deployers established or located outside the Union where “the output produced by the AI system is used in the Union” (Article 2(1)(c), AIA). Recital 22 explains this hypothesis by emphasising the drafters’ intention to avoid too easy a circumvention of the AIA by relocating the execution or use of the AI system where the output produced by that system is intended to be used in the EU. However, the text of Article 2 does not reproduce this element of intentionality, opening up doubts as to its exact scope. The sole condition set by the text, namely that the output of the AI system must be used within the Union, gives it exceptionally wide, potentially extraterritorial, applicability: it is not required, for example, that these outputs have an impact on, or even simply relate to, people residing within the Union. And although this second criterion specifically concerns AI systems, and not general-purpose AI models, this does not really seem to limit the scope. According to recital 97, the simple addition of a user interface is sufficient to transform a general-purpose AI model into an AI system. Consequently, the AIA would be applicable to providers and deployers established outside the EU who make available or use a web interface giving access to a general-purpose AI model, even where the model itself is not made available on the EU market, as long as the outputs of that AI system are used in the EU. Thirdly, the AIA also applies to manufacturers of products who place an AI system on the market or put it into service, as part of their product and under their own name or trademark: even if these manufacturers are established outside the Union, the mere fact that an AI system is integrated into their product and placed on the market in the Union under their name or trademark will trigger the application of the AIA. This could also be the case, for example, of an AI system constituting a component of another product placed on the market in the EU. 

3.4 Relationship with other instruments

Complete harmonisation?

The AIA aims to create a uniform regulatory framework and harmonised rules within the internal market. It prevents Member States from imposing restrictions on the development, marketing and use of AI systems, except as explicitly authorised by the AIA (as specified in the first recital of the AIA). The AIA has a double legal basis. Primarily, it is based on Article 114 of the Treaty on the Functioning of the European Union and is explicitly designed to avoid fragmentation of the internal market, legal uncertainty and barriers to the free movement, innovation and adoption of AI systems (see recital 3, AIA). For that purpose, it lays down uniform rules and obligations within the Union, to ensure a high and consistent level of protection of citizens’ rights and of overriding reasons of general interest. However, certain provisions of the AIA regulate the protection of personal data and restrict the use of AI systems for certain purposes (these include remote biometric identification for law enforcement purposes, AI systems for biometric categorisation, and the use of AI systems to assess the risks associated with individuals for law enforcement purposes). These are therefore based on Article 16 of the Treaty on the Functioning of the European Union (see C. Bulman, “La compétence matérielle de l’Union européenne en matière de numérique”, in B. Bertrand (ed.), “La politique européenne du numérique“, Brussels, Bruylant, 2023, pp.253–275). In both cases, the intention is clearly to limit as much as possible the Member States’ room for manoeuvre, except in areas that fall outside the scope of the AIA (see also recital 9, in fine, AIA).

Interpretation with other regulations

The AIA leaves untouched existing regulatory instruments which are intended to apply cumulatively with its provisions. This is particularly the case for the rules on the liability exemption for online intermediaries (Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a single market for digital services and amending Directive 2000/31/EC (Digital Services Act), O.J., 27 October 2022, L-277, 1-102), data protection law in general (Article 2(7) of the AIA refers to Regulation (EU) 2016/679 (GDPR), Regulation (EU) 2018/1725 applicable to the processing of personal data by Union institutions, bodies, offices and agencies, Directive 2002/58/EC on privacy and electronic communications and Directive (EU) 2016/680 applicable to the processing of personal data by the competent authorities for the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties), consumer protection and product safety, and labour relations and employee protection. The latter is one of the few areas where the AIA explicitly allows Member States to adopt national rules more favourable to workers concerning the protection of their rights regarding the use of AI systems (Article 2(11), AIA). It should be noted that the EU has also recently adopted a proposal for a platform work directive (COM/2021/762 final), which provides a specific framework for the use of algorithms and automated decisions in the context of platform work. These specific rules apply notwithstanding the AIA (see recital 9, AIA). Moreover, recital 9 recognises that the AIA does not affect the exercise of fundamental rights such as the right to strike or other forms of collective action, or rules of national law which would have the effect of limiting the use of certain AI systems, in particular regarding the protection of minors. 

Public and private law activities

Generally speaking, the AIA is applicable regardless of the type of activity for which AI systems are designed and used, be it by businesses, private entities or public authorities and administrations. EU institutions, bodies, offices and agencies acting as providers or deployers of AI systems are also subject to the AIA (see recital 23, AIA). Some of the prohibited practices specifically concern the activities of public authorities, in particular the investigation and prosecution of criminal offences. Similarly, certain high-risk AI systems are identified by reference to activities of public authorities, such as access to essential public services, law enforcement activities, migration, asylum and border control management, the administration of justice and democratic processes (Annex III, AIA). This should come as no surprise, given that the aim of the AIA is to prevent risks to safety, health and fundamental rights arising from AI systems, while taking account of their potential benefits, in a wide range of economic and social contexts (according to recital 4 of the AIA, “the use of AI can give companies decisive competitive advantages and produce beneficial results for society and the environment, in areas such as healthcare, agriculture, food safety, education and training, media, sport, culture, infrastructure management, energy, transport and logistics, public services, security, justice, resource and energy efficiency, environmental monitoring, preservation and restoration of biodiversity and ecosystems, and climate change mitigation and adaptation”).

Excluded activities and areas

The AIA provides a broad exemption for systems used for military, defence or national security purposes and more broadly in areas outside the scope of EU law (see recital 24, AIA). Where AI systems are not placed on the market or in service in the EU, and their outputs are used in the EU but exclusively for military, defence or national security purposes, the AIA does not apply either. All activities for military, defence or national security purposes are thus excluded, including when they are carried out by a private-law entity on behalf of one or more Member States, for example. On the other hand, if an AI system is developed for military purposes and is subsequently used for other purposes, it would then fall within the scope of the AIA. 

Domestic activities

The AIA “does not apply to obligations of deployers who are natural persons using AI systems in the course of a purely personal non-professional activity” (Article 2(10), AIA). Somewhat redundantly, the definition of a deployer also provides that a deployer is an entity using an AI system under its own authority, “except where that system is used in the course of a personal non-professional activity” (Article 3(4), AIA; see also recital 13, AIA). In any case, it appears that the strictly personal activities of a natural person are not subject to the obligations of the AIA concerning deployers. A teacher who uses an AI system to prepare lessons would, on the other hand, be a deployer subject to the AIA; and a natural person who designs an AI system and places it on the market or puts it into service, even free of charge, would remain subject to the rules applicable to providers. As for legal persons, public authorities, agencies or other bodies, they could challenge their status as deployers on some other basis, but not invoke Article 2(10), which does seem to apply to natural persons only. By comparison, the GDPR also rules out its application to processing carried out “by a natural person in the course of a purely personal or household activity” (Article 2(2)(c), GDPR). This exception has a strict scope but seems to be linked to personal and household purposes rather than to a confidential or intimate sphere: it remains valid when an individual uses social networks and online services, which may nevertheless involve the publication of personal data made accessible to everyone. But it does not benefit operators who provide an individual with the means to process personal data, including social networking services and online activities (see recital 18, GDPR). A similar clarification of the scope of the exclusion for personal activities would be useful in the context of the AIA.

3.5 Free and open-source licences

Open source

The AIA provides several exceptions for systems and their components that are published under a free and open-source licence. However, the exact scope of these exceptions is not easy to understand, especially as they seem to differ slightly when applied to AI systems, their components, and general-purpose AI models respectively. 

Defining open-source licences

The text of the AIA does not expressly define what a free and open-source licence is. Recitals 102 to 104 provide some useful elements, but they are somewhat ambiguous and do not make for a systematic and rigorous legal definition. 

With regard to software and data, including models, a free and open-source licence implies the freedom for the user (licensee) to share, consult, use, modify and redistribute them or modified versions of them. As regards general-purpose AI models, a free and open-source licence implies that their parameters, weights, information on the architecture of the model and information on the usage of the model are made public (see recitals 102 and 104, AIA); on the other hand, it seems that publication under a free licence does not imply the disclosure of information on the datasets used to train the model or on its fine-tuning: according to recital 104, the obligations to document these elements in a transparent manner remain applicable even in the presence of a free and open licence. Finally, according to recital 102, a licence is also free and open source if it grants the right to exploit, copy, distribute, study, modify and improve software, data or models, subject to the obligation to credit the original provider of the model and to comply with “identical or comparable” conditions for redistribution. 

At first sight, this is quite similar to the main characteristics of many free software licences, although it would have been useful to list the permitted acts more systematically and to specify which conditions may or may not be imposed on the licensee. Furthermore, how are we to understand the reference to the duty to credit the supplier of the model and to “identical or comparable” conditions? Is this only an example relating to a model licence and transposable to any other licence, or does this passage only concern general-purpose AI models? And do the “identical or comparable” conditions apply only to the distribution of the initial model, software or dataset, or on the contrary to the entire new, modified version? 

AI systems and open-source licences

Article 2(12) excludes AI systems published under “free and open-source licences”, subject to three reservations: the exclusion does not apply if the AI system is placed on the market or put into service as a high-risk system, if it falls within the prohibited practices referred to in Article 5, or if it falls within the systems subject to an obligation of transparency under Article 50 of the AIA. The wording seems somewhat circular, because in order to know whether a given practice is prohibited or qualifies as high risk, the AIA, of course, needs to be applied. In practice, however, it would seem that publishing an AI system under an open-source licence exempts it from some provisions of the AIA, but not all of them: both the provider and the deployer of such an AI system will have to comply with their obligations under Article 5, Chapter III and Article 50 of the AIA. 

AI components and open-source licences

Although the AIA mainly sets out obligations for providers and deployers, it also takes into account the role played by third parties in the supply chain, and in particular those who provide AI tools, services, processes or other components other than general-purpose AI models. They must be transparent with providers who integrate such components, to enable them to meet their own obligations. In particular, they must specify with the provider, in a written agreement, the information, capabilities, technical access and any other assistance required, based on the generally recognised state of the art. However, if these tools, services, processes and other components are made available to the public under a free and open-source licence, this obligation does not apply (Article 25(4), AIA and recital 89). Recital 89 further specifies that the developers of these tools and components should be encouraged to accelerate the sharing of information along the AI value chain by means of good documentary practices such as model cards and data sheets in particular. 

The justification for this exemption is not otherwise specified, but it may be assumed to be that making information available under a free licence already ensures a degree of transparency towards the provider. But access under a free licence does not mean ipso facto that the information referred to in Article 25(4) is made accessible. And the AIA does not lay down any particular requirement as to the scope or extent of this free and open-source licence in this particular context. Moreover, the reference in recital 102 to data and models as the subject of the free licence raises questions: a licence implies the granting of an intellectual property right, which is certainly conceivable for software, but less so for data or a model, since as such they may constitute no more than information, ideas or formulae, which cannot necessarily be appropriated (see P. Gilliéron, “Intelligence artificielle: la titularité des données”, in A. Richa, D. Canapa (eds.), “Aspects juridiques de l’intelligence artificielle“, Stämpfli Editions, Recherches juridiques lausannoises, CEDIDAC, 2024, pp.13–40; S. Ghosh, “Ain’t it just software?”, in R. Abbott (ed.), “Research Handbook on Intellectual Property and Artificial Intelligence“, Edward Elgar, 2022, pp.225–244). Should we then understand that a free licence within the meaning of the AIA implies publishing or making accessible this information? In practice, there are many free or permissive licences for AI models (see A. Liesenfeld, M. Dingemanse, “Rethinking open source generative AI: open-washing and the EU AI Act”, The 2024 ACM Conference on Fairness, Accountability, and Transparency, available at https://dl.acm.org/doi/10.1145/3630106.3659005; P. Keller, N. Bonato, “Growth of responsible AI licensing”, Open Future, February 2023, available at https://openfuture.pubpub.org/pub/growth-of-responsible-ai-licensing/release/2; M. Jaccard, “Intelligence artificielle et prestation de services. Réflexions juridiques et pratiques autour des contrats de l’intelligence artificielle”, in A. Richa, D. Canapa (eds.), “Aspects juridiques de l’intelligence artificielle“, op. cit., pp.131–167). 

General-purpose AI models and open-source licences

The third exception concerns general-purpose AI models published under a free and open-source licence. Provided that the licence allows the model to be viewed, used, modified and distributed, and that the parameters, including weights, architecture and usage information, are made publicly available, providers are exempt from two obligations. Firstly, they are not required to draw up and keep up to date the technical documentation and the documentation intended for downstream providers referred to in Article 53(1)(a) and (b) of the AIA (Article 53(2), AIA). Secondly, providers established in third countries are not required to appoint a representative in accordance with Article 54 of the AIA (Article 54(6), AIA). However, these exemptions do not apply to general-purpose AI models which present a systemic risk.

3.6 Measures to promote innovation 

Purpose

Chapter VI of the AIA is devoted to “measures to support innovation”, of which we will only briefly mention here the research exclusions and the regulatory sandboxes. 

Research and development activities

The AIA provides for two exclusions, linked to research activities, in order to promote innovation and to respect scientific freedom. The first is general and concerns AI systems and AI models that are specifically developed and put into service for the sole purpose of scientific research and development, as well as the outputs of these systems or models (Article 2(6), AIA). This exclusion therefore only concerns the development and putting into service of AI models and systems, not their use: recital 22 confirms that an AI system used to conduct research and development remains subject to the AIA. The second exclusion concerns research, testing and development activities relating to AI systems and models themselves, but only before they are placed on the market or put into service (Article 2(8), AIA). An AI system that is subsequently put into service or on the market on the basis of these research activities will be fully subject to the AIA. That said, any research and development activity must be carried out in compliance with EU law and applicable ethical and professional standards. Furthermore, these two exclusions are without prejudice to the provisions of the AIA relating to sandboxing and real-life testing. 

Sandboxes and real-life testing

The AIA intends to foster the creation of “regulatory sandboxes”, alone or jointly, by competent authorities. These enable AI system development, training, testing and validation projects to be set up in a controlled environment and for a limited time before the systems concerned are put into service or placed on the market. The implementation of a project is subject to appropriate supervision, by means of a specific and documented plan, as well as an exit report and written proof of the activities successfully carried out in the sandbox. The competent authorities and participating providers have access to these reports and documents, which may also be made available to the public subject to the provider’s agreement. Projects are implemented under the responsibility of the provider, which remains liable towards third parties, but providers that observe the plan in good faith benefit from a moratorium on administrative fines by the competent authorities. The Commission must specify in an implementing act the details of these regulatory sandboxes, such as eligibility requirements, applicable conditions and procedures, etc. In particular, these conditions must guarantee free access for SMEs and start-ups, and enable providers to comply with their obligations to assess conformity and ensure proper compliance with codes of conduct. 

Interestingly, Article 59 of the AIA creates the possibility of re-using personal data, lawfully collected for other purposes, in the context of the development, training and testing of certain AI systems in a sandbox, for the purpose of safeguarding “important public interests” in the field of public security or health, preservation of the environment and biodiversity, climate change mitigation and adaptation, energy sustainability, security of transport and mobility systems, critical infrastructure (within the meaning of Article 2(4) of Directive (EU) 2022/2557 of the European Parliament and of the Council of 14 December 2022 on the resilience of critical entities and repealing Council Directive 2008/114/EC, O.J., 27 December 2022, L-333, pp.164–198) and the efficiency and quality of public administration and services. Strict conditions, in line with the principles of the GDPR, govern this processing of personal data. 

In addition to sandboxing procedures, Article 60 of the AIA authorises providers to carry out tests under real-life conditions, prior to entry into service or on the market, subject to a set of rigorous conditions including close control by the competent authority, which must approve the test plan in advance, and the informed consent of participants, who remain free to withdraw from the test at any time and request the deletion of their personal data. Providers may conduct these trials in cooperation with deployers, provided that they inform them in detail and enter into an agreement with them specifying their respective roles and responsibilities. 

3.7 Risk-based approach

Ambitions of the AIA

As stated in the first recital, the aim of the AIA is to harmonise the rules of the internal market, but also to promote the adoption of trustworthy artificial intelligence while guaranteeing a high level of protection for health, safety and fundamental rights. High ambitions have been assigned to the AIA: artificial intelligence is conceived as a technology that must be human-centric and serve as a tool for people to enhance their well-being, while respecting the Union’s values and fundamental freedoms (see recital 6, AIA). Within its scope, the AIA therefore adopts a risk-based approach, following what is currently a predominant international strategy to regulate AI systems in a proportionate and effective manner (M. Ebers, “Truly Risk-Based Regulation of Artificial Intelligence. How to Implement the EU’s AI Act”, pp.5–7). AI systems must therefore be viewed in relation to their context and the commensurate intensity and scope of the risks they may generate (see recital 26, AIA). Achieving coherent, reliable and human-centric artificial intelligence also requires taking into account, in the design and use of AI models, ethical rules and principles for trustworthy and sound AI, without prejudice to the provisions of the AIA. 

As an illustration, high-risk AI systems must be subject to a risk assessment, and an associated risk management system must address and mitigate identified risks throughout the lifecycle of the relevant AI system (Article 9, AIA). 

Critics of the AIA’s risk-based approach in scholarly literature argue that the AIA should be complemented with a truly rights-based approach in order to protect fundamental rights, but also that the risk assessment system under the AIA is incomplete, creates legal uncertainty, and lacks the empirical evidence needed to identify high-risk AI systems, whilst creating friction with existing regulatory frameworks. No doubt these observations will also feed the debate as businesses, stakeholders and regulators engage with each other and as the European Commission, supported by its newly established AI Office, starts working on guidelines, codes of conduct and similar interpretative documents.

Overview

The risk-based approach manifests itself in the provisions of the AIA in different ways. First, the AIA defines a scale of risks that distinguishes between unacceptable practices, high requirements for high-risk AI systems, and transparency obligations for certain AI systems whose level of risk is deemed to be lower, while providing specific consideration of the risks posed by certain general-purpose models or systems. Second, with regard to high-risk AI systems, the AIA acknowledges that a wide range of risks may arise since AI systems can be integrated into many different products and services. To guard against risks to fundamental rights, as well as to the safety and health of individuals, the AIA aims to cover from the outset the development, marketing and use of products and services using artificial intelligence, as well as AI systems as such. The regulatory approach is therefore very much inspired by the EU’s approach to product safety and attaches great importance to the role of the various players in the AI value chain.

Risk scale and classification of systems and models

Introduction

The Commission’s proposal distinguished four levels of risk: at the first level of the pyramid, systems deemed to be the least risky and exempt from any obligation; at the second level, systems with a moderate level of risk and subject essentially to transparency obligations; at the third level, high-risk systems subject to cumbersome and detailed rules and requirements; and at the last level of the pyramid, practices deemed to present unacceptable risks and therefore prohibited. The commercial emergence of generative AI in the course of 2022 somewhat disrupted that initial approach (see A. Strowel, A. Sachdev, “The EU draft AI Act within the ongoing debate on AI regulation”, in A. Richa, D. Canapa, “Aspects juridiques de l’intelligence artificielle“, op. cit., pp.1–11; T. Christakis, T. Karathanasis, “Tools for navigating the EU AI Act: (2) Visualisation Pyramid”, AI-Regulation Papers, 24-03-05, 8 March 2024, available at https://ai-regulation.com/visualisation-pyramid). Not only do these general-purpose AI models lend themselves to a wide range of uses, from the most to the least risky, but also, given their versatility, their actual level of risk appears indeterminate a priori: they can neither be systematically classified as high risk, nor considered as automatically presenting a low or moderate risk. The solution that emerged was to devote specific provisions to general-purpose AI models and, among these, to build a special regime for those presenting “systemic risks”. But the categories can sometimes overlap: a general-purpose AI model can qualify as a high-risk AI system, depending on its intended use, and at the same time present systemic risks if it meets the criteria defined by the AIA (see below). 

Classification of high-risk systems

There are two distinct routes for an AI system to qualify as high risk: either by reference to the EU harmonisation legislation set out in Annex 1 to the AIA, or by reference to a list of systems and purposes for the use of artificial intelligence set out in Annex 3 to the AIA. These annexes and the criteria defined by the AIA are supposed to enable a relatively “automatic” classification, which in any case does not require any designation by the competent authorities. 

High-risk AI systems linked to regulated products

Under the first method of classification, two conditions are required: (i) the AI system must be intended for use as a safety component of a product covered by a regulatory instrument referred to in Annex 1, or must itself constitute such a product; and (ii) this product or the AI system must, in accordance with the same legislation, be subject to a conformity assessment procedure by a third-party body with a view to being placed on the market or put into service. Annex 1 to the AIA contains a list of some 20 regulatory instruments covering a variety of fields, including machinery, toys, lifts, equipment and protective systems intended for use in potentially explosive atmospheres, radio equipment, pressure equipment, recreational craft equipment, cableway installations, appliances burning gaseous fuels, medical devices, in vitro diagnostic medical devices, automobiles and aviation. Once again, this classification by reference must serve the purpose of ensuring a high level of protection for the safety, health and fundamental rights of individuals: the digital components of such products, including AI systems, must present the same level of safety and compliance as the products themselves (see recital 47, AIA). 

Standalone AI systems

The other method of classification is for “standalone” AI systems, in the sense that they are not themselves products covered by the harmonisation legislation (nor are they safety components of such products). Under this method, any AI system can qualify as high risk, regardless of its integration with other products, on the basis of its intended purpose and use in certain areas that are defined in Annex 3 of the AIA. Eight different areas are listed, where AI systems are likely to have a high impact on individuals, even within the limits of what is authorised under applicable law: (i) biometrics; (ii) critical infrastructure; (iii) education and vocational training; (iv) employment, workforce management and access to self-employment; (v) access and entitlement to essential services and social benefits offered both by private and public entities; (vi) law enforcement activities; (vii) migration, asylum and border control management; (viii) administration of justice and democratic processes. Examples include remote biometric identification, biometric categorisation and emotion recognition systems whose use is lawful; systems used to determine access, admission or assignment to educational and vocational training establishments, for the assessment of learning or for the supervision of examinations; systems for the recruitment or selection of candidates for employment or for decisions on the promotion or dismissal of workers or the allocation of tasks; and systems for assessing eligibility for health care, creditworthiness and life and health insurance risks and premiums. The Commission is empowered to amend this list and the description of use cases to take account of novel use cases presenting risks greater than or equivalent to the current “high risk” classifications, in accordance with the procedure, criteria and conditions laid down in Article 7 of the AIA, which include, amongst others, the intended purpose and the extent of use of an AI system, its level of autonomy, and other aspects related to harm and possible mitigation thereof. These criteria do not appear to have been consistently or explicitly applied when identifying the areas listed in Annex 3, which seems rather to be the result of a political compromise (see M. Ebers, “Truly Risk-Based Regulation of Artificial Intelligence. How to Implement the EU’s AI Act”, pp.13–14, available for download at papers.ssrn.com). But it is unclear whether that opens up opportunities to challenge the list in Annex 3. This also exemplifies the current inconsistencies of the AIA’s risk-based approach. 

Possible exemption

Importantly, the classification of high-risk systems under Annex 3 works only as a presumption: an AI system may escape that classification where it does not “pose a significant risk of harm to the health, safety or fundamental rights of natural persons, including by not materially influencing the outcome of decision making” (Article 6(3), first subparagraph, AIA). The AIA goes into some detail about the relevant criteria. Thus, an AI system is always considered to be high risk where it performs profiling of natural persons (Article 6(3), third subparagraph, AIA). It should be remembered that profiling is defined in accordance with the GDPR. This may seem quite a rigid approach and does not account for the fact that profiling as such is not necessarily harmful and can even benefit individuals. Otherwise, four alternative criteria can be relied upon to argue that a system is not high risk; their common theme is that the AI system has no substantial impact on decision-making. The exemption applies where the AI system is intended to: (a) perform a “narrow procedural task” (for example, transforming unstructured data into structured data, classifying documents by category or detecting duplicates (see recital 53, AIA)); (b) “improve the outcome of a previously performed human activity” (for example, a system that enhances the editorial style of a document to give it a professional or academic tone (see recital 53, AIA)); (c) detect patterns or deviations in decision-making, without substituting for or influencing human evaluation; or (d) perform a task preparatory to an assessment relevant for the purposes of the use cases referred to in Annex 3 (for example, tools for indexing, searching, text and speech processing or systems for translating initial documents). The Commission may add, amend or delete these conditions (Article 6(6) and (7), AIA). In practice, the provider must document the reasons for which they regard the AI system as not being high risk. They must carry out this assessment before placing the AI system on the market or putting it into service. They must also make their documentation available to the competent authorities and register the AI system in question in the EU database (see below).  

General-purpose AI models and systemic risks

As mentioned, the AIA distinguishes two levels of risk among general-purpose AI models: some are subject only to the general obligations set out in Articles 53 and 54 of the AIA, while others present so-called “systemic” risks and must comply, in addition, with the rules referred to in Article 55 (however, it should be noted that, in accordance with the risk-based approach, whether or not it presents systemic risks, any general-purpose AI model, when combined with an AI system as a general-purpose AI system, is likely to constitute, at the same time, a high-risk AI system, depending on its purpose and the criteria applicable under Article 6(1) and (2), AIA, or even a prohibited AI practice within the meaning of Article 5). Systemic risk is defined as “a risk that is specific to the high-impact capabilities of general-purpose AI models, having a significant impact on the Union market due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole, that can be propagated at scale across the value chain” (Article 3(65), AIA; recitals 110 and 111, AIA provide further information on this concept). Article 51 of the AIA defines more precisely the conditions under which a general-purpose AI model is classified as presenting a systemic risk. In particular, such a model is presumed to have “high-impact capabilities” when the cumulative amount of computation used for its training, measured in floating point operations, is greater than 10^25. According to the Commission, at the time of writing, only OpenAI’s GPT-4 and DeepMind’s Gemini “probably” exceed this limit (see https://ec.europa.eu/commission/presscorner/detail/en/qanda_21_1683). The Commission can reassess this threshold and the benchmarks and indicators in light of evolving technological developments, by means of a delegated act. 
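The AIA does not prescribe how the cumulative training compute is to be estimated. Purely as an illustration, a commonly cited rule of thumb for dense transformer models approximates training compute as roughly six times the number of parameters multiplied by the number of training tokens; the sketch below uses that heuristic (an assumption of this example, not a rule laid down by the AIA) to compare a hypothetical model against the 10^25 FLOP presumption of Article 51.

```python
# Illustrative sketch only: the 6 * parameters * tokens approximation is a
# common heuristic for dense transformers, not a method required by the AI Act.
PRESUMPTION_THRESHOLD_FLOPS = 1e25   # Article 51(2) AIA presumption threshold

def estimated_training_flops(num_parameters: float, num_training_tokens: float) -> float:
    """Rough estimate of cumulative training compute for a dense transformer."""
    return 6.0 * num_parameters * num_training_tokens

def presumed_high_impact(num_parameters: float, num_training_tokens: float) -> bool:
    """True where the estimate exceeds the 10^25 FLOP presumption of high-impact capabilities."""
    return estimated_training_flops(num_parameters, num_training_tokens) > PRESUMPTION_THRESHOLD_FLOPS

# Hypothetical example: a model with 1e12 parameters trained on 1e13 tokens
# lands at roughly 6e25 FLOPs, i.e. above the presumption threshold.
print(presumed_high_impact(1e12, 1e13))   # True
```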

However, beyond this mathematical criterion, the classification of a model as being “systemic risk” may give rise to debate. Article 51 provides for two alternative conditions for classification as systemic risk: either the model has high-impact capabilities, i.e. capabilities equal to or greater than those of the most advanced models, assessed where appropriate on the basis of criteria other than the quantity of training calculations (i.e. according to “appropriate methodologies and technical tools, including indicators and benchmarks”); or the Commission issues a decision stating that a model has such capabilities, taking into account the criteria set out in Annex 13 to the AIA (these include the number of parameters, the quality and size of the dataset, the amount of computation used for training, the ability to adapt and learn new distinct tasks, and the number of users). The provider of a high-impact model must notify the Commission without delay, and the Commission may also issue an automatic designation if necessary. The AIA also allows the provider to put forward arguments to avoid designation as a systemic risk model, or to request that it be reassessed. 

Transparency

Providers of high-risk AI systems listed in Annex 3 must register themselves and their system in an EU database before the system is placed on the market or put into service, even where they consider that the system is not high risk. Annex 8 of the AIA lists the information that must be included in this database, which includes in particular the trade name of the AI system, a description of its intended purpose and its operating logic. The Commission must also maintain and publish a list of general-purpose AI models presenting a systemic risk (Article 52(6), AIA). 

AI value chain 

Legal certainty

As can be seen from the classification of high-risk AI systems and AI models with systemic risks, the risk-based approach underlying the AIA is not limited to addressing specific risks associated with the use of AI in a specific case, as would be the case for the processing of personal data under the GDPR. By way of comparison, data protection laws impose all compliance and diligence obligations on one single type of operator, the data controller. The latter is thought of in the abstract, as the entity that determines the purposes and means of a processing operation, resulting in an approach that is essentially causal and contextual: a myriad of operators may be involved in various capacities in a series of processing operations, for more or less different or joint purposes, and they will bear a variable degree of responsibility depending on the specific circumstances. Such an approach does not serve legal certainty, even if it may be appropriate in the context of data protection. 

On the contrary, the AIA leans towards product safety regulation, laying down a number of covered entities, regulated activities and concrete requirements to be met according to the estimated level of risk. The Commission’s proposal already argued that this approach should enhance legal certainty for designers and users of AI systems, preventing the emergence of barriers to the internal market on grounds of safety, health or the protection of fundamental rights. Ultimately, therefore, the AIA opts to make the provider responsible for placing a high-risk AI system on the market or putting it into service, “irrespective of whether that natural or legal person is the person who designed or developed the system” (see recital 79, AIA). 

Scope of the provider’s obligations

Generally speaking, the provider is the “first link” in an AI value chain. Their obligations are also particularly extensive. They must ensure that the high-risk AI system placed on the market is in compliance with the applicable requirements in terms of risk management, data governance, technical documentation, human oversight and robustness, in particular. They must also provide deployers with all the necessary mandatory information. They must ensure compliance with conformity assessment procedures and make all necessary information available to the competent authorities. In addition, once the product has been placed on the market, the provider remains responsible for the quality management system and the necessary corrective measures, as well as for cooperation with the competent authorities. 

Deployer’s duties

At the other end of the chain, the deployer’s main duties are to ensure that they use the AI system in accordance with the user manual, monitor its operation in order to be able to detect any risk, and more generally ensure human oversight by competent, trained people with the necessary authority and support. Deployers must also inform the persons concerned, including their own employees, that they are subject to the use of the high-risk AI system.

Representatives

Providers established in third countries must appoint, by means of a written mandate, a representative established in the EU. They must authorise the representative to act as an interlocutor for the competent authorities in all matters relating to compliance with the AIA, and carry out at least the tasks relating to conformity assessment, keeping of information and documents required by the competent authorities, and of course cooperate with such national competent authorities (Article 22, AIA). 

Importers and distributors

As a reminder, the importer is the person located or established in the Union who places on the market an AI system bearing the name or trademark of a person established in a third country, while the distributor makes an AI system available on the Union market, in the course of a commercial activity, whether in return for payment or free of charge. Subject to this distinction, their respective obligations largely converge (Articles 23 and 24, AIA). First, they are personally responsible for checking that the high-risk AI system has, before being placed on the market, passed the conformity assessment procedure or bears the required CE marking, that the technical documentation has been drawn up and that the system bears the appropriate name or trademark. Should they find that the system does not comply with the AIA, they must not place the AI system on the market until it has been brought into conformity. Similarly, if the relevant market surveillance authority finds that the system presents a risk to health, safety or fundamental rights, the importer and distributor must inform the provider (or the importer, as the case may be). Finally, they must cooperate with the relevant national competent authorities and provide them with all necessary information.

Contractual chains

As can be seen, the AIA defines the specific roles and obligations of all operators concerned throughout the AI value chain, on the basis that they will often find themselves in a situation of close interdependence. As a result, each of these operators must not only carefully analyse the duties incumbent upon them, but also ensure that they include in contracts with their business partners the appropriate commitments and clauses needed for compliance with their own obligations. For example, a provider who procures tools, services, components or processes from a third party and uses them or integrates them into its AI system must specify, in a written agreement with this third party, the capabilities, technical access and any other assistance required to enable the provider to comply with all its obligations under the AIA. The importance of these contractual “chains” in managing respective obligations along the AI value chain cannot be overstated.

Flexibility

The roles and obligations described in the AIA are not set in stone, but they can change based on the circumstances: in several cases, one of the operators may find itself having to take on all the corresponding obligations on its own (Article 25(1), AIA; see also recitals 83 and 84, AIA). 

This is the case for any distributor, importer, deployer or other third party who puts its name or trademark on a high-risk AI system already placed on the market or put into service: it is then considered to be a provider and is subject to the obligations laid down in Article 16 of the AIA, without prejudice, however, to contractual stipulations providing for a different allocation of obligations. A similar situation may arise where a high-risk AI system is a safety component of a product covered by the harmonisation legislation referred to in Annex 1: the manufacturer of that product will be considered to be the provider of the AI system if it affixes its name or trademark to it, at the same time as or after the product is placed on the market (Article 25(3), AIA).

Similarly, anyone who makes a substantial modification to a high-risk AI system already placed on the market or put into service, in such a way that it remains a high-risk system under Article 6 of the AIA, is considered to be a provider of that system. This notion of substantial modification implies a change that is not foreseen or planned in the initial conformity assessment and that has the effect of impairing the compliance of the high-risk AI system with the requirements of the AIA or leads to a change in the purpose for which the system was assessed (Article 3(23), AIA). 

Lastly, where an AI system, including a general-purpose AI system, has already been placed on the market but has not been classified as high risk, anyone who changes its intended purpose in such a way that it becomes a high-risk system will also be considered to be the provider of that system and will be subject to all the obligations referred to in Article 16. 

3.8 Prohibited practices

General points

Article 5 of the AIA prohibits eight practices and techniques in the field of artificial intelligence (see also recitals 29 to 45, AIA). These prohibitions are generally justified by the specific context or purpose of the AI system concerned and may therefore be accompanied by certain strictly defined exceptions. A priori, the interpretation of these rules and their exact scope will be a matter for the competent authorities. Attention should be paid to the fact that these prohibitions are among the first provisions of the AIA to become applicable. It is therefore essential to carry out an audit and evaluation of the tools and systems in place within an organisation in order to identify any practices that are explicitly prohibited (so that the relevant AI systems may be deprecated or modified accordingly), or AI systems that could potentially be interpreted as such and for which a more in-depth analysis may be necessary. In the limited context of this contribution, we will only briefly mention some of these prohibitions. 

Overview

Article 5 prohibits the placing on the market, putting into service and use of AI systems that could be described as “manipulative”, in the sense that they can alter a person’s behaviour, lead them to take a decision they would not otherwise have taken or cause them significant harm. This includes the use of subliminal or other deliberately deceptive or manipulative techniques, as well as the exploitation of vulnerabilities due to age, disability or the specific economic or social situation of a person. These include subliminal stimuli, whether audio, visual or video, or other forms of “dark patterns” or manipulation of free will through repeated and continuous exposure to certain types of content on social networks, or online recommendation systems, and so on. However, it remains to be seen how these provisions will be applied in practice (see for example R. J. Neuwirth, The EU Artificial Intelligence Act. Regulating Subliminal AI Systems, Routledge Research in the Law of Emerging Technologies, Routledge, 2023). According to recital 29, there is no need to demonstrate intent to distort behaviour or cause significant harm, provided such harm occurs. 

Also prohibited are biometric categorisation systems used to deduce or infer a person’s race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation, as well as emotion recognition systems in the workplace and in education, except for medical or safety reasons. It should be noted that biometric categorisation, remote biometric identification and emotion recognition systems, where not covered by the prohibitions in Article 5 of the AIA, nevertheless constitute high-risk AI systems, as standalone systems covered by Annex 3 of the AIA. 

Another important prohibition concerns real-time remote biometric identification in publicly accessible areas for law enforcement purposes, which is however permitted to the extent strictly necessary for certain purposes in connection with clearly and strictly defined categories of serious crime, and subject to compliance with a number of other substantive and procedural conditions, including, in particular, prior judicial authorisation. 

On the basis of the definitions of biometric verification, identification and categorisation (see earlier in this chapter), three types of AI systems based on the use of biometric data are defined to be either prohibited or authorised in certain very specific cases and subject to strict conditions. These are systems for recognising emotions, biometric categorisation and remote biometric identification (either in real time or a posteriori). 

Emotion recognition systems enable “the recognition or inference of emotions or intentions of natural persons on the basis of their biometric data” (Article 3(39), AIA and recitals 18 and 44). Recital 18 provides some indication of the type of emotions or intentions concerned (happiness, sadness, anger, surprise, disgust, embarrassment, excitement, shame, contempt, satisfaction and amusement), and specifies that these systems do not include tools for detecting the state of fatigue of pilots or drivers for the purpose of accident prevention, or the mere detection of readily apparent expressions, gestures or movements (moving the arms or head, speaking loudly or whispering, and so on) where these are not used to identify or infer emotions. 

Biometric categorisation systems include all systems for classifying people according to their biometric data into categories such as those listed above. However, the AIA excludes from this concept systems that are strictly ancillary to another commercial service and necessary for objective technical reasons, i.e. systems that cannot be used without the main service. Examples include filters used on online marketplaces or social networks to display a preview of a product, to help in the purchasing decision, or to add or modify photos or videos; their use appears fairly anecdotal compared with the abuses that the AIA is intended to prohibit. 

Remote biometric identification systems are intended to identify natural persons without their active participation, generally at a distance, by comparison with existing biometric databases, without distinguishing between the technology, processes or type of biometric data used for this purpose. Recital 17 specifies that this type of tool is generally used to facilitate the identification of individuals, either in real time or a posteriori. As mentioned above, so-called biometric verification systems, whose sole purpose is to confirm a person’s identity in order to grant access or unlock a device, are considered to have a much lesser impact on fundamental rights. They will therefore not be treated in the same way as biometric identification systems (see recital 17, AIA).

3.9 Horizontal rules: transparency, control of AI and right to an explanation

Systems subject to a transparency obligation

Article 50 provides for an obligation of transparency with regard to certain AI systems considered to present only a moderate risk, which can be mitigated by ensuring that the persons concerned are aware that they are dealing with an artificial intelligence system. These are, first, systems intended to interact directly with natural persons: they must be designed and developed in such a way that users are informed that they are interacting with an AI system, unless this is clear from the context and concrete circumstances, from the point of view of an ordinarily informed and reasonably attentive person. Secondly, providers of AI systems (including general-purpose AI systems) that generate synthetic audio, image, video or text content must ensure that they mark the output of these systems in a machine-readable format and make it identifiable as having been generated or manipulated by AI. The practical and operational implementation of this obligation will obviously require the development of solid and reliable technical solutions. Thirdly, deployers of biometric categorisation or emotion recognition systems must inform the people exposed to them about how the system works, and process personal data in compliance with the GDPR. Finally, the deployers of a system that generates or manipulates content constituting a “deep fake” must disclose that the content has been generated or manipulated by artificial intelligence. 
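The AIA leaves the choice of technical solution for machine-readable marking open; watermarking, fingerprinting and provenance metadata are commonly discussed options. Purely as a sketch of the metadata route, and without suggesting that this by itself satisfies Article 50 or any particular standard, the snippet below writes a small JSON provenance record alongside a generated file; the field names are invented for the example.

```python
# Illustration only: a sidecar JSON provenance record marking content as
# AI-generated. Field names are hypothetical, not drawn from the AIA or any standard.
import json
import hashlib
from datetime import datetime, timezone
from pathlib import Path

def write_provenance_sidecar(content_path: str, system_name: str) -> Path:
    content = Path(content_path).read_bytes()
    record = {
        "ai_generated": True,                        # machine-readable flag
        "generating_system": system_name,
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }
    sidecar = Path(str(content_path) + ".provenance.json")
    sidecar.write_text(json.dumps(record, indent=2))
    return sidecar

# Hypothetical usage:
# write_provenance_sidecar("generated_image.png", system_name="example-image-generator")
```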

3.10 Requirements for high-risk systems 

Introduction

The main specific requirements applicable to high-risk AI systems are set out in Articles 8 to 15 of the AIA. We will present them here briefly, without addressing the issue of conformity assessment and the applicable procedures. 

Risk management system

High-risk AI systems must be equipped with a risk management system (Article 9, AIA), including the identification and analysis of risks, in particular those that may arise when the system is used in accordance with its intended purpose as well as under reasonably foreseeable misuse, and the adoption of appropriate and targeted measures to manage those risks. Tests must be carried out to determine the most appropriate risk management measures before the product is placed on the market or put into service. In an attempt to lessen the already considerable EU compliance burden across multiple (and often overlapping) digital laws, providers already subject to risk management requirements under other provisions of Union law may integrate the requirements of the AIA into, or combine them with, the procedures required by those other provisions. 
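
As a purely illustrative reading of Article 9, the following Python sketch shows how a provider might record identified risks and the measures adopted to manage them. The field names, scoring scale and threshold are assumptions on our part, not requirements of the AIA.

```python
from dataclasses import dataclass, field


@dataclass
class Risk:
    description: str          # e.g. "misclassification under foreseeable misuse"
    source: str               # "intended purpose" or "reasonably foreseeable misuse"
    severity: int             # 1 (low) to 5 (high) -- illustrative scale
    likelihood: int           # 1 (rare) to 5 (frequent) -- illustrative scale
    mitigations: list[str] = field(default_factory=list)

    @property
    def residual_score(self) -> int:
        # Simple severity x likelihood score; the AIA does not prescribe any formula.
        return self.severity * self.likelihood


@dataclass
class RiskRegister:
    risks: list[Risk] = field(default_factory=list)

    def requiring_attention(self, threshold: int = 12) -> list[Risk]:
        """Return risks whose score exceeds an (arbitrary, illustrative) threshold."""
        return [r for r in self.risks if r.residual_score >= threshold]
```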

Data governance

Article 10 of the AIA sets out a number of rules and criteria concerning the data used to train AI models. Training, validation and testing datasets must be subject to appropriate data governance and management practices, including appropriate design choices and an examination for possible biases. The data must be relevant, sufficiently representative and, to the best extent possible, free of errors and complete in view of the intended purpose. It is important to note that this is an ongoing obligation which must be adhered to throughout the lifecycle of the relevant AI system and not just at its initial launch (see recital 67).
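
The following Python sketch, offered only by way of illustration, computes a few simple statistics (completeness, label balance, group representation) that could feed into the data governance practices required by Article 10. The column names and metrics are assumptions; such checks can only partially inform the documented governance process the AIA expects.

```python
import pandas as pd  # pip install pandas


def dataset_governance_report(df: pd.DataFrame, label_col: str, group_col: str) -> dict:
    """Run a few illustrative checks on a training dataset.

    The column names and metrics are assumptions, not AIA requirements:
    Article 10 calls for documented governance practices (design choices,
    bias examination, completeness, representativeness), which these simple
    statistics can only partially inform.
    """
    return {
        # Share of missing values per column (completeness).
        "missing_ratio": df.isna().mean().round(3).to_dict(),
        # Label distribution (possible class imbalance).
        "label_distribution": df[label_col].value_counts(normalize=True).round(3).to_dict(),
        # Representation of each demographic group (possible sampling bias).
        "group_representation": df[group_col].value_counts(normalize=True).round(3).to_dict(),
    }
```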

Technical documentation

The technical documentation for an AI system must be drawn up before it is placed on the market or put into service. It must enable compliance with the requirements of the AIA to be demonstrated. Annex 4 lists the elements that it must, as a minimum, contain. 

Logging

High-risk AI systems must allow automatic logging of events throughout their lifetime. In particular, the logging must be able to record elements relevant for identifying situations that may present a risk or lead to a substantial modification, as well as for monitoring the operation of the high-risk AI system by the deployer. Logging for certain biometric systems must offer additional, more specific functionalities. 
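
As an illustration of what such automatic logging might look like in practice, the sketch below appends structured, timestamped event records. The field names and log format are assumptions on our part; Article 12 does not prescribe any particular format.

```python
import json
import logging
import uuid
from datetime import datetime, timezone

logger = logging.getLogger("high_risk_ai.events")
logging.basicConfig(level=logging.INFO)


def log_event(event_type: str, details: dict) -> str:
    """Append one structured, timestamped event record and return its identifier.

    Field names are illustrative; the AIA requires automatic recording of
    events relevant to identifying risks and substantial modifications, but
    does not mandate a specific log schema.
    """
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event_type": event_type,
        "details": details,
    }
    logger.info(json.dumps(record))
    return record["event_id"]


# Example (hypothetical values):
# log_event("inference", {"input_ref": "doc-123", "confidence": 0.42, "flag": "low_confidence"})
```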

Transparency and provision of information to deployers

High-risk AI systems must be accompanied by a user manual to enable deployers to interpret the results and use them appropriately. In particular, these instructions must describe the purpose of the AI system, its level of accuracy, known robustness and cybersecurity indicators, known or foreseeable circumstances that may have an impact in this respect or that may give rise to risks, the system’s performance with regard to specific individuals or groups, human control measures, etc. 

Human control

The AI system must be designed to allow effective control by a natural person during its period of use, in order to prevent or minimise risks to health, safety and fundamental rights. The human control measures must be built into the AI system by the provider before it is placed on the market and/or be suitable for implementation by the deployer. The system must be provided to the deployer in such a way that the persons entrusted with its oversight are able to understand, interpret and intervene in the operation of the AI system. 
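
The following minimal sketch illustrates one common human-in-the-loop pattern, in which low-confidence outputs are escalated to a human reviewer rather than applied automatically. The confidence threshold and routing rule are assumptions; Article 14 requires oversight measures appropriate to the system, not this specific pattern.

```python
from typing import Callable


def apply_with_oversight(score: float,
                         automated_action: Callable[[], None],
                         escalate_to_human: Callable[[], None],
                         confidence_threshold: float = 0.9) -> str:
    """Route a decision either to automation or to a human reviewer.

    Deliberately simple sketch: the threshold and the two callables are
    assumptions, chosen only to illustrate that a person can intervene in
    the system's operation before an output takes effect.
    """
    if score >= confidence_threshold:
        automated_action()
        return "automated"
    escalate_to_human()
    return "escalated_to_human"
```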

Accuracy, robustness and cybersecurity

Article 15 sets out certain requirements in terms of resilience, accuracy, resistance to attacks and vulnerabilities, etc. 

3.11 Requirements for general-purpose AI models (with systemic risks)

Requirements for general-purpose AI models

The AIA primarily requires providers of general-purpose AI models to prepare and communicate certain information to competent authorities and downstream providers. Technical documentation of the model and its training, testing and evaluation process must be prepared and maintained, and provided to the AI Office and competent national authorities on request. Annex 11 of the AIA details the information that must be included in this technical documentation. In addition, downstream providers must be able to access documentation enabling them to integrate the general-purpose AI model into their AI systems, giving them a clear understanding of the model’s capabilities and limitations and enabling them to comply with their obligations. Annex 12 of the AIA specifies the elements that must be included in this documentation.

In addition, providers of general-purpose AI models must put in place a policy to comply with EU copyright and related rights law, including identifying and respecting reservations of rights by which rightholders opt out of the text and data mining (web-scraping) of their copyright-protected content (see Article 4(3) of the DSM Directive). Interestingly, this may introduce another element of extra-territoriality into the AIA, given that some non-EU copyright laws, such as those of the US and UK, do not contain such a specific reservation of rights. 
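
By way of illustration, the sketch below checks one widely used machine-readable opt-out signal (robots.txt) before scraping a page. The crawler name and URL are hypothetical, and a robots.txt check alone would not establish compliance, since rights reservations under Article 4(3) of the DSM Directive can be expressed in other machine-readable ways.

```python
from urllib import robotparser
from urllib.parse import urlparse


def crawl_permitted(page_url: str, user_agent: str = "ExampleAICrawler") -> bool:
    """Check one machine-readable opt-out signal (robots.txt) before scraping.

    Illustrative only: rights holders may reserve text-and-data-mining rights
    through other machine-readable means, so this check alone does not
    establish compliance with the provider's copyright policy obligations.
    """
    parts = urlparse(page_url)
    rp = robotparser.RobotFileParser()
    rp.set_url(f"{parts.scheme}://{parts.netloc}/robots.txt")
    rp.read()
    return rp.can_fetch(user_agent, page_url)


# Example (hypothetical URL and crawler name):
# crawl_permitted("https://example.com/articles/1", "ExampleAICrawler")
```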

Finally, providers established in a third country must appoint an authorised representative established in the EU (Article 54, AIA). 

We have already commented at length above on the relevant exemption from some of these obligations for models published under a free and open-source licence. However, this exemption does not apply if the general-purpose AI model presents a systemic risk. 

Requirements for models presenting systemic risks

Article 55 of the AIA sets out only a few fairly general obligations for providers of so-called systemic-risk models: they must mainly evaluate their models on the basis of standardised protocols and tools reflecting the state of the art, assess and mitigate any systemic risks at EU level, and ensure an appropriate level of cybersecurity for the model and its physical infrastructure. In practice, however, it can be expected that the AI Office will facilitate codes of practice, compliance with which will lead to a presumption of compliance with these obligations. The drawing up of these codes of practice is governed by Article 56 of the AIA. 
