March 2024

EU Artificial Intelligence Act: different rules for different risk levels

On March 13, the European Parliament approved the Regulation laying down harmonised rules on artificial intelligence (Artificial Intelligence Act – AI Act). The Regulation will enter into force on the twentieth day following its publication in the Official Journal of the European Union, i.e. most likely this May, and will apply from 24 months after its entry into force. Shorter deadlines apply to some elements: 6 months for the prohibitions of certain AI practices, and 12 months for the provisions concerning notifying authorities and notified bodies, governance, general-purpose AI models, confidentiality and penalties. A longer deadline of 36 months applies to high-risk AI systems covered by the Union harmonisation legislation listed in Annex I.

Subject matter. The purpose of the Regulation is to promote the uptake of human-centric and trustworthy artificial intelligence, while ensuring a high level of protection of health, safety, fundamental rights and the environment against the harmful effects of AI systems in the Union, as well as supporting innovation. Specifically, the Regulation lays down:

 – harmonised rules for the placing on the market, the putting into service and the use of artificial intelligence systems (‘AI systems’) in the Union;

 – prohibitions of certain artificial intelligence practices;

 – specific requirements for high-risk AI systems and obligations for operators of such systems;

 – harmonised transparency rules for certain AI systems;

 – harmonised rules for the placing on the market of general-purpose AI models;

 – rules on market monitoring, market surveillance, governance and enforcement;

 – measures to support innovation, with a particular focus on SMEs, including start-ups.

Scope. This Regulation applies to:

  1. providers placing on the market or putting into service AI systems, or placing on the market general-purpose AI models, in the Union, irrespective of whether those providers are established or located within the Union or in a third country;
  2. deployers of AI systems that have their place of establishment or who are located within the Union;
  3. providers and deployers of AI systems that have their place of establishment or who are located in a third country, where the output produced by the system is used in the Union;
  4. importers and distributors of AI systems;
  5. product manufacturers placing on the market or putting into service an AI system together with their product and under their own name or trademark;
  6. authorised representatives of providers which are not established in the Union;
  7. affected persons that are located in the Union.

Definitions. For the purposes of this Regulation, the following definitions apply.

‘AI system’ means a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments.

‘Provider’ means a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model, or that has an AI system or a general-purpose AI model developed, and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge.

‘Deployer’ means a natural or legal person, public authority, agency or other body using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity.

‘General-purpose AI model’ means an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks, regardless of the way the model is placed on the market, and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market.

The new rules establish obligations for providers and deployers depending on the level of risk posed by the AI system. Under the Regulation, AI systems are divided into four main categories according to the potential risk they pose to society.

Unacceptable risk: Prohibited AI practices (Chapter II, Art. 5)

The Regulation prohibits the following AI practices:

the placing on the market, putting into service or use of an AI system that

  • deploys subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques,
  • exploits any of the vulnerabilities of a person or a specific group of persons due to their age, disability or a specific social or economic situation,
  • categorises natural persons individually based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation,
  • evaluates or classifies natural persons or groups of persons over a certain period of time based on their social behaviour or known, inferred or predicted personal or personality characteristics, with the social score leading to detrimental or unfavourable treatment;

the use of

  • ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement, unless and insofar as such use is strictly necessary for one of the following objectives:
    • (i) the targeted search for specific victims of abduction, trafficking in human beings and sexual exploitation of human beings, as well as the search for missing persons;
    • (ii) the prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons, or of a genuine and present or genuine and foreseeable threat of a terrorist attack;
    • (iii) the localisation or identification of a person suspected of having committed a criminal offence, for the purpose of conducting a criminal investigation or prosecution or executing a criminal penalty for offences referred to in Annex II and punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least four years.

Each use of a ‘real-time’ remote biometric identification system in publicly accessible spaces for the purpose of law enforcement should be subject to an express and specific authorisation by a judicial authority or by an independent administrative authority, whose decision is binding, of the Member State in which the use is to take place.

the placing on the market, putting into service for this specific purpose, or use of an AI system

  • for making risk assessments of natural persons in order to assess or predict the risk of a natural person to commit a criminal offence, based solely on the profiling of a natural person or on assessing their personality traits and characteristics;
  • that creates or expands facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage;
  • to infer emotions of a natural person in the areas of the workplace and education institutions, except where the AI system is intended to be put in place or into the market for medical or safety reasons.

High-risk AI systems (Chapter III)

AI use cases that can pose serious risks to health, safety or fundamental rights are classified as high-risk. The Regulation distinguishes between two categories of high-risk AI systems.

  1. Irrespective of whether an AI system is placed on the market or put into service independently of the relevant products, it shall be considered high-risk where both of the following conditions are fulfilled:
    • the AI system is intended to be used as a safety component of a product, or is itself a product, covered by the Union harmonisation legislation[1] listed in Annex I (e.g. medical devices, toys, aviation, cars, lifts…);
    • the product whose safety component is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment with a view to its placing on the market or putting into service pursuant to the Union harmonisation legislation listed in Annex I.

2. In addition, AI systems deployed in eight specific areas identified in Annex III, which the Commission could update as necessary through delegated acts, shall also be considered high-risk:

  • Non-banned biometrics: Remote biometric identification systems, excluding biometric verification systems that confirm a person is who they claim to be. Biometric categorisation systems inferring sensitive or protected attributes or characteristics. Emotion recognition systems.
  • Critical infrastructure: Safety components in the management and operation of critical digital infrastructure, road traffic and the supply of water, gas, heating and electricity.
  • Education and vocational training: AI systems determining access, admission or assignment to educational and vocational training institutions at all levels. Evaluating learning outcomes, including those used to steer the student’s learning process. Assessing the appropriate level of education for an individual. Monitoring and detecting prohibited student behaviour during tests.
  • Employment, workers management and access to self-employment: AI systems used for recruitment or selection, particularly targeted job ads, analysing and filtering applications, and evaluating candidates. Promotion and termination of contracts, allocating tasks based on personality traits or characteristics and behaviour, and monitoring and evaluating performance.
  • Access to and enjoyment of essential public and private services: AI systems used by public authorities for assessing eligibility to benefits and services, including their allocation, reduction, revocation or recovery. Evaluating creditworthiness, except when detecting financial fraud. Evaluating and classifying emergency calls, including dispatch prioritisation for police, firefighters and medical aid, and urgent patient triage services. Risk assessments and pricing in health and life insurance.
  • Law enforcement: AI systems used to assess an individual’s risk of becoming a crime victim. Polygraphs. Evaluating the reliability of evidence during criminal investigations or prosecutions. Assessing an individual’s risk of offending or re-offending not solely based on profiling, or assessing personality traits and characteristics or past criminal behaviour. Profiling in the course of the detection, investigation or prosecution of criminal offences.
  • Migration, asylum and border control management: Polygraphs. Assessments of irregular migration or health risks. Examination of applications for asylum, visas and residence permits, and associated complaints related to eligibility. Detecting, recognising or identifying individuals, except for verifying travel documents.
  • Administration of justice and democratic processes: AI systems used in researching and interpreting facts and applying the law to concrete facts, or used in alternative dispute resolution. Influencing the outcomes of elections and referenda or voting behaviour, excluding outputs that do not directly interact with people, such as tools used to organise, optimise and structure political campaigns.

Exceptionally, AI systems deployed in these eight areas shall not be considered high-risk if they do not pose a significant risk of harm to the health, safety or fundamental rights of natural persons, including by not materially influencing the outcome of decision-making. This shall be the case if one or more of the following criteria are fulfilled:

  • (a) the AI system is intended to perform a narrow procedural task;
  • (b) the AI system is intended to improve the result of a previously completed human activity;
  • (c) the AI system is intended to detect decision-making patterns or deviations from prior decision-making patterns and is not meant to replace or influence the previously completed human assessment, without proper human review; or
  • (d) the AI system is intended to perform a preparatory task to an assessment relevant for the purpose of the use cases listed in Annex III.

However, an AI system referred to in Annex III shall always be considered high-risk where it performs profiling of natural persons.

The Commission shall, after consulting the AI Board and no later than 18 months after the entry into force of the Regulation, provide guidelines specifying the practical implementation of this classification, completed by a comprehensive list of practical examples of high-risk and non-high-risk use cases of AI systems, in accordance with the conditions set out in the Regulation.

A provider who considers that an AI system referred to in Annex III is not high-risk shall document its assessment before that system is placed on the market or put into service. Such a provider shall be subject to the registration obligation set out in the Regulation. Upon request of national competent authorities, the provider shall provide the documentation of the assessment.

Requirements for high-risk AI systems (Section 2)

High-risk AI systems shall comply with the requirements established in this Section, taking into account their intended purpose as well as the generally acknowledged state of the art on AI and AI-related technologies. Where a product contains an AI system to which the requirements of this Regulation as well as the requirements of the Union harmonisation legislation apply, providers shall be responsible for ensuring that their product is fully compliant with all applicable requirements under that legislation. In order to ensure consistency, avoid duplication and minimise additional burdens, providers shall have a choice of integrating, as appropriate, the necessary testing and reporting processes, information and documentation they provide with regard to their product into documentation and procedures that already exist and are required under the Union harmonisation legislation. Testing of high-risk AI systems shall be performed, as appropriate, at any point in time throughout the development process, and, in any event, prior to the placing on the market or the putting into service.

High-risk AI systems shall comply with the following requirements:

  • a risk management system shall be established, implemented, documented and maintained as a continuous iterative process planned and run throughout the entire lifecycle of a high-risk AI system, requiring regular systematic review and updating (Article 9);
  • appropriate data governance and management practices shall apply to training, validation and testing data sets, appropriate for the intended purpose of the AI system (Article 10);
  • the technical documentation of a high-risk AI system shall be drawn up before that system is placed on the market or put into service and shall be kept up to date (Article 11);
  • record-keeping shall be enabled by technically allowing for the automatic recording of events (‘logs’) over the duration of the lifetime of the system, in order to ensure a level of traceability of the AI system’s functioning that is appropriate to the intended purpose of the system (Article 12; a minimal logging sketch follows this list);
  • transparency and provision of information shall enable deployers to interpret the system’s output and use it appropriately (Article 13);
  • human oversight shall aim at preventing or minimising the risks to health, safety or fundamental rights that may emerge when a high-risk AI system is used in accordance with its intended purpose or under conditions of reasonably foreseeable misuse (Article 14);
  • an appropriate level of accuracy, robustness and cybersecurity shall be ensured throughout the system’s lifecycle (Article 15).
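
By way of illustration only: one way a provider might approach the record-keeping requirement of Article 12 is to emit timestamped, machine-readable event records for each use of the system. The minimal Python sketch below is an assumption-laden illustration, not a mechanism prescribed by the Regulation; the file name and event fields are hypothetical.

    import json
    import logging
    from datetime import datetime, timezone

    # Append-only, machine-readable audit log (JSON Lines format), supporting
    # the traceability aim of Article 12. File name and fields are hypothetical.
    logger = logging.getLogger("hr_ai_system.audit")
    handler = logging.FileHandler("audit_log.jsonl")
    handler.setFormatter(logging.Formatter("%(message)s"))
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)

    def log_inference_event(system_id: str, input_ref: str,
                            output_ref: str, overseer: str) -> None:
        """Record one use of the high-risk AI system as one JSON line."""
        event = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "system_id": system_id,    # identifies the deployed system
            "input_ref": input_ref,    # reference to the input data used
            "output_ref": output_ref,  # reference to the output produced
            "overseer": overseer,      # person exercising human oversight
        }
        logger.info(json.dumps(event))

    log_inference_event("cv-screening-v2", "application-8841",
                        "score-0.73", "hr-reviewer-17")

Such logs also matter for deployers, who must retain automatically generated logs for a period appropriate to the intended purpose of the system and for at least six months (see the deployer obligations below).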

Obligations of providers of high-risk AI systems (Section 3, Articles 16-23)

A natural or legal person, defined as the provider, takes responsibility for the placing on the market or putting into service of a high-risk AI system, regardless of whether that person is the one who designed or developed the system. Accordingly, the Regulation obliges providers of high-risk AI systems to:

  • ensure that their high-risk AI systems are compliant with the requirements set out in the Regulation (Section 2);
  • indicate their name, registered trade name or registered trade mark, and the address at which they can be contacted, on the high-risk AI system or, where that is not possible, on its packaging or its accompanying documentation, as applicable;
  • have a quality management system in place that ensures compliance with this Regulation, which shall be documented in a systematic and orderly manner in the form of written policies, procedures and instructions (Article 17);
  • keep at the disposal of the national competent authorities the documentation concerning the quality management system, the changes approved by notified bodies where applicable, the decisions and other documents issued by notified bodies, and the EU declaration of conformity (Article 18);
  • keep the logs automatically generated by their high-risk AI systems, to the extent such logs are under their control (Article 20);
  • ensure that the high-risk AI system undergoes the relevant conformity assessment procedure, prior to its placing on the market or putting into service (Article 43);
  • draw up an EU declaration of conformity for each high-risk AI system, stating that the system meets the requirements set out in Section 2, and keep it at the disposal of the national competent authorities for 10 years after the high-risk AI system has been placed on the market or put into service (Article 48);
  • affix the CE marking to the high-risk AI system to indicate conformity with this Regulation; where high-risk AI systems are subject to other Union law which also provides for the affixing of the CE marking, the CE marking shall indicate that the system also fulfils the requirements of that other law (Article 49);
  • before placing on the market or putting into service a high-risk AI system, register themselves and their system in the EU database (Article 51);
  • take the necessary corrective actions to bring a high-risk AI system into conformity, to withdraw it, to disable it or to recall it, as appropriate, if they consider or have reason to consider that the system is not in conformity with this Regulation, and inform the distributors and, where applicable, the deployers, the authorised representative and importers (Article 21);
  • upon a reasoned request of a national competent authority, demonstrate the conformity of the high-risk AI system with the requirements set out in Section 2;
  • ensure that the high-risk AI system complies with accessibility requirements, in accordance with Directive 2019/882 on accessibility requirements for products and services and Directive 2016/2102 on the accessibility of the websites and mobile applications of public sector bodies.

In light of the nature and complexity of the value chain for AI systems, the Regulation clarifies the role and the specific obligations of relevant operators along the value chain, such as importers and distributors, who may contribute to the development of AI systems (Articles 23 and 24).

Obligations of deployers of high-risk AI systems (Section 3, Article 26)

Deployers of high-risk AI systems shall:

  • take appropriate technical and organisational measures to ensure they use such systems in accordance with the instructions for use accompanying the systems,
  • assign human oversight to natural persons who have the necessary competence, training and authority, as well as the necessary support,
  • ensure that input data is relevant and sufficiently representative in view of the intended purpose of the high-risk AI system (to the extent they exercise control over the input data),
  • monitor the operation of the high-risk AI system on the basis of the instructions for use and, where relevant, inform providers,
  • without undue delay, inform the provider or distributor and the relevant market surveillance authority, and suspend the use of the system, where they have reason to consider that use of the high-risk AI system in accordance with the instructions may result in that system presenting a risk within the meaning of Article 79(1),
  • where they have identified a serious incident, they shall also immediately inform first the provider, and then the importer or distributor and the relevant market surveillance authorities,
  • keep the logs automatically generated by that high-risk AI system to the extent such logs are under their control, for a period appropriate to the intended purpose of the high-risk AI system, of at least six months, unless provided otherwise in applicable Union or national law, in particular in Union law on the protection of personal data,
  • before putting into service or using a high-risk AI system at the workplace, deployers who are employers shall inform workers’ representatives and the affected workers that they will be subject to the use of the high-risk AI system,
  • cooperate with the relevant competent authorities in any action those authorities take in relation to the high-risk AI system in order to implement this Regulation.

Limited risk (Chapter IV)

Limited risks are associated with the transparency obligations for providers and deployers of certain AI systems and general-purpose AI models.

Providers shall ensure that AI systems intended to interact directly with natural persons are designed and developed in such a way that the natural persons concerned are informed that they are interacting with an AI system, unless this is obvious from the point of view of a natural person who is reasonably well-informed, observant and circumspect, taking into account the circumstances and the context of use.

Providers of AI systems, including general-purpose AI systems, generating synthetic audio, image, video or text content, shall ensure that

  • the outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated (a minimal illustrative sketch follows this list),
  • their technical solutions are effective, interoperable, robust and reliable as far as this is technically feasible, taking into account the specificities and limitations of various types of content, the costs of implementation and the generally acknowledged state of the art, as may be reflected in relevant technical standards.
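
By way of illustration, and only as one of many conceivable technical solutions: a generated image could carry a machine-readable provenance tag in its file metadata. The sketch below uses the Pillow library and a hypothetical tag name; plain metadata is trivially stripped, so in practice robust, interoperable schemes such as standardised content provenance metadata or watermarking would likely be needed to meet the criteria above.

    from PIL import Image
    from PIL.PngImagePlugin import PngInfo

    def mark_as_ai_generated(image: Image.Image, path: str) -> None:
        """Save an image with a machine-readable 'AI-generated' tag."""
        meta = PngInfo()
        meta.add_text("ai-generated", "true")           # hypothetical tag name
        meta.add_text("generator", "example-model-v1")  # hypothetical identifier
        image.save(path, pnginfo=meta)

    def is_marked_ai_generated(path: str) -> bool:
        """Detect the tag when the content is inspected downstream."""
        with Image.open(path) as im:
            return im.text.get("ai-generated") == "true"

    img = Image.new("RGB", (64, 64), "white")  # stand-in for generated content
    mark_as_ai_generated(img, "output.png")
    print(is_marked_ai_generated("output.png"))  # True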

Deployers of an AI system that generates or manipulates image, audio or video content constituting a deep fake, shall disclose that the content has been artificially generated or manipulated. Where the content forms part of an evidently artistic, creative, satirical, fictional or analogous work or programme, the transparency obligations set out in this paragraph are limited to disclosure of the existence of such generated or manipulated content in an appropriate manner that does not hamper the display or enjoyment of the work.

Deployers of an AI system that generates or manipulates text which is published with the purpose of informing the public on matters of public interest shall disclose that the text has been artificially generated or manipulated.

Minimal or no risk

The AI Act does not introduce rules for AI deemed to pose minimal or no risk. The vast majority of AI systems currently used in the EU fall into this category; it includes applications such as AI-enabled video games or spam filters.

General-purpose AI models

The notion of general-purpose AI models is clearly defined and set apart from the notion of AI systems to enable legal certainty. These models are typically trained on large amounts of data, through various methods such as self-supervised, unsupervised or reinforcement learning. General-purpose AI models may be placed on the market in various ways, including through libraries, application programming interfaces (APIs), as direct downloads, or as physical copies. These models may be further modified or fine-tuned into new models. Large generative AI models are a typical example of a general-purpose AI model, given that they allow for flexible generation of content (such as text, audio, images or video) that can readily accommodate a wide range of distinct tasks.

Although AI models are essential components of AI systems, they do not constitute AI systems on their own. AI models require the addition of further components, such as a user interface, to become AI systems. AI models are typically integrated into and form part of AI systems. When a general-purpose AI model is integrated into or forms part of an AI system, that system should be considered a general-purpose AI system where, due to this integration, it has the capability to serve a variety of purposes. A general-purpose AI system can be used directly, or it may be integrated into other AI systems.

The Regulation provides specific rules for general-purpose AI models and for general-purpose AI models that pose systemic risks; these rules should also apply when such models are integrated into or form part of an AI system.

A general-purpose AI model shall be classified as a general-purpose AI model with systemic risk if it meets any of the following criteria:

(a) it has high-impact capabilities, evaluated on the basis of appropriate technical tools and methodologies, including indicators and benchmarks. High-impact capabilities are presumed when the cumulative amount of computation used for the model’s training, measured in floating point operations (FLOPs), is greater than 10^25 (a threshold the Commission may amend by delegated acts). An illustrative order-of-magnitude sketch follows point (b).

(b) it has capabilities or impact equivalent to those set out in point (a), based on a decision of the Commission, ex officio or following a qualified alert from the scientific panel.
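
To give a sense of scale for the 10^25 FLOP presumption in point (a): a widely used back-of-the-envelope heuristic, which is not part of the Regulation, estimates the training compute of a dense transformer model as roughly 6 FLOPs per parameter per training token. A minimal sketch under that assumption:

    # Heuristic: training compute of a dense transformer is roughly
    # 6 * parameters * training tokens. The heuristic is an assumption;
    # only the 1e25 FLOP threshold itself comes from the Regulation.
    SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

    def training_flops(n_parameters: float, n_tokens: float) -> float:
        return 6.0 * n_parameters * n_tokens

    def presumed_high_impact(n_parameters: float, n_tokens: float) -> bool:
        return training_flops(n_parameters, n_tokens) > SYSTEMIC_RISK_THRESHOLD_FLOPS

    # A hypothetical 175-billion-parameter model trained on 300 billion tokens:
    print(f"{training_flops(175e9, 300e9):.2e}")  # ~3.15e+23 FLOPs
    print(presumed_high_impact(175e9, 300e9))     # False: well below 1e25

On this heuristic, the presumption would be reached only by models with hundreds of billions of parameters trained on trillions of tokens, i.e. the very largest models on the market.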

The Commission may designate a general-purpose AI model as presenting systemic risk on the basis of a notification by the relevant provider that its model meets the requirements regarding high-impact capabilities, ex officio, or following a qualified alert from the scientific panel. In any case, the Commission shall ensure that a list of general-purpose AI models with systemic risk is published and shall keep that list up to date.

Obligations for providers of general-purpose AI models

Providers of general-purpose AI models shall:

(a) draw up and keep up to date the technical documentation of the model, including its training and testing process and the results of its evaluation, which shall contain, at a minimum, the elements set out in Annex IXa, for the purpose of providing it, upon request, to the AI Office and the national competent authorities;

(b) draw up, keep up to date and make available information and documentation to providers of AI systems (downstream providers) who intend to integrate the general-purpose AI model into their AI systems. Without prejudice to the need to respect and protect intellectual property rights and confidential business information or trade secrets in accordance with Union and national law, the information and documentation shall:

(i) enable providers of AI systems to have a good understanding of the capabilities and limitations of the general-purpose AI model and to comply with their obligations pursuant to this Regulation; and

(ii) contain, at a minimum, the elements set out in Annex IXb.

The exception: the obligations in points (a) and (b) shall not apply to providers of AI models that are made accessible to the public under a free and open licence that allows for the access, usage, modification and distribution of the model, and whose parameters, including the weights, the information on the model architecture and the information on model usage, are made publicly available. This exception shall not apply to general-purpose AI models with systemic risk.

(c) put in place a policy to comply with Union copyright law, in particular to identify and respect, including through state-of-the-art technologies, the reservations of rights expressed pursuant to Article 4(3) of Directive (EU) 2019/790[2];

(d) draw up and make publicly available a sufficiently detailed summary about the content used for training of the general-purpose AI model, according to a template provided by the AI Office.

General-purpose models, in particular large generative models capable of generating text, images and other content, present unique innovation opportunities but also challenges for artists, authors and other creators, and for the way their creative content is created, distributed, used and consumed. The development and training of such models require access to vast amounts of text, images, videos and other data. Text and data mining techniques may be used extensively in this context for the retrieval and analysis of such content, which may be protected by copyright and related rights. Any use of copyright-protected content requires the authorisation of the rightholder concerned unless relevant copyright exceptions and limitations apply. Directive (EU) 2019/790 introduced exceptions and limitations allowing reproductions and extractions of works or other subject matter for the purposes of text and data mining, under certain conditions. Under these rules, rightholders may choose to reserve their rights over their works or other subject matter to prevent text and data mining, unless this is done for the purposes of scientific research. Where the right to opt out has been expressly reserved in an appropriate manner, providers of general-purpose AI models need to obtain an authorisation from rightholders if they want to carry out text and data mining over such works.

The obligations for providers of general-purpose AI models shall apply once the models are placed on the market, including when a provider integrates its own model into its own AI system that is made available on the market or put into service. In contrast, these obligations do not apply when a provider’s own model is used for purely internal processes that are not essential for providing a product or a service to third parties and the rights of natural persons are not affected.

Obligations for providers of general-purpose AI models with systemic risk

Considering their potential significant negative effects, general-purpose AI models with systemic risk shall always be subject to the relevant obligations under this Regulation.

There are additional requirements for models with systemic risk, i.e. obligations of their providers to:

(a) perform model evaluation in accordance with standardised protocols and tools reflecting the state of the art, including conducting and documenting adversarial testing of the model with a view to identifying and mitigating systemic risks;

(b) assess and mitigate possible systemic risks at Union level, including their sources, that may stem from the development, placing on the market, or use of general-purpose AI models with systemic risk;

(c) keep track of, document and report without undue delay to the AI Office and, as appropriate, to national competent authorities, relevant information about serious incidents and possible corrective measures to address them;

(d) ensure an adequate level of cybersecurity protection for the general-purpose AI model with systemic risk and the physical infrastructure of the model.

[1] Community legislation harmonising the conditions for the marketing of products: Decision No 768/2008/EC of the European Parliament and of the Council of 9 July 2008 on a common framework for the marketing of products, and repealing Council Decision 93/465/EEC.
[2] Directive (EU) 2019/790 of the European Parliament and of the Council of 17 April 2019 on copyright and related rights in the Digital Single Market and amending Directives 96/9/EC and 2001/29/EC.