Final Draft of EU AI Act Leaked
Friday, February 2, 2024

On January 22, 2024, a draft of the final text of the EU Artificial Intelligence Act (“AI Act”) was leaked to the public. The leaked text substantially diverges from the original proposal by the European Commission, which dates back to 2021. The AI Act includes elements from both the European Parliament’s and the Council’s proposals.

Key Definitions

  • “AI system” is defined as “a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.” This follows the definition proposed by the European Parliament, which is aligned with the Organization for Economic Co-operation and Development’s definition of AI.
  • “General-purpose AI system” is separately defined under the AI Act as an “AI system which is based on a general-purpose AI model, that has the capability to serve a variety of purposes, both for direct use as well as for integration in other AI systems.”
  • An AI “provider” is defined as the entity that “develops an AI system or a general purpose AI model or that has an AI system or a general purpose AI model developed and places them on the market or puts the system into service under its own name or trademark, whether for payment or free of charge.” Providers will be subject to the majority of the AI Act’s requirements.
  • A “deployer” is defined as an entity under whose authority an AI system is used. Deployers will be subject to a more limited set of requirements under the AI Act.

Classification of AI Systems

The AI Act will introduce a risk-based legal framework for AI in the European Union that classifies AI systems as follows:

  • Prohibited AI Systems. AI systems that present unacceptable risks to the fundamental rights of individuals would be prohibited under the AI Act. Examples include AI systems that are used for social scoring based on social behavior or personal characteristics; AI systems designed to exploit vulnerabilities, resulting in significant harm and the material distortion of behavior; and AI systems that engage in the untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases.
  • High Risk AI Systems. AI systems that present a high risk to the rights and freedoms of individuals will be subject to the most stringent rules under the AI Act.
  • Transparency Risks. AI systems that are not high-risk but pose transparency risks will be subject to specific transparency requirements under the AI Act. Examples include AI systems that are intended to directly interact with individuals and act like a human, or AI systems designed to generate content (e.g., to prepare news articles).

In addition to the above categories of AI systems, the AI Act will impose specific obligations on providers of generative AI models on which general purpose AI systems, like ChatGPT, are based (e.g., an obligation to make a summary of the content used to train the models publicly available). Providers of generative AI models that present a systemic risk will be subject to additional, more stringent, requirements, such as an obligation to ensure an adequate level of cybersecurity protection and to assess and mitigate possible systemic risks at an EU level.

High-Risk AI Systems

The AI Act divides high-risk AI systems into two subsets:

  • Annex II of the AI Act (EU Harmonization Legislation): Annex II of the AI Act sets forth AI systems considered to be high-risk because they are covered by certain EU harmonization legislation. An AI system in this category will be considered high-risk when 1) it is intended to be used as a safety component of a product, or the AI system is itself a product covered by the EU harmonization legislation; and 2) the product or system has to undergo a third-party conformity assessment under the EU harmonization legislation. The list under Annex II is fairly long; it includes legislation on matters such as the safety of toys, machinery, radio equipment, civil aviation and motor vehicles, among others; and
  • Annex III of the AI Act: AI systems considered to be high-risk because they are expressly classified as such by the AI Act itself.

AI systems under Annex III include (per the current wording of the Annex of the leaked text):

  • Biometrics. Remote biometric identification systems (except for AI systems intended to be used for biometric verification that have as their sole purpose to confirm that a specific individual is the person he or she claims to be); AI systems intended to be used for biometric categorization, according to sensitive or protected attributes or characteristics based on the inference of those attributes or characteristics; and AI systems intended to be used for emotion recognition.
  • Critical Infrastructure. AI systems intended to be used as safety components in the management and operation of critical digital infrastructure, road traffic and the supply of water, gas, heating or electricity.
  • Education and Vocational Training. AI systems intended to be used:
    • to determine access or admission to, or to assign individuals to, educational and vocational training institutions at all levels;
    • to evaluate learning outcomes, including when those outcomes are used to steer the learning process of individuals in educational and vocational training institutions at all levels;
    • to assess the appropriate level of education that an individual will receive or will be able to access, in the context of/within education and vocational training institutions; or
    • to detect and monitor prohibited behavior of students during tests in the context of/within education and vocational training institutions.
  • Employment, Workers Management and Access to Self-Employment. AI systems intended to be used:
    • for recruitment or selection of individuals, notably to place targeted job advertisements, to analyze and filter job applications, and to evaluate candidates;
    • to make decisions affecting terms of the work relationship, promotion and termination of work-related contractual relationships;
    • to allocate tasks based on individual behavior or personal traits or characteristics; or
    • to monitor and evaluate performance and behavior of individuals in such relationships.
  • Access to and Enjoyment of Essential Private Services and Essential Public Services and Benefits. AI systems intended to be used:
    • by public authorities or on behalf of public authorities to evaluate the eligibility of individuals for essential public assistance benefits and services, including healthcare services, as well as to grant, reduce, revoke, or reclaim such benefits and services;
    • to evaluate the creditworthiness of individuals or establish their credit score, with the exception of AI systems used for the purpose of detecting financial fraud;
    • to evaluate and classify emergency calls, or to dispatch, or establish priority in the dispatching of, emergency first response services, including by police, firefighters and medical aid, as well as emergency healthcare patient triage systems; or
    • for risk assessment and pricing in relation to individuals in the case of life and health insurance.

In addition to the systems listed above, certain AI systems that are used in the areas of law enforcement, migration, asylum and border management, and administration of justice and democratic processes are also considered high-risk.

Obligations Applicable to High-Risk AI Systems

The AI Act subjects providers of high-risk AI systems to the strictest requirements, including:

  • establishing, implementing, documenting and maintaining a risk management system and quality management system;
  • data governance requirements, including bias mitigation;
  • drafting and maintaining technical documentation with respect to the high-risk system;
  • record-keeping, logging and traceability obligations;
  • designing the systems in a manner that allows effective human oversight;
  • designing the systems in a manner that ensures an appropriate level of accuracy, robustness and cybersecurity;
  • complying with registration obligations;
  • ensuring that the AI system undergoes the relevant conformity assessment procedure;
  • making the provider’s contact information available on the AI system, packaging or accompanying documentation;
  • drawing up the EU declaration of conformity in a timely manner; and
  • affixing the “CE marking of conformity” to the AI system.

Deployers of high-risk AI systems will also have a significant number of direct obligations under the AI Act, although these are more limited in scope than the providers’ obligations. The deployers’ obligations include:

  • assigning the human oversight of the AI system to a person with the necessary competence, training, authority and support;
  • if the deployer controls input data, ensuring that the data is relevant and sufficiently representative in light of the purpose of the AI system;
  • informing impacted individuals when the deployer plans to use a high-risk AI system to make decisions, or assist in making decisions, relating to such individuals. Deployers of high-risk AI systems who are employers must inform workers' representatives and the impacted workers that they will be subject to a high-risk AI system;
  • using information provided by providers to carry out a Data Protection Impact Assessment (if required);
  • conducting a fundamental rights impact assessment for certain deployers and high-risk systems. This requirement will be particularly applicable to deployers using AI systems to evaluate the creditworthiness of individuals or establish their credit score; and for risk assessment and pricing in relation to individuals in the case of life and health insurance; and
  • when a decision generated by the AI system produces legal effects or similarly significant effects, providing a clear and meaningful explanation of the role of the AI system in the deployer's decision-making procedure and the main elements of the decision.

The AI Act sets forth certain cases where a deployer will be considered a provider, and subject to provider obligations, e.g., if a deployer puts its trademark on a high-risk system already placed on the market or put into service without implementing contractual arrangements to prevent the change in allocation of obligations. Note that there are also obligations for distributors and importers of high-risk AI systems.


Non-compliance with the AI Act may lead to significant fines: up to €35 million or 7% of annual global turnover for violations with respect to prohibited AI systems; up to €15 million or 3% of annual global turnover for most other AI Act violations; and up to €7.5 million or 1.5% of annual global turnover for providing incorrect information to regulators.


The AI Act was formally approved by the Council on February 2, 2024 and is expected to be approved by the European Parliament within the next month. The AI Act will apply to regulated entities in stages, following a transition period. The length of the transition period will vary depending on the type of AI system:

  • six months for prohibited AI systems;
  • 12 months for specific obligations regarding general purpose AI systems;
  • 24 months for most other obligations, including the rules for high-risk AI systems included in Annex III; and
  • 36 months for obligations related to high-risk systems included in Annex II (list of Union harmonization legislation).

