European Parliament Agrees on Position on the AI Act
Friday, June 16, 2023

On June 14, 2023, the European Parliament (“EP”) approved its negotiating mandate (the “EP’s Position”) regarding the EU’s Proposal for a Regulation laying down harmonized rules on Artificial Intelligence (the “AI Act”). The vote in the EP means that EU institutions may now begin trilogue negotiations (the Council approved its negotiating mandate in December 2022). The final version of the AI Act is expected before the end of 2023.

The EP proposes a number of significant amendments to the original Commission text, which dates back to 2021. Below we outline some of the key changes introduced by the EP:

Amendments to Key Definitions

The EP introduced a number of meaningful changes to the definitions used in the AI Act (Article 3). Under the EP’s Position:

  • The definition of “AI system” is aligned with the OECD’s definition of AI system. An AI system is now defined as “a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions that influence physical or virtual environments.”
  • Users of AI systems are now called “deployers.”
  • The EP’s text further contains a number of new definitions, including:
    • “Affected persons,” meaning “any natural person or group of persons who are subject to or otherwise affected by an AI system.”
    • “Foundation model,” which means an “AI model that is trained on broad data at scale, is designed for generality of output, and can be adapted to a wide range of distinctive tasks.” Providers of foundation models are now subject to a number of specific obligations under the AI Act.
    • “General purpose AI system,” which is an “AI system that can be used in and adapted to a wide range of applications for which it was not intentionally and specifically designed.”

General Principles Applicable to AI Systems

The EP’s Position establishes a set of six high-level core principles that are applicable to all AI systems regulated by the AI Act. These principles are: (1) human agency and oversight; (2) technical robustness and safety; (3) privacy and data governance; (4) transparency; (5) diversity, non-discrimination and fairness; and (6) social and environmental well-being.

Classification of AI Systems

The EP proposes significant amendments to the list of prohibited AI practices/systems under the AI Act. New prohibitions include: (1) biometric categorization systems that categorize natural persons according to sensitive or protected attributes or characteristics, or based on the inference of those attributes or characteristics; and (2) AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the Internet or CCTV footage.

Furthermore, the EP has expanded the list of AI systems and applications that should be considered high risk. The list of high-risk systems in the EP’s Position, for example, includes certain AI systems used by large social media platforms to recommend content to users.

Under the rules proposed by the EP, providers of certain AI systems may rebut the presumption that the system should be considered a high-risk AI system. This requires submitting a notification to a supervisory authority or to the AI Office (the latter if the AI system is intended to be used in more than one Member State), which shall review the notification and reply within three months, clarifying whether it deems the AI system to be high risk.

The EP’s Position further imposes specific requirements on generative AI systems, such as obligations to disclose that content was generated by AI, to design the AI system in a way that prevents it from generating illegal content, and to publish summaries of copyrighted data used for training.

Adjustments to the Obligations in the Context of High-Risk AI Systems

The EP also introduces significant changes to the obligations on providers of high-risk AI systems by, for example, requiring them to:

  • Ensure that natural persons responsible for human oversight of high-risk AI systems are specifically made aware of the risks of automation bias and confirmation bias.
  • Provide specifications for input data, or any other relevant information in terms of the datasets used, including their limitations and assumptions, taking into account the intended purpose and the reasonably foreseeable misuse of the AI system.
  • Ensure that the high-risk AI system complies with accessibility requirements.

In addition, the obligations for deployers of high-risk AI systems have been significantly broadened and now include:

  • For certain AI systems, informing natural persons that they are subject to the use of high-risk AI systems and that they have the right to obtain an explanation about the output of the system.
  • Prior to putting into service or using a high-risk AI system at the workplace, consulting workers’ representatives and informing employees that they will be subject to the system.
  • Carrying out a Fundamental Rights Impact Assessment (see below).

Obligation to Carry Out a Fundamental Rights Impact Assessment

As mentioned above, prior to using a high-risk AI system, certain deployers will be required to conduct a Fundamental Rights Impact Assessment. This assessment should include, at a minimum, the following elements: (1) a clear outline of the intended purpose for which the system will be used; (2) a clear outline of the intended geographic and temporal scope of the system’s use; (3) categories of natural persons and groups likely to be affected by the use of the system; (4) verification that the use of the system is compliant with relevant Union and national laws on fundamental rights; (5) the reasonably foreseeable impact on fundamental rights of using the high-risk AI system; (6) specific risks of harm likely to impact marginalized persons or vulnerable groups; (7) the reasonably foreseeable adverse impact of the use of the system on the environment; (8) a detailed plan as to how the harms and the negative impact on fundamental rights identified will be mitigated; and (9) the governance system the deployer will put in place, including human oversight, complaint-handling and redress.

In the process of preparing the Fundamental Rights Impact Assessment, deployers may be required to engage with supervisory authorities and external stakeholders, such as consumer protection agencies and data protection agencies.
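
For illustration only, the nine minimum elements can be read as a structured checklist. Below is a minimal, hypothetical sketch in Python of how a deployer might record such an assessment; the EP’s Position prescribes no particular format, and every field name here is our own invention:

    from dataclasses import dataclass, field

    @dataclass
    class FundamentalRightsImpactAssessment:
        """Hypothetical record mirroring the nine minimum elements listed above."""
        intended_purpose: str                # (1) purpose for which the system will be used
        geographic_temporal_scope: str       # (2) intended geographic and temporal scope of use
        affected_groups: list[str]           # (3) categories of persons and groups likely to be affected
        legal_compliance_verified: bool      # (4) compliance with Union and national fundamental-rights law
        foreseeable_rights_impact: str       # (5) reasonably foreseeable impact on fundamental rights
        risks_to_vulnerable_groups: str      # (6) specific risks to marginalized or vulnerable groups
        environmental_impact: str            # (7) reasonably foreseeable adverse environmental impact
        mitigation_plan: str                 # (8) plan to mitigate identified harms and negative impacts
        governance_measures: list[str] = field(default_factory=list)  # (9) oversight, complaint-handling, redress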

Exclusion of Certain Unfair Contractual Terms in AI Contracts with SMEs or Startups

The EP’s Position introduces a new provision restricting the ability of a contracting party to unilaterally impose certain unfair contractual terms on SMEs or startups in contracts concerning the supply of tools, services, components or processes that are used or integrated in a high-risk AI system, or concerning the remedies for the breach or the termination of obligations related to these systems. Examples of prohibited provisions include contractual terms that: (i) exclude or limit the liability of the party that unilaterally imposed the term for intentional acts or gross negligence; (ii) exclude the remedies available to the party upon whom the term has been unilaterally imposed in the case of non-performance of contractual obligations, or the liability of the party that unilaterally imposed the term in the case of a breach of those obligations; and (iii) give the party that unilaterally imposed the term the exclusive right to determine whether the technical documentation and information supplied are in conformity with the contract, or to interpret any term of the contract.

Measures to Support Innovation

Title V of the AI Act, which contains measures in support of innovation (including AI regulatory sandboxes), is expanded and clarified by the EP’s Position. One of the new provisions requires EU Member States to promote research and development of AI solutions that support socially and environmentally beneficial outcomes, such as solutions that (i) increase accessibility for persons with disabilities; (ii) tackle socio-economic inequalities; and (iii) meet sustainability and environmental targets.

Fines

The EP’s Position substantially amends the fines that can be imposed under the AI Act. The EP proposes that: 

  • Non-compliance with the rules on prohibited AI practices shall be subject to administrative fines of up to 40,000,000 EUR or, if the offender is a company, up to 7% of its total worldwide annual turnover for the preceding financial year, whichever is higher.
  • Non-compliance with the rules under Article 10 (data and data governance) and Article 13 (transparency and provision of information to users) shall be subject to administrative fines of up to 20,000,000 EUR or, if the offender is a company, up to 4% of its total worldwide annual turnover for the preceding financial year, whichever is higher.
  • Non-compliance with other requirements and obligations under the AI Act shall be subject to administrative fines of up to 10,000,000 EUR or, if the offender is a company, up to 2% of its total worldwide annual turnover for the preceding financial year, whichever is higher.
  • The supply of incorrect, incomplete or misleading information to notified bodies and national competent authorities in reply to a request shall be subject to administrative fines of up to 5,000,000 EUR or, if the offender is a company, up to 1% of its total worldwide annual turnover for the preceding financial year, whichever is higher.
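
Each of these tiers applies the same “whichever is higher” rule, familiar from the GDPR. As a simple arithmetic illustration in Python (the caps and percentages come from the bullets above; the function name and the example turnover figure are hypothetical):

    def max_fine(fixed_cap_eur: float, turnover_pct: float, worldwide_turnover_eur: float) -> float:
        # The applicable maximum is the fixed cap or the turnover-based
        # cap, whichever is higher.
        return max(fixed_cap_eur, turnover_pct * worldwide_turnover_eur)

    # Prohibited-practices tier (EUR 40M or 7%) for a hypothetical company
    # with EUR 2 billion in worldwide annual turnover:
    print(max_fine(40_000_000, 0.07, 2_000_000_000))  # 140000000.0 -> EUR 140 million

For that hypothetical company, the turnover-based cap (EUR 140 million) exceeds the fixed cap and would therefore set the maximum fine.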

It is also important to note that the EP’s Position proposes that the penalties (including fines) under the AI Act, as well as the associated litigation costs and indemnification claims, may not be subject to contractual clauses or other forms of burden-sharing agreements between providers and distributors, importers, deployers, or any other third parties.

Reinforced Remedies

A new chapter was introduced in the AI Act concerning the remedies available to affected persons faced with potential breaches of its rules. Particularly relevant is the introduction of a GDPR-like right to lodge a complaint with a supervisory authority.
