Client Alert: The Coming European Union Artificial Intelligence Act – What it is and What it Means for Your Business

Wednesday, April 17, 2024

The European Union is developing a comprehensive regulatory framework to address the deployment and potential risks of artificial intelligence (AI) systems. The European Union Artificial Intelligence Act (the “AI Act”) has yet to be finalized, but some members of the European Parliament have estimated that a final draft could be ready by Summer 2025, with enactment shortly thereafter. In the meantime, the European Commission’s December 2023 press release offers insight into the coming regulations. Once the AI Act is enacted, “Prohibited AI” systems must be phased out within 6 months, compliance with general AI governance obligations will be required after 12 months, and all remaining rules, including obligations for high-risk AI systems, will take effect within 24 to 36 months.

Who is Impacted?

The AI Act will apply to both public and private entities that make their AI systems available on the EU market or whose use of an AI system affects people located in the EU (this includes internationally based entities that do business in the EU). As such, both AI developers and those implementing an AI system within the EU will be responsible for ensuring the AI system conforms to the AI Act.

Exemptions will be available for prototyping and development activities preceding the AI system’s release to market and for military or national security purposes.

What Does the AI Act Require?

The AI Act introduces a risk-based approach with four levels:

  • Unacceptable Risk/Prohibited AI – This applies to AI systems that pose a risk to safety and fundamental rights, including applications that manipulate cognitive behavior, biometric identification by law enforcement, biometric categorization (e.g., race, sexual orientation, religious views), emotion recognition in the workplace or educational settings, and untargeted scraping of facial images. As the name implies, these AI systems will be prohibited in the EU.
  • High Risk – Applies to AI systems with the potential to have adverse impacts on safety or fundamental rights. This category includes AI systems that manage or operate critical infrastructure, medical devices, or vehicles; assess eligibility for employment, benefits, or creditworthiness; provide risk assessments for law enforcement; and assist in judicial decision making.

    High Risk AI systems will require:

    • Completion of a fundamental rights impact assessment and conformity assessment before placing these systems on the market. Such assessments will include a description of processes, risks, and human oversight measures;
    • Registration in a public EU database;
    • Implementation of risk management and quality management systems;
    • Data governance such as bias mitigation and representative training data;
    • Transparency requirements such as instructions for use and technical documentation;
    • Human oversight such as audit logs;
    • Testing and monitoring to maintain accuracy, robustness, and cyber security.
  • Transparency Risk – AI systems with a clear risk of manipulation (e.g., chatbots) will require various disclosures to inform users they are interacting with a machine.
  • Minimal Risk – All other AI systems.

Minimal Risk and Transparency Risk AI systems fall into the category of General Purpose AI and may require:

  • Transparency such as technical documentation, training data summaries, and copyright and IP safeguards;
  • Evaluations, risk assessments, adversarial testing, and incident reporting for high-impact models that carry systemic risks (currently defined as any AI model trained using a cumulative amount of compute greater than 10^25 floating-point operations, or FLOPs).
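For readers trying to gauge whether a model approaches the systemic-risk threshold, the comparison can be sketched numerically. The sketch below is a rough back-of-the-envelope estimate only: it uses the common "6 × parameters × training tokens" heuristic for transformer training compute, which is an industry rule of thumb and not a definition drawn from the AI Act itself.

```python
# Rough check of whether a model's estimated training compute crosses the
# AI Act's systemic-risk threshold of 1e25 FLOPs. The 6*N*D heuristic for
# dense transformer training compute is an assumption, not text from the Act.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25


def estimated_training_flops(num_parameters: float, num_tokens: float) -> float:
    """Approximate total training compute via the 6 * N * D rule of thumb."""
    return 6.0 * num_parameters * num_tokens


def exceeds_threshold(num_parameters: float, num_tokens: float) -> bool:
    """True if the estimate exceeds the Act's current 1e25 FLOPs threshold."""
    return estimated_training_flops(num_parameters, num_tokens) > SYSTEMIC_RISK_THRESHOLD_FLOPS


# Example: a hypothetical 70-billion-parameter model trained on 2 trillion tokens.
flops = estimated_training_flops(70e9, 2e12)  # roughly 8.4e23 FLOPs
print(f"{flops:.2e} FLOPs -> systemic risk: {exceeds_threshold(70e9, 2e12)}")
```

Under this heuristic, a 70B-parameter model trained on 2 trillion tokens falls well below the threshold, while a model roughly an order of magnitude larger in compute would cross it.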

What are the Penalties?

  • Up to €35 million or 7% of the total worldwide annual turnover (whichever is higher) for prohibited AI violations.
  • Up to €15 million or 3% of the total worldwide annual turnover for most other violations.
  • Up to €7.5 million or 1.5% of the total worldwide annual turnover for supplying incorrect information.

For each category of noncompliance, the penalty is the lower of the two amounts for small and midsize enterprises (SME), and the higher of the two amounts for larger companies.

Notably, the EU’s General Data Protection Regulation (GDPR) also contains notice requirements for automated decision-making, and the European Parliament has opined that the GDPR’s notice requirements require informing data subjects when their personal information is used for AI training. Because the AI Act will have its own notice and transparency requirements, there may be a risk of combined penalties if these requirements are not met. Failure to meet these transparency requirements under the GDPR may result in fines of up to €20 million or 4% of the firm’s worldwide annual revenue from the preceding financial year, whichever amount is higher.

What Can You Do to Prepare?

If your business operates in the EU and develops or implements AI, or plans to, you can begin preparing for the AI Act by keeping the four categories of AI systems in mind. You should also begin keeping records of the types of data your AI will utilize and the purposes for which that data will be used. This information will almost certainly be critical when the time comes to draft policies and disclosures to comply with the AI Act’s transparency requirements and any notice obligations under the GDPR. And, as we have seen with the GDPR, the AI Act may be a preview of future AI regulation in the U.S.

© Copyright 2024 Stubbs Alderton & Markiles, LLP