May 15, 2021

Volume XI, Number 135


European Commission Publishes Proposal for Artificial Intelligence Act

On April 21, 2021, the European Commission (the “Commission”) published its Proposal for a Regulation on a European approach for Artificial Intelligence (the “Artificial Intelligence Act”). The Proposal follows a public consultation on the Commission’s white paper on AI published in February 2020. The Commission simultaneously proposed a new Machinery Regulation, designed to ensure the safe integration of AI systems into machinery.

The Artificial Intelligence Act would prohibit the use of AI systems that are considered a clear threat to the safety, livelihoods and rights of people, such as systems that are designed to manipulate human behavior through subliminal techniques and systems that allow the government to conduct “social scoring” resulting in unfavorable treatment.

“High-risk” AI systems are not prohibited under the proposed Act, but are subject to restrictions. High-risk systems include those used:

  • For management and operation of critical infrastructure that could endanger individuals, such as road traffic and electricity;

  • In education or vocational training, e.g., determining access to education;

  • As product safety components;

  • In employment, such as during the recruitment, promotion or termination process;

  • For essential private and public resources, including evaluating access to benefits and services;

  • By law enforcement, such as for assessing risks of re-offending by individuals;

  • For immigration and border control, including to verify the authenticity of travel documentation; and

  • For justice and democracy, such as by using the system to apply the law to a set of facts.

Any system using “real-time” remote biometric identification, such as facial recognition, is also automatically considered high-risk, and its use in publicly accessible spaces for law enforcement purposes is prohibited, with limited exceptions such as searching for missing children and responding to terrorist threats. Even in these instances, authorization by a judicial authority or other relevant body is required.

Before providers place AI products designed for these spheres on the EU market (regardless of where the providers are located), they are subject to certain obligations. The same obligations apply if the system itself is operated outside the EU but its output is used in the EU. For instance, such systems must undergo an adequate risk assessment, and mitigation measures must be implemented. The quality of the datasets used to train, validate and test the system must also be sufficiently high, and the system’s activity must be logged to ensure that its functioning can be traced and monitored. Providers must retain documentation that allows authorities to assess compliance with these measures, and clear and adequate information also must be provided to users of the system. In addition, systems must be designed to ensure appropriate human oversight and a high level of robustness, security and accuracy in their performance.

With respect to AI systems that are considered to pose only a limited risk, the Artificial Intelligence Act would impose transparency obligations, requiring that providers make users aware that they are interacting with a machine, while systems posing a minimal risk are not regulated by the proposed Act and may be used freely. The Commission commented that the vast majority of AI systems currently used would fall into this final category.

The Proposal also creates a European Artificial Intelligence Board composed of representatives from EU Member States and the Commission, which would facilitate a harmonized implementation of the Artificial Intelligence Act, provide advice to the Commission and share best practices between EU Member States. If adopted by the European Parliament and Council, the Artificial Intelligence Act would apply directly across the EU.

In the Commission’s press release, Margrethe Vestager, Executive Vice-President for a Europe fit for the Digital Age, said, “With these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted. By setting the standards, we can pave the way to ethical technology worldwide and ensure that the EU remains competitive along the way.”

Copyright © 2021, Hunton Andrews Kurth LLP. All Rights Reserved. National Law Review, Volume XI, Number 112
