In a move that could significantly alter the legal landscape surrounding artificial intelligence, the European Parliament recently advanced the “AI Act,” a proposed law aiming to restrict and regulate commercial use of AI in the EU.
While the bill is awaiting final negotiation and approval by the European Council before possible enactment, its progress highlights efforts by global lawmakers to address concerns over the rapidly developing technology. Indeed, the AI Act targets particular uses of AI that EU lawmakers perceive as posing a threat to “fundamental rights” recognized in the Union. It also would impose safeguards and transparency requirements on AI companies.
For example, in response to widespread concerns over the impact of generative AI on intellectual property rights, the Act requires generative AI companies to publish summaries of the copyrighted data used to train their AI models. While proponents of the technology assert that this and other requirements could hamper its development and rollout, such mandates could bolster copyright and other IP claims against AI companies. In the United States, for instance, courts have already encountered copyright infringement claims against artificial intelligence companies. In January, a group of artists filed a proposed class action on behalf of similarly situated artists against various generative AI companies, alleging that the companies used the artists’ protected work in training their AI models. The following month, Getty Images filed a lawsuit in the Delaware federal district court against Stability AI, Inc., alleging that the company “scraped” more than 12 million copyrighted images. See Andersen, et al. v. Stability AI Ltd., et al., Case No. 3:23-cv-00201 (N.D. Cal. filed Jan. 13, 2023); Getty Images (US), Inc. v. Stability AI, Inc., Case No. 1:23-cv-00135-UNA (D. Del. filed Feb. 3, 2023).
In addition to addressing concerns over AI’s potential infringement of IP rights, the EU bill also speaks to fears about AI’s potential impact on privacy and human rights. The Act classifies systems that rely on the use of biometric data as “high-risk,” which would subject them to additional limits and regulatory requirements under EU law. Under this framework, the use of biometric data (such as facial recognition) is deemed to pose “a significant risk of harm to the health, safety and fundamental rights,” unless the AI is being used for the sole purpose of cybersecurity or personal data protection. Notably, the goal of protecting biometric data tracks closely with the goals of U.S. laws such as the Illinois Biometric Information Privacy Act (“BIPA”), which has generated substantial litigation in the past several years.
And in what is likely the least clearly defined aspect of the AI Act, the bill also would require generative AI foundation model providers to train and design models to “ensure adequate safeguards against the generation of content in breach of Union law[.]” Presumably, this covers many potential violations of law, such as “bots” generating content that is defamatory, exploitative or otherwise not protected. However, determining what content is protected versus unprotected in the EU may be difficult for companies and their users. Further, while the bill specifies that this requirement should not “prejudice” rights such as freedom of expression, such restrictions may lead to censorship.
If enacted, the bill could foster a disjointed international regulatory framework for AI as the United States and other countries grapple with ways to approach the complex and dynamic technology. And as AI proliferates in the borderless online sphere, providers and users will have to navigate conflicting rights and responsibilities without a clear legal map.