US Federal Agencies Commit to Regulatory Enforcement of AI Systems

In a recent joint statement, several federal agencies warned that they are committed to enforcing their existing laws and regulations against developers, deployers, and users of AI systems, specifically citing civil rights, fair competition, consumer protection, and equal opportunity concerns. Federal Trade Commission (FTC) Chair Lina Khan and officials from the US Department of Justice (DOJ), the Consumer Financial Protection Bureau (CFPB), and the US Equal Employment Opportunity Commission (EEOC) each reinforced their concerns about automated systems. Their serious language, joint public commitment, and prior enforcement actions in this area make clear that the statement is more than mere posturing.

IN DEPTH


Recent attention and the increasingly widespread use of AI have led the FTC to issue a series of warnings this year about AI-related advertising claims. These warnings follow the FTC’s April 2021 guidance on fairness and equity and its June 2022 report studying “how artificial intelligence (AI) may be used to identify, remove, or take any other appropriate action necessary to address a wide variety of specified online harms.” The FTC’s new Office of Technology, designed to “strengthen the FTC’s ability to keep pace with the technological challenges in the digital marketplace by supporting the agency’s law enforcement and policy work,” is expected to take the lead in this area.

The other federal agencies involved have each issued separate guidance in their respective fields. The use of automated decision-making tools has also been the subject of legislative action by a variety of state and municipal actors, including California and New York City. The DOJ and EEOC have repeatedly warned about disability and employment discrimination arising from the use of AI tools.

More recently, the EEOC published a new guidance document in support of its Artificial Intelligence and Algorithmic Fairness Initiative (launched in 2021). The document, released on May 18, 2023, is titled “Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964” and is intended to aid employers and developers as they design and adopt new AI-enabled technologies.

Companies considering the use of automated decision-making tools in their hiring or employment practices are advised to pay careful attention to these regulations and legislation, which may affect existing hiring practices previously considered industry standard.

In a press release accompanying the statement, FTC Chair Khan said:

“We already see how AI tools can turbocharge fraud and automate discrimination, and we won’t hesitate to use the full scope of our legal authorities to protect Americans from these threats[.] Technological advances can deliver critical innovation—but claims of innovation must not be cover for lawbreaking. There is no AI exemption to the laws on the books, and the FTC will vigorously enforce the law to combat unfair or deceptive practices or unfair methods of competition.”

Notably, the joint statement includes explicit warnings about deploying AI in a variety of contexts, noting:

“AI tools can be inaccurate, biased, and discriminatory by design and incentivize relying on increasingly invasive forms of commercial surveillance. The FTC has [ ] warned market participants that it may violate the FTC Act to use automated tools that have discriminatory impacts, to make claims about AI that are not substantiated, or to deploy AI before taking steps to assess and mitigate risks. Finally, the FTC has required firms to destroy algorithms or other work product that were trained on data that should not have been collected.”

In a related comment, CFPB Director Rohit Chopra argued that “Unchecked ‘AI’ poses threats to fairness and to our civil rights in ways that are already being felt.”

This federal warning is a shot across the bow for the many industries that may be considering using AI in a broad set of circumstances. It increases compliance uncertainty in an area already fraught with concerns about copyright, professional ethics, and basic questions about the reliability of products that can generate falsehoods.

However, the tone of the statement does not suggest an outright bar on these technologies; rather, it frames the regulatory attention as geared toward ensuring that responsible innovation occurs. As researchers focus on how to build such responsible-by-design AI products, companies are advised to navigate this space carefully at a time when regulators appear to be looking for examples to make.

The agencies in question have each issued a number of related publications in recent years; as noted above, the FTC has been particularly active in this space, both historically and recently. On two occasions in the past two years, the FTC has backed this attention with enforcement, requiring firms to “destroy algorithms or other work product that were trained on data that should not have been collected.”

With the agencies marching in tandem, industry members and AI users would do well to tread carefully themselves.

© 2023 McDermott Will & Emery
National Law Review, Volume XIII, Number 156

About the Authors

Alya Sulaiman, Partner

Alya Sulaiman works with clients to navigate complex healthcare regulatory, privacy and transactional matters, with a focus on digital health and data use strategy. Alya has substantial experience with product counseling and provides guidance during the conception, development, launch and support of new digital health products and services.

Alya advises on the legal and business frameworks for innovative technologies, including predictive analytics, artificial intelligence and machine learning, electronic health records, interoperability tools, health data platforms and digital...

1 310 788 6017
Jason D. Krieser, Partner

Jason Krieser is the co-head of the Firm’s Technology & Outsourcing Practice, and the office managing partner for Dallas and Houston. He advises clients on all aspects of technology transactions, outsourcing matters, telecommunications and other complex commercial contracts. He is an internationally recognized advisor on outsourcing matters, including in relation to information technology (IT), business process and offshore issues. He also handles technology development and licensing matters, joint ventures, strategic alliances, manufacturing agreements, key supply and distribution...

214 295 8093
Lesli C. Esposito, Partner

For more than 20 years, Lesli C. Esposito has helped clients around the globe navigate complex antitrust and consumer protection matters. She has deep experience handling government investigations, litigation, compliance, and global merger control on behalf of clients in diverse industries, including consumer products, retail, technology, pharmaceuticals, healthcare, telemarketing, oil and gas, mortgage lending and professional services.

202-756-8445
Shawn C. Helms, Partner

Shawn C. Helms is co-head of the Firm’s Technology & Outsourcing Practice. Shawn has broad experience in the areas of information technology, outsourcing and telecommunications. He focuses his practice on complex transactions involving technology and intellectual property, including business process outsourcing (BPO) and information technology outsourcing (ITO), licensing, cloud computing arrangements (infrastructure as a service (IaaS), software as a service (SaaS) and platform as a service (PaaS)), technology maintenance and services, technology development/customization (including...

214-295-8090
Todd S. McClelland, Partner

Todd S. McClelland advises companies on complex, international legal issues associated with cybersecurity breaches and compliance, data privacy compliance, and data, technology, cloud and outsourcing transactions. Todd counsels clients in many industries, including payment processors, cybersecurity product providers, retailers, petro companies, financial institutions and traditional brick-and-mortar companies.

Prior to his legal career, Todd was an engineer designing and programming industrial control, robotics and automation systems. This background gives him unique perspective and...

404-260-8550