US Federal Agencies Commit to Regulatory Enforcement of AI Systems
In a recent joint statement, several federal agencies warned that they will enforce their respective regulations against developers, deployers, and users of AI systems, specifically citing civil rights, fair competition, consumer protection, and equal opportunity concerns. Federal Trade Commission (FTC) Chair Lina Khan and officials from the US Department of Justice (DOJ), the Consumer Financial Protection Bureau (CFPB), and the US Equal Employment Opportunity Commission (EEOC) each reinforced their concerns about automated systems. Their serious language, joint public commitment, and previous enforcement actions in this area make clear that the statement is no empty gesture.
Recent attention to, and the increasingly widespread use of, AI have led the FTC to issue a series of warnings this year about AI advertising. These warnings follow the FTC’s April 2021 guidance on fairness and equity and its June 2022 report examining “how artificial intelligence (AI) may be used to identify, remove, or take any other appropriate action necessary to address a wide variety of specified online harms.” The FTC’s new Office of Technology, designed to “strengthen the FTC’s ability to keep pace with the technological challenges in the digital marketplace by supporting the agency’s law enforcement and policy work,” is expected to take the lead in this area.
The other federal agencies involved have each issued separate guidance in their respective fields. The use of automated decision-making has also been the subject of legislative action by a variety of state and municipal actors, including California and New York City. The DOJ and EEOC have repeatedly warned about disability and employment discrimination arising from the use of AI tools.
More recently, the EEOC has published a new guidance document in support of its Artificial Intelligence and Algorithmic Fairness Initiative (launched in 2021). The document released on May 18, 2023, is titled “Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964” and is intended to aid employers and developers as they design and adopt new AI-enabled technologies.
Companies considering the use of automated decision-making tools in their hiring or employment practices are advised to pay careful attention to these regulations and legislation, which may affect existing hiring practices previously considered industry standard.
In a press release accompanying the statement, FTC Chair Khan said:
“We already see how AI tools can turbocharge fraud and automate discrimination, and we won’t hesitate to use the full scope of our legal authorities to protect Americans from these threats[.] Technological advances can deliver critical innovation—but claims of innovation must not be cover for lawbreaking. There is no AI exemption to the laws on the books, and the FTC will vigorously enforce the law to combat unfair or deceptive practices or unfair methods of competition.”
Notably, the joint statement includes explicit warnings about deploying AI in a variety of contexts, noting:
“AI tools can be inaccurate, biased, and discriminatory by design and incentivize relying on increasingly invasive forms of commercial surveillance. The FTC has [ ] warned market participants that it may violate the FTC Act to use automated tools that have discriminatory impacts, to make claims about AI that are not substantiated, or to deploy AI before taking steps to assess and mitigate risks. Finally, the FTC has required firms to destroy algorithms or other work product that were trained on data that should not have been collected.”
In a related comment, CFPB Director Rohit Chopra argued that “Unchecked ‘AI’ poses threats to fairness and to our civil rights in ways that are already being felt.”
This federal warning is a shot across the bow for industries that may have been considering using AI in a broad set of circumstances. It increases compliance uncertainty in an area already fraught with concerns about copyright, professional ethics, and basic questions about the effectiveness of products that may be prone to generating falsehoods.
However, the statement does not simply bar these technologies; rather, it frames the regulatory attention as geared toward ensuring that responsible innovation occurs. As researchers focus on how to construct responsible AI-by-design products, companies are advised to navigate this space carefully while regulators appear to be looking for examples to make.
Here are a few related previous publications by the agencies in question:
- The CFPB published a circular in May 2022 “confirming that federal consumer financial laws and adverse action requirements apply regardless of the technology being used.”
- The DOJ filed a statement of interest in federal court in January 2023 applying the Fair Housing Act to “algorithm-based tenant screening services.”
- The EEOC issued a technical assistance document in May 2022 “explaining how the Americans with Disabilities Act applies to the use of software, algorithms, and AI to make employment-related decisions about job applicants and employees.”
As mentioned above, the FTC has long been active in this space and has recently published:
- Statements emphasizing the importance of consumer trust in AI technology and concerns about how companies deploy AI technology, including generative AI tools;
- Warnings on AI involvement in unfair business practices;
- Significant concerns related to inaccuracy, bias, and discrimination; and
- Warnings, on two occasions this year, about potential violations of the FTC Act from the use of automated tools with discriminatory impact or from overselling AI products.
When the agencies are marching in tandem, industry members and AI users would do well to take careful steps themselves.