August 11, 2020

Volume X, Number 224


FTC Provides Direction on AI Technology

The FTC recently issued guidance on how companies can use artificial intelligence tools without engaging in deceptive or unfair trade practices or running afoul of the Fair Credit Reporting Act (FCRA). The FTC pointed to enforcement actions it has brought in this area and recommended that companies keep four key principles in mind when using AI tools. While much of its advice draws on requirements for entities subject to the FCRA, the lessons may be useful for many companies.

The recommendations from the FTC include:

  • Transparency: The FTC encourages companies to tell people when automated decisions about them are being made using AI tools. Such disclosures may be mandated under laws like the FCRA if, for example, the entity is automating decisions about credit eligibility. The FTC also reminds companies not to be deceptive or secretive about their use of AI tools (pointing to its Ashley Madison decision, in which the company was found to have deceptively used fake profiles to encourage sign-ups). To be transparent, the FTC stressed, companies need to know “what data is used in [the company’s] model and how that data is used.” The FTC cautioned companies to think about how they would describe to consumers the AI-driven decisions made about them.

  • Fairness: Here, the FTC reminded companies not to discriminate against protected classes by, for example, making decisions about credit based on zip codes, when those decisions have a “disparate impact” on groups protected under the Civil Rights Act. The FTC in its comments also instructed companies to ensure fairness by giving people the ability to both access and correct information, something required when the FCRA applies.

  • Accuracy: The FCRA imposes accuracy requirements. The FTC reminded companies that even if they are not providing consumer reports, they should still be concerned about accuracy, because information they compile may be used for consumer reporting purposes, in which case the FCRA may apply. The FTC also looked to the world of consumer lending for “lessons” on accuracy, recommending that companies ensure their AI models work, are validated, and are retested to confirm they continue to perform as originally intended.

  • Accountability: The FTC stresses that companies should think about the impact their use of AI will have on consumers, and directs companies to the FTC’s 2016 Big Data report as a resource. Questions to ask include whether the data set being used is appropriately representative and whether the model accounts for potential biases. The FTC suggests companies consider using independent standards or outside experts to hold themselves accountable.

Putting it Into Practice: As automation tools become more common, these FTC recommendations are worth keeping in mind. They signal the Commission’s expectations, which it often goes on to enforce after issuing commentary like this to the industry.

Copyright © 2020, Sheppard Mullin Richter & Hampton LLP. National Law Review, Volume X, Number 126


About this Author

Jonathan E. Meyer
Partner

Jon Meyer is a partner in the Government Contracts, Investigations & International Trade Practice Group in the firm's Washington, D.C. office.

Mr. Meyer was most recently Deputy General Counsel at the United States Department of Homeland Security, where he advised the Secretary, Deputy Secretary, General Counsel, Chief of Staff and other senior leaders on law and policy issues, such as cyber security, airline security, high technology, drones, immigration reform, encryption, and intelligence law. He also oversaw all litigation at DHS,...

202-747-1920
Liisa Thomas
Partner

Liisa Thomas, a partner based in the firm’s Chicago and London offices, is Co-Chair of the Privacy and Cybersecurity Practice. Her clients rely on her ability to create clarity in a sea of confusing legal requirements and describe her as “extremely responsive, while providing thoughtful legal analysis combined with real world practical advice.” Liisa is the author of the definitive treatise on data breach, Thomas on Data Breach: A Practical Guide to Handling Worldwide Data Breach Notification, which has been described as “a no-nonsense roadmap for in-house and external practitioners alike.”

She is known as an industry leader in the privacy and data security space and is consistently recognized by Leading Lawyers Network, Chambers and The Legal 500, and leading publications and organizations for her work in this area of law. Liisa was recently recognized as the 2017 Data Protection Lawyer of the Year - USA by Global 100, the 2017 U.S. Data Protection Lawyer of the Year by Finance Monthly, and the “Best in Data Security Law Services” at Corporate LiveWire’s 2017 Global Awards.

312-499-6335