A New Frontier or Back to Basics? FTC Issues New Guidance on Artificial Intelligence Technology
Wednesday, April 15, 2020

In the latest piece to come out of the FTC’s new focus on emerging technologies, the FTC Bureau of Consumer Protection issued new guidance on the use of artificial intelligence (“AI”) and algorithms. The guidance follows up on a 2018 hearing where the FTC explored AI, algorithms, and predictive analytics. As the FTC recognizes, these technologies already pervade the modern economy. They influence consumer decision making – from what video to watch next, to what ad to click on, to what product to purchase. They make investment decisions, credit decisions, and, increasingly, health decisions, which has also sparked the interest of State Attorneys General and the Department of Health & Human Services. But the promise of new technologies also comes with risk. Specifically, the FTC cites an instance in which an algorithm designed to allocate medical interventions ended up funneling resources to healthier, white populations.

While the technologies may be new, the FTC’s guidance serves as a reminder of some of the golden rules of consumer protection: be transparent, be fair, and be secure.

Be Transparent and Explain Your (or the AI’s) Decisions

Transparency issues can arise when automated tools interact with consumers, when sensitive data is collected from consumers, or when automated decisions are being made that impact consumers. The FTC recognizes that many factors affect algorithmic decision-making, but states that if companies use AI to make decisions about consumers, they must know what data was used and how it was used, and be able to explain their decisions to consumers.

  • Consumer Interactions. While AI often operates in the background of consumer activity, the FTC cautions that companies must be vigilant that the tool does not mislead consumers, particularly when an AI platform is directly interacting with consumers. For example, if an AI chatbot misleads consumers, the FTC may deem it a deceptive practice under the FTC Act. Transparency matters here as well: companies should, for example, inform consumers if the terms of a deal can be altered based on AI tools.

  • Data Collection. According to the guidance, data should not be collected secretly. The FTC recommends that companies looking for data to feed their algorithms clearly disclose what data is collected, how it is collected, and how it will be used.

  • Automated Decisions. The Fair Credit Reporting Act (“FCRA”) employs a relatively broad definition of “consumer reporting agency,” and companies using AI to make automated decisions about eligibility for credit, employment, insurance, housing, or other similar benefits may be required to provide “adverse action” notices after certain automated decisions. More generally, the FTC advises companies calculating consumer risk scores with algorithms to disclose key factors that affect the score.

Be Fair and Empirically Sound

Ensuring that algorithms and AI tools behave fairly requires care and attention. An algorithm designed with the best intentions could, for example, “result in discrimination against a protected class.” Additionally, the FCRA imposes accuracy obligations on consumer reporting agencies as well as “furnishers” that provide data about their customers to others for use in automated decision-making. The FTC provides a few guidelines that should be worked into any protocols and procedures for maintaining AI tools:

  • Rigorously test algorithms, including inputs and outputs, both before and while in use.

  • Particularly in areas covered by the FCRA, provide consumers with the information used to make important decisions and allow consumers to dispute the accuracy of the information.

  • Establish written policies and procedures to ensure the accuracy and integrity of the data used in AI models.

  • To help avoid biased results, ask “four key questions”: (1) how representative is the data set; (2) does the model account for biases; (3) how accurate are the predictions; and (4) does reliance on data raise ethical or fairness concerns?

  • Derive data from an empirical comparison of representative sample groups developed using accepted statistical principles and methodology, and periodically revalidate and adjust the model as needed.

Be Secure

AI tools run on data. Accordingly, companies developing AI tools to sell to others should build in data security testing and protocols to help avoid unauthorized access and use. In addition to technical security, companies should consider protocols to vet users and/or keep their technology on their own servers to maintain control over how the tools are used and secured.
