FTC Reiterates AI Best Practices
Friday, April 23, 2021

Building on its April 2020 business guidance on artificial intelligence (AI) and algorithms, the Federal Trade Commission (FTC) published new guidance on April 19, 2021, focused on how businesses can promote truth, fairness and equity in their use of AI.

In the guidance, the FTC recognizes the potential benefits of AI but stresses the need to harness those benefits without inadvertently introducing bias or other unfair outcomes. The FTC cites its prior work, including a report on big data analytics and machine learning, a hearing on algorithms, AI and predictive analytics, the above-mentioned business guidance on AI and algorithms, and its enforcement actions, as the bases for its best practices and lessons learned with respect to using AI truthfully, fairly and equitably.

In its series of best practices, the FTC advises businesses to:

  • Start with the right foundation. From the start, think about ways to improve your data set, design your model to account for data gaps and—in light of any shortcomings—limit where or how you use the model.

  • Watch out for discriminatory outcomes. It’s essential to test your algorithm—both before you use it and periodically after that—to make sure that it doesn’t discriminate on the basis of race, gender or other protected classes. (A simple sketch of one such check appears after this list.)

  • Embrace transparency and independence. Think about ways to embrace transparency and independence—for example, by using transparency frameworks and independent standards, by conducting and publishing the results of independent audits and by opening your data or source code to outside inspection.

  • Don’t exaggerate what your algorithm can do or whether it can deliver fair or unbiased results. Under the FTC Act, your statements to business customers and consumers alike must be truthful, non-deceptive and backed up by evidence. In a rush to embrace new technology, be careful not to overpromise what your algorithm can deliver.

  • Tell the truth about how you use data. Be careful about how you get the data that powers your AI model. Make note of the FTC’s recent enforcement actions against (1) Facebook, alleging that Facebook misled consumers by telling them its facial recognition feature would be used only if they opted in when, for many users, the setting was on by default, and (2) app developer Everalbum, alleging that Everalbum used photos uploaded by app users to train its facial recognition algorithm and deceived users about their ability to control the app’s facial recognition feature and delete their photos and videos upon account deactivation.

  • Do more good than harm. If your model causes more harm than good—that is, if it causes or is likely to cause substantial injury to consumers that is not reasonably avoidable by consumers and not outweighed by countervailing benefits to consumers or to competition—the FTC can challenge the use of that model as unfair.

  • Hold yourself accountable, or be ready for the FTC to do it for you. It’s important to hold yourself accountable for your algorithm’s performance; if you don’t, the FTC may do it for you.
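
The FTC does not prescribe a particular testing method, but the “watch out for discriminatory outcomes” point lends itself to a concrete illustration. The sketch below is a minimal Python example, assuming a pandas DataFrame with hypothetical “group” and “approved” columns, that computes a simple disparate impact ratio (the “four-fifths rule” often used in employment-discrimination analysis) as one possible pre- and post-deployment check; it is illustrative only, not a method endorsed or required by the FTC.

    # Minimal sketch of a pre-deployment bias check. The DataFrame `results`,
    # its "group" (protected attribute) and "approved" (model decision) columns,
    # and the 0.80 threshold (the common "four-fifths rule") are illustrative
    # assumptions, not part of the FTC guidance.
    import pandas as pd

    def disparate_impact_ratio(results: pd.DataFrame) -> float:
        """Ratio of the lowest group's approval rate to the highest group's."""
        rates = results.groupby("group")["approved"].mean()
        return rates.min() / rates.max()

    results = pd.DataFrame({
        "group":    ["A", "A", "A", "B", "B", "B", "B"],
        "approved": [1,   1,   0,   1,   0,   0,   0],
    })

    ratio = disparate_impact_ratio(results)
    print(f"Disparate impact ratio: {ratio:.2f}")
    if ratio < 0.80:  # four-fifths rule of thumb
        print("Potential adverse impact: investigate before deployment.")

Running the same check on production decisions at regular intervals would address the guidance’s call to re-test “periodically after that.”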

The FTC’s guidance comes in the same week that the European Commission published its Proposal for a Regulation on a European approach for Artificial Intelligence.
