September 24, 2021

Volume XI, Number 267


Wondering How To Use AI? The FTC Has Some Thoughts

The FTC recently provided guidance to companies on how to use artificial intelligence with an aim for “truth, fairness and equity.” The FTC reminded companies of three laws it enforces that carry lessons for those in the AI space: Section 5 of the FTC Act (which would prohibit unfair algorithms, for example); the Fair Credit Reporting Act (which would apply, for example, to algorithms used to deny housing); and the Equal Credit Opportunity Act (which would prohibit algorithms that result in credit discrimination on the basis of race, for example).

These comments come almost a year after the FTC’s earlier recommendations about AI, and show that the topic remains a priority for the agency. In these recent comments, the FTC provides more detail for developers of AI, including:

  • Start with the right foundation. For example, is your data set missing information from particular populations? If so, using that data to build an AI model may yield results that are unfair or inequitable to legally protected groups.

  • Watch out for discriminatory outcomes. Testing your algorithm before and during use is needed to make sure that it doesn’t discriminate on the basis of race, gender, or another protected class.

  • Embrace transparency and independence. Transparency frameworks and other actions such as publishing the results of independent audits or opening data or source code can help increase transparency.

  • Don’t exaggerate what your algorithm can do or whether it can deliver fair or unbiased results. Be careful not to overpromise what your algorithm can deliver, or else risk running afoul of the FTC Act.

  • Tell the truth about how you use data. Pay attention to the statements made about how data is used and the control users will have over that data.

  • Do more good than harm. If your model causes more harm than good – i.e., if it causes or is likely to cause substantial injury to consumers that is not reasonably avoidable by consumers and not outweighed by countervailing benefits to consumers or to competition – the FTC can challenge the use of that model as unfair.

  • Hold yourself accountable – or be ready for the FTC to do it for you. As an example, if your algorithm results in credit discrimination against a protected class, a company may face a complaint alleging violations of the FTC Act and Equal Credit Opportunity Act.

Putting it Into Practice. With these new comments, the FTC provides companies with more concrete examples of ways to meet its transparency, fairness, and accuracy guidance from last year. It also signals the focus the FTC will give to AI under the new administration. The FTC is not the only regulator focusing on AI. For example, federal financial agencies recently requested comments about the use of AI, and the European Commission has just issued a proposed Artificial Intelligence Act (following a white paper and resolution on the topic issued last year).

Copyright © 2021, Sheppard Mullin Richter & Hampton LLP. National Law Review, Volume XI, Number 118

About this Author

Liisa Thomas, Sheppard Mullin Law Firm, Chicago, Cybersecurity Law Attorney
Partner

Liisa Thomas, a partner based in the firm’s Chicago and London offices, is Co-Chair of the Privacy and Cybersecurity Practice. Her clients rely on her ability to create clarity in a sea of confusing legal requirements and describe her as “extremely responsive, while providing thoughtful legal analysis combined with real world practical advice.” Liisa is the author of the definitive treatise on data breach, Thomas on Data Breach: A Practical Guide to Handling Worldwide Data Breach Notification, which has been described as “a no-nonsense roadmap for in-house and...

312-499-6335

Julia Kadish is an attorney in the Intellectual Property Practice Group in the firm's Chicago office.

Areas of Practice

Julia's practice focuses on data breach response and preparedness, reviewing clients' products and services for privacy implications, drafting online terms and conditions and privacy policies, and advising clients on cross-border data transfers and compliance with US and international privacy regulations and standards. She also works on drafting and negotiating software licenses, data security exhibits, big data licenses, professional...

312.499.6334