October 25, 2020

Volume X, Number 299


October 23, 2020


NIST Seeking Comments on Draft AI Principles

The National Institute of Standards and Technology has issued a set of draft principles for “explainable” artificial intelligence and is accepting comments until October 15, 2020. The authors of the draft principles outline four ways that those who develop AI systems can ensure that consumers understand the decisions reached by AI systems. The four principles are:

  1. Explanation: Delivering evidence and reasons for the decisions, which will vary depending on the consumer and may include (a) user benefit explanations, (b) those that attempt to garner support by society, (c) those that assist with compliance with laws, regulations, and safety standards, or (d) those that explain a benefit to the system operator (e.g., recommending a list of movies to watch).

  2. Meaningful: Having systems that provide meaningful and understandable explanations to users, which will vary by context and by user.

  3. Explanation Accuracy: Ensuring that explanations correctly reflect the system’s process for creating its outputs, which the authors analogize to an explanation by an individual that shows the mental processes the person took to reach a decision.

  4. Knowledge Limits: Having the AI system operate only under the conditions for which it was designed, thus avoiding results that are not reliable.

These principles follow similar guidance issued earlier this year by the FTC, as well as the European Parliament. As a non-regulatory federal agency (which sits within the US Department of Commerce), NIST’s goal is to promote US commerce by advancing standards such as those set out in these principles. For this draft, NIST indicates that it is seeking to improve the level of trust users have in AI so that the systems are more easily and readily adopted and used.

Putting it Into Practice: Companies that are developing AI systems will find these principles a helpful preview of what may become an industry standard, and may want to submit comments (by email to explainable-AI@nist.gov) prior to the October 15, 2020 deadline. In the meantime, companies should keep in mind the existing direction from the FTC and the EU, which includes human oversight and transparency about how AI systems reach their decisions.

Copyright © 2020, Sheppard Mullin Richter & Hampton LLP. National Law Review, Volume X, Number 237



About this Author

Liisa Thomas, Sheppard Mullin Law Firm, Chicago, Cybersecurity Law Attorney

Liisa Thomas, a partner based in the firm’s Chicago and London offices, is Co-Chair of the Privacy and Cybersecurity Practice. Her clients rely on her ability to create clarity in a sea of confusing legal requirements and describe her as “extremely responsive, while providing thoughtful legal analysis combined with real world practical advice.” Liisa is the author of the definitive treatise on data breach, Thomas on Data Breach: A Practical Guide to Handling Worldwide Data Breach Notification, which has been described as “a no-nonsense roadmap for in-house and...