June 5, 2023

Volume XIII, Number 156



Looking for Guidance on AI Governance? NIST Releases AI Risk Management Framework 1.0 (and Companion Documents)

In the National Defense Authorization Act, Congress directed the National Institute of Standards and Technology (NIST) to work with public and private organizations to create a voluntary risk management framework for trustworthy artificial intelligence systems. Following up on that Congressional directive, NIST has released Artificial Intelligence Risk Management Framework 1.0 (AI RMF 1.0) to “offer a resource to the organizations designing, developing, deploying, or using AI systems to help manage the many risks of AI and promote trustworthy and responsible development and use of AI systems.”

The framework could not be timelier: the discourse surrounding AI recently reached an inflection point with the introduction of ChatGPT, an advanced, conversational AI chatbot. While ChatGPT has made AI and its potential more tangible to the masses, AI is already a pervasive and heavily utilized technology in virtually every industry. Recognizing both the seemingly limitless potential of AI and the risks that come along with it, governmental authorities across the world have, over the last several years, addressed calls for regulation of AI by introducing (and in some cases, passing) legislation (e.g., the EU’s proposed AI Act) and releasing voluntary AI governance standards (such as the White House’s AI Bill of Rights). In addition, dozens of industry bodies and standards-setting organizations have introduced AI standards, guidelines, and frameworks, some of them industry- and technology-agnostic, and others industry- or technology-specific.

The AI RMF 1.0 was developed by NIST in collaboration with stakeholders from the U.S. government, industry, and academia over a period of more than two years, following a concept paper (December 2021) and two drafts of the RMF (March and August 2022).

The AI RMF 1.0 is a voluntary framework that organizations can use to address risks in the design, development, use, and evaluation of AI products, services, and systems. The AI RMF 1.0 can serve as a useful guide to organizations in developing and maintaining an AI governance program, by introducing, and providing some high-level answers to, threshold questions:

  1. How is an AI System defined? The AI RMF 1.0 defines “AI System” as “an engineered or machine-based system that can, for a given set of objectives, generate outputs such as predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy.”

  2. What risks does AI present? The AI RMF 1.0 provides a framework for addressing, documenting, and managing AI risks and potential negative impacts. The AI RMF 1.0 introduces three categories of potential harms related to AI systems for organizations to consider while framing relevant risks: (1) harm to people; (2) harm to an organization; and (3) harm to an ecosystem.

  3. Who should be involved in AI governance…and where might privacy and legal departments be involved? The AI RMF 1.0 discusses and provides examples of the AI “actors” (i.e., stakeholders) that will need to be involved in AI governance. As with privacy and cybersecurity compliance and governance, the framework acknowledges that “AI Actors will represent a diversity of experience, expertise, and backgrounds and comprise demographically and disciplinarily diverse teams.”

  4. What are the characteristics of a “trustworthy” AI system? Also referred to as “responsible” or “ethical” AI, the RMF discusses in detail the following characteristics: valid and reliable, safe, secure and resilient, accountable and transparent, explainable and interpretable, privacy-enhanced, and fair with harmful bias managed.

Part 2 of the framework describes the “Core” of the AI RMF 1.0, which is composed of four functions: Govern, Map, Measure, and Manage. Alongside the AI RMF 1.0, NIST released a companion NIST AI RMF Playbook, an AI RMF Explainer Video, an AI RMF Roadmap, an AI RMF Crosswalk, and various Perspectives. The crosswalk is a particularly informative document that compares the AI RMF 1.0 with other frameworks, such as the White House’s AI Bill of Rights and the OECD Recommendation on AI.


We have significant experience advising companies in a variety of industries on AI and AI-related issues, including the creation of AI governance programs and boards, development of AI technologies in-house, acquisition and licensing of AI technologies from vendors, and AI and algorithmic impact assessments. Notably, we have also created a crosswalk between the AI RMF and the proposed EU AI Act. We also have deep experience advising global clients across industry verticals on the privacy-related impacts of AI, including automated decision-making and profiling and other privacy compliance issues (see our detailed blog post on ADM and profiling here).

If you are interested in learning more about our experience with AI governance and compliance, contact one of the authors or your Squire Patton Boggs relationship attorney. If you are planning on attending the IAPP’s Global Privacy Summit in April, consider attending Kyle Fath’s session on Building an AI Framework and Governance Program.

This will not be the last time we hear from NIST this year. Indeed, NIST is set to host several public workshops in February to discuss changes to NIST’s Cybersecurity Framework, and we will be here to guide you through those changes, too.

© Copyright 2023 Squire Patton Boggs (US) LLP. National Law Review, Volume XIII, Number 45

About this Author

Kyle R. Fath, Cybersecurity Attorney, Squire Patton Boggs, New York & Los Angeles
Of Counsel

Kyle Fath is counsel in the Data Privacy & Cybersecurity Practice. He offers clients a unique blend of deep experience in counseling companies through compliance with data privacy laws, drafting and negotiating technology agreements, and advising on the privacy, IT, and IP implications of mergers & acquisitions and other corporate transactions. His practice has a particular focus on the ingestion and sharing of data by way of strategic data transactions, data brokers, and vendor relationships, and the implications of digital advertising (as companies look toward...

Kristin L. Bryan, Litigation Attorney, Squire Patton Boggs, Cleveland, OH & New York, NY
Senior Associate

Kristin Bryan is a litigator experienced in the efficient resolution of contract, commercial, and complex business disputes, including multidistrict litigation and putative class actions, in courts nationwide.

She has successfully represented Fortune 15 clients in high-stakes cases involving a wide range of subject matters.

As a natural extension of her experience litigating data privacy disputes, Kristin is also experienced in providing business-oriented privacy advice to a wide range of clients, with a particular focus on companies handling customers’ personal data. In this...

Gicel Tomimbang, Associate Attorney, Data Privacy & Cybersecurity, Squire Patton Boggs LLP, Los Angeles, California

Gicel Tomimbang is an associate in the Data Privacy, Cybersecurity & Digital Assets Practice.

A significant portion of Gicel’s practice focuses on the intersection of healthcare and privacy. Clients frequently turn to her for advice and counsel on complex issues that arise under the Health Insurance Portability and Accountability Act (HIPAA), the Confidentiality of Medical Information Act (CMIA), the California Consumer Privacy Act (CCPA), the FTC Act, and the FTC Health Breach Notification Rule.

Gicel previously...

Katherine Spicer, Attorney, Squire Patton Boggs

Katherine (Katy) is a litigator who draws on her unique military background to help clients solve their complex legal issues. Katy represents clients in internal and government investigations, complex civil and criminal litigation, and international arbitration.