March 23, 2023

Volume XIII, Number 82


NIST Publishes Artificial Intelligence Risk Management Framework

On January 26, 2023, the National Institute of Standards and Technology (“NIST”) released guidance entitled Artificial Intelligence Risk Management Framework (AI RMF 1.0) (the “AI RMF”), intended to help organizations and individuals in the design, development, deployment, and use of AI systems. The AI RMF, like the White House’s recently published Blueprint for an AI Bill of Rights, is not legally binding. Nevertheless, as state and local regulators begin enforcing rules governing the use of AI systems, industry professionals will likely turn to NIST’s voluntary guidance when performing risk assessments of AI systems, negotiating contracts with vendors, performing audits on AI systems, and monitoring the use of AI systems.

NIST broadly defines an “AI system” as an “engineered or machine-based system that can, for a given set of objectives, generate outputs such as predictions, recommendations, or decisions influencing real or virtual environments.” This broad definition covers many of the commonly used AI-based hiring and recruitment products, such as resume screening software and gamified assessment or selection tests.

The AI RMF is divided into two parts. Part One includes foundational information about AI systems, including seven characteristics of trustworthy AI systems:

  • Valid and reliable – AI systems can be assessed by ongoing testing or monitoring to confirm that the system is performing as intended.

  • Safe – AI systems should not, under defined conditions, lead to a state in which human life, health, property, or the environment is endangered.

  • Secure and resilient – AI systems and their ecosystems are resilient when they are able to withstand unexpected adverse events or changes in their environment.

  • Accountable and transparent – Information about an AI system and its outputs increases confidence in the system and enables organizational practices and governing structures for harm reduction.  

  • Explainable and interpretable – The representation of the mechanism underlying AI systems’ operation (explainability), and the meaning of an AI system’s output (interpretability), can assist those operating and overseeing AI systems.

  • Privacy-enhanced – Anonymity, confidentiality, and control generally should guide choices for AI system design, development, and deployment.

  • Fair with harmful bias managed – NIST has identified three major categories of AI bias to be considered and managed: systemic, computational and statistical, and human-cognitive bias.

Part Two details the “core” of the AI RMF, which is structured around four functions—each containing categories and subcategories—designed to “enable dialogue, understanding, and activities to manage AI risks and responsibly develop trustworthy AI systems.” The four core functions are summarized as follows:

  • Govern – Cultivating and implementing a culture of risk management and outlining processes and organizational schemes to identify and manage risk, as well as understanding, managing, and documenting legal and regulatory requirements involving the AI system.

  • Map – Understanding and documenting the intended purposes and impacts of the AI system, as well as the specific tasks and methods used to implement the AI system.

  • Measure – Evaluating the AI system and demonstrating it to be valid, reliable, and safe. 

  • Manage – Determining whether the AI system achieves its intended purpose, determining whether it should proceed, and ensuring that mechanisms are in place to sustain the value of the AI system.

Part Two also suggests preparing “AI RMF Profiles,” which describe how the core functions are implemented for a given context:

  • Use case profiles – Applying core functions to a specific use case, such as an “AI RMF hiring profile” or an “AI RMF fair housing profile.”

  • Temporal profiles – Comparing the current state of an AI risk management activity to a desired target state, revealing gaps to be addressed and management objectives.

  • Cross-sectoral profiles – Covering risks of models or applications that can be used across different use cases or sectors.

Although the AI RMF does not include model templates, organizations should consider preparing AI RMF Profiles to streamline the process of operationalizing and documenting compliance with AI RMF guidance.
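For organizations that track such documentation programmatically, a profile can be modeled as a simple data structure mapping the four core functions to documented activities, with a helper that surfaces the gaps a temporal profile is meant to reveal. This is a minimal illustrative sketch only; the class, field names, and example activities are assumptions for demonstration, not templates or terminology from NIST's guidance.

```python
from dataclasses import dataclass, field

@dataclass
class AIRMFProfile:
    """Hypothetical sketch of an AI RMF use-case profile: the four core
    functions (Govern, Map, Measure, Manage) each map to the activities
    the organization has documented. Names are illustrative only."""
    use_case: str
    activities: dict[str, list[str]] = field(default_factory=dict)

def temporal_gaps(current: AIRMFProfile, target: AIRMFProfile) -> dict[str, list[str]]:
    """Compare a current-state profile to a target-state profile and
    return, per core function, the activities not yet implemented."""
    gaps: dict[str, list[str]] = {}
    for function, wanted in target.activities.items():
        done = set(current.activities.get(function, []))
        missing = [a for a in wanted if a not in done]
        if missing:
            gaps[function] = missing
    return gaps

# Example: a hypothetical "hiring" use-case profile, compared against
# a desired target state to expose outstanding activities.
current = AIRMFProfile(
    use_case="hiring",
    activities={"Govern": ["risk policy adopted"], "Map": []},
)
target = AIRMFProfile(
    use_case="hiring",
    activities={
        "Govern": ["risk policy adopted", "legal requirements documented"],
        "Map": ["intended purpose documented"],
    },
)

print(temporal_gaps(current, target))
# {'Govern': ['legal requirements documented'], 'Map': ['intended purpose documented']}
```

The point of the sketch is only that a temporal profile is, in effect, a diff between documented and desired risk-management activities; any real implementation would follow the organization's own compliance tooling.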

©2023 Epstein Becker & Green, P.C. All rights reserved. National Law Review, Volume XIII, Number 32

About this Author

Adam S. Forman, Epstein Becker Green, Workforce Management Lawyer, Chicago, Detroit, Social Media Issues Attorney

ADAM S. FORMAN is a Member of the Firm in the Employment, Labor, and Workforce Management practice, based in Chicago and Detroit (Metro). As noted in the 2015 edition of Chambers USA, Mr. Forman “is a renowned expert in social media issues relating to the workplace” and also “focuses on litigation, training and preventive advice on the employment side.” A frequent writer and national lecturer on issues related to technology in the workplace, such as social media, Internet, and privacy issues facing employers, Mr. Forman is often interviewed by...

Nathaniel M. Glasser, Epstein Becker, Labor, Employment Attorney, Publishing

NATHANIEL M. GLASSER is a Member of the Firm in the Labor and Employment practice, in the Washington, DC, office of Epstein Becker Green. His practice focuses on the representation of leading companies and firms, including publishing and media companies, financial services institutions, and law firms, in all areas of labor and employment relations.

Mr. Glasser’s experience includes:

  • Defending clients in employment litigation, from single-plaintiff to class action disputes,...

Alexander Franchilli, Epstein Becker Law Firm, Labor and Employment Litigation Attorney

Alexander Franchilli is an Associate in the Employment, Labor & Workforce Management and Litigation practices, in the New York office of Epstein Becker Green. 

Mr. Franchilli’s experience includes:

  • Representing employers in labor and employment law litigation involving breach of employment agreements, promissory notes, wage and hour violations, wrongful termination, and WARN Act violations

  • Litigating cases concerning unfair competition and breaches of non-competition agreements

  • Providing representation to employers in federal...