The National Institute of Standards and Technology (NIST) Artificial Intelligence Risk Management Framework

The NIST AI Risk Management Framework (RMF) is designed to equip organizations and individuals with approaches that increase the trustworthiness of AI systems, and to help foster the responsible design, development, deployment, and use of AI systems over time.

Are you subject to the AI RMF?

The NIST AI RMF is a voluntary framework that can be used by any organization or entity that develops, deploys, or operates AI systems. This includes government agencies, private companies, research institutions, and any other entities involved in AI development and deployment.

Areas of the AI RMF

Govern function

  • Establishes a risk management culture within organizations dealing with AI systems;
  • Develops processes to anticipate, identify, and manage risks associated with AI, considering users and societal impacts;
  • Includes procedures for assessing potential impacts of AI systems;
  • Aligns AI risk management with organizational principles and strategic priorities;
  • Integrates technical AI design with organizational values, fostering competencies for personnel involved; and
  • Manages the entire AI product lifecycle, covering legal issues related to third-party software and data usage.

Map function

  • Establishes the context to frame risks related to an AI system; and
  • Enables negative risk prevention and informs decisions for processes such as model management, as well as an initial decision about appropriateness or the need for a responsible AI solution (see the sketch after this list).
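
To make this concrete, the sketch below shows one way an organization might record the context framing that the Map function describes. It is a minimal illustration only, assuming a simple in-house risk register; the class and field names (MappedRisk, intended_use, proceed_with_ai, and so on) are hypothetical and are not prescribed by the AI RMF.

```python
# Illustrative sketch only: the AI RMF does not prescribe a data model.
# All class and field names here are hypothetical.
from dataclasses import dataclass
from typing import List

@dataclass
class MappedRisk:
    """One entry in a hypothetical risk register produced during the Map function."""
    system_name: str
    intended_use: str           # context framing: what the AI system is for
    affected_groups: List[str]  # users and societal impacts to consider
    potential_harm: str         # negative outcome being framed
    likelihood: str             # e.g. "low" / "medium" / "high"
    proceed_with_ai: bool       # initial decision about the appropriateness of an AI solution
    notes: str = ""

# Example: framing one risk for a hypothetical resume-screening model.
register = [
    MappedRisk(
        system_name="resume-screener-v1",
        intended_use="rank job applications for human review",
        affected_groups=["job applicants", "hiring managers"],
        potential_harm="systematic disadvantage to a protected group",
        likelihood="medium",
        proceed_with_ai=True,
        notes="requires bias measurement before deployment (see Measure function)",
    )
]
```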

Measure function

  • Employs quantitative, qualitative, or mixed method tools, techniques, and methodologies to analyze, benchmark, and monitor AI risk and related impacts;
  • Measuring AI risks includes tracking metrics for trustworthiness characteristics, social impact, and human-AI configurations; and
  • Processes developed or adopted in the measure function should include rigorous software testing and performance assessment methodologies with associated measures of uncertainty, comparisons to performance benchmarks, and formalized reporting and documentation of results (see the example after this list).
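
As a rough illustration of the kind of measurement this function calls for, the sketch below estimates a model's accuracy, attaches a bootstrap confidence interval as a measure of uncertainty, compares the result to a benchmark, and records the outcome for reporting. The data, threshold, and function names are hypothetical; the AI RMF does not mandate any particular metric or tooling.

```python
# Illustrative only: the AI RMF does not require this metric or method.
import random

def accuracy(predictions, labels):
    """Fraction of predictions that match the labels."""
    return sum(p == y for p, y in zip(predictions, labels)) / len(labels)

def bootstrap_ci(predictions, labels, n_resamples=1000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for accuracy (a simple uncertainty measure)."""
    rng = random.Random(seed)
    n = len(labels)
    scores = []
    for _ in range(n_resamples):
        idx = [rng.randrange(n) for _ in range(n)]
        scores.append(accuracy([predictions[i] for i in idx], [labels[i] for i in idx]))
    scores.sort()
    low = scores[int((alpha / 2) * n_resamples)]
    high = scores[int((1 - alpha / 2) * n_resamples) - 1]
    return low, high

# Hypothetical evaluation data and benchmark.
predictions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
labels      = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]
BENCHMARK = 0.75  # illustrative performance benchmark

point = accuracy(predictions, labels)
low, high = bootstrap_ci(predictions, labels)

# Formalized reporting: record the result, its uncertainty, and the benchmark comparison.
report = {
    "metric": "accuracy",
    "value": round(point, 3),
    "95%_ci": (round(low, 3), round(high, 3)),
    "benchmark": BENCHMARK,
    "meets_benchmark": point >= BENCHMARK,
}
print(report)
```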

Manage function

  • Entails allocating risk resources to mapped and measured risks on a regular basis;
  • Risk treatment comprises plans to respond to, recover from, and communicate about incidents or events; and
  • After completing the manage function, plans for prioritizing risk and for regular monitoring and improvement will be in place.

FAQs

  • What is the purpose of the NIST AI RMF?

    The NIST AI RMF aims to provide a structured approach for managing risks associated with AI systems. It offers guidelines and best practices to enhance the responsible development, deployment, and operation of AI technologies.

  • How is the NIST AI RMF related to the Blueprint for the AI Bill of Rights?

    While the NIST AI RMF offers a broader framework for managing AI risks, the Blueprint for the AI Bill of Rights focuses specifically on mitigating risks related to human rights and access to resources, complementing the efforts of the NIST AI RMF in promoting responsible and ethical AI practices.

  • Is the NIST AI RMF legally binding?

    The NIST AI RMF itself is not legally binding. However, it may be referenced or incorporated into regulatory frameworks or industry standards related to AI governance and risk management. Organizations may choose to adopt the NIST AI RMF voluntarily to improve their AI practices and demonstrate compliance with relevant regulations or standards.

The information provided does not, and is not intended to, constitute legal advice. Instead, all information, content, and materials presented are for general informational purposes only.
