What is the NIST artificial intelligence (AI) risk management framework?

What is NIST, and why is artificial intelligence in their purview?

The National Institute of Standards and Technology (NIST), part of the U.S. Department of Commerce, has been an American institution since 1901.

NIST mission
To promote U.S. innovation and industrial competitiveness by advancing measurement science, standards, and technology in ways that enhance economic security and improve our quality of life.

NIST has been developing the AI Risk Management Framework (RMF) to “better manage risks to individuals, organizations, and society associated with artificial intelligence (AI).” While NIST’s mission is broad and far-reaching, its recent contributions to AI technology have been strategic and specific.

NIST is crafting the risk management framework (RMF) through a community-driven approach: more than 85 entities contributed comments and recommendations on the March 2022 draft. The organizations contributing to the development of, and research behind, the framework range from higher education institutions and Fortune 500 companies to research labs and AI technology companies (including Monitaur; read our NIST AI RMF comments here).

About the AI risk management framework

NIST’s framework for AI risk management is currently in draft form; the organization aims to release an official version 1.0 in early 2023. In both draft and final form, the framework is meant to be a voluntary resource that helps organizations use AI technology more effectively while:

  • Protecting themselves and society from risk 
  • Being aware of, and taking proactive steps towards, ethical practices
  • Tracking and mitigating unintended and/or harmful bias (and other potential harmful consequences)
  • Improving trust in AI technology

NIST lists four intended audiences for the AI RMF:

  • AI system stakeholders
  • Operators and evaluators
  • External stakeholders
  • General public

The overall outline of the NIST AI Risk Management Framework Draft is: 

  • Framing Risk
  • AI Risks and Trustworthiness
  • Core RMF components 

Additional information in the draft includes the scope of the framework, intended audiences, and a practice guide.

Framing Risk

NIST compiled information on understanding the risks and adverse impacts of AI, along with the challenges of AI risk management. Key points include the potential harms (and advantages) that can stem from AI, the ability to measure and track AI initiatives and their associated risks, and integration of risk management across the organization.

AI Risks and Trustworthiness

In this section of the AI RMF, NIST defines “characteristics that should be considered in comprehensive approaches for identifying and managing risk related to AI systems: technical characteristics, socio-technical characteristics, and guiding principles.” 

Read the full AI Trust section to learn more.

Core RMF Components 

The ultimate objective of the AI RMF Core is to give organizations a set of functions that “organize AI risk management activities at their highest level to map, measure, manage, and govern AI risks.”

Governance at the core of the AI risk management framework


NIST includes the following functions as essential to AI governance (a brief code sketch of how these might be organized follows the list):

  • Map: the context of AI use within the organization is established and understood, and related risks are identified
  • Measure: identified risks are assessed, analyzed, and tracked through established processes
  • Manage: risks are prioritized and acted upon
  • Govern: a culture of risk management is cultivated and maintained
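
To make the four functions concrete, here is a minimal, hypothetical sketch of how a team might organize a risk register around Map, Measure, Manage, and Govern. The class names, fields, and severity scale below are our own illustration for this post, not a schema defined in the NIST draft.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List


class Function(Enum):
    """The four core functions named in the draft AI RMF."""
    MAP = "map"          # establish and understand the context of AI use; identify risks
    MEASURE = "measure"  # assess, analyze, and track identified risks
    MANAGE = "manage"    # prioritize and act on risks
    GOVERN = "govern"    # cultivate a culture of risk management


@dataclass
class RiskItem:
    """One entry in a hypothetical AI risk register."""
    description: str
    function: Function
    severity: int        # e.g., 1 (low) to 5 (high); the scale is illustrative
    owner: str
    mitigations: List[str] = field(default_factory=list)


@dataclass
class RiskRegister:
    """A toy register that groups risk activities by RMF function."""
    items: List[RiskItem] = field(default_factory=list)

    def add(self, item: RiskItem) -> None:
        self.items.append(item)

    def by_function(self, function: Function) -> List[RiskItem]:
        return [i for i in self.items if i.function == function]


# Example: recording a mapped risk for a hypothetical credit-scoring model.
register = RiskRegister()
register.add(RiskItem(
    description="Training data underrepresents younger applicants",
    function=Function.MAP,
    severity=4,
    owner="model-risk-team",
    mitigations=["re-sample training data", "monitor outcomes by age band"],
))
print([item.description for item in register.by_function(Function.MAP)])
```

In practice the draft leaves the “how” to each organization; the point of the sketch is simply that risks get identified (Map), tracked (Measure), acted upon (Manage), and owned within a broader culture of accountability (Govern).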

Implications of the AI risk management framework 

This vital project from NIST has the potential to accelerate effective governance and assurance of AI and ML systems.

At Monitaur, we believe that, by creating more trust and confidence in how these technologies are applied and managed, all stakeholders – corporations, regulators, and consumers – can benefit from extraordinary innovations that will improve our lives. We also believe that good AI requires great governance to ensure that these systems are more fair, safe, compliant, and robust than the human processes that they replace or enhance.

We recognize that NIST is at its core a technical organization seeking to provide clarity on the use of AI technologies, and the AI RMF achieves that aim. However, the risks associated with AI are not solely technical in nature, nor are we at a time in its maturity when we can mitigate those risks effectively with purely technical solutions. Recognizing those limitations, we encourage NIST to consider a holistic, lifecycle approach that incorporates oversight of the people and processes involved, in addition to the model and data risk management.

NIST previously delivered just such a comprehensive approach in its Cybersecurity Framework. In it, inherently technical activities (e.g. Detect) are complemented by human- and process-driven activities (e.g. Identify), as well as a recognition that technical activities must be supported by effective human effort. The combination of people, process, and technology enables organizations to mitigate risks, and we believe it should serve as a model for the AI RMF to create direction, clarity, and accountability for organizations that wish to use AI systems now and in the future.

Read Monitaur’s full response to the NIST AI RMF draft.

Special report on bias in Artificial Intelligence 

Earlier this year, NIST also published a special report: Towards a Standard for Identifying and Managing Bias in Artificial Intelligence.

This special report from NIST is aligned with what we’ve said at Monitaur from day one: bias is a human problem, not a machine problem. It also echoes another of our refrains: “context is everything.”

In this report, NIST suggests a “socio-technical” approach to mitigate bias in AI by acknowledging that AI operates in a larger social context.

Some of the key takeaways from the report include:

  • Academic research on bias can be disconnected from reality. A full-lifecycle, multi-stakeholder approach is needed.
  • Identifying the sources of bias is the first step in any mitigation strategy. 
  • In modeling, there has sometimes been a culture of using datasets that are readily available rather than those that are most suitable. 
  • A push to return to statistical best practices for dataset creation. A good starting point is independently sampled datasets built with special care to be representative, fair, and balanced (a brief sampling sketch follows this list).
  • Computer scientists and data scientists often focus on optimizing a model for raw performance. That culture needs to shift toward building models that are both accurate and fair.
  • NIST advocates for having subject matter experts involved in all stages of dataset and model creation as well as validation to ensure the model is performing as expected and captures the complexity of the use case.
  • Diversity is more than ethnicity and gender. It includes educational background, religious background, employment background, and more. An operations researcher will approach a problem differently than a computer scientist will. When we talk about diversity on a team, the ethnic diversity of its computer scientists alone is insufficient to fully address the problem.
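
To illustrate the dataset and evaluation points above, here is a small, hedged sketch of one standard statistical technique, stratified sampling, alongside a simple group-level outcome check. The data and column names are hypothetical, and NIST’s report does not prescribe any particular code or metric; this is just one way to begin putting “representative, fair, and balanced” into practice.

```python
import pandas as pd

# Hypothetical applicant data; "group" stands in for whatever protected
# or contextual attribute matters in the use case.
df = pd.DataFrame({
    "group":   ["A", "A", "A", "A", "B", "B"],
    "feature": [0.2, 0.5, 0.9, 0.4, 0.7, 0.1],
    "label":   [1, 0, 1, 0, 1, 0],
})

# Stratified sampling: draw the same fraction from every group, so the
# sample preserves the population's group proportions instead of
# whatever a "readily available" extract happens to contain.
sample = df.groupby("group").sample(frac=0.5, random_state=0)
print(sample)

# A simple group-level outcome check: compare positive-label rates across
# groups. This is only a rough proxy for one notion of balance; as the
# report stresses, the right fairness measure depends on context.
rates = df.groupby("group")["label"].mean()
print(rates)
print("gap between groups:", rates.max() - rates.min())
```

The design choice here is deliberate: sampling by group guards against the “readily available data” trap, and reporting outcomes per group keeps fairness visible alongside accuracy rather than leaving it as an afterthought.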

“This document has provided a broad overview of the complex challenge of addressing and managing risks associated with AI bias. It is clear that developing detailed technical guidance to address this challenging area will take time and input from diverse stakeholders, within and beyond those groups who design, develop, and deploy AI applications, and including members of communities that may be impacted by the deployment of AI systems.” Read the full report.