
AI RMF – Summary and Key Takeaways

The complexities and hurdles that accompany Artificial Intelligence are growing with its increasing popularity and use. Issues such as bias, the need to maintain validity and accuracy, functions that can encroach on privacy, and the need for interpretability, among many others, are often inherent to AI and can be challenging to address.

To help organisations, developers, and users of AI combat these risks, the National Institute of Standards and Technology (NIST), as directed by Congress, released its Artificial Intelligence Risk Management Framework (AI RMF 1.0) at the end of January 2023.


NIST collaborated with the private and public sectors, including businesses, government agencies, academia, and NGOs representing consumers, to develop this framework. The organisation conducted workshops, released drafts, and accepted public comments to gather a range of perspectives and create a comprehensive, widely applicable framework. The framework is intended to be flexible enough for use across different domains and sectors.

Summary and Takeaways

The framework is divided into two parts: Part 1, Foundational Information, and Part 2, Core and Profiles. Part 1 covers the background information needed to fully understand the nature of AI, its risks, and the constituents of trustworthy AI. AI risk management faces a number of challenges, including:

  • Measuring risks accurately with well-defined methods.

  • Deciding which risks to accept in order to maintain or achieve organisational goals.

  • Allocating resources appropriately according to the needs posed by the risks.

  • Integrating AI risk management into organisational processes and culture.

An important component of Part 1 of the framework is the guidance it provides on what trustworthy AI looks like. According to NIST, trustworthy AI must be:

  1. Valid and reliable – it should fulfil its intended use and operate as intended across a range of conditions and throughout its lifetime.

  2. Safe – responsible design and development, clear communication to deployers, responsible decision making, and appropriate documentation of risks should be employed to ensure the safety of AI.

  3. Secure and resilient – systems should be able to withstand adverse conditions and security threats such as data poisoning and the exfiltration of training data or other intellectual property.

  4. Accountable and transparent – accurate and complete information about AI systems should be provided to users, including development decisions, training data, the structure of the model, intended use cases, and decisions made throughout the system's lifetime.

  5. Explainable and interpretable – those who use or oversee AI systems should have an appropriate understanding of the systems' mechanisms and of what their outputs mean in relation to their designed purposes.

  6. Privacy-enhanced – AI should be designed with privacy in mind, employing privacy-enhancing technologies (PETs) and data-minimisation techniques (a minimal sketch follows this list).

  7. Fair with harmful bias managed – the framework identifies three categories of AI bias to be managed: systemic, computational and statistical, and human-cognitive.
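Picking up the privacy-enhanced point (item 6 above), here is a minimal sketch of data minimisation applied before training: keep only the fields a model needs and pseudonymise the direct identifier. The field names, example record, and salt are hypothetical, for illustration only.

  import hashlib

  REQUIRED_FIELDS = {"age_band", "account_tenure", "outcome"}  # hypothetical schema

  def minimise(record: dict) -> dict:
      # Keep only the fields the model actually needs.
      slim = {k: v for k, v in record.items() if k in REQUIRED_FIELDS}
      # Replace the direct identifier with a salted hash (pseudonymisation).
      salted = ("demo-salt:" + record["user_id"]).encode()
      slim["user_ref"] = hashlib.sha256(salted).hexdigest()[:12]
      return slim

  raw = {"user_id": "u-1042", "email": "a@example.com",
         "age_band": "30-39", "account_tenure": 4, "outcome": 1}
  print(minimise(raw))  # email is dropped, user_id becomes an opaque reference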

Part 2 of the framework lays out the AI RMF Core, which constitutes the actions to manage AI risks and develop trustworthy AI systems. The four functions described here (Govern, Map, Measure, Manage) are broken down into categories and subcategories; overall, the framework details 19 categories and 72 subcategories.
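Since the Core is essentially a hierarchy, it can help to picture it as a nested data structure. Below is a minimal Python sketch of that shape; the category and subcategory entries are paraphrased placeholders, not the official RMF identifiers or text.

  from dataclasses import dataclass, field

  @dataclass
  class Category:
      # One of the 19 Core categories, with its subcategory statements.
      name: str
      subcategories: list[str] = field(default_factory=list)

  @dataclass
  class Function:
      # One of the four Core functions: Govern, Map, Measure, Manage.
      name: str
      categories: list[Category] = field(default_factory=list)

  rmf_core = [
      Function("Govern", [Category("Policies, processes, and procedures",
          ["Risk management processes are in place, transparent, and implemented"])]),
      Function("Map", [Category("Context is established and understood",
          ["Intended purposes and settings of use are documented"])]),
      Function("Measure", [Category("Appropriate methods and metrics are identified",
          ["Approaches for measuring trustworthiness are selected"])]),
      Function("Manage", [Category("AI risks are prioritised and acted upon",
          ["Responses to the highest-priority risks are planned and documented"])]),
  ]

  for fn in rmf_core:
      print(fn.name, "-", len(fn.categories), "category shown (abbreviated)")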

Govern

The Govern function provides structure for the risk management of AI and incorporates it into organisational goals and principles by cultivating a risk management culture. The categories and subcategories under Govern detail the following:

  1. The right policies, practices, and procedures related to the other three functions are in place and implemented.

  2. The right teams and individuals are trained and empowered to carry out their responsibilities relating to the remaining functions.

  3. There is inclusion, diversity, and equity in all areas of the mapping, measuring, and managing of AI risks.

  4. A culture that understands and values AI risk management is cultivated.

  5. Relevant AI actors are engaged with, communicated with, and heard.

  6. Third-party and supply-chain software and data risks are addressed through appropriate policies and procedures (one possible mechanism is sketched below).
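As a concrete illustration of item 6, the sketch below shows one way an organisation might keep a register of third-party AI components and flag overdue risk reviews. The schema, entries, and review window are assumptions for illustration, not an RMF requirement.

  from dataclasses import dataclass
  from datetime import date

  @dataclass
  class ThirdPartyComponent:
      name: str
      kind: str              # "model", "dataset", or "library"
      supplier: str
      licence: str
      last_risk_review: date

  register = [
      ThirdPartyComponent("sentiment-base", "model", "ExampleVendor",
                          "proprietary", date(2023, 1, 15)),
      ThirdPartyComponent("public-reviews", "dataset", "OpenCorpus",
                          "CC-BY-4.0", date(2022, 11, 2)),
  ]

  REVIEW_WINDOW_DAYS = 180  # hypothetical policy value set under Govern

  for c in register:
      if (date.today() - c.last_risk_review).days > REVIEW_WINDOW_DAYS:
          print(f"Review overdue: {c.name} ({c.kind}) from {c.supplier}")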

Map

The Map function builds contextual knowledge and awareness among the different AI actors throughout the lifecycle of AI systems. This information helps prevent negative impacts and informs decision making. The categories and subcategories under the Map function detail the following:

  1. Context is collected and understood from every area and domain, from intended purposes, norms, and expectations to the organisation's mission, goals, and risk tolerances.

  2. All contextual information is correctly categorised and documented.

  3. AI contextual information, such as the system's capabilities, usage, and goals, is examined, documented, and understood.

  4. Risks and benefits are mapped; this is especially important for third-party software and data.

  5. Impacts on various societal groups are identified and documented.
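One lightweight way to capture the mapped context described above is a structured record, loosely in the spirit of a model card. The fields and values below are illustrative assumptions, not an official RMF schema.

  from dataclasses import dataclass, field

  @dataclass
  class SystemContext:
      system_name: str
      intended_purpose: str
      out_of_scope_uses: list[str] = field(default_factory=list)
      affected_groups: list[str] = field(default_factory=list)
      risk_tolerance: str = "unspecified"  # typically set under Govern
      third_party_components: list[str] = field(default_factory=list)

  context = SystemContext(
      system_name="loan-triage-assistant",  # hypothetical system
      intended_purpose="Rank applications for human review",
      out_of_scope_uses=["Fully automated approval or denial"],
      affected_groups=["applicants", "loan officers"],
      risk_tolerance="low",
      third_party_components=["sentiment-base model"],
  )
  print(context)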

Measure

As the name suggests, the Measure function focuses on the analysis, assessment, and monitoring of AI risks and impacts. The categories and subcategories under the Measure function detail the following:

  1. The right metrics and measurement methods for risk and trustworthiness are identified and used; areas that cannot be properly measured are documented. AI metrics and existing controls are regularly assessed and updated.

  2. The trustworthiness of AI systems is measured regularly, including system performance, assurance criteria, functionality and behaviour, security and resilience, privacy, and fairness (a minimal measurement sketch follows this list).

  3. Practices, procedures, and personnel to identify risks over time as well as unanticipated risks are in place.

  4. Feedback from domain experts, end users, and other AI actors is collected and analysed.
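As one concrete example of a trustworthiness measurement (see item 2 above), the sketch below computes a simple demographic parity gap: the largest difference in positive-prediction rates across groups. The choice of metric and the toy data are illustrative assumptions, not values the AI RMF prescribes.

  from collections import defaultdict

  def demographic_parity_gap(predictions, groups):
      # Rate of positive (1) predictions per group, and the largest gap.
      positives = defaultdict(int)
      totals = defaultdict(int)
      for pred, group in zip(predictions, groups):
          totals[group] += 1
          positives[group] += pred
      rates = {g: positives[g] / totals[g] for g in totals}
      return max(rates.values()) - min(rates.values()), rates

  gap, rates = demographic_parity_gap(
      predictions=[1, 0, 1, 1, 0, 0, 1, 0],  # toy model outputs
      groups=["A", "A", "A", "A", "B", "B", "B", "B"],
  )
  print(rates)                                 # {'A': 0.75, 'B': 0.25}
  print(f"demographic parity gap: {gap:.2f}")  # 0.50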

Manage

Under the Manage function, the contextual information gathered earlier is used to allocate resources appropriately, as defined under the Govern function. The Manage function should be applied throughout the lifecycle of the AI system, as risks, contexts, and needs evolve over time. The categories and subcategories under the Manage function detail the following:


  1. Information obtained from the other functions, Map and Measure, is prioritised and addressed (see the prioritisation sketch after this list).

  2. Strategies are prepared and implemented to maximise AI benefits and minimise risks. This includes mechanisms to sustain the value of the systems, mechanisms to address and recover from risks, and mechanisms to ensure that the systems are performing according to their intended use.

  3. Third-party risks and benefits are regularly monitored and managed.

  4. Risk treatment practices and procedures are monitored regularly. Improvement updates are incorporated into the systems.
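To illustrate item 1, here is a minimal sketch of prioritising risks surfaced by Map and Measure using a common likelihood-times-impact score. The scoring scheme and example risks are assumptions for illustration; the framework leaves the choice of prioritisation method to the organisation.

  from dataclasses import dataclass

  @dataclass
  class Risk:
      description: str
      likelihood: int  # 1 (rare) .. 5 (almost certain)
      impact: int      # 1 (negligible) .. 5 (severe)

      @property
      def score(self) -> int:
          return self.likelihood * self.impact

  risks = [
      Risk("Training data drift degrades accuracy", likelihood=4, impact=3),
      Risk("Exfiltration of training data", likelihood=2, impact=5),
      Risk("Biased outcomes for a subgroup", likelihood=3, impact=4),
  ]

  # Address the highest-scoring risks first when allocating resources.
  for r in sorted(risks, key=lambda r: r.score, reverse=True):
      print(f"{r.score:>2}  {r.description}")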


AI risk management is not a one-time activity: it should be continuous, follow a schedule, and be implemented throughout the AI lifecycle.

Companies and AI users can contribute to upcoming versions of the AI RMF by contacting NIST at AIframework@nist.gov.

NIST has also provided an AI RMF Playbook to complement the framework, making it easier to navigate and apply the AI RMF.
