The National Institute of Standards and Technology published a voluntary framework in January 2023. Three years later, it is referenced in state statutes, federal procurement, executive orders, and the underwriting files of the insurers that will carry AI risk through the next decade. For an operator of an autonomous agent in the United States, the NIST AI RMF is no longer optional in practice. It is the document courts, regulators, and contracts use to answer whether the operator acted reasonably.

Key takeaways

  • NIST AI RMF 1.0 is voluntary by design but operationally load-bearing. It is cited in Colorado SB 24-205, federal EOs on AI, OMB M-24-10, and a growing set of sectoral rules.
  • The framework organises around four functions (Govern, Map, Measure, Manage), decomposed into approximately seventy subcategories that form a testable set of outcomes.
  • The Generative AI Profile (NIST AI 600-1, July 2024) extends the RMF to foundation models and autonomous agents, identifying twelve risks specific to generative systems.
  • In US negligence law, alignment with an accepted practice framework is strong evidence of reasonable care. NIST is currently the closest thing to an accepted practice document for AI.
  • Implementation is iterative, not a checklist. The framework expects documented decisions and continuous reassessment rather than a one-time assessment.

The framework as published.

The AI Risk Management Framework was developed under section 5301 of the National Artificial Intelligence Initiative Act of 2020 and released as version 1.0 in January 2023. It consists of a 48-page core document, a companion playbook with implementation guidance, and a set of profiles that tailor the framework to specific use cases. The document describes itself as voluntary, rights-preserving, non-sector-specific, and use-case agnostic. The language is careful. It is not a regulation. It is a consensus description of what managing AI risk looks like.

The four-function structure is the most visible feature. Govern is the umbrella. It covers the culture, policies, and processes that allow an organisation to manage AI risk at all. Map establishes the context in which the system operates, the stakeholders affected, the categories of risk, and the intended benefits. Measure applies quantitative and qualitative analysis to the mapped risks and tracks them over time. Manage prioritises risks, selects responses, executes the responses, and monitors their effects. The four run concurrently across the AI lifecycle, not sequentially.

Each function decomposes into categories (typically four to six) and those into subcategories (typically three to five each), producing approximately seventy named outcomes an organisation should be able to demonstrate. The decomposition is important. It converts the abstract goal of responsible AI into testable statements. An auditor, a regulator, or an insurer can ask whether a specific subcategory is satisfied and what the evidence is.
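To make the point concrete, here is a minimal sketch, in Python, of how a subcategory can be treated as a testable record rather than a slogan. The identifier is a real RMF label, but the class, fields, and evidence paths are hypothetical illustrations, not anything the framework prescribes.

```python
from dataclasses import dataclass, field

@dataclass
class Subcategory:
    """One named RMF outcome and the evidence offered to satisfy it."""
    identifier: str                                     # e.g. "MEASURE 2.3"
    statement: str                                      # the outcome as worded
    evidence: list[str] = field(default_factory=list)   # artefact paths or IDs

    def is_satisfied(self) -> bool:
        # The auditor's question reduces to: is there at least one artefact?
        return len(self.evidence) > 0

m_2_3 = Subcategory(
    identifier="MEASURE 2.3",
    statement="AI system performance is demonstrated for deployment conditions.",
    evidence=["eval-reports/2026-01-regression.pdf"],
)
print(m_2_3.identifier, "satisfied:", m_2_3.is_satisfied())
```

The value is not the code. It is that "satisfied" becomes a property with evidence attached, which is the shape an auditor's or insurer's question takes.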

The Generative AI Profile.

NIST published the Generative AI Profile (NIST AI 600-1) in July 2024, adapting the RMF to foundation models and autonomous agents. The profile identifies twelve risks specific to generative systems: CBRN information or capabilities; confabulation; dangerous, violent, or hateful content; data privacy; environmental impacts; harmful bias and homogenisation; human-AI configuration; information integrity; information security; intellectual property; obscene, degrading, or abusive content; and value chain and component integration. For an operator of an autonomous agent, at least five of these risks are usually load-bearing: confabulation, human-AI configuration, information integrity, information security, and value chain traceability.

The profile is the first NIST text explicitly addressing agentic systems. It introduces terminology that has become common in US policy discourse: the difference between system autonomy and user control, the distinction between in-context mitigation and upstream mitigation, the role of the AI actor (developer, deployer, user). For operators reading the profile for the first time, the most useful section is the enumeration of suggested actions under each Manage subcategory. These are not rules. They are a vocabulary for describing what an operator actually did.

From voluntary to operational.

Three mechanisms have transformed NIST from a voluntary document into an operational benchmark.

First, the 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence directed federal agencies to align their AI activities with NIST's framework. OMB Memorandum M-24-10 operationalised the directive for most civilian agencies, requiring risk management plans aligned with the RMF for AI used in safety-impacting or rights-impacting contexts. Federal procurement now routinely references NIST by name.

Second, state AI statutes have built the framework into their safe-harbour structures. The Colorado AI Act's rebuttable presumption of reasonable care turns on whether the deployer has adopted a risk management programme aligned with a nationally recognised framework. NIST is the only US framework that clearly satisfies the description. Other states advancing similar legislation (California, Connecticut, Texas, New York) have adopted parallel references.

Third, the insurance market has adopted NIST as a baseline underwriting reference. US-based specialty AI carriers and brokers treat alignment with the RMF as the minimum documented governance posture below which coverage is difficult to price. For an operator of an autonomous agent, failing to align with NIST means losing the simplest argument available at the underwriting desk.

The cross-reference effect. NIST is not the most detailed framework available. ISO/IEC 42001:2023 is more prescriptive and certifiable. But NIST is referenced more often by US-facing authorities, and each new reference increases the evidentiary weight of the framework in any subsequent dispute. In US practice, adoption tends to converge on whichever document is cited most.

Reasonable care in negligence.

US negligence law imposes a reasonable-person standard. A defendant is liable for harm caused by conduct that a reasonable person would not have undertaken in the same circumstances. For decades, courts have used industry standards and accepted practice frameworks to define reasonableness in technical fields. The Restatement (Third) of Torts, section 13, provides that compliance with the custom of the community, or of others in like circumstances, is evidence that conduct is not negligent, but does not preclude a finding of negligence. A voluntary framework is not itself custom, but where it is widely adopted and independently developed, courts have been receptive to its use as evidence of accepted practice.

For AI specifically, the doctrinal picture is still unsettled. The cases that have reached US federal or state appellate courts as of 2026 are few, and most involve data or privacy dimensions rather than autonomous decision-making. Academic commentary has converged on the proposition that, where no specific statutory duty applies, the NIST AI RMF is the leading candidate for defining reasonable care. This is the logic behind the Colorado safe harbour. It is also the logic that both plaintiff and defence bars are preparing to argue.

Implementation patterns.

Organisations implementing the RMF in practice tend to produce three categories of artefact.

The first is governance documentation. This includes the AI policy, the assignment of roles (an AI risk owner, a documentation owner, a monitoring lead), the definition of risk tolerance, and the decision-making forum that reviews AI deployments. Under RMF this maps to the Govern function.

The second is per-system documentation. For each AI system in use, the organisation produces a context record covering intended purpose, stakeholders, data sources, performance metrics, known limitations, and integration boundaries. The record is updated through the lifecycle. This maps to Map and Measure.
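A minimal sketch of such a context record follows, assuming hypothetical field and system names; the RMF describes the content, not the format.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ContextRecord:
    """Per-system record covering the Map and Measure fields listed above."""
    system_name: str
    intended_purpose: str
    stakeholders: list[str]
    data_sources: list[str]
    performance_metrics: dict[str, float]    # metric name -> last measured value
    known_limitations: list[str]
    integration_boundaries: list[str]        # what the agent may call or write to
    last_reviewed: date                      # updated through the lifecycle

record = ContextRecord(
    system_name="support-triage-agent",
    intended_purpose="Draft replies to routine support tickets for human review",
    stakeholders=["customers", "support staff"],
    data_sources=["ticket history", "product documentation"],
    performance_metrics={"draft_acceptance_rate": 0.82},
    known_limitations=["confabulates order details absent from the ticket"],
    integration_boundaries=["ticketing API, draft-write only"],
    last_reviewed=date(2026, 1, 15),
)
```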

The third is the monitoring and response infrastructure. Logs, metrics, evaluation reports, incident playbooks, and the procedures that turn a detected issue into an actioned response. This maps to Manage and closes the loop back to Govern.
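The closing of the loop can be sketched the same way. The playbook names and thresholds below are hypothetical; the point is that a detection maps deterministically to a logged response rather than to an ad hoc decision.

```python
from datetime import datetime, timezone

# Hypothetical playbook table: detection kind -> prescribed first response.
PLAYBOOKS = {
    "metric_breach": "pause deployment; notify AI risk owner; open incident",
    "boundary_violation": "revoke agent credentials; escalate to security lead",
}

def handle_detection(kind: str, detail: str) -> dict:
    """Turn a detected issue into a logged, actioned response (Manage).
    The returned record feeds the periodic Govern review."""
    incident = {
        "kind": kind,
        "detail": detail,
        "response": PLAYBOOKS.get(kind, "escalate for manual triage"),
        "opened_at": datetime.now(timezone.utc).isoformat(),
    }
    # A real programme would append this to a durable incident log.
    print(f"[incident] {incident['kind']}: {incident['response']}")
    return incident

handle_detection("metric_breach", "draft_acceptance_rate fell below 0.70")
```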

None of these are unique to AI. The function of the RMF is to reorganise general management practice around the specific risks that generative and agentic systems introduce. An organisation with a mature security or quality management programme will find much of the scaffold already in place.

Common failure modes.

Three implementation failures recur in reviews.

The first is paper alignment. An organisation adopts an RMF-structured policy but does not produce the artefacts that correspond to the subcategories. The policy references Measure 2.3 without ever running a measurement. The result is a document that looks defensible from the outside but does not survive a question about specific evidence.

The second is static deployment. An organisation runs the mapping and measurement at procurement but does not update either as the system evolves. By the time an incident occurs, the mapped context no longer matches the deployed system. Under the Colorado AI Act, this failure alone can defeat the rebuttable presumption of reasonable care.
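A simple guard against this failure, again a sketch with hypothetical version labels and review intervals, is to check the context record against the deployed system before relying on it.

```python
from datetime import date, timedelta

def context_is_stale(mapped_version: str, deployed_version: str,
                     last_reviewed: date, max_age_days: int = 90) -> bool:
    """Flag a context record that no longer matches the deployed system,
    whether because the system changed or because the review lapsed."""
    version_drift = mapped_version != deployed_version
    review_lapsed = date.today() - last_reviewed > timedelta(days=max_age_days)
    return version_drift or review_lapsed

if context_is_stale("agent-v3.1", "agent-v4.0", date(2025, 10, 1)):
    print("stale context: re-run Map and Measure before the record is cited")
```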

The third is single-owner governance. One senior individual owns everything and the risk is carried by that person's attention. RMF Govern explicitly requires distributed accountability. When the single owner leaves or is reassigned, the programme decays. Auditors and regulators treat single-owner programmes as high-fragility, regardless of documented quality.

The trajectory through 2027.

Four movements will strengthen NIST's operational weight. NIST itself is working on sector-specific profiles, with early work on healthcare, financial services, and critical infrastructure. State statutes will continue to reference NIST by name or through the nationally recognised framework formula. Federal procurement will continue to require RMF alignment for AI used in rights-impacting contexts. And insurance markets will continue to treat RMF alignment as the floor.

The effect is asymmetric. Organisations that adopt the framework early have time to operationalise it. Organisations that wait will be asked to demonstrate alignment at the point of an incident, which is not when the alignment can be built. For an operator deploying autonomous AI agents into a US context in 2026, the rational posture is to treat NIST AI RMF as a live operational reference, not a theoretical one.

Related reading

For the statute that made NIST alignment a legal safe harbour, see the Colorado AI Act and deployer obligations under SB 24-205. For the international management-system standard that complements NIST, see our reading of ISO/IEC 42001 and the other major frameworks. For the cross-jurisdictional picture, see US, EU, UK: three approaches to the same question.

Frequently asked questions

Is NIST AI RMF mandatory?

No. It is a voluntary framework. But it is referenced in statutes, executive orders, procurement, and insurance underwriting, producing a de facto obligation for most organisations.

What are the four functions?

Govern, Map, Measure, Manage. They run concurrently across the AI lifecycle, not sequentially.

What is the Generative AI Profile?

NIST AI 600-1 (July 2024), an extension of the RMF for foundation models and autonomous agents, identifying twelve specific risks.

How does NIST define reasonable care?

It does not define the term. But in US negligence doctrine, alignment with a widely adopted voluntary framework is evidence of reasonable care. NIST is the closest candidate in AI today.

Is NIST sufficient for EU compliance?

No. The EU AI Act imposes specific procedural duties that NIST does not mirror. An operator should build to the higher standard and map the NIST artefacts into the EU operator file.

References

  1. NIST AI Risk Management Framework 1.0 (January 2023).
  2. NIST AI 600-1, Generative AI Profile (July 2024).
  3. National Artificial Intelligence Initiative Act of 2020, section 5301.
  4. Executive Order 14110, Safe, Secure, and Trustworthy Development and Use of AI (2023).
  5. OMB Memorandum M-24-10, Advancing Governance, Innovation, and Risk Management for Agency Use of AI (2024).
  6. SB 24-205, Colorado AI Act.
  7. Restatement (Third) of Torts, section 13.
  8. ISO/IEC 42001:2023, Artificial intelligence management system.