The European Union, the United States, and the United Kingdom are each answering the same question: who is responsible when an autonomous AI system causes harm? Their answers share substance and diverge in structure. This article reads the three in parallel, sets out the common duties, and identifies where an operator of an autonomous agent must tailor its practice to the jurisdiction.

Key takeaways

  • All three regimes converge on a common set of operator duties: risk management, transparency, oversight, documentation, and incident response.
  • The EU approach is a horizontal statute with detailed procedural duties on deployers, backed by a revised Product Liability Directive.
  • The US approach is a hybrid: a voluntary federal framework (the NIST AI RMF), state statutes (with Colorado leading), and sectoral regulation.
  • The UK approach relies on sectoral regulators applying five cross-sector principles, with narrower statutory intervention focused on frontier AI.
  • A single compliance programme built to the highest standard will substantially cover all three, with jurisdiction-specific documentation layers.

The European Union. Horizontal statute, strict product liability.

The EU AI Act (Regulation 2024/1689) is the clearest model of the horizontal-statute approach. It classifies AI systems by risk, defines prohibited uses, imposes the heaviest obligations on high-risk systems, and sets transparency duties for most systems. For deployers, Article 26 is the operative text. Seven duties, none delegable: use within the provider's instructions, named human oversight, input data relevance, monitoring and incident reporting, log retention of at least six months, information to affected workers, and public-sector registration. Article 27 adds a fundamental rights impact assessment for deployers of systems used in a subset of high-risk domains.

The companion instrument is Directive 2024/2853 on product liability. It treats AI software as a product subject to strict liability, introduces rebuttable presumptions of defect in specific circumstances (including where the AI Act has been breached), and expands the categories of compensable damage. Member States have until 9 December 2026 to transpose it. Read together with the AI Act, the directive creates a substantive liability path for affected parties that is independent of fault.

Enforcement is shared between the European AI Office and national supervisors. Penalties for operator breaches sit in the second tier at up to EUR 15 million or 3 per cent of worldwide annual turnover. The reach is explicit: deployers outside the Union whose AI outputs are used inside it are within the regime.

The United States. Voluntary framework, state statute, sectoral regulation.

The US approach is structurally different. There is no federal AI statute. The operative instruments are a voluntary federal framework (NIST AI RMF 1.0, extended by the Generative AI Profile in July 2024), executive orders directing federal agencies, state statutes (Colorado leading with SB 24-205 effective 1 February 2026), and sector-specific rules from the FTC, CFPB, SEC, EEOC, FDA, HHS, and others.

The NIST framework carries disproportionate weight because it fills the vacuum between statutes. Its four functions (Govern, Map, Measure, Manage) define what a reasonable US operator is doing. The Colorado AI Act encodes this by making alignment with a nationally recognised framework a rebuttable presumption of reasonable care. Several additional states (California, Connecticut, New York, Texas) have advanced similar statutes with comparable safe-harbour structures. The trajectory is toward a patchwork of state regimes unified by their reliance on NIST as the operational baseline.

Enforcement is fragmented. The Colorado Attorney General holds exclusive authority over the Colorado statute. Federal agencies enforce sectoral rules within their jurisdictions. Private rights of action are limited and depend on the specific statute: Colorado creates none; several sectoral laws do. The practical effect is a regulatory landscape that is harder to map but easier to build toward if an operator adopts NIST as the organising reference.

The United Kingdom. Sectoral regulator leadership.

The UK government's 2023 AI Regulation White Paper committed to a sectoral approach. Five cross-sector principles (safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; contestability and redress) frame activity by existing regulators: the ICO, FCA, CMA, Ofcom, MHRA, and others. Each regulator issues guidance on how the principles apply within its remit. The ICO's guidance on AI and data protection is the most mature example; the FCA's interventions on algorithmic trading and conduct risk are longer-standing.

The 2024 to 2025 consultations signalled a narrower statutory intervention focused on frontier AI, administered through the AI Safety Institute and an expected companion bill. Agentic systems deployed in specific sectors (finance, health, energy, online safety, employment) fall under the existing regulators and the body of law they already apply. The result is a regime that varies substantially by sector: financial services deployers face heavier operational duties than general-purpose software deployers, with the principles pulling all sectors toward a common minimum.

Enforcement is sectoral. The ICO enforces data protection dimensions of AI; the FCA enforces conduct in financial services; the CMA enforces competition and consumer protection; Ofcom enforces online safety. Fines follow the regime cited in the specific enforcement action. The overall structure is lighter on paper than the EU and Colorado regimes but heavier where sectors are already regulated.

Where the three converge.

Read in parallel, the three regimes converge on a substantive core.

Risk management is expected in all three. Article 9 of the EU AI Act (on providers) and the Article 26 duties (on deployers) describe a continuous risk management process over the AI lifecycle. The Colorado AI Act requires a risk-management programme aligned with NIST or equivalent. UK regulators expect deployers to demonstrate risk management proportionate to the use case. The vocabulary differs. The expectation is the same.

Transparency to affected parties. The EU Act's Article 13 (provider transparency) and Article 26's deployer disclosures sit alongside Article 50's transparency obligations for systems that interact with humans. The Colorado Act requires consumer disclosures when AI is a substantial factor in a consequential decision, and explanations in adverse-decision contexts. UK regulators require transparency proportionate to use, with the ICO's guidance providing the most detailed treatment.

Human oversight. Article 14 of the EU AI Act is the most detailed codification, requiring the system to be designed so that oversight is possible and the deployer to staff it with competent humans. Colorado implies human oversight through the reasonable care standard. UK regulators have explicitly identified human oversight as a component of accountable deployment.

Documentation. Each regime expects the operator to hold files. The EU expects the operator file under Article 26 and the FRIA under Article 27. Colorado expects an impact assessment annually and on material modification, retained for three years. UK regulators expect documentation proportionate to risk; data protection guidance requires a DPIA where relevant.

Incident response. The EU AI Act's Article 26(5) and the Act's serious incident definitions operate concurrently with sectoral reporting regimes. Colorado requires 90-day notification to the Attorney General when algorithmic discrimination is discovered. UK regulators operate existing incident-reporting regimes in their sectors.
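Notification windows of this kind are concrete enough to track programmatically. A minimal sketch, assuming a simple calendar-day reading of Colorado's 90-day window (the function name and the computation are illustrative, not a statement of the statutory rule, which is a question for counsel):

```python
from datetime import date, timedelta

def colorado_notification_deadline(discovery: date, window_days: int = 90) -> date:
    """Hypothetical helper: last day for notifying the Colorado Attorney
    General, counted as calendar days from the discovery of algorithmic
    discrimination. Illustrative only."""
    return discovery + timedelta(days=window_days)

deadline = colorado_notification_deadline(date(2026, 3, 1))
print(deadline)  # 2026-05-30
```

An operator tracking several regimes would keep one such entry per reporting obligation, since the EU and UK sectoral regimes each carry their own clocks.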

The five common duties. Risk management, transparency, human oversight, documentation, incident response. These are the operator duties on which all three regimes agree. An operator that can evidence all five is in the defensible position in any of the three jurisdictions. An operator that cannot evidence any of them is in a weak position everywhere.
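As an illustration only, the five common duties can be held as a flat evidence checklist; the duty names follow the article, while the data structure and function names are hypothetical:

```python
# The five operator duties on which all three regimes agree (per the article).
COMMON_DUTIES = (
    "risk_management",
    "transparency",
    "human_oversight",
    "documentation",
    "incident_response",
)

def unevidenced_duties(evidence: dict[str, bool]) -> list[str]:
    """Return the duties for which no evidence is recorded (missing = unevidenced)."""
    return [d for d in COMMON_DUTIES if not evidence.get(d, False)]

gaps = unevidenced_duties({
    "risk_management": True,
    "transparency": True,
    "human_oversight": True,
    "documentation": False,
    "incident_response": True,
})
print(gaps)  # ['documentation']
```

The point of the sketch is the shape, not the code: a single evidence record per duty, queried the same way regardless of which regulator is asking.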

Where the three diverge.

The divergence is in structure, scope, and enforcement.

On scope, the EU Act's Annex III list is broader than Colorado's consequential-decision list. The EU covers law enforcement, migration, administration of justice, critical infrastructure, and more. Colorado focuses on employment, housing, education, healthcare, financial services, insurance, legal services, and essential government services. The UK's sectoral approach can be broader or narrower than either depending on the sector; no single list defines scope.

On enforcement, the EU combines administrative fines under Article 99 with the civil path of the Product Liability Directive. Colorado relies on the Attorney General and the Consumer Protection Act. The UK relies on sectoral regulators and existing enforcement powers. The numbers vary. EU fines for Article 26 breaches reach EUR 15 million or 3 per cent of turnover. Colorado penalties follow the Consumer Protection Act. UK penalties follow the specific enforcement regime (GDPR fines up to 4 per cent of turnover, FCA fines on a case basis, etc.).

On extraterritorial reach, the EU Act is the most explicit. Deployers outside the Union whose outputs are used in the Union are within the regime. Colorado reaches persons doing business in Colorado, following the consumer-protection test. The UK depends on the specific sectoral regulator; the ICO's reach is broad, others less so.

Practical implication for cross-border operators.

For an operator deploying autonomous agents across the three jurisdictions in 2026, the rational posture is to design a single programme to the highest standard and map outputs to each regime. The EU operator file is the most demanding, and the Colorado file maps substantially into it. The UK's sectoral expectations can be layered on top using the relevant regulator's guidance.

The core documents are the same. The risk record, oversight register, impact assessment, instructions-for-use map, logging schedule, and incident protocol are all recognisable in each regime. What varies is the jurisdiction-specific labelling: Article 27 FRIA in the EU, algorithmic impact assessment in Colorado, data protection impact assessment in the UK. The underlying analytical work is one document; the regulatory presentations are three.
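A minimal sketch of that one-document, three-labels mapping, with the jurisdiction keys and labels taken from the paragraph above and the dictionary itself assumed for illustration:

```python
# Hypothetical mapping: one underlying impact assessment, three
# jurisdiction-specific regulatory presentations (labels per the article).
IMPACT_ASSESSMENT_LABELS = {
    "EU": "Article 27 fundamental rights impact assessment (FRIA)",
    "US-Colorado": "Algorithmic impact assessment",
    "UK": "Data protection impact assessment (DPIA)",
}

def presentation_for(jurisdiction: str) -> str:
    """Return the regulatory label for a jurisdiction, failing loudly if unmapped."""
    return IMPACT_ASSESSMENT_LABELS[jurisdiction]

print(presentation_for("US-Colorado"))  # Algorithmic impact assessment
```

The same pattern extends to the other core documents (risk record, oversight register, logging schedule): one analytical artefact, a thin relabelling layer per regime.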

Enforcement readiness is harder to unify. An operator should expect to interact with the AI Office or a national supervisor in the EU, with the Colorado Attorney General in the US, and with the relevant UK sectoral regulator (most often the ICO for AI-specific questions). Maintaining a single point of contact and a single evidence archive reduces the operational cost of responding to inquiries from any of them.

Related reading

For the detailed EU deployer regime, see the operator provisions of Regulation 2024/1689. For the Colorado statute, see Colorado AI Act deployer obligations. For the US framework that underpins reasonable care, see NIST AI RMF and reasonable care. For the APAC picture that complements this article, see Asia-Pacific AI governance in 2026.

Frequently asked questions

Is there convergence between the three regimes?

On substance yes, on structure no. The core operator duties are shared. The statutory architecture differs.

Which regime is most demanding?

The EU AI Act imposes the most detailed procedural regime on deployers. Colorado is narrower but prescriptive. The UK varies by sector.

Can one compliance programme satisfy all three?

Substantially yes, with jurisdiction-specific documentation layers. Building to the EU standard covers most Colorado requirements and most UK sectoral expectations.

What happens if a US operator has no EU customers?

Extraterritorial reach still applies if outputs are used in the EU, including by third parties. A careful operator verifies the downstream use of its outputs.

Does the UK have a general AI statute?

Not yet. A narrower frontier-AI statute has been under consultation. General AI regulation is administered by existing sectoral regulators applying five cross-sector principles.

References

  1. Regulation (EU) 2024/1689, AI Act.
  2. Directive (EU) 2024/2853, Revised Product Liability Directive.
  3. NIST AI RMF 1.0 (January 2023) and NIST AI 600-1 (July 2024).
  4. SB 24-205, Colorado AI Act.
  5. UK Government, A pro-innovation approach to AI regulation (White Paper, March 2023).
  6. ICO Guidance on AI and Data Protection.
  7. Executive Order 14110 on Safe, Secure, and Trustworthy AI (2023).
  8. OMB Memorandum M-24-10.