
What are the risk levels of AI?

24 February 2026
Answered by Rohit Parmar-Mistry

Quick Answer

AI risk levels describe how much harm a system could cause if it fails. A common four-level model runs from minimal (assistive tooling) and limited (manageable with transparency and oversight) to high (safety, rights or regulated impacts) and unacceptable (should not be deployed at all). Classify by use case, data sensitivity and decision criticality before you deploy.

Detailed Answer

AI risk levels help you match controls to real exposure

Most organisations don’t fail because they lack an AI policy. They fail because they apply the same controls to everything. Risk levels let you prioritise: high-risk systems need rigorous oversight and evidence; low-risk uses can move faster with lighter guardrails.


The four risk levels (a practical view)

Many frameworks group AI into four levels, with a code sketch after the list:

  • Unacceptable risk: use cases that should not be deployed because harm is disproportionate
  • High risk: systems affecting people’s rights, safety, access to services, or regulated outcomes
  • Limited risk: meaningful but manageable risks, often mitigated with transparency and oversight
  • Minimal risk: low-impact internal productivity or support tooling with constrained inputs
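If you keep your AI register in code, the four levels map naturally onto a small enumeration. A minimal Python sketch; the names and ordering are illustrative, not the EU AI Act’s legal categories:

    from enum import Enum

    class RiskTier(Enum):
        """Four-level AI risk taxonomy (illustrative internal model)."""
        UNACCEPTABLE = 4  # harm is disproportionate; do not deploy
        HIGH = 3          # rights, safety, access to services, regulated outcomes
        LIMITED = 2       # meaningful but manageable; transparency and oversight
        MINIMAL = 1       # low-impact internal tooling with constrained inputs

Ordering the values by severity lets you sort a register with plain comparisons on tier.value.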


How to calculate an AI risk score in your business

A workable score usually blends five factors, with a scoring sketch after the list:

  • Impact: what happens if the system is wrong?
  • Data sensitivity: personal data, special category data, confidential client data
  • Automation level: suggestion vs decision vs action
  • Scale: number of people affected and frequency of use
  • Change rate: model or prompt changes and drift risk
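There is no single standard formula, so here is a minimal sketch under stated assumptions: score each factor 1 to 5, weight impact and data sensitivity most heavily, and sum. The weights below are illustrative and should be agreed internally, not treated as a benchmark.

    from dataclasses import dataclass

    @dataclass
    class UseCase:
        impact: int            # 1 = minor inconvenience .. 5 = safety/rights harm
        data_sensitivity: int  # 1 = public data .. 5 = special category / client-confidential
        automation: int        # 1 = suggestion .. 5 = autonomous action
        scale: int             # 1 = a few users .. 5 = all customers, high frequency
        change_rate: int       # 1 = frozen model .. 5 = frequent model/prompt changes

    def risk_score(uc: UseCase) -> int:
        """Weighted sum of the five factors; weights are assumptions for illustration."""
        return (3 * uc.impact
                + 2 * uc.data_sensitivity
                + 2 * uc.automation
                + 1 * uc.scale
                + 1 * uc.change_rate)

With these weights the score runs from 9 (all ones) to 45 (all fives), which gives you natural cut points for tiering.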

Controls by risk tier (what ‘good’ looks like)

For high-risk systems, expect stronger controls: DPIAs (data protection impact assessments), documented decision rights, extensive logging, pre-deployment testing, ongoing monitoring, and incident handling.

For limited-risk systems, focus on transparency, clear boundaries, and human review points.

For minimal-risk uses, focus on safe defaults: approved tools, no sensitive inputs, and basic monitoring.
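Teams often encode that tier-to-controls mapping as configuration so every review applies the same baseline. A sketch, with control names echoing the paragraphs above; the fall-back-to-strictest default is an assumption, not a rule from any framework:

    # Baseline controls per tier; extend to suit your own control catalogue.
    CONTROLS_BY_TIER = {
        "high": ["DPIA", "documented decision rights", "extensive logging",
                 "pre-deployment testing", "ongoing monitoring", "incident handling"],
        "limited": ["transparency notice", "clear scope boundaries", "human review points"],
        "minimal": ["approved tools only", "no sensitive inputs", "basic monitoring"],
    }

    def required_controls(tier: str) -> list[str]:
        # Unknown or unclassified tiers get the strictest set by default.
        return CONTROLS_BY_TIER.get(tier, CONTROLS_BY_TIER["high"])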

Next steps

Start with an inventory of AI use cases, tier them by risk, then implement controls proportionate to each tier. That approach avoids both extremes: reckless deployment and paralysing bureaucracy.
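As a sketch of that workflow, the loop below tiers a hypothetical inventory by score and surfaces the riskiest items first. The entries and thresholds are invented for illustration and should come from your own scoring model.

    # Hypothetical inventory: (use case, score from your internal scoring model).
    inventory = [
        ("credit decision support", 38),
        ("marketing copy drafting", 23),
        ("internal meeting summaries", 11),
    ]

    def tier_for(score: int) -> str:
        # Cut points assume a 9-45 score range; tune them to your own scale.
        if score >= 33:
            return "high"
        if score >= 21:
            return "limited"
        return "minimal"

    # Review the highest-risk use cases first.
    for name, score in sorted(inventory, key=lambda item: item[1], reverse=True):
        print(f"{name}: score {score} -> {tier_for(score)}")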


FAQ

Is this the same as the EU AI Act categories?

Not exactly. The EU AI Act defines specific legal categories with binding obligations; many teams use the four-level model above as a practical internal risk tool that maps to those obligations where relevant.

Where does generative AI drafting sit?

Often minimal or limited risk if inputs are constrained and outputs are always reviewed. It becomes higher risk when it influences decisions, affects customers, or touches sensitive data.

Can a low-risk tool become high-risk?

Yes. Scope creep is common: a drafting assistant that starts sending customer communications unreviewed has changed tier. Monitoring and change management matter because use cases evolve.

What is the quickest win?

Risk-tiering. Once you classify your AI use cases, the right controls become obvious and you can prioritise the high-risk ones first.

Need More Specific Guidance?

Every organisation's situation is different. If you need help applying this guidance to your specific circumstances, I'm here to help.