
Who Is Accountable for AI Decisions Under SM&CR in Insurance Firms?

9 January 2026
Answered by Rohit Parmar-Mistry

Quick Answer

Under the SM&CR, accountability for AI decisions sits with named Senior Managers, not with data science teams or vendors. Firms should allocate overall responsibility for AI governance to a single SMF and hold the senior manager of each business area accountable for the AI systems that area uses.

Detailed Answer

This article is for informational purposes only and does not constitute financial or legal advice. You should consult with a qualified professional before making any decisions about the use of AI in your firm.



When your AI pricing model creates a new "poverty premium" or your AI claims system denies a legitimate claim, the FCA will not be asking to speak to your Head of Data Science. They will be looking for the Senior Manager who is on the hook. Under the Senior Managers and Certification Regime (SM&CR), accountability for AI is not a technical question; it is a leadership responsibility. And in many insurance firms, that responsibility is dangerously diffuse.

The SM&CR was designed to end the era of "it was the computer, guv." It establishes clear lines of individual accountability for everything that happens within a regulated firm. The rise of AI does not change this; it makes it more critical than ever.

The Accountability Void: A Common Problem

In many insurance firms, the accountability for AI falls into a dangerous void between different senior management functions (SMFs):

  • The SMF24 (Chief Operations Officer) is typically responsible for the firm's technology systems. They own the "box."
  • The SMF4 (Chief Risk Officer) is responsible for the firm's risk management framework. They own the "risk."
  • The SMF16 (Compliance Oversight) is responsible for compliance with regulatory requirements. They own the "rules."
  • The Head of Underwriting or Claims (a Certified Person) is responsible for the business function that is using the AI. They own the "outcome."

So, when an AI system goes wrong, who is actually accountable? The COO who bought the system? The CRO who assessed the risk? The Head of Underwriting who used it? The answer is: it depends. And if you do not have a clear answer, you have a major governance failure.

Pinning the Tail on the Donkey: A Framework for AI Accountability

The FCA expects you to have explicitly allocated responsibility for AI within your governance framework. This is not about finding a single scapegoat; it is about creating a clear and logical chain of accountability. Here is a practical framework:

  1. Overall responsibility for AI strategy and governance. This should be a Prescribed Responsibility allocated to a single, senior SMF: the CEO (SMF1), the CRO (SMF4), or a dedicated Head of AI (if they hold an SMF). This person is responsible for ensuring the firm has a robust, firm-wide AI governance framework.

  2. Responsibility for a specific AI system. The senior manager of the business area using the AI is accountable for its use. For example, the Head of Underwriting is accountable for the AI underwriting model, and the Head of Claims is accountable for the AI claims system. They are responsible for understanding the system, its risks, and its performance.

  3. Responsibility for technology and operations. The SMF24 (COO) is responsible for the operational resilience, security, and performance of the AI system: ensuring the "box" works as it should.

  4. Responsibility for risk and compliance oversight. The SMF4 (CRO) and SMF16 (Compliance Oversight) are responsible for providing independent challenge. They need to satisfy themselves that the risks of the AI are being managed effectively and that its use complies with regulations such as the Consumer Duty.
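The allocation levels above can be captured in a simple accountability register, so that no AI system is deployed without a named owner. A minimal sketch follows; the `AISystemRecord` structure, the register entries, and all names and roles are hypothetical illustrations, not drawn from any FCA template:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AISystemRecord:
    """One row of a hypothetical AI accountability register."""
    system: str          # e.g. "AI underwriting model"
    business_owner: str  # senior manager of the business area (level 2)
    role: str            # the SMF or certified role that person holds

# Illustrative register entries only.
REGISTER = [
    AISystemRecord("AI underwriting model", "Head of Underwriting", "Certified Person"),
    AISystemRecord("AI claims triage system", "Head of Claims", "Certified Person"),
]

# Level 1: a single SMF holding overall responsibility for AI governance.
OVERALL_OWNER = "SMF4 (CRO)"

def unowned_systems(register):
    """Return systems with no named accountable owner -- a governance gap."""
    return [r.system for r in register if not r.business_owner.strip()]

if __name__ == "__main__":
    gaps = unowned_systems(REGISTER)
    print("Overall AI responsibility:", OVERALL_OWNER)
    print("Accountability gaps:", gaps if gaps else "none")
```

The point of the sketch is the check, not the data structure: a register like this makes the "accountability void" visible, because any system without a named senior manager shows up as a gap before the regulator finds it.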

The "Partial Understanding" Problem

A recent Bank of England survey found that 46% of firms have only a "partial understanding" of the AI technologies they use. Under the SM&CR, "partial understanding" is a synonym for "non-compliant."

If you are the senior manager responsible for an AI system, you cannot delegate your understanding to your data science team or your vendor. You need to be able to demonstrate that you have personally engaged with the risks and that you have taken reasonable steps to mitigate them.

The Bottom Line: Accountability Must Be Designed

Accountability for AI does not happen by accident. It must be deliberately designed and embedded into your governance framework. Your statements of responsibilities, your governance map, and your management committees all need to be updated to reflect the reality of AI.

The FCA's message is simple: someone must be responsible. If you have not clearly defined who that someone is, then the answer is probably you.

And when the regulator comes knocking, you had better have a very good story to tell about the reasonable steps you have taken. Because "I thought the CRO was looking at that" will not be a story they are interested in hearing.


Take the Next Step

If you are ready to move from theory to action, I can help. My AI Audit gives you a comprehensive assessment of your firm's AI readiness, identifying the gaps in your governance, the risks in your current tooling, and a clear roadmap to get you where you need to be.

Book a Discovery Call → or learn more about the AI Audit.

Need More Specific Guidance?

Every organisation's situation is different. If you need help applying this guidance to your specific circumstances, I'm here to help.