Insurance · PRA · Governance · Solvency · Model Risk

What AI Governance Framework Does the PRA Expect From Insurers?

22 January 2026
Answered by Rohit Parmar-Mistry

Quick Answer

The PRA focuses on stability, solvency, and systemic risk. It expects an AI governance framework that treats AI as a prudential issue: model risk management aligned with SS1/23, operational resilience for AI-supported important business services, and rigorous third-party risk management, all integrated with your FCA-facing conduct controls rather than sitting alongside them.

Detailed Answer

This article is for informational purposes only and does not constitute financial or legal advice. You should consult with a qualified professional before making any decisions about the use of AI in your firm.



The Prudential Regulation Authority (PRA) is the silent partner in the UK’s dual regulation of insurance. While the FCA focuses on conduct and consumer outcomes, the PRA is obsessed with the financial stability of your firm and the wider market. When they look at your use of AI, they are not just thinking about fairness; they are thinking about solvency, model risk, and operational resilience. And their expectations for your governance framework are just as high as the FCA’s.

In its 2026 supervisory priorities, the PRA has made clear that it is focusing on three areas directly affected by the rise of AI: the implementation of Solvency UK, model risk management, and operational resilience.

An AI governance framework that does not explicitly address the PRA’s prudential concerns is incomplete. It is a framework that is only looking at half of the regulatory picture.

The PRA’s Lens: Stability, Solvency, and Systemic Risk

While the FCA asks, "Is it fair?", the PRA asks, "Is it safe?" They want to know that your use of AI is not introducing new, unmanaged risks that could threaten your firm’s capital adequacy or the stability of the financial system.

Here is how the PRA’s key priorities translate into expectations for your AI governance framework:

Here is how each PRA priority translates into an expectation for AI governance:

1. Solvency UK and Model Risk Management: The PRA expects a robust model risk management framework aligned with the principles of SS1/23. SS1/23 is formally addressed to banks, but it is widely read as the PRA's direction of travel for insurers, and its scope is instructive: it covers not just internal capital models but any model, including AI, used to make material business decisions. You must be able to demonstrate that you understand, validate, and govern your AI models to the same standard as your Solvency UK internal models.

2. Operational Resilience: The PRA expects you to have identified your important business services and set impact tolerances for them. If AI supports one of these services (e.g. claims processing), you must show that you have assessed the operational resilience risk. What happens if your AI vendor has an outage? What is your plan B?

3. Third-Party Risk Management: The PRA has a keen interest in the concentration risk that arises from the industry's reliance on a small number of large technology providers. Your AI governance framework must include a rigorous process for managing the risks of your third-party AI vendors, including assessing the systemic impact if a vendor were to fail.

Integrating PRA Expectations into Your AI Governance Framework

Your AI governance framework cannot be a standalone, conduct-focused document. It must be integrated with your existing prudential risk management frameworks. This means:

  • Your AI Risk Register must include prudential risks, such as the risk of model failure leading to inadequate reserving, or the risk of an AI-driven investment strategy leading to excessive market risk.
  • Your Model Risk Management Framework must be updated to explicitly cover AI models. This includes having a clear model inventory, a process for independent model validation, and a system for ongoing performance monitoring.
  • Your Operational Resilience Framework must be updated to consider the impact of AI on your important business services. This includes conducting scenario testing for AI-related disruptions.
  • Your Outsourcing and Third-Party Risk Management Framework must be enhanced to address the specific risks of AI vendors, including concentration risk and the risk of vendor failure.
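The concentration-risk point above can be sketched in a few lines: map each important business service to the AI vendors it depends on, then surface any vendor that multiple services rely on. The service and vendor names are invented for illustration, and the threshold of two is an assumption.

```python
from collections import defaultdict

# Map each important business service to the AI vendors it depends on.
# All names here are illustrative, not drawn from any real firm.
service_vendors = {
    "claims processing": ["CloudAI Ltd", "DocParse Inc"],
    "underwriting support": ["CloudAI Ltd"],
    "customer service chat": ["CloudAI Ltd"],
    "fraud detection": ["DocParse Inc"],
}


def concentration_report(mapping: dict[str, list[str]],
                         threshold: int = 2) -> dict[str, list[str]]:
    """Return vendors on which `threshold` or more important business
    services depend, with the affected services listed, so each exposure
    can be scenario-tested ("what if this vendor fails?")."""
    by_vendor: defaultdict[str, list[str]] = defaultdict(list)
    for service, vendors in mapping.items():
        for vendor in vendors:
            by_vendor[vendor].append(service)
    return {v: s for v, s in by_vendor.items() if len(s) >= threshold}


print(concentration_report(service_vendors))
# One vendor here supports three important business services; that is the
# kind of single point of failure the PRA expects you to have identified.
```

The output of a check like this is exactly what feeds the scenario testing mentioned above: for each concentrated vendor, walk through the outage and ask whether you stay within your impact tolerances.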

The Board’s Role: A Prudential Responsibility

The PRA places a strong emphasis on the role of the board in overseeing risk. The board is ultimately responsible for ensuring that the firm has an adequate and effective risk management framework. This includes the risks posed by AI.

Your board needs to be receiving regular, clear, and comprehensive reporting on the firm’s use of AI, the risks it poses, and the controls that are in place to manage those risks. They need to be able to challenge the executive on the firm’s AI strategy and to satisfy themselves that it is consistent with the firm’s risk appetite.
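As a minimal sketch of the "regular, clear, and comprehensive reporting" idea: roll the AI risk register up into the headline figures a board pack might lead with. The risk items, RAG statuses, and owners below are invented examples, not a prescribed reporting format.

```python
# Each dict is one AI risk item as it might appear in a risk register;
# the fields and statuses are illustrative assumptions.
ai_risks = [
    {"risk": "Model drift in claims triage",  "rag": "amber", "owner": "CRO"},
    {"risk": "Vendor outage: hosted LLM",     "rag": "red",   "owner": "COO"},
    {"risk": "Reserving model bias",          "rag": "amber", "owner": "Chief Actuary"},
    {"risk": "Prompt injection in chat tool", "rag": "green", "owner": "CISO"},
]


def board_summary(risks: list[dict]) -> dict[str, int]:
    """Roll risk items up into the RAG counts a board pack might lead with."""
    counts = {"red": 0, "amber": 0, "green": 0}
    for item in risks:
        counts[item["rag"]] += 1
    return counts


print(board_summary(ai_risks))  # {'red': 1, 'amber': 2, 'green': 1}
```

The summary is only the headline; the board pack behind it should name each red and amber item, its owner, and the mitigation plan, so directors can challenge rather than merely note.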

The Bottom Line: AI Governance is a Tale of Two Regulators

Effective AI governance in the UK insurance sector is a tale of two regulators. It is about satisfying the FCA’s focus on conduct and consumer protection, and the PRA’s focus on prudential soundness and financial stability.

A governance framework that only addresses one of these is a framework that is only doing half the job.

The PRA may be the quieter of the two regulators, but its powers are just as significant. And when it comes to the safety and soundness of your firm, its voice is the one you cannot afford to ignore.


Take the Next Step

If you are ready to move from theory to action, I can help. My AI Audit gives you a comprehensive assessment of your firm's AI readiness, identifying the gaps in your governance, the risks in your current tooling, and a clear roadmap to get you where you need to be.

Book a Discovery Call → or learn more about the AI Audit.

Need More Specific Guidance?

Every organisation's situation is different. If you need help applying this guidance to your specific circumstances, I'm here to help.