How Can I Demonstrate "Understanding" of AI Systems Under SM&CR?
Quick Answer
Under SM&CR, you cannot delegate accountability to an algorithm. Learn how to demonstrate meaningful understanding of your AI systems to the regulator.
Detailed Answer
This article is for informational purposes only and does not constitute financial or legal advice. You should consult with a qualified professional before making any decisions about the use of AI in your firm.
Under the Senior Managers and Certification Regime (SM&CR), "I did not understand it" is not a defence; it is an admission of failure. The Treasury Committee has made it abundantly clear: if you are a senior manager, you are accountable for the AI systems used in your area of responsibility. This means you must be able to demonstrate a credible and robust understanding of the risks they pose. But what does "understanding" actually mean in this context?
It does not mean you need to be able to write Python code or build a neural network. The FCA does not expect you to become a data scientist. What it does expect is that you can articulate the risks of the AI system in plain English and explain the steps you have taken to mitigate them.
Demonstrating this understanding is about moving beyond the vendor's sales pitch and engaging with the technology on a deeper, more critical level.
The Three Levels of AI Understanding for a Senior Manager
To satisfy the SM&CR, your understanding of an AI system needs to operate on three distinct levels:
| Level | Description | Key Questions to Ask |
|---|---|---|
| 1. Conceptual Understanding | You need to understand what the AI is supposed to do, what type of AI it is (e.g., machine learning, generative AI), and the fundamental principles of how it works. | What business problem does this AI solve? What are the inputs and what are the outputs? Is it a probabilistic or deterministic system? What are the known limitations of this type of AI? |
| 2. Risk-Based Understanding | You need to understand the specific risks the AI system introduces to your business area. This is the most critical level for SM&CR purposes. | What is the risk of this AI producing biased or discriminatory outcomes? How could this AI be manipulated by malicious actors? What is the operational risk if this AI fails? What is the data privacy risk? How does this AI impact our compliance with the Consumer Duty? |
| 3. Governance Understanding | You need to understand the controls, oversight, and accountability mechanisms that are in place to manage the risks you have identified. | Who is responsible for the ongoing monitoring of this AI? What is our process for validating its outputs? What training have our people received? What is our incident response plan if this AI causes harm? What does our contract with the vendor say about liability? |
Moving from Passive to Active Understanding
The mistake many senior managers make is to remain passive recipients of information. They read the vendor's documentation, they listen to a presentation from their IT team, and they tick a box. This is not understanding; it is delegation.
To demonstrate active understanding, you need to be the one asking the probing questions. You need to challenge the assumptions of your team and your vendors. You need to play devil's advocate.
Here is a practical example. Your firm is proposing to use an AI tool to help with investment recommendations:
- Passive understanding: "The IT team says the AI will help us make better investment decisions."
- Active understanding: "The AI uses a machine learning model trained on historical market data. I want to know what steps have been taken to ensure the model is not overfitting to past performance. I want to see the results of the bias testing. I want to understand the scenarios in which the model is known to be unreliable. I want to know who is responsible for signing off on the recommendations before they go to the client."
Documenting Your Understanding: The Defensibility Trail
In the world of SM&CR, if it is not written down, it did not happen. You need to create a defensibility trail that documents your understanding and the reasonable steps you have taken.
This includes:
- Minutes of meetings where you have challenged your team and your vendors on the risks of the AI system.
- Your own notes from those meetings, demonstrating your engagement with the issues.
- Emails where you have sought clarification on technical points.
- Your sign-off on the AI risk assessment, with comments and annotations.
This documentation is not bureaucracy. It is your personal protection. It is the evidence you will present to the FCA to show that you have met your duty of responsibility.
The Bottom Line: Understanding is an Action, Not a State
Demonstrating your understanding of an AI system under SM&CR is not about having all the answers. It is about asking the right questions.
It is about showing that you have applied your mind to the problem, that you have engaged with the risks in a meaningful way, and that you have taken deliberate steps to ensure the system is being used safely and responsibly.
In the eyes of the FCA, ignorance is not bliss. It is, at best, negligence and, at worst, a breach of your fundamental duties as a senior manager.
Take the Next Step
If you are ready to move from theory to action, I can help. My AI Audit gives you a comprehensive assessment of your firm's AI readiness, identifying the gaps in your governance, the risks in your current tooling, and a clear roadmap to get you where you need to be.
Book a Discovery Call → or learn more about the AI Audit.