Who is accountable for AI decisions?
Quick Answer
Who is liable when AI makes a mistake? In the UK and EU, accountability rests with the deployer. Learn how the SM&CR and the GDPR assign accountability for AI decisions.
Detailed Answer
Who is accountable for AI decisions?
You are. If you deploy an AI system in your business, whether it is a custom-built algorithm for loan approvals or a standard subscription to ChatGPT for drafting emails, the accountability for its decisions rests squarely with you, the deployer. There is no legal framework in the UK or EU that allows you to outsource liability to an algorithm, a "black box," or a third-party software vendor.
In the eyes of the law, an AI error is simply a business error. If your AI denies a loan application based on biased historical data, your firm has discriminated. If your chatbot hallucinates a refund policy that does not exist, you may have entered into a binding contract. The "black box" defence, claiming you did not know how the AI reached its conclusion, is not a shield; under regulatory frameworks like the UK GDPR and the EU AI Act, which is now being phased in, it is effectively an admission of negligence.
For leaders in regulated sectors like financial services, insurance, and law, this is the single most critical concept to grasp: AI is a tool, not a legal entity. It cannot be sued, it cannot be fined, and it cannot be fired. You can.
The "Vendor Shield" fallacy
A common misconception we encounter is the belief that using a tool from a major vendor like Microsoft, Salesforce, or OpenAI insulates the business from liability. Leaders often assume that because the software is provided by a tech giant, the accountability for its outputs sits with the vendor. This is dangerously incorrect.
Most AI vendors operate under a Shared Responsibility Model. They are responsible for the security and functionality of the model itself (the infrastructure). You are responsible for the data you put into it, the context in which you use it, and the decisions you make based on its output. If you feed sensitive client data into a public model and it leaks, that is your data breach, not the vendor's system failure.
The Senior Managers and Certification Regime (SM&CR)
For UK financial services firms, accountability goes beyond corporate liability; it becomes personal. Under the Senior Managers and Certification Regime (SM&CR), firms must identify senior individuals who are personally accountable for specific areas of the business. The FCA and PRA have made it clear that this extends to the use of artificial intelligence and machine learning.
You cannot simply assign responsibility to "IT". A named Senior Manager must be accountable for the firm’s use of technology, model risk, and operational resilience. If an AI system causes consumer harm or market disruption, regulators will look for the individual whose duty it was to oversee that system. If that individual cannot demonstrate they took "reasonable steps" to understand and control the AI, they face personal sanctions.
Why "Human-in-the-Loop" is not a silver bullet
Many firms believe they solve the accountability problem by placing a "human in the loop": having a staff member review the AI's output before it is actioned. While this is a good starting point, it is often implemented as "automation theatre."
If the human operator blindly accepts the AI’s recommendation 99% of the time because they are under time pressure or lack the expertise to challenge the algorithm, you do not have meaningful human oversight. You have rubber-stamping. Regulators are increasingly looking for evidence of meaningful human intervention, proof that the human reviewer has the authority, competence, and time to disagree with the AI.
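One practical way to test for rubber-stamping is to measure how often reviewers actually disagree with the AI, and how long they spend per case. The sketch below is illustrative only; the schema, field names, and thresholds are our assumptions, not a regulatory standard, and any real thresholds would need to be calibrated to your own decision volumes.

```python
from dataclasses import dataclass


@dataclass
class Review:
    """One human review of an AI recommendation (illustrative schema)."""
    case_id: str
    ai_recommendation: str
    human_decision: str
    seconds_spent: float


def override_rate(reviews: list[Review]) -> float:
    """Fraction of reviews where the human disagreed with the AI."""
    if not reviews:
        return 0.0
    overridden = sum(r.human_decision != r.ai_recommendation for r in reviews)
    return overridden / len(reviews)


def looks_like_rubber_stamping(reviews: list[Review],
                               min_override_rate: float = 0.02,
                               min_median_seconds: float = 30.0) -> bool:
    """Flag oversight that almost never disagrees and spends
    very little time per case: a signal, not a verdict."""
    if not reviews:
        return False
    times = sorted(r.seconds_spent for r in reviews)
    median = times[len(times) // 2]
    return (override_rate(reviews) < min_override_rate
            and median < min_median_seconds)
```

A near-zero override rate is not proof of a problem on its own (the AI may simply be accurate), but combined with very short review times it is exactly the pattern a regulator would probe when asking whether your oversight is meaningful.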
This is where the gap in skills becomes a liability. We see a significant rise in searches for AI training in London specifically from financial and legal firms. Directors are realising that without documented, rigorous staff training on how to interpret, challenge, and govern AI outputs, they cannot demonstrate the "reasonable steps" required to defend against negligence claims.
Building a defensible position: The Pattrn Protocol
Accountability does not mean you should avoid AI. It means you must wrap your AI implementation in a governance framework that makes its decisions defensible. At Pattrn Data, we use The Pattrn Protocol to ensure our clients can answer three questions for every AI decision:
- Data Lineage: Do we know exactly what data was used to make this decision?
- Logic & Explainability: Can we explain, in plain English, why the model reached this conclusion?
- Human Oversight: Who was the qualified human responsible for validating this outcome?
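The three questions above can be captured as a single auditable record per AI decision. This is a minimal sketch of that idea, not Pattrn's actual implementation; every field name and value here is an illustrative assumption.

```python
import json
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone


@dataclass
class AIDecisionRecord:
    """Minimal audit-trail entry answering the three governance questions."""
    decision_id: str
    # Data lineage: exactly which data sources fed this decision
    input_sources: list[str]
    # Logic & explainability: plain-English rationale for the outcome
    explanation: str
    # Human oversight: the qualified person who validated the outcome
    reviewer: str
    reviewer_approved: bool
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

    def to_audit_log(self) -> str:
        """Serialise to JSON, suitable for an append-only audit log."""
        return json.dumps(asdict(self), sort_keys=True)


# Hypothetical example record for a declined loan application
record = AIDecisionRecord(
    decision_id="loan-2024-0001",
    input_sources=["applicant_form_v2", "credit_bureau_feed"],
    explanation="Declined: debt-to-income ratio above policy threshold.",
    reviewer="jane.doe (certified underwriter)",
    reviewer_approved=True,
)
print(record.to_audit_log())
```

The point of writing records like this at decision time, rather than reconstructing them later, is that the audit trail exists before anyone asks for it; that is what makes the position defensible.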
We saw the importance of this approach in our work with UBS. They faced a challenge with data governance reporting that took a month to compile, leaving them exposed to unresolved data risks for weeks at a time. By implementing an automated data governance solution, we didn't just cut that month-long process to one hour; we created a rigorous, auditable trail of every data issue and resolution. Automation didn't remove accountability; it enhanced it by providing a level of transparency that manual processes could never match.
Conclusion: Governance is your licence to operate
The question "Who is accountable?" has a simple answer, but a complex execution. You are accountable. To accept that responsibility safely, you need more than just good intentions; you need a system. You need clear policies, trained staff who understand the limitations of the tools they use, and an audit trail that proves you are in control.
Don't wait for a regulator to ask who is responsible for your AI. By then, it’s already too late.