What Questions Should I Ask About My Wrap Platform's AI and Automation Systems?
Quick Answer
Your wrap platform is increasingly powered by AI and automation, and your due diligence needs to keep pace. The key areas to probe are: governance and accountability, transparency about where AI is used, algorithmic fairness, human oversight, data security, and ongoing model performance. The questions below cover each in turn.
Detailed Answer
This article is for informational purposes only and does not constitute financial or legal advice. You should consult with a qualified professional before making any decisions about your choice of platform.
Your wrap platform is no longer just a collection of funds and a dealing service. It is a technology company. It is increasingly using artificial intelligence and automation to power everything from customer service and fraud detection to trade execution and compliance monitoring. As a financial adviser, you have a duty to understand how this technology works and the risks it poses to your clients. Blind trust is not a due diligence strategy.
The FCA's heightened scrutiny of platform technology is not just about operational resilience; it is about the entire technology stack. As platforms integrate more AI, you need to expand your due diligence to cover these new, complex systems. You are the one recommending the platform, so you are the one who needs to be satisfied that its use of AI is safe, effective, and compliant.
Why You Need to Ask About AI
AI introduces a new set of risks that your traditional due diligence process may not cover:
- Algorithmic Bias: The AI could be making decisions that unfairly disadvantage certain groups of your clients.
- "Black Box" Decisions: The platform may not be able to explain why the AI made a particular decision, leaving you in the dark.
- Automation Complacency: Over-reliance on automation can mean errors slip past human oversight unnoticed.
- Security Vulnerabilities: AI systems can be a new target for cyber-attacks.
A Due Diligence Questionnaire for the Age of AI
Here are the key questions you should be asking your platform provider about their use of AI and automation. You should expect clear, evidenced answers, not vague marketing assurances.
| Due Diligence Area | Key Questions |
|---|---|
| 1. AI Governance & Accountability | Who is the Senior Manager at the platform responsible for the governance of AI systems? Can you provide details of your AI governance framework? How do you ensure that your use of AI is compliant with the Consumer Duty? |
| 2. Use Cases & Transparency | In which specific business processes are you using AI and automation (e.g., onboarding, fraud detection, customer support, trade routing)? Are you using generative AI in your client-facing communications or support channels? How are you transparent with advisers and clients about your use of AI? |
| 3. Algorithmic Fairness & Bias | How do you test your AI models for bias to ensure they are not producing discriminatory outcomes for different groups of customers? What data are you using to train your AI models, and how do you ensure it is representative and free from bias? |
| 4. Human Oversight & Intervention | What level of human oversight is in place for your automated processes? In what scenarios does a human review a decision made by the AI? What is the process for an adviser to challenge or appeal a decision made by an AI system? |
| 5. Security & Data Protection | How do you protect our client data when it is being processed by your AI systems? Have your AI systems undergone independent security testing? What are your policies on using third-party AI tools (like ChatGPT) with client data? |
| 6. Performance & Reliability | How do you monitor the performance and accuracy of your AI models over time? What is your process for managing "model drift"? What happens if an AI system fails? What are your backup and contingency plans? |
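To make the "model drift" question in row 6 concrete: one common way platforms monitor drift is the Population Stability Index (PSI), which compares the distribution of a model input or score at training time with its live distribution. The sketch below is purely illustrative; the bin count and the 0.25 alert threshold are widely used rules of thumb, not a regulatory standard, and a platform may use different methods entirely.

```python
# Illustrative sketch of "model drift" monitoring via the Population
# Stability Index (PSI). Thresholds and bin counts are assumptions for
# the example, not a prescribed standard.
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compare a model input's training-time distribution ('expected')
    with its live distribution ('actual'). Higher PSI = more drift."""
    cuts = np.percentile(expected, np.linspace(0, 100, bins + 1))
    cuts[0], cuts[-1] = -np.inf, np.inf  # cover the full range
    e_counts, _ = np.histogram(expected, bins=cuts)
    a_counts, _ = np.histogram(actual, bins=cuts)
    e_pct = np.clip(e_counts / len(expected), 1e-6, None)
    a_pct = np.clip(a_counts / len(actual), 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
training_scores = rng.normal(0.0, 1.0, 10_000)  # distribution at model build
live_scores = rng.normal(0.5, 1.0, 10_000)      # live data has shifted

psi = population_stability_index(training_scores, live_scores)
# Common (illustrative) rule of thumb: PSI above ~0.25 suggests material drift
print(f"PSI = {psi:.3f} -> {'investigate' if psi > 0.25 else 'stable'}")
```

A credible platform answer to row 6 would describe something like this running on a schedule, with documented thresholds, alerting, and a retraining or rollback process when drift is detected.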
What a Good Answer Looks Like
A good answer is not a simple "yes" or "no." A good answer demonstrates a mature and considered approach to AI governance.
- Bad answer: "We use AI to enhance our services, and we take compliance very seriously."
- Good answer: "Our AI governance framework is overseen by our Chief Risk Officer (SMF4). We use AI primarily for fraud detection and workflow automation. All models are subject to a pre-deployment fairness audit, and we have a human-in-the-loop process for any flagged transactions over £10,000. We can provide you with a summary of our AI governance policy."
The Bottom Line: Your Reputation is on the Line
When you recommend a platform, you are implicitly endorsing its technology and its processes. If that technology fails and your client suffers harm, it is your reputation that will be damaged.
The FCA has put the industry on notice. The technology that underpins the platform market is now a core area of regulatory focus. As an adviser, you can no longer afford to be a passive consumer of that technology.
You need to be a critical, informed, and sceptical evaluator. Asking these questions is the first step. And if you are not satisfied with the answers, you need to be prepared to take your clients elsewhere.
Take the Next Step
If you are ready to move from theory to action, I can help. My AI Audit gives you a comprehensive assessment of your firm's AI readiness, identifying the gaps in your governance, the risks in your current tooling, and a clear roadmap to get you where you need to be.
Book a Discovery Call → or learn more about the AI Audit.