What should a CPA firm ask before putting client data into an AI provider?
Quick Answer
Before any upload, a CPA firm should ask how the AI provider stores, processes, trains on, shares and deletes client or taxpayer data. The safer route is a written vendor review, contractual limits, human review and documented taxpayer consent where tax rules require it.
Detailed Answer
Why CPA firms need better questions before using AI with client data
Before a CPA firm puts client files, taxpayer information, workpapers or draft advice into an AI provider, it needs more than a quick security reassurance. It needs a clear record of what data is being sent, who can access it, whether it can be reused for model training, where it is stored, how long it is retained and what human review sits between the AI output and the client.
The CalCPA and CAMICO checklist makes a useful distinction: AI used as a professional tool under supervision carries a different risk profile from AI that interacts directly with clients or gives automated tax and accounting guidance. That difference should drive the level of due diligence, consent, monitoring and sign-off required.
The safest answer is to approve the tool before the data, not after
A CPA firm should not treat an AI provider as safe for client data until it has checked the provider's privacy, security, contract and operational controls. The minimum review should cover storage, processing location, retention, deletion, model training, third-party access, incident reporting, confidentiality obligations and whether taxpayer consent is needed before disclosure to an external system.
For tax engagements, CAMICO specifically points to IRC Sec. 7216 and related rules restricting the use or disclosure of taxpayer information to third parties, including some AI platforms. UK firms will map the same practical risk to confidentiality, UK GDPR, engagement terms and professional standards. The principle is simple: do not upload sensitive client information to a tool unless the firm can explain and evidence the legal basis, contractual guardrails and review process.
The core vendor questions to ask first
Start with the data lifecycle. Ask what categories of information the provider receives, whether the service separates your firm's data from other customers, where data is processed and stored, and whether backups, logs or support tickets can contain client material. Then ask how long each copy remains in the system and what evidence the provider gives when deletion is requested.
Next, ask about model training and reuse. The contract should say whether prompts, uploaded documents, outputs, metadata or usage patterns can be used to train, tune or improve models. If the provider says data is not used for training, the firm should still ask whether humans can review content for support, abuse monitoring, quality control or product improvement. The useful answer is not a sales sentence. It is a contract clause, policy extract or security schedule the firm can keep on file.
Then ask about third parties. Many AI tools rely on infrastructure providers, subprocessors, analytics vendors and support systems. The firm should know who those parties are, where they operate, what they can see and how changes are notified. If the provider cannot give a subprocessor list or change notice process, the firm has weak control over client data once it leaves the building.
Questions that matter for taxpayer and confidential client information
CPA firms need a stricter test for taxpayer data than for generic internal notes. Before uploading tax returns, source documents, payroll records, identity documents or client correspondence, ask whether the transmission counts as disclosure to a third party, whether written taxpayer consent is required, and whether the provider is contractually barred from using the data beyond the engagement purpose.
The firm should also ask whether the AI task can be done without identifiable data. In many cases, the safer workflow is to remove names, tax IDs, addresses, bank details and transaction references before using an external tool. If the task needs exact client data, it should move through an approved environment with access controls, logging and named human ownership.
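That de-identification step can be sketched as a simple pre-processing pass run before anything reaches an external tool. The following is a minimal illustration, not a complete PII detector: the pattern set is an assumption and would miss names, addresses and many identifier formats, so human spot-checks remain essential.

```python
import re

# Illustrative patterns only; a real redactor needs far broader coverage
# (names, addresses, bank details, transaction references) plus review.
PATTERNS = {
    "TAX_ID": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),         # US SSN-style IDs
    "EMAIL":  re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE":  re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matched identifiers with labelled placeholders before
    the text is sent to any external AI tool."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

note = "Client J. Doe, SSN 123-45-6789, reachable at j.doe@example.com."
print(redact(note))
# → Client J. Doe, SSN [TAX_ID], reachable at [EMAIL].
```

Note that the client's name passes through untouched, which is exactly why regex-only redaction is a first pass rather than a control the firm can rely on by itself.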
Confidentiality also means controlling outputs. AI-generated summaries, draft emails, tax research notes or client recommendations can contain errors or sensitive inferences. Every output that affects client advice should be reviewed by a qualified person before it leaves the firm, with the review recorded in the file.
What the approval record should contain
A practical approval record does not need to be long, but it must be complete enough for a partner, insurer, regulator or client to understand the decision later. Record the use case, the data categories involved, the approved tool, the provider due diligence reviewed, the contractual restrictions, the retention and deletion terms, the human review step and the owner accountable for monitoring.
It also helps to classify tools by permitted use. One tool may be approved for public research prompts but prohibited for client documents. Another may be approved for anonymised analysis but restricted for taxpayer data. A firm-controlled environment may be approved for higher-risk workflows if logging, access control and review are in place.
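A classification like that is easiest to enforce when it lives in a machine-readable register that workflows check before any upload. The sketch below is hypothetical: the tool names, data categories and permitted-use mappings are illustrative, not a recommended taxonomy.

```python
# Hypothetical tool register mapping each approved tool to the data
# categories it may receive. Anything not listed is treated as prohibited.
TOOL_REGISTER = {
    "public_chatbot":   {"public"},
    "research_tool":    {"public", "anonymised"},
    "firm_environment": {"public", "anonymised",
                         "client_documents", "taxpayer_data"},
}

def is_permitted(tool: str, data_category: str) -> bool:
    """Return True only if the tool is approved for this data category.
    Unknown tools default to 'not permitted' rather than 'allowed'."""
    return data_category in TOOL_REGISTER.get(tool, set())

print(is_permitted("public_chatbot", "taxpayer_data"))    # False
print(is_permitted("firm_environment", "taxpayer_data"))  # True
```

The design choice worth copying is the default: a tool or data category missing from the register fails closed, so new tools stay out of client workflows until someone deliberately approves them.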
How to turn the answers into an operating policy
Once the firm has answered the vendor questions, convert them into everyday rules staff can follow. The policy should name approved tools, prohibited data, consent triggers, review steps, escalation routes and incident reporting expectations. It should also say what staff must do when a client asks whether AI was used in their work.
Training matters because most data exposure happens through ordinary workflow decisions: copying a spreadsheet into a public chatbot, asking for help with a client email, uploading a contract for summarisation or using a browser extension without checking its terms. Staff need examples, not abstract warnings.
The policy should be reviewed whenever the provider changes terms, launches new data features, adds subprocessors or introduces agentic functions that can take actions on connected systems. Vendor approval is not a one-off procurement task. It is a living control.
Conclusion
The right question is not whether AI can help a CPA firm. It can. The question is whether the firm can prove that client and taxpayer data were handled lawfully, confidentially and under professional supervision. If the answer is unclear, the tool should stay out of client workflows until the contract, controls and review process are clear.
FAQ
Can a CPA firm put client data into ChatGPT or another public AI tool?
Only if the firm has approved that tool for the specific data and use case. Public AI tools should be treated as restricted until the firm has checked training, retention, confidentiality, access and consent requirements.
What is the first question to ask an AI provider?
Ask exactly what happens to uploaded data: where it is stored, who can access it, whether it is used for training, how long it is retained and how deletion is evidenced.
When is taxpayer consent needed?
For tax-related engagements, firms should check whether transmitting taxpayer information to an external AI provider is a disclosure that requires written consent under applicable tax rules and professional obligations.
Who should approve AI tools inside a CPA firm?
Approval should involve the accountable partner or risk owner, IT or security input, and the practice leader responsible for the workflow. The decision should be recorded and reviewed as the provider or use case changes.