When should an accounting firm disclose AI use to clients?
Quick Answer
An accounting firm should disclose AI use when the tool interacts with clients, influences advice, handles client or taxpayer data, or could affect a client decision. Lower-risk internal drafting may not always need separate notice, but it still needs human review, data controls, and clear engagement-letter wording.
Detailed Answer
Why AI disclosure is now an accounting risk question
Accounting firms do not need to turn every internal use of AI into a client announcement. They do need a clear rule for when AI use becomes material to the client relationship. The practical distinction is whether AI is being used as a supervised professional tool, or whether it is interacting with clients, handling sensitive client data, shaping advice, or making decisions without enough human control.
The CalCPA/CAMICO checklist makes that distinction explicit. AI used for research, drafting, audit automation, or analysis under professional supervision carries a different risk from an autonomous chatbot, automated tax guidance, AI-driven client service, or a tool that makes operational decisions without human oversight.
The safest answer: disclose when AI changes the client’s risk or expectations
Disclose AI use when a reasonable client would expect to know about it before agreeing to the work, sharing data, or relying on the output. That includes direct AI-to-client interaction, automated recommendations that could affect advice, use of external AI systems with client or taxpayer information, and any process where AI output materially informs a professional conclusion.
Internal drafting support may be lower risk if a qualified accountant reviews the work, the tool is approved, confidential data is protected, and the final judgement remains with the firm. Even then, firms should consider engagement-letter wording that explains responsible AI use in plain language.
Check where AI creates client disclosure risk
Five triggers that make disclosure necessary
1. The client is talking to AI, not a person. If a chatbot, portal assistant or automated support flow answers client questions, clients should not be left thinking they are dealing with a human adviser.
2. AI influences advice or a client decision. If AI helps generate tax, audit, accounting or advisory recommendations, disclose the role of AI where it is material, and keep evidence of human professional review.
3. Client, taxpayer or confidential data enters an external system. Firms should check whether the provider stores, reuses, trains on, transfers or exposes data. For tax work, consent and restrictions around taxpayer information may be relevant before data is transmitted to third parties.
4. Law or regulation requires transparency. The CalCPA/CAMICO guidance highlights privacy, consumer and transparency rules where automated decisions significantly affect individuals or where customers interact with AI rather than humans. UK firms should translate that principle into their own legal and regulatory context.
5. The engagement letter would otherwise be misleading. If AI meaningfully changes how services are delivered, the client should not have to infer that from generic technology wording.
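The five triggers above amount to a simple rule: if any one of them applies, disclose. A minimal sketch of that logic, with illustrative field names (none of them prescribed by the checklist), might look like this:

```python
from dataclasses import dataclass

@dataclass
class AIWorkflow:
    """One AI-assisted workflow. Field names are illustrative, not a standard schema."""
    client_facing: bool            # trigger 1: the client talks to AI, not a person
    influences_advice: bool        # trigger 2: AI shapes a recommendation or decision
    external_client_data: bool     # trigger 3: client/taxpayer data enters an external system
    legal_transparency_duty: bool  # trigger 4: law or regulation requires transparency
    changes_service_delivery: bool # trigger 5: the engagement letter would otherwise mislead

def disclosure_required(w: AIWorkflow) -> bool:
    """Any single trigger is enough to require client disclosure."""
    return any([
        w.client_facing,
        w.influences_advice,
        w.external_client_data,
        w.legal_transparency_duty,
        w.changes_service_delivery,
    ])

# Supervised internal drafting with no client data exposure: no separate notice needed.
internal_draft = AIWorkflow(False, False, False, False, False)
print(disclosure_required(internal_draft))  # False
```

The point of the sketch is the "any one trigger" structure: disclosure is not a weighted judgement across the five factors, and a workflow does not need to hit several of them before the client should be told.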
What the disclosure should say
Good disclosure is specific enough to build trust, but not so detailed that it promises controls the firm cannot evidence. It should cover the type of AI use, the purpose, whether client data may be processed, who reviews AI-generated work, and what safeguards apply to confidentiality and quality control.
A practical engagement-letter clause might say that the firm may use approved AI tools to assist with research, drafting, analysis or workflow support, that AI-generated material is subject to professional review, and that the firm maintains policies for confidentiality, data protection and responsible use. It should be tailored to the actual tools and workflows in use.
Build an AI disclosure and governance model
The governance record behind the disclosure
Disclosure without controls is weak. Before approving a tool, keep a record of the business use case, data classification, vendor terms, retention settings, training opt-out position, access controls, review steps, incident process and accountable owner. For higher-risk use, add monitoring, sample checks and escalation rules.
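One way to make that record auditable is to keep it as structured data rather than scattered emails. The sketch below uses hypothetical values throughout (the tool name, owner and settings are invented for illustration); the field list follows the paragraph above:

```python
# Pre-approval record for one AI tool, kept as plain data so it can be
# reviewed and evidenced later. All values are illustrative examples.
governance_record = {
    "tool": "ExampleDraftAssist",                     # hypothetical tool name
    "business_use_case": "internal drafting support",
    "data_classification": "no client-identifiable data",
    "vendor_terms_reviewed": True,
    "retention_setting": "zero retention",
    "training_opt_out": True,
    "access_controls": "approved staff only",
    "review_step": "qualified accountant reviews all output",
    "incident_process": "report to risk partner within 24 hours",
    "accountable_owner": "head of quality",
}

# A record is only complete when every field has been filled in.
missing = [k for k, v in governance_record.items() if v in (None, "")]
assert not missing, f"incomplete governance record: {missing}"
```

The completeness check at the end is the useful part: a tool should not move to "approved" status while any field is still blank.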
This is also where many firms find the real issue. The question is not simply whether to tell the client. It is whether the firm can prove that AI use was approved, proportionate, supervised, and consistent with confidentiality and professional standards.
How to implement this in a small or mid-sized firm
Start with a short AI-use register. Put each workflow into one of four categories: prohibited, restricted, approved with review, or approved for low-risk internal support. Then attach disclosure rules to each category. Direct client interaction, client data in external tools and advice-shaping workflows should trigger stronger review and clearer client notice.
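The register itself can be very small. A sketch, using the four categories named above with illustrative workflows and disclosure rules attached (the specific rule wording is an assumption, not checklist text):

```python
# Four-category AI-use register with a disclosure rule attached to each
# category. Category names follow the text; rules and workflows are examples.
DISCLOSURE_RULES = {
    "prohibited": "do not use; no disclosure question arises",
    "restricted": "partner sign-off plus specific client notice before use",
    "approved_with_review": "engagement-letter wording plus documented human review",
    "approved_low_risk": "general responsible-AI wording in the engagement letter",
}

ai_use_register = [
    ("client-facing chatbot", "restricted"),
    ("tax research assistant", "approved_with_review"),
    ("internal checklist tidy-up", "approved_low_risk"),
    ("uploading client records to a public assistant", "prohibited"),
]

for workflow, category in ai_use_register:
    print(f"{workflow}: {DISCLOSURE_RULES[category]}")
```

Keeping the rule on the category, not on each individual workflow, is what makes the register easy to maintain: new tools get classified once, and the disclosure consequence follows automatically.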
Train staff on examples rather than abstractions. A team member using an approved tool to tidy an internal checklist is different from uploading client tax records to a public assistant or letting a bot answer a client’s technical question. The policy should make those differences obvious.
Conclusion
An accounting firm should disclose AI use when it affects the client’s understanding of who is doing the work, how data is handled, how advice is formed, or what safeguards apply. The best approach is not blanket alarm or silence. It is a documented rule, tailored engagement wording, human review, and evidence that client data and professional judgement remain protected.
Turn AI policy into working controls
FAQ
Does every AI-assisted draft need client disclosure?
No. Low-risk internal drafting support may not need separate disclosure if it is supervised, approved and does not expose client data. Engagement-letter wording can still explain responsible AI use at a general level.
What makes disclosure more urgent?
Direct client interaction, automated advice, use of client or taxpayer data in external systems, material reliance on AI output, and any legal transparency obligation all raise the need for disclosure.
Should the disclosure name the AI vendor?
Only when that is useful or required. In many cases, clients need to know the purpose, data handling, review controls and safeguards more than the brand name of the tool.
What evidence should the firm keep?
Keep the approved use case, risk tier, data controls, vendor terms, retention position, review steps, engagement wording, staff training record and accountable owner.