AI Governance · Generative AI · Risk management · Data privacy · Client disclosure · Professional services · Accounting

How should CPA firms manage generative AI risk (governance, privacy, disclosure and liability)?

30 March 2026
Answered by Rohit Parmar-Mistry

Quick Answer

How should CPA firms manage generative AI risk? Treat it like a new high-impact tool: set clear governance and approved use cases, keep client data out of public models by default, and document when and how AI was used to protect quality and liability.

Detailed Answer

Generative AI can speed up research, drafting and analysis inside a CPA firm, but it also creates new failure modes: confidentiality leakage, hallucinated numbers, unclear responsibility, and awkward client conversations when AI was involved.

Generative AI in a CPA firm: the risk is predictable (and manageable)

If you treat GenAI as a general-purpose ‘assistant’ with no guardrails, you will get inconsistent behaviour across staff, uneven quality, and avoidable exposure. The safer approach is to treat it like any other regulated tool in a professional services environment: define what it can be used for, what it must never touch, and how you evidence quality.

The safest approach in practice (governance, privacy, disclosure and liability)

1) Governance: appoint an owner, define approved use cases, and publish a simple policy that staff can follow. Your policy should cover: allowed tools/models, prohibited data types, prompt logging expectations, human review requirements, and escalation when something looks wrong.
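
Some firms find it useful to capture that policy in a machine-readable form, so that training, reviews and any tooling all read from the same source. Here is a minimal sketch in Python; the field names, tool identifier and data categories are illustrative placeholders, not references to specific products or standards:

```python
from dataclasses import dataclass, field

# Hypothetical policy record: tool names and data categories below are
# illustrative placeholders, not references to specific products.
@dataclass
class GenAIPolicy:
    owner: str                          # named accountable AI owner
    approved_tools: list[str] = field(default_factory=list)
    prohibited_data: list[str] = field(default_factory=list)
    review_required: bool = True        # human review before work leaves the firm
    log_prompts: bool = True            # retain prompts for quality and audit

firm_policy = GenAIPolicy(
    owner="Head of Risk & Quality",
    approved_tools=["enterprise-llm"],  # placeholder tool identifier
    prohibited_data=["client_identifiers", "personal_data",
                     "bank_details", "tax_ids"],
)
```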

2) Data privacy: default to no client data in public GenAI tools unless your firm has verified contractual terms, security controls, and a clear data handling position. Where possible, route GenAI through enterprise controls (SSO, audit logs, model settings that prevent training on your inputs, and data retention limits).
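
One concrete control here is a pre-flight check that blocks obviously sensitive strings before a prompt ever reaches an external model. The sketch below uses a few illustrative regex patterns as an assumption; a real deployment would rely on the firm's DLP tooling and far more robust detection than this:

```python
import re

# Illustrative patterns only; a real control would use proper DLP tooling
# and far more robust detection than these examples.
SENSITIVE_PATTERNS = {
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "uk_nino":       re.compile(r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b"),  # NI-number shape
    "card_number":   re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def preflight_check(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]

hits = preflight_check("Please summarise the attached guidance for jane.doe@example.com")
if hits:
    # Block the request and route to the incident playbook rather than the model.
    print(f"Blocked: prompt contains {hits}")
```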

3) Client disclosure: decide what you will disclose, when, and how. Many firms choose disclosure for material client work (for example where AI-generated content materially informs deliverables), and do not disclose for purely administrative use (such as internal drafting) provided a qualified professional performs full review.

4) Liability: assume AI can be wrong in ways that look confident. Treat AI outputs as drafts. Your controls should ensure that a qualified professional validates assumptions, calculations, sources, and final conclusions. Document that review.
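
One lightweight way to document that review is a short, structured record attached to the deliverable. A minimal sketch, with hypothetical field names rather than any prescribed format:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical review record; fields are illustrative, not a prescribed format.
@dataclass
class ReviewNote:
    deliverable: str             # what was reviewed
    reviewer: str                # qualified professional who signed off
    reviewed_on: date
    sources_checked: list[str]   # authoritative sources the figures were verified against
    ai_assisted: bool            # whether GenAI contributed to the draft

note = ReviewNote(
    deliverable="Q3 advisory memo",
    reviewer="J. Smith, CPA",
    reviewed_on=date(2026, 3, 30),
    sources_checked=["<authoritative source reference>"],
    ai_assisted=True,
)
```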

Get an AI Risk & Efficiency Audit for your firm

A practical control checklist you can implement this quarter

  • Use-case register: list permitted use cases (for example: summarising public guidance, drafting internal emails, drafting client-facing prose with full review) and banned use cases (for example: entering client identifiers into public tools, generating tax positions without verification); a sketch of how this register might be recorded appears after this list.
  • Data classification: define what counts as client confidential, personal data, special category data, and regulated data, and map each to approved tools.
  • Tooling standards: prefer enterprise versions with admin controls, audit logs, and clear data retention terms. Disable optional features that increase leakage risk (plugins, browsing, automatic file syncing) unless explicitly required.
  • Prompting guidance: teach staff how to ask for structure, assumptions, and citations, and how to avoid over-sharing. Include ‘safe prompt’ examples.
  • Human review gates: require review by a qualified professional before AI-influenced content leaves the firm. For higher-risk work, add a second reviewer.
  • Evidence: keep enough traceability to demonstrate that a human checked the work (for example: a short review note, version history, and the sources relied on). Avoid storing sensitive prompts in places that increase exposure.
  • Incident playbook: define what to do if someone pastes client data into the wrong tool, or if a model output is discovered to be wrong after delivery.
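
As promised above, here is one way the first two items might be recorded: the use-case register and the data classification map can live in simple structures that training, reviews and tooling all read from. The names, categories and tool identifiers below are hypothetical:

```python
# Hypothetical use-case register: permitted and banned uses, kept in one
# place so training, reviews and tooling all reference the same list.
USE_CASE_REGISTER = {
    "summarise_public_guidance": {"permitted": True,  "review": "standard"},
    "draft_internal_email":      {"permitted": True,  "review": "standard"},
    "draft_client_prose":        {"permitted": True,  "review": "full"},
    "enter_client_identifiers":  {"permitted": False, "review": None},
    "generate_tax_positions":    {"permitted": False, "review": None},  # banned without verification
}

# Hypothetical data classification map: each class points at the tools
# approved to handle it. An empty list means "no GenAI tool at all".
DATA_CLASSIFICATION = {
    "public":              ["enterprise-llm", "public-llm"],
    "internal":            ["enterprise-llm"],
    "client_confidential": [],
    "personal_data":       [],
}

def tool_allowed(data_class: str, tool: str) -> bool:
    """Check whether a tool is approved for a given data class."""
    return tool in DATA_CLASSIFICATION.get(data_class, [])
```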

What to say to clients (without overcomplicating it)

Most client discomfort comes from two things: confidentiality and quality. Your message should be simple:

  • Confidentiality: ‘We do not place your confidential information into public AI tools. Where we use AI, it is within controlled systems and governed by policy.’
  • Quality: ‘AI does not replace professional judgement. All work is reviewed and signed off by qualified staff.’

If you choose to disclose AI use in engagement terms, keep it precise and operational: what AI is used for, what it is not used for, and that professional review remains the standard.

Governance that actually works (without killing adoption)

The goal is not to ban GenAI. It is to standardise safe use so teams can move faster without creating a hidden risk pile.

A workable operating model usually includes:

  • A named AI owner (risk/quality/compliance) and a small steering group
  • Policy + training that fits on one page, plus detailed annexes for edge cases
  • Quarterly review of tools, use cases, incidents, and client feedback
  • Clear red lines on data handling and client deliverables

See our Governance Retainers (policy, controls and operating model)

Conclusion: make GenAI boring (in the best way)

CPA firms can use generative AI safely by putting a small number of controls in place: clear governance, strict data handling, a sensible disclosure stance, and defensible human review. Once those are set, teams can adopt quickly without gambling with client trust or professional liability.

Explore Implementation Projects (secure rollout + workflows)

FAQ

Do we need to disclose AI use to every client?
Not necessarily. Many firms disclose AI use when it materially influences the client deliverable, and do not disclose purely internal drafting that is fully reviewed. The key is consistency and a defensible policy.

Can staff use free public tools like ChatGPT?
Treat this as high risk unless you have verified business terms and controls. Default to firm-approved tools with admin controls, logging and clear data handling.

What data should never go into GenAI tools?
Client identifiers, personal data, bank details, tax IDs, and any confidential client documents should be prohibited unless you have an approved secure environment and a defined purpose.

How do we reduce hallucination risk?
Limit AI to defined tasks, require citations/sources, and enforce human review. For any numbers, calculations or positions, the professional must verify against authoritative sources.

What is the minimum governance we need to start?
An owner, an approved tool list, a short policy, and a mandatory review gate for client-facing work. You can mature controls over time, but those basics prevent most failures.

Need More Specific Guidance?

Every organisation's situation is different. If you need help applying this guidance to your specific circumstances, I'm here to help.