Should I disclose my use of gen AI to clients?
Quick Answer
Should I disclose my use of gen AI to clients? Usually yes when AI affects how client work is performed, reviewed, handled, or risk-managed, because disclosure supports trust and sets clear expectations. If confidential data, judgement-heavy tasks, or regulated outputs are involved, the need for clarity is even stronger.
Detailed Answer
Client trust is harder to repair than to protect
Firms asking whether they should disclose their use of generative AI to clients are usually asking two questions at once. The first is whether disclosure is legally or contractually required. The second is whether staying quiet creates a trust problem later.
In most professional services settings, the practical answer is that some level of disclosure is wise whenever AI affects how work is delivered, reviewed, handled, or governed. That does not always mean a dramatic announcement. It does mean clients should not be left guessing about whether their data, deliverables, or advice processes involve AI-assisted steps.
Where confidentiality, regulated judgement, or client reliance are at stake, transparency is usually the safer position.
Disclosure is usually the stronger governance position
Many firms can use AI internally without needing to flag every low-risk productivity task. But once AI touches client work in a meaningful way, non-disclosure becomes harder to justify.
Disclosure is especially important where:
- client information is entered into an AI-enabled system
- AI helps generate draft analyses, reports, or recommendations
- the output supports regulated, financial, legal, or professional judgement
- contract terms limit subcontracting, data sharing, or technology use
- the client expects a particular review or staffing model
- errors or hallucinations could affect advice quality or accountability
The point is not to create fear. The point is to make sure the client understands the process well enough for trust, consent, and accountability to hold up.
What firms should disclose in practice
Useful disclosure is specific enough to set expectations without overwhelming the client with technical detail.
In practice, firms should be ready to explain:
- what role AI plays in the workflow
- whether client data is entered into any external or vendor-managed system
- what human review remains in place
- what confidentiality and security controls apply
- whether AI output is relied on directly or only used as a draft aid
- what limits or exclusions apply to higher-risk tasks
This lets the client understand not just that AI is used, but how risk is being managed around it.
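One way to make this repeatable is to capture the same points as a structured record that engagement teams complete before work begins. The sketch below is illustrative only; it assumes a Python-based internal tool, and every field name is an assumption about how a firm might organise the information, not a standard.

```python
# A minimal sketch of a structured disclosure record, assuming a firm wants
# to standardise what it tells clients. All field names are illustrative
# assumptions, not an industry or regulatory schema.
from dataclasses import dataclass, field


@dataclass
class AIDisclosureRecord:
    """Captures the points a client would reasonably want explained."""
    ai_role: str                 # e.g. "drafting support under human review"
    external_systems: list[str]  # vendor-managed systems receiving client data
    human_review: str            # what human review remains in place
    security_controls: str       # confidentiality and security controls applied
    output_reliance: str         # "draft aid only" vs "relied on directly"
    excluded_tasks: list[str] = field(default_factory=list)  # higher-risk tasks kept out of scope
```

A record like this can then feed engagement letters or client briefings, so the disclosure stays consistent across teams rather than being improvised case by case.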
Why silence can create more risk than disclosure
Some firms avoid disclosure because they worry it will sound unprofessional, reduce perceived value, or trigger unnecessary questions. In reality, the bigger risk is often that the client discovers AI use later and concludes the firm was evasive.
That can create problems such as:
- loss of trust even where the actual controls were reasonable
- disputes over whether consent should have been obtained
- questions about confidentiality handling and vendor access
- pressure on fees if the client thinks automation was hidden
- difficulty defending the workflow if quality issues appear later
When firms are transparent and controlled in how they communicate AI use, they usually put themselves in a stronger commercial and governance position.
When disclosure should be mandatory rather than optional
Some situations move beyond best practice into something closer to a requirement.
Firms should treat disclosure as effectively required when:
- engagement terms require notice of new tools, subprocessors, or data handling changes
- sensitive personal, financial, legal, or commercially confidential data is involved
- AI is used in drafting substantive client-facing outputs
- the client has its own AI policy or procurement controls
- the work depends heavily on professional judgement and the workflow changes materially
- sector regulation or professional standards point toward transparency
In those cases, the question is less whether to disclose and more how to disclose clearly and proportionately.
What good disclosure language sounds like
Good disclosure is calm, practical, and specific. It should explain the role of AI in service delivery, the boundaries on its use, and the safeguards that remain in place. It should not sound like a defensive confession or a vague marketing statement.
For example, firms may explain that AI is used to support drafting, summarisation, or workflow efficiency under human review, and that confidential information is handled according to defined security and governance controls. That gives the client something concrete to assess.
A simple decision rule for firms
If a reasonable client would care that AI is being used in the workflow, disclosure is probably the right move. If the use of AI changes how data is handled, how work is produced, or how quality is controlled, silence is usually a weak governance choice.
The better approach is to define the use case, assess the risk, decide what the client needs to know, and communicate it in plain language before the issue becomes reactive.
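For firms that want to operationalise this, the decision rule can be expressed as a simple screening function. The sketch below is a hypothetical illustration; the question set and the any-one-triggers threshold are assumptions drawn from the points above, not a legal or regulatory test.

```python
# A minimal sketch of the decision rule above, assuming a firm triages each
# AI use case with a few yes/no questions. The questions and the threshold
# are illustrative assumptions, not formal advice.
def disclosure_indicated(
    client_data_entered: bool,      # is client data entered into an AI-enabled system?
    changes_work_product: bool,     # does AI change how deliverables are produced?
    changes_quality_control: bool,  # does AI change how quality is reviewed or controlled?
    client_would_care: bool,        # would a reasonable client care that AI is used?
) -> bool:
    """Return True when disclosure is the safer governance position."""
    return any([
        client_data_entered,
        changes_work_product,
        changes_quality_control,
        client_would_care,
    ])


# Example: AI drafts sections of a report that a partner then reviews.
print(disclosure_indicated(
    client_data_entered=True,
    changes_work_product=True,
    changes_quality_control=False,
    client_would_care=True,
))  # True -> disclose
```

The point of a function like this is not precision. It forces the questions to be asked and answered before the work starts, rather than after a client raises concerns.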
Conclusion
Firms should usually disclose their use of generative AI when it materially affects client work, data handling, review processes, or accountability. The goal is not maximum disclosure for its own sake. It is making sure clients understand how the service is delivered and why the control position deserves trust.
The practical rule is simple. If AI changes the workflow in a way a client would reasonably care about, disclose it clearly.
FAQ
Do firms need to disclose every minor internal AI use?
Not necessarily. Low-risk internal productivity use may not need explicit disclosure if it does not affect client data, deliverables, or accountability.
What if the AI output is always reviewed by a human?
Human review helps, but it does not automatically remove the case for disclosure if AI still affects the workflow, data handling, or quality process.
Can disclosure be built into engagement terms?
Yes. That is often the cleanest approach, especially where the firm wants a repeatable and proportionate client communication model.
Why do clients care so much about disclosure?
Because they care about confidentiality, quality, accountability, and whether the service they are paying for matches their expectations.
What is the biggest mistake firms make here?
Treating disclosure as a branding problem instead of a governance and trust decision.