Legal Services · AI Governance · client confidentiality · vendor risk

Which AI tools can touch client-confidential data?

5 May 2026
Answered by Rohit Parmar-Mistry

Quick Answer

Whether an AI tool can touch client-confidential data depends on whether the tool is approved, contractually controlled, technically restricted and auditable. If you cannot prove confidentiality, data handling, retention and human review, do not approve it for client matter data.

Detailed Answer

Why this question matters for client-confidential data

AI tool approval is no longer a simple productivity decision. For a legal, advisory or regulated team, the first question is whether the tool can receive client-confidential data without creating avoidable confidentiality, privilege, security or regulatory risk.

The practical rule is straightforward: classify the tool by the sensitivity of the data it will touch, then approve only the combinations where the contract, controls and operating process are strong enough for the risk.

The safest answer is a tiered approval model

AI tools can touch client-confidential data only when they sit in an approved high-trust tier. That usually means an enterprise account, clear data processing terms, no training on customer inputs, defined retention, access controls, logging and a documented human review process.

Consumer-grade tools, browser extensions and unmanaged plug-ins should be treated as unsuitable for client matter data unless they have passed the same vendor, legal and security review as any other system that handles confidential information.

Map which AI tools can safely handle sensitive data

Use four data tiers before approving any tool

A useful approval model starts with data categories, not tool names:

  • Public data: website copy, public reports, published policies and already public case material.
  • Internal business data: non-public operating information, internal templates, meeting notes and draft strategy.
  • Client-confidential data: matter details, client files, advice, negotiations, evidence, due diligence material and privileged context.
  • Restricted data: special category personal data, highly sensitive litigation material, credentials, financial crime information or data subject to strict contractual limits.

Most AI tools may be acceptable for public data after basic review. Far fewer should be approved for client-confidential data, and restricted data should require explicit exception approval.
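
To make the tiering concrete, the sketch below shows one way the four tiers could be encoded for use in a register or an access check. The tier names follow the list above; the artefact examples and default mappings are purely illustrative, not a recommended taxonomy.

```python
from enum import IntEnum

class DataTier(IntEnum):
    """Data sensitivity tiers, ordered so a higher value means higher risk."""
    PUBLIC = 1               # website copy, published reports, public case material
    INTERNAL = 2             # non-public operating information, drafts, templates
    CLIENT_CONFIDENTIAL = 3  # matter details, advice, privileged context
    RESTRICTED = 4           # special category data, credentials, strict limits

# Illustrative only: a default tier for common artefact types.
DEFAULT_TIER_BY_ARTEFACT = {
    "published_policy": DataTier.PUBLIC,
    "meeting_notes": DataTier.INTERNAL,
    "matter_file": DataTier.CLIENT_CONFIDENTIAL,
    "special_category_personal_data": DataTier.RESTRICTED,
}
```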

The proof you need before approving client data use

Approval should be evidence-based. For any tool that may touch client-confidential data, collect and retain proof in a short approval file:

  • Contractual proof: data processing agreement, confidentiality terms, subprocessors, jurisdiction, audit rights and termination/deletion terms.
  • Training and reuse proof: confirmation that prompts, files and outputs are not used to train public or shared models unless expressly agreed.
  • Retention proof: how long inputs, outputs, logs and uploaded files are kept, plus whether retention can be configured.
  • Security proof: authentication, single sign-on, role-based access, encryption, audit logs and incident notification commitments.
  • Operational proof: acceptable-use rules, user training, human review, matter-level restrictions and escalation routes.
  • Client proof: any engagement letter term, outside counsel guideline or client-specific restriction that affects AI use.

If the proof is missing, the tool may still be useful for public or synthetic data, but it should not be approved for client-confidential material.
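
One way to keep the approval file honest is to treat each proof item as a required field and to block client-data approval while anything is missing. The sketch below is a minimal illustration under that assumption; the field names mirror the list above and are hypothetical, not a complete vendor-review record.

```python
from dataclasses import dataclass, fields

@dataclass
class ApprovalFile:
    """Evidence retained before approving a tool for client-confidential data.

    Each field holds a reference to the retained proof (for example a document
    link), or None if that evidence has not yet been collected.
    """
    contractual_proof: str | None = None     # DPA, confidentiality, subprocessors
    training_reuse_proof: str | None = None  # no training on inputs or outputs
    retention_proof: str | None = None       # retention of inputs, outputs, logs
    security_proof: str | None = None        # SSO, access control, encryption
    operational_proof: str | None = None     # acceptable use, training, review
    client_proof: str | None = None          # engagement letters, OCGs

def missing_proof(record: ApprovalFile) -> list[str]:
    """Return the names of any proof items still outstanding."""
    return [f.name for f in fields(record) if getattr(record, f.name) is None]

# A tool with any missing proof should not be approved for client matter data.
record = ApprovalFile(contractual_proof="dpa-2026-03.pdf")
assert missing_proof(record)  # still incomplete, so no client-data approval
```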

How to decide which tools belong in each tier

A simple matrix is enough for most firms:

  • Green: approved for specific data classes and use cases, with named controls and owner.
  • Amber: approved only for public, anonymised or synthetic data, or only for a named pilot.
  • Red: not approved for business use, client data, file uploads or browser access.
  • Exception: requires matter partner, risk or governance sign-off before use.

The important discipline is not the label itself. It is that every user can tell, before pasting or uploading information, whether the tool is allowed for that data and that task.
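
As a rough illustration, the matrix reduces to a single lookup that a user or an internal approval page could consult before any paste or upload. The register entries and tool names below are hypothetical, and the DataTier enum repeats the earlier tiering sketch.

```python
from enum import Enum, IntEnum

class DataTier(IntEnum):  # as in the earlier tiering sketch
    PUBLIC = 1
    INTERNAL = 2
    CLIENT_CONFIDENTIAL = 3
    RESTRICTED = 4

class Rating(Enum):
    GREEN = "approved for named data classes and use cases"
    AMBER = "public, anonymised or synthetic data only"
    RED = "not approved for business use"
    EXCEPTION = "requires partner, risk or governance sign-off"

# Hypothetical register entries: tool -> (rating, highest permitted tier).
TOOL_MATRIX = {
    "enterprise-assistant": (Rating.GREEN, DataTier.CLIENT_CONFIDENTIAL),
    "pilot-summariser": (Rating.AMBER, DataTier.PUBLIC),
    "consumer-chatbot": (Rating.RED, None),
}

def may_use(tool: str, tier: DataTier) -> bool:
    """The question a user should be able to answer before pasting data."""
    rating, max_tier = TOOL_MATRIX.get(tool, (Rating.RED, None))
    if rating in (Rating.RED, Rating.EXCEPTION):
        return False  # exception-rated tools need sign-off before any use
    return max_tier is not None and tier <= max_tier
```

In practice the same answer should surface in the tool itself, through defaults and blocked categories, rather than relying on users to remember the matrix.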

Put an AI approval model under ongoing governance

Common failure points to avoid

  • Approving the brand instead of the product: enterprise and consumer versions of the same AI product can have different data terms.
  • Ignoring plug-ins and connectors: a low-risk chat tool can become high-risk when connected to email, documents, CRM or matter systems.
  • Relying on user judgement alone: people need default settings, examples and blocked categories, not a vague instruction to be careful.
  • Skipping output review: confidentiality controls reduce data risk, but they do not remove hallucination, accuracy or professional judgement risk.

A practical implementation checklist

  1. Create a live register of AI tools, owners, approved use cases and approved data classes.
  2. Define public, internal, client-confidential and restricted data tiers.
  3. Run vendor review for any tool that may touch client-confidential or restricted data.
  4. Record the specific proof behind each approval decision.
  5. Configure access, logging, retention and sharing settings before rollout.
  6. Train users with examples of what they can and cannot paste, upload or connect.
  7. Review the register quarterly, and whenever a tool changes its terms, model, retention policy or integrations.

This does not need to become a slow committee process. The aim is a repeatable route to yes for safe use, and a clear no for tools that cannot prove they protect the data.
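
A minimal sketch of the review trigger in steps 1 and 7, assuming a hypothetical register entry that records an owner, a last-reviewed date and a flag for changed terms:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class RegisterEntry:
    """One row in the live register of AI tools (illustrative fields only)."""
    tool: str
    owner: str
    approved_use_cases: list[str]
    approved_data_classes: list[str]
    last_reviewed: date
    terms_changed_since_review: bool = False

def needs_review(entry: RegisterEntry, today: date | None = None) -> bool:
    """Quarterly review is due, or sooner if terms, models or integrations changed."""
    today = today or date.today()
    return (today - entry.last_reviewed >= timedelta(days=90)
            or entry.terms_changed_since_review)
```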

Build the approval workflow and tool register

Conclusion

Client-confidential data should only touch AI tools that have been approved for that exact risk tier. The approval should be backed by contractual, technical and operational proof, then translated into clear user rules.

If the team cannot show why a tool is safe for client data, it should stay limited to public, anonymised or synthetic inputs until the evidence catches up.

FAQ

Can staff use free AI tools for client questions?

Not with client-confidential data. Free or unmanaged tools should normally be limited to public, anonymised or synthetic examples unless they have passed formal review.

Is anonymisation enough to use a non-approved tool?

Sometimes, but only if the information is genuinely stripped of client identity, matter context and indirect identifiers. Pseudonymised matter facts can still be confidential.

What is the minimum proof before approving an AI tool?

You need evidence on data use, model training, retention, subprocessors, security, access control, logging and deletion. For legal work, add confidentiality and client-specific restrictions.

Who should own the approval decision?

Ownership should sit with a named business owner, supported by risk, legal, security and data protection input. No tool should be approved by enthusiasm alone.

How often should approvals be reviewed?

Review high-risk tools at least quarterly, and immediately after a material change to terms, integrations, model behaviour, retention or the type of data being used.

Need More Specific Guidance?

Every organisation's situation is different. If you need help applying this guidance to your specific circumstances, I'm here to help.