Is ChatGPT safe for lawyers to use?
Short Answer
ChatGPT can be used safely by lawyers, but only with significant guardrails. Free and personal-tier ChatGPT should never be given confidential client information, because prompts may be used for model training and stored by OpenAI. ChatGPT Enterprise, ChatGPT Team, and the API with data retention disabled offer stronger contractual protections and are the only versions most state bars view as defensible for handling matter-related content.
Full Answer
The question of whether ChatGPT is safe for lawyers has been answered, refined, and re-answered by nearly every state bar ethics committee since 2023. The short version is that the tool itself is neutral; safety depends entirely on which version you use, what you put in, and how you supervise the output. Attorneys using the free ChatGPT interface to draft a brief that includes client facts, opposing counsel's names, or the contents of a sealed settlement agreement are almost certainly violating Model Rule 1.6 on confidentiality. OpenAI's consumer terms permit the company to use inputs to improve its models unless the user opts out, and even opted-out conversations may be retained for abuse monitoring. For a profession whose core duty is secrecy, that is a material risk that cannot be waived by pasting a disclaimer at the top of a prompt.
The safer path is ChatGPT Enterprise or ChatGPT Team. Both products come with a contractual commitment that inputs and outputs will not be used for training, SOC 2 Type 2 attestation, encryption at rest and in transit, SSO, and administrative controls over data retention. OpenAI's API is similarly defensible when used under a zero data retention (ZDR) agreement, or with the standard 30-day retention window plus a signed data processing agreement; note that ZDR is a contractual, account-level arrangement granted by OpenAI, not a flag you can simply set on a request. The ABA's Formal Opinion 512 (issued in 2024) essentially codified what thoughtful practitioners had already concluded: a lawyer may use generative AI, but must understand the tool's capabilities, protect client confidences, supervise outputs, consider whether disclosure to the client is required, and bill reasonably for AI-assisted work. None of those duties disappear because the tool is fast or cheap.
Beyond contractual protections, the second layer of risk is output accuracy. The much-publicized sanctions against the New York lawyers in Mata v. Avianca (fake citations generated by ChatGPT) remain the cautionary tale every CLE starts with, and federal judges in at least a dozen jurisdictions now require lawyers to certify whether AI was used to draft filings. ChatGPT, being a general-purpose model, has no access to Westlaw, Lexis, or Bloomberg Law and will happily fabricate case names that sound plausible. For anything citation-bearing, the output must be verified against a primary source, every single time. This is not a ChatGPT problem specifically; it is a large-language-model problem. But it means ChatGPT is a poor fit for research tasks where a legal-specific tool such as Lexis+ AI, Westlaw Precision AI, or Harvey would return grounded answers with real citations.
Where ChatGPT genuinely shines for lawyers is in non-confidential, non-citation work: brainstorming argument structures using hypothetical facts, summarizing public documents, drafting internal memos about firm operations, translating between languages, cleaning up transcripts, generating intake checklists, or refining the tone of a client update. Many firms have written acceptable-use policies that define a "public information only" rule for ChatGPT and route any matter-specific work through a vetted enterprise tool. That bright-line approach is easier to train staff on than a nuanced "it depends" and tends to survive audits better.
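A bright-line policy like that can even be enforced mechanically before a prompt ever leaves the firm. The Python sketch below shows a hypothetical pre-send filter of the kind a firm might wire in front of any external model call; the pattern list, keyword set, and function name are illustrative assumptions, not part of any OpenAI product or bar-approved standard, and a real deployment would need a far richer screen (client names, addresses, document metadata) reviewed by counsel.

```python
import re

# Hypothetical pre-send filter sketching a firm "public information only"
# rule. The caption pattern, keyword list, and function name are
# illustrative assumptions only -- not an OpenAI feature or a vetted
# compliance control.
CASE_CAPTION = re.compile(r"\b[A-Z][a-z]+ v\. [A-Z][a-z]+")  # e.g. "Smith v. Jones"
BLOCKED_KEYWORDS = ("privileged", "confidential", "matter no")

def is_prompt_allowed(prompt: str) -> bool:
    """Return False if the prompt appears to contain matter-specific content."""
    if CASE_CAPTION.search(prompt):
        return False
    lowered = prompt.lower()
    return not any(keyword in lowered for keyword in BLOCKED_KEYWORDS)
```

A gateway built on a check like this would refuse `is_prompt_allowed("Draft discovery requests in Smith v. Jones")` while passing a request to summarize public filing rules, which is exactly the train-on-a-bright-line behavior the written policy aims for.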
Finally, practical hygiene matters. Turn off chat history for any account used for work, even paid accounts. Do not upload client documents to Custom GPTs unless the Custom GPT is built on an Enterprise workspace. Avoid browser extensions and third-party wrappers that proxy your prompts through unknown servers. Check your state's latest ethics opinion (Florida, California, New York, New Jersey, and Washington all have them; most others are catching up) because the guidance is evolving fast. And remember that supervising partners are responsible under Rule 5.1 for the AI use of the associates and paralegals they manage.
The bottom line: ChatGPT is safe enough for public-facing and operational work, dangerous for confidential or citation-bearing work, and acceptable for matter work only on Enterprise or Team with appropriate policies. For most firms, the answer is not "use or don't use" but "use the right version for the right task."
Related Questions
- Is legal AI confidential and attorney-client privileged?
- What are AI hallucinations in legal work and how to prevent them?
- Is it ethical for lawyers to use AI?