AI Ethics | Updated April 12, 2026

Is It Ethical for Lawyers to Use AI?

Short Answer

Yes, and in some contexts it is now arguably unethical not to. ABA Formal Opinion 512 (2024) confirmed that lawyers may use generative AI, provided they maintain competence, protect client confidences, supervise outputs, consider client disclosure, and bill reasonably. The duty of technological competence under Model Rule 1.1, Comment 8, increasingly implies familiarity with AI tools that have become standard in practice.

Full Answer

The ethical framework for lawyer AI use is no longer a subject of speculation. ABA Formal Opinion 512, issued in July 2024, walked through the major Model Rules and explained how each applies to generative AI, and most state bars have since issued parallel or supplementary opinions. The consensus is neither "AI is forbidden" nor "AI is freely permitted," but a set of duties the lawyer must satisfy, which together amount to a framework for responsible use. Understanding the framework is more useful than memorizing any single opinion, because the technology will keep evolving faster than the opinions that interpret it.

The duty of competence is the foundation. Model Rule 1.1 requires that a lawyer provide competent representation, and Comment 8 (added in 2012) explicitly extends competence to "the benefits and risks associated with relevant technology." Generative AI is now relevant technology for almost every area of practice, which means lawyers have an affirmative duty to understand what these tools do, where they fail, and how to use them. The duty does not require that every lawyer personally become a prompt engineer, but it does require enough literacy to supervise AI use intelligently and to spot failure modes. A lawyer who has never used any generative AI tool and does not know what hallucinations are is probably falling short of this duty in 2026, and the gap will widen over time.

The duty of confidentiality under Rule 1.6 is the next layer. Lawyers must not disclose information relating to representation without informed consent or implied authorization. Feeding client information to a tool whose terms allow the vendor to train on inputs, store them indefinitely, or share them with third parties is a disclosure. This is why the enterprise-tier versus consumer-tier distinction in AI tools is an ethics issue and not just a vendor question. Opinion 512 is explicit: lawyers must understand the data practices of the tools they use and must not use tools whose practices are inconsistent with their confidentiality obligations.

Supervision is the third pillar. Rules 5.1 and 5.3 require lawyers to supervise subordinate lawyers and nonlawyer assistants, and the ABA has taken the position that AI tools fit the "nonlawyer assistance" frame closely enough that supervisory duties apply. In practice this means the lawyer is responsible for the output, no exceptions. A partner whose associate used an AI to draft a brief is responsible for the associate's failure to verify the AI output; an associate who used an AI to generate a research memo is responsible for the memo's errors even if the AI produced them. The "the tool did it" defense does not exist in legal ethics.

Two more duties round out the framework. Rule 1.5 requires that fees be reasonable, which has implications for AI billing: if a task that used to take two billable hours now takes fifteen minutes with AI, honest billing reflects the actual time spent, not the old benchmark. Opinion 512 and subsequent state bar opinions are unanimous on this. Some firms have experimented with value billing or flat fees to capture AI productivity gains for the firm rather than passing them entirely to clients; this is permissible, but the economics must be disclosed and agreed to up front. The old habit of billing pre-AI hours for post-AI work is not permissible and is attracting active bar enforcement.

Rule 1.4 on client communication sometimes requires disclosure of AI use to the client. The rule has always required enough information to let the client make informed decisions about the representation. If AI use materially affects how a matter is handled, the cost, or the risk profile, the client should know. Many firms now include a short paragraph in engagement letters disclosing that AI tools may be used on the matter, subject to the firm's supervision and confidentiality practices. This is not legally required in most jurisdictions but is becoming best practice and protects the lawyer from any later claim of non-disclosure.

The harder ethical questions are the ones that are not yet fully settled. What happens when an AI tool's output reflects biased training data and the client is disadvantaged by the bias? What about the fiduciary implications of using a tool whose vendor has financial ties to a party in the matter? What about the obligation to disclose AI use to a court when the judge has not asked? These are live debates, and the law will develop over the next several years. For now, the safe course is to err on the side of disclosure, supervise AI output rigorously, document the diligence you did on each tool, and stay current on your state's ethics opinions. The risk of using AI ethically is low; the risk of using it carelessly is substantial.

Recommended Tools

  • Harvey AI - Enterprise platform with compliance-ready posture.
  • Lexis+ AI - Grounded research under existing Lexis contract terms.
  • CoCounsel - Thomson Reuters professional-grade AI assistant.
