ABA Formal Opinion 512 Explained: Ethics of AI in Legal Practice

April 12, 2026

legal-ai, analysis


On July 29, 2024, the American Bar Association's Standing Committee on Ethics and Professional Responsibility released Formal Opinion 512, titled "Generative Artificial Intelligence Tools." It is the first comprehensive national ethics guidance on lawyers' use of generative AI, and it has become the baseline reference document for state bar opinions, law firm policies, and continuing legal education programs across the country.

This post unpacks Opinion 512 rule by rule, explains what each obligation means in practical terms, and provides a compliance checklist that any firm can adapt. The goal is to demystify the opinion so that the lawyer responsible for AI at your firm (and in 2026 that should be someone) can walk away with a clear picture of what the ABA expects.

What Opinion 512 Says

Opinion 512 does not create new ethical duties. Instead, it applies existing Model Rules of Professional Conduct to the specific question of generative AI use. The committee's core message is that generative AI tools, like any technology a lawyer adopts, must be deployed in a way that satisfies the duties of competence, confidentiality, communication, supervision, reasonableness of fees, and candor to tribunals.

The opinion defines generative AI broadly to include large language models and related systems that produce new content in response to user prompts. It acknowledges that these tools can deliver significant benefits in efficiency, access to justice, and quality. But it warns that the technology's opacity, tendency to hallucinate, and reliance on training data create ethical risks that lawyers must actively manage.

Four themes run through the opinion:

  1. Lawyers must understand, at least generally, how the AI tools they use work.
  2. Client data fed into AI systems must be protected the same way any other confidential information is protected.
  3. Lawyers remain responsible for the work product AI tools help generate, full stop.
  4. Billing practices must fairly reflect the actual time and effort invested, not pre-AI time estimates.

Let's walk through the specific rules.

Rule 1.1 Competence

Model Rule 1.1 requires a lawyer to provide "competent representation" and, per Comment 8, to "keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology."

Opinion 512 interprets this duty of "technological competence" to require that lawyers using generative AI have a "reasonable understanding of the capabilities and limitations" of the specific tools they employ. This does not mean every lawyer must become a machine learning engineer. It does mean, at minimum, that a lawyer must know:

  • Whether the tool is a general-purpose chatbot or a retrieval-augmented legal platform
  • Whether the tool can fabricate or "hallucinate" output, including citations (it generally can)
  • What training data the tool was built on, to the extent the vendor discloses it
  • How the tool handles user inputs, especially whether prompts are used for further training
  • What the tool's accuracy limitations are for the specific task at hand

The opinion emphasizes that competence is tool-specific. A lawyer may be competent to use one AI platform and not another. Before adopting a new tool, a reasonable vetting process is required. This is why purpose-built legal platforms like Harvey AI and Casetext CoCounsel have proliferated in law firms: they come with documentation, security attestations, and known failure modes that a lawyer can actually evaluate.

The competence duty also includes an ongoing obligation. Because generative AI is evolving rapidly, a lawyer's understanding from 2024 is not the same as a lawyer's understanding from 2026. Continuing education is not optional.

Rule 1.6 Confidentiality

This is the rule most likely to send a cold sweat down the spine of general counsel. Model Rule 1.6 prohibits a lawyer from revealing "information relating to the representation of a client" without informed consent, subject to narrow exceptions.

Opinion 512 addresses the specific question of whether entering client information into a generative AI tool counts as "revealing" it. The answer turns on how the tool handles the input. If the tool is "self-learning" or transmits inputs to a third-party vendor who may use them for training, improving models, or any other purpose, then the lawyer must obtain informed client consent before inputting confidential information, or must use a tool configured to prevent such use.

The opinion is practical about this. It recognizes that lawyers can satisfy the duty by:

  • Using enterprise-grade tools that offer contractual commitments not to retain or train on user inputs
  • Configuring tools to opt out of data retention where the option exists
  • Stripping or anonymizing client data before submission (a minimal sketch follows this list)
  • Obtaining informed consent when the above are not feasible
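To make the anonymization option concrete, here is a minimal sketch of a pre-submission redaction step in Python. Everything in it is hypothetical: the client name, matter number, and patterns are placeholders, a real implementation would draw its sensitive-term list from the firm's matter-management system, and simple pattern matching will miss information that identifies a client indirectly. Treat it as an illustration of the workflow, not a complete safeguard.

```python
import re

# Hypothetical terms tied to a specific client and matter. In practice
# this list would come from the firm's matter-management system.
SENSITIVE_TERMS = {
    "Acme Widgets LLC": "[CLIENT]",
    "Jane Doe": "[INDIVIDUAL]",
    "Matter 2024-0117": "[MATTER NO.]",
}

# Generic patterns for common identifiers.
GENERIC_PATTERNS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),  # email addresses
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),          # social security numbers
]


def redact(text: str) -> str:
    """Replace known sensitive terms and common identifiers before text
    is sent to an external generative AI tool."""
    for term, placeholder in SENSITIVE_TERMS.items():
        text = text.replace(term, placeholder)
    for pattern, placeholder in GENERIC_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text


if __name__ == "__main__":
    prompt = ("Summarize the indemnification clause Acme Widgets LLC "
              "proposed in Matter 2024-0117; questions go to jdoe@example.com.")
    print(redact(prompt))
    # Summarize the indemnification clause [CLIENT] proposed in
    # [MATTER NO.]; questions go to [EMAIL].
```

A redaction pass like this supports, but does not replace, the contractual protections and informed-consent analysis described above.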

Note the phrase "informed consent." General language in an engagement letter saying "we may use technology in the course of the representation" is probably not sufficient. The consent must be informed, meaning the client understands what tool will be used, what data will be shared, and what risks exist.

The opinion also reminds lawyers of Rule 1.6(c), which requires "reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation." This translates to ordinary cybersecurity due diligence on any AI vendor: review the vendor's security posture, understand their data handling practices, read the terms of service, and get written assurances about data segregation.

Rules 5.1 and 5.3 Supervision

Model Rules 5.1 and 5.3 impose supervisory duties on partners, managers, and lawyers with direct supervisory authority. Rule 5.1 concerns lawyer subordinates; Rule 5.3 concerns nonlawyer assistance.

Opinion 512 treats generative AI tools as "nonlawyer assistance" within the meaning of Rule 5.3. That framing has real consequences. It means a supervising lawyer must make "reasonable efforts to ensure that the [tool's] conduct is compatible with the professional obligations of the lawyer." In practice:

  • Firms must have policies governing AI use
  • Supervising lawyers must verify that subordinates understand those policies
  • Outputs from AI tools must be reviewed before being relied upon
  • The firm's training program must cover AI-specific risks

The opinion also notes that managerial lawyers (those with firm-wide responsibility) must put systems in place that give reasonable assurance of compliance. For most firms, this means a written AI policy, approved-tools list, training program, and periodic audit.

This is where many firms are genuinely behind. Plenty of firms have allowed shadow AI use, where individual lawyers adopt tools without any central coordination. Opinion 512 makes clear that this is not acceptable under Rules 5.1 and 5.3.

Rule 1.5 Fees

Model Rule 1.5 prohibits unreasonable fees and expenses. Opinion 512 addresses several fee-related questions raised by generative AI.

Billing for time. Lawyers cannot bill hourly clients for time they did not spend. If an AI tool drafts a memo in ten minutes that would have taken four hours without the tool, the lawyer cannot bill four hours. This seems obvious, but the opinion states it explicitly because the temptation is real.

Billing for efficiency. The opinion does not prohibit lawyers from benefiting from AI-driven efficiency. A flat-fee or value-billed arrangement can legitimately capture the gains from AI. What a lawyer cannot do is bill hourly for phantom time.

Charging for AI tools. If the firm absorbs AI tool costs as overhead, they cannot be passed through to clients as separate expenses. If AI costs are billed as out-of-pocket expenses, they must reflect actual costs without markup, and the engagement letter should disclose the practice.

Disclosure. When a client asks whether AI was used, the lawyer should answer honestly. Some clients will affirmatively want AI used; some will not; some will be indifferent. The fee agreement is the best place to address expectations up front.

Rule 3.3 Candor to the Tribunal

Model Rule 3.3 prohibits a lawyer from knowingly making a false statement of fact or law to a tribunal. This is the rule that Steven Schwartz ran afoul of in Mata v. Avianca, and Opinion 512 addresses it directly.

The opinion reminds lawyers that filings must be independently verified. A lawyer who cites authority without verifying it, whether generated by AI or any other source, risks violating Rule 3.3 if the authority turns out to be false. The duty is non-delegable; it cannot be outsourced to a chatbot.
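Verification is a human task, but a short script can make it harder to skip. The sketch below assumes a deliberately simplified citation pattern (volume, reporter, first page) and purely hypothetical case names; it extracts candidate citations from an AI-assisted draft and prints a checklist for a lawyer to confirm against an official reporter or a trusted research service. It cannot tell you whether a cited case is real, only what needs to be checked.

```python
import re

# Deliberately simplified pattern for "volume Reporter page" citations,
# e.g. "575 U.S. 320" or "925 F.3d 1291". Real citation formats are far
# more varied; this is illustrative only.
CITATION_RE = re.compile(
    r"\b\d{1,4}\s+"                                        # volume
    r"(?:U\.S\.|S\. Ct\.|F\. Supp\. \d?d?|F\.\d?d?)\s+"    # reporter
    r"\d{1,4}\b"                                           # first page
)


def citation_checklist(draft: str) -> list[str]:
    """Collect candidate citations so each one can be verified by a
    person before the document is filed."""
    return sorted(set(CITATION_RE.findall(draft)))


if __name__ == "__main__":
    # The case names and citations below are placeholders, not real authorities.
    draft = ("Plaintiff relies on Smith v. Jones, 925 F.3d 1291, and "
             "Doe v. Roe, 575 U.S. 320, for the proposition that ...")
    for cite in citation_checklist(draft):
        print(f"[ ] Verify: {cite}")
```

The output is a to-do list, not a verdict: each item still has to be run down in a reporter or a reliable research database, and the quoted language and holdings confirmed, before anything is filed.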

The opinion also notes that some courts now require affirmative disclosure of AI use in filings. Where such orders exist, failure to disclose can itself constitute a candor violation. Lawyers must know and follow the standing orders of every court where they practice.

Compliance Checklist

Based on Opinion 512, here is a practical checklist firms should work through.

Governance

  • Designate a partner or committee responsible for AI oversight
  • Adopt a written AI use policy
  • Maintain an approved-tools list; prohibit unapproved tools for client work
  • Review and update the policy at least annually

Vendor Diligence

  • For each approved tool, document security posture, data handling, and training data policies
  • Obtain contractual commitments on data retention and training use
  • Review SOC 2 or equivalent attestations
  • Reassess when vendors materially change terms

Competence and Training

  • Provide onboarding training for all users
  • Require annual refresher training
  • Cover hallucination risk, verification workflows, and confidentiality specifically
  • Document training completion

Confidentiality

  • Prohibit input of privileged or confidential information into non-enterprise tools
  • Default to tools with no-training contractual terms
  • Update engagement letters to address AI use
  • Obtain informed consent where required

Supervision

  • Require human review of all AI output before filing or client delivery
  • Implement citation verification workflows for any AI-assisted research
  • Document the review step

Billing

  • Review and update fee agreements to reflect AI use
  • Prohibit hourly billing for phantom time
  • Disclose AI-related expense pass-throughs

Candor

  • Maintain awareness of court standing orders on AI
  • Require disclosure where mandated
  • Verify all citations before filing

Firms that build their programs around retrieval-grounded legal platforms such as Harvey AI and Casetext CoCounsel start from a stronger position on several of these items: grounding outputs in identified sources reduces (though does not eliminate) hallucination risk, and enterprise contracts typically include the data-retention and no-training commitments that the confidentiality analysis requires. The checklist still has to be worked through; the tools simply make more of its boxes checkable.

Frequently Asked Questions

Is Opinion 512 binding? ABA formal opinions are not binding in any jurisdiction. But they are highly persuasive, and state bars routinely adopt their reasoning. Treat it as the national baseline.

Does Opinion 512 prohibit any specific tool? No. It is technology-neutral. It prohibits misuse, not particular products.

Do I need client consent every time I use AI? Not necessarily. If the tool is configured to protect confidentiality and the use is within the scope of the engagement, consent may be implied. When in doubt, obtain explicit consent.

What about free public chatbots for non-client work? Personal use, brainstorming, and non-client work are generally not covered by Rule 1.6. But caution is warranted because habits formed in personal use often transfer to client work.

How does this interact with Mata v. Avianca? Mata is the cautionary tale; Opinion 512 is the prescriptive guidance. Read both.

What if a client demands that I use AI to lower costs? You can use AI to lower costs, but the work still must meet the standard of competent representation. You cannot cut corners that compromise quality.

Does the opinion address AI in contract review specifically? Not by task. The principles apply across all practice areas.

Opinion 512 is not the last word on legal AI ethics; further guidance is inevitable as the technology evolves. But for now, it is the indispensable starting point. Read it, apply it, and document the fact that you did.
