Legal · Updated April 12, 2026

How Should Lawyers Cite AI-Generated Content?

Short Answer

AI-generated content is usually not citable authority and should not appear as a citation in a brief. When an AI tool is used to assist with drafting or research, disclosure requirements depend on the court and, increasingly, on the individual judge's standing order. The Bluebook's 22nd edition added a rule for citing generative AI outputs when they are themselves the source, but the right answer for most filings is to cite the underlying authority, not the AI.

Full Answer

The question of how to cite AI in legal writing conflates two different issues. The first is whether to disclose that AI was used in the preparation of a filing, which is a procedural question governed by court rules and standing orders. The second is whether to cite AI output as authority, which is a substantive question governed by the rules of citation and the nature of legal argument. The two have different answers and they need to be handled separately.

On disclosure, the landscape in 2026 is messier than it should be. There is no single national rule. Federal district courts and individual federal judges have been issuing standing orders since mid-2023, and the orders vary significantly. Some require a certification that AI was not used. Some require disclosure of which AI tools were used and on which portions of the filing. Some require the lawyer to certify that any AI-generated text was independently verified. Some impose no requirement at all but remind counsel of their Rule 11 obligations. The Fifth Circuit considered and rejected a circuit-wide rule in 2024; the Second Circuit has a pilot program in some chambers. State courts are even more varied. The practical implication: before filing in any court, pull the judge's standing order and read it carefully; if the order is silent, default to the conservative position of disclosing material AI use in a footnote.

Material AI use is the key phrase. Using an AI tool to check your grammar is not material; neither is using one to brainstorm an outline that you then completely rewrote. Using an AI to draft an argument or a statement of facts that substantially made it into the final filing is material, and the emerging norm is that material use should be disclosed if the court requires disclosure, and documented internally in any event. Keeping a simple log of which tool was used on which filing, with what prompts, protects you in two ways: it lets you respond accurately to any post-hoc inquiry, and it creates a record of the diligence you applied to any AI-assisted content.
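As a concrete illustration, the log described above can be as simple as a CSV file with a handful of fields. The following is a minimal sketch in Python; the field names, file name, and sample entry are hypothetical, not any court's or bar's standard, and should be adapted to the firm's own protocol.

```python
import csv
import os
from datetime import date

# Hypothetical minimal AI-use log. The field names and file name are
# illustrative only; adapt them to the firm's own documentation protocol.
LOG_FIELDS = ["date", "matter", "filing", "tool", "use", "prompt_summary", "verified_by"]

def log_ai_use(path: str, entry: dict) -> None:
    """Append one AI-use entry to a CSV log, writing a header row if the file is new."""
    new_file = not os.path.exists(path) or os.path.getsize(path) == 0
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=LOG_FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow(entry)

# Example entry (all values hypothetical).
log_ai_use("ai_use_log.csv", {
    "date": date(2026, 4, 12).isoformat(),
    "matter": "Smith v. Jones",
    "filing": "Opposition to Motion for Summary Judgment",
    "tool": "ChatGPT (GPT-4o)",
    "use": "first draft of statement of facts, substantially revised",
    "prompt_summary": "summarize deposition excerpts re: notice",
    "verified_by": "A. Associate",
})
```

Even a log this simple answers the two questions most likely to be asked after the fact: which tool touched which filing, and who verified the output.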

On citation as authority, the strong default is: do not. AI-generated text is not authority and carries no weight in court. The reason to cite authority in a brief is to ground a legal proposition in something a judge can verify and rely on. An AI output, even a correct one, does not serve that purpose; it is closer to a hypothesis you formed during drafting than to a source. If the AI tells you that a particular case says a particular thing, your job is to pull the case, confirm the holding, and cite the case, not the AI. The Mata v. Avianca fiasco is the canonical example of what happens when lawyers forget this: the sanctioned attorneys had effectively cited ChatGPT, except that they hid it by dressing the ChatGPT output up in fake case citations, which is the worst possible version of the mistake.

That said, there is a narrow category in which the AI output is itself the source. If you are writing a law review article about how AI models answer legal questions, or an expert report analyzing the outputs of a specific tool in a case about AI behavior, you do need to cite the AI output directly. The Bluebook's 22nd edition (published 2025) added Rule 18.3 for this purpose, which requires citing the model name and version, the date the output was generated, and the prompt. The standard format is something like "ChatGPT, OpenAI (GPT-4o, Sept. 12, 2026) (response to prompt 'summarize the holding of Brown v. Board of Education')." The rule exists for when the AI output is the thing being analyzed, not for when the AI is a research tool. Conflating the two is a category error.

Practical tactics:

- Decide at the start of every matter whether AI will be used and, if so, how the firm will document it.
- Pull and save the standing order for every judge in every active case.
- Train associates to distinguish AI as a drafting assistant from AI as a source.
- Never let an AI-generated citation into a filing without independent verification.
- Consider including a short "AI use" protocol in the firm's litigation handbook that defines material use, documentation requirements, and disclosure defaults.
- Remember that the trend line is toward more disclosure, not less, so when in doubt, over-disclose rather than under-disclose. Judges are far more forgiving of lawyers who proactively explain their AI use than of lawyers who are caught hiding it.
