Does Malpractice Insurance Cover AI Mistakes?
Short Answer
Yes. Standard lawyer malpractice policies generally cover errors made with the assistance of AI tools the same way they cover errors made with any other technology, because the lawyer remains professionally responsible for the output. The caveats are that carriers are starting to ask about AI use during underwriting, some policies exclude unauthorized or negligent use of third-party tools, and careless AI use can raise questions about whether the lawyer exercised reasonable competence.
Full Answer
Malpractice carriers have been watching the legal AI story closely since 2023, and as of 2026 the coverage picture is clearer than many lawyers assume. The short version is that your existing policy almost certainly covers errors made with AI assistance, because the policy covers professional negligence, and the lawyer's professional responsibility does not evaporate when a tool is involved in the work. If you use ChatGPT to draft a brief, fail to verify a citation, and the false citation leads to sanctions and a client loss, your policy will generally respond to the resulting claim. If a Harvey-assisted due diligence review misses a critical contract term and the deal blows up, your policy will generally respond. The carriers are treating AI-assisted work like technology-assisted work more broadly: the tool is not the insured, the lawyer is.
That said, the carriers are not ignoring the AI transition. Several of the major legal malpractice insurers (ALPS, CNA, Lawyers Mutual, Lloyd's syndicates, Liberty) have added AI questions to their renewal underwriting questionnaires since 2024. The questions typically ask which AI tools the firm uses, whether there is a written AI use policy, whether staff have been trained, and whether there are procedures for supervising AI output. These are not gatekeeping questions; answering them will not by itself cause a denial of coverage. They are risk-assessment questions that affect pricing and that establish a baseline for later disputes. A firm that says "yes, we have a policy and training" will get better treatment at renewal than a firm that says "we don't know what our lawyers are using." Write the policy.
There are a few specific areas where coverage gets complicated. Intentional misconduct is always excluded, which matters in an AI context because carriers could theoretically argue that using AI recklessly (no verification, no supervision, pasting confidential client data into consumer tools) crosses from negligence into conscious disregard. The case law is thin, but early fact patterns in sanctioned AI cases suggest that courts have been willing to find gross negligence in particularly egregious situations. Gross negligence itself is usually still covered, but the insured's position weakens as the conduct worsens. The safe course is to use AI carefully enough that no reasonable person would call the use reckless.
Unauthorized use of third-party tools is another potential issue. Some policies contain language that limits coverage for claims arising from the use of technology not sanctioned by the firm. In the pre-AI era this language was rarely invoked, because lawyers were not secretly bringing unsanctioned software into the practice. In the AI era, the exact risk the language targets is an associate using free ChatGPT on client matters without the firm's knowledge. Firms that have a written AI policy and maintain a list of approved tools are in a better position both for compliance and for coverage disputes. If your firm's policy contains this kind of language, read it carefully and make sure your internal practices match.
Confidentiality breaches are covered differently from drafting errors. Most professional liability policies have separate provisions or sub-limits for data breach and privacy claims, and some firms carry separate cyber policies that sit alongside the professional liability coverage. An AI-related confidentiality breach (a lawyer pasting privileged material into a tool whose vendor trains models on user inputs, and the information later surfacing in another user's output) could implicate both the professional liability policy and the cyber policy, and the interaction between the two is not always clean. Firms with significant AI use should review both policies together and make sure the carriers know how to coordinate on an AI-related incident.
The duty to supervise is the doctrinal anchor. Model Rules 5.1 and 5.3 require lawyers to supervise subordinate lawyers and nonlawyer assistants, and the ABA has extended this analysis to AI tools (Formal Opinion 512, 2024). From a malpractice coverage perspective, the duty to supervise is what keeps the lawyer in the chain of responsibility, which is what makes the claim a covered professional error rather than a tool failure the carrier can disclaim. Embrace the duty rather than trying to escape it. The lawyer is the insured; the lawyer's supervision of AI output is what brings the claim inside the policy.
Practical guidance for firms:
- Write a short AI use policy, even if it is only two pages.
- Maintain a list of approved tools with a brief rationale for each.
- Train staff on the policy annually.
- Document AI use on material matters.
- Tell your carrier what you are doing at renewal, and ask for confirmation in writing that your AI practices are within the contemplated scope of coverage.
- Consider whether a cyber policy alongside the professional liability coverage is appropriate.

And remember that the biggest protection against an AI-related malpractice claim is not the insurance; it is the verification and supervision habits that keep the error from happening in the first place.
Related Questions
- Is it ethical for lawyers to use AI?
- Is legal AI confidential and attorney-client privileged?
- What are AI hallucinations in legal work and how to prevent them?