Lawyers are warning clients that AI chatbot conversations could be used against them in court
A federal judge ordered a fraud defendant to hand over 31 documents generated using Anthropic's Claude, prompting law firms to warn clients

A federal judge in New York has ruled that conversations with AI chatbots cannot be shielded from prosecutors under attorney-client privilege, according to Reuters, prompting more than a dozen major U.S. law firms to warn clients that their AI exchanges could be used against them in court.
The order, issued in February by U.S. District Judge Jed Rakoff of Manhattan, compelled Bradley Heppner — the former chair of bankrupt financial services company GWG Holdings and founder of Beneficient — to produce 31 Claude-generated documents tied to his fraud case. No attorney-client relationship exists "or could exist, between an AI user and a platform such as Claude," Rakoff wrote.
Last November, federal prosecutors charged Heppner with securities fraud, wire fraud, and related crimes, alleging that he used a shell company to divert money from GWG Holdings while serving as its chairman. Heppner pleaded not guilty.
Heppner had generated the documents by feeding case-related information into Claude and then sharing the resulting reports with his legal team, who contended that the exchanges deserved protection because they reflected attorney communications about his defense strategy. The government pushed back, arguing that because Heppner's attorneys played no direct role in generating the Claude outputs, and because chatbots fall outside the scope of attorney-client privilege, the documents were fair game. Rakoff also noted at a February hearing that Claude "expressly provided that users have no expectation of privacy in their inputs."
In the ruling's wake, law firms have been sending clients emails and posting advisories on their websites outlining steps to reduce the chances of AI chats surfacing in legal proceedings. "We are telling our clients: You should proceed with caution here," Alexandria Gutiérrez Swette, a lawyer at New York-based Kobre & Kim, told Reuters.
Sher Tremonte, a New York firm, went a step further by embedding a warning directly into client intake agreements, putting in writing that routing a lawyer's advice or communications through a chatbot risks waiving the confidentiality protections those communications would otherwise enjoy. Among the guidance circulating from firms including Los Angeles-based O'Melveny & Myers is the suggestion that enterprise-grade, closed AI platforms may carry fewer disclosure risks than consumer tools — though attorneys acknowledge the legal landscape around even those systems has yet to be tested in court.
Debevoise & Plimpton took the guidance further, posting a notice on its website recommending that anyone using AI for legal research under a lawyer's instruction flag that fact at the outset of the chatbot session — proposing language such as: "I am doing this research at the direction of counsel for X litigation."
A contrasting outcome emerged from Michigan, where U.S. Magistrate Judge Anthony Patti issued his own ruling that same day. Patti declined to order a pro se plaintiff to disclose her ChatGPT exchanges in an employment dispute against her former employer, concluding that the conversations qualified as her own litigation work product rather than as communications with an outside party.
According to Reuters, both OpenAI and Anthropic disclose in their terms of service that user data may be shared with outside parties.