On 10 February 2026, a judge of the Southern District of New York ruled that some 31 documents generated by a criminal defendant using Anthropic's consumer AI chatbot "Claude" were neither protected by attorney-client privilege nor shielded by the work product doctrine. The decision in United States v Heppner confirms what many legal experts had long suspected.
Bradley Heppner, a financial services executive charged with fraud, used Claude to research legal questions about the US government's investigation before his arrest. He prepared reports outlining potential defence strategies and legal arguments and later shared them with his attorneys. When the FBI executed a search warrant at his residence, agents seized electronic devices containing these documents. The prosecution sought, and the court granted, a ruling that the documents were not privileged.
The court reasoned that an AI chatbot is not a lawyer. It holds no licence, owes no duty of loyalty and cannot form an attorney-client relationship. Anthropic's privacy policy for the type of account used by Heppner permitted the use of inputs for model training and allowed disclosure to third parties, so the communications could not be considered confidential. Sending pre-existing, non-privileged documents to a lawyer after the fact does not retroactively cloak them with privilege. The work product doctrine failed, as Heppner created the documents on his own initiative, not at his lawyers' direction.
A South African court would almost certainly reach the same conclusion. The Constitutional Court in Thint v NDPP confirmed that legal professional privilege requires, among other things, that the communication be with a legal adviser acting in a professional capacity, that it be made in confidence, and that its purpose be to obtain legal advice. An AI tool satisfies none of these requirements. South African courts have consistently confined privilege to communications with admitted attorneys, advocates or in-house legal advisers acting in their professional capacity and have explicitly declined to extend it to non-lawyers, even those with legal expertise in a particular field. Submitting information to a consumer platform that reserves the right to use and disclose that data would not meet the confidentiality requirement either.
That said, AI-generated documents can attract privilege in the right circumstances. The AI must, however, be used within the existing attorney-client relationship, not as a substitute for it. If an attorney uses an enterprise AI platform with contractual confidentiality protections to draft advice for a client, the AI is just a tool, like a word processor or research database, and all elements of privilege remain intact. Similarly, if a lawyer directs a client to use a specific tool to organise facts for the lawyer's litigation preparation, work product protection could apply. In-house counsel using a firm-deployed platform with data isolation to prepare legal memoranda would likely be on the same footing, provided they are acting in their professional legal capacity, per Mohamed v President of South Africa.
We pause to flag that privilege is not the only consideration for legal practitioners. The duty of client confidentiality is broader than privilege and operates independently of it. Privilege is a rule of evidence and a shield against compelled disclosure in proceedings. Confidentiality, by contrast, is an ongoing ethical and fiduciary obligation that endures even after the attorney-client relationship ends. The Code of Conduct under the Legal Practice Act requires all legal practitioners to maintain legal professional privilege and confidentiality regarding the affairs of present or former clients.
A practitioner who inputs client information into a consumer AI platform without adequate contractual safeguards risks breaching that duty, regardless of whether privilege is ever tested in court. It is also important to consider whether a particular tool respects the information barriers between matters and clients. A platform that combines inputs from multiple users, for example, could undermine the segregation of confidential information that underpins conflict management in multi-service practices. The question for practitioners is not just whether a document will be privileged at trial, but whether every tool in the chain of its creation meets the standards the profession demands.
AI and privilege are not incompatible, but the conversational interface of consumer AI tools creates a dangerous illusion of privacy. Typing a legal question into a chatbot feels like speaking to an adviser in confidence. It is not. Unless the platform is deployed with contractual confidentiality protections, as is often the case with paid-for profiles, the user is inputting information into a third-party commercial system that retains data and reserves broad rights to disclose it.
* Kim Rew is a partner at Webber Wentzel and a member of the firm’s AI specialist team, advising clients on emerging AI-related legal issues and potential risks. Tristan Marot is an innovation lawyer at Webber Wentzel Fusion.