April 15, 2026
A US judge ruled that a fraud defendant’s AI chats with Claude are not privileged
In a February ruling described as the first of its kind in the US, Judge Jed Rakoff found that Bradley Heppner’s conversations with Anthropic’s Claude about his legal exposure were covered by neither attorney-client privilege nor work-product protection, because an AI is not a lawyer and public AI platforms carry no confidentiality obligation.

TL;DR
- A US federal court ruling has determined that discussions with public AI chatbots about legal matters are not legally privileged.
- Judge Jed Rakoff ruled in United States v. Heppner that AI conversations are neither protected by attorney-client privilege nor the work-product doctrine.
- The ruling reasoned that an AI is not an attorney and that public AI platforms, such as Anthropic's Claude, have terms of service permitting data collection and disclosure.
- This decision has led numerous law firms to issue client advisories warning against using public AI for legal research or discussions.
- Recommendations include using private, closed AI systems under strict confidentiality agreements and consulting an attorney before using AI for any legal matter.