AI is once again on the witness stand.
In comments made during preliminary trial proceedings, Manhattan-based U.S. District Judge Jed S. Rakoff decided that documents prepared with an AI tool and then shared with an attorney are admissible in court and fall outside attorney-client privilege, Law360 reported.
The decision relates to the case of Beneficient CEO Bradley Heppner, who is charged with securities and wire fraud involving $150 million between 2018 and 2021. The financial services exec was indicted in November. Prior to his arrest, Heppner used Anthropic’s Claude chatbot to create 31 documents that he later shared with his defense attorney, evidence that was then seized by investigators. Federal prosecutors say the documents are fair game and should be regarded as a “work product” rather than a confidential legal strategy. They also note that the AI tool’s own usage policies do not guarantee confidentiality.
The defense argued that, even though the documents weren’t created by attorneys, they include information drawn from conversations with legal representatives that should be shielded. They added that, because the documents implicate the defense team itself, the evidence could create a conflict of interest between Heppner and his own representation. Rakoff said he saw no basis for claims of attorney-client privilege, but agreed the evidence could result in a witness-advocate conflict and a mistrial.
The comments shine a spotlight on an escalating conflict among AI developers, privacy watchdogs, and safety advocates.
AI executives, including OpenAI CEO Sam Altman, have suggested extending the kind of privilege that shields attorney-client and therapist-patient communications from court discovery to conversations with AI chatbots. Altman has argued that increasingly personal uses of AI assistants, including those designed for therapy or health advice, necessitate a reconsideration of which kinds of communications are granted legal privilege. Facing a multitude of lawsuits over copyright infringement and AI’s effects on mental health and youth safety, AI developers have fought to keep chatbot conversations behind the black box, even as many of their policies expressly permit government entities to view chat logs.
Meanwhile, chat history data has been essential in cases alleging misconduct and safety failures among AI’s big players. At the same time, privacy watchdogs have raised concerns about extensive data collection and storage by AI tools, which has in turn prompted AI developers to introduce measures that minimize chat history storage and let users chat incognito.
