A large language model is as free to read as you and I, a federal judge held Tuesday—unless that LLM’s creators didn’t pay for the books used to train that AI system.
Judge William Alsup’s Tuesday order turns aside part of a class-action lawsuit filed by book authors Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson against the AI firm Anthropic but agrees with one of their key claims. That means Alsup’s 32-page opinion could still prove expensive for the company behind the Claude series of AI models.
The most important part of Alsup’s ruling is that Anthropic has a fair-use defense for digitizing copies of the authors’ books that it purchased to train the San Francisco firm’s AI models.
Calling that an “exceedingly transformative” use, Alsup found that the authors had no more right to demand payment for it than to charge a human reader for learning from their writing.
“Everyone reads texts, too, then writes new texts,” he wrote. “But to make anyone pay specifically for the use of a book each time they read it, each time they recall it from memory, each time they later draw upon it when writing new things in new ways would be unthinkable.”
In a later paragraph, Alsup compared the plaintiffs’ argument to a complaint that “training schoolchildren to write well would result in an explosion of competing works.” He concluded: “This is not the kind of competitive or creative displacement that concerns the Copyright Act.”
This case, unlike many other recent lawsuits brought against the operators of AI platforms, did not involve any claims that Claude had recreated or recited any copyrighted works: “Authors do not allege that any infringing copy of their works was or would ever be provided to users by the Claude service.”
Alsup also found that Anthropic did nothing wrong in its original act of book digitization. The company purchased paperback copies of books, scanned and digitized their contents as if they were CDs being ripped to copy to an iPod, and then destroyed the printed originals.
“One replaced the other,” Alsup writes. “And, there is no evidence that the new, digital copy was shown, shared, or sold outside the company.”
(Contrast that with the ruling by a panel of judges on a different federal circuit court last September that the Internet Archive had no right to turn digital copies of books it had legally obtained and scanned into e-book loans.)
But Anthropic didn’t just buy books by the truckload; it also downloaded millions of unauthorized copies of books from online troves of pirated works to speed up training Claude, then kept those copies around just in case.
“Every factor points against fair use,” Alsup wrote. He found that the company offered no justification “except for Anthropic’s pocketbook and convenience.”
Anthropic’s comment to The Verge stuck to the positive parts of Alsup’s statement: “We are pleased that the Court recognized that using ‘works to train LLMs was transformative — spectacularly so.’”
This case does not address other ethical questions raised by the rise of AI that have resulted in litigation elsewhere. For example, many AI developers (reports have put Anthropic among them) have engaged in automatic scraping of sites for their content on a sufficiently widespread scale to inflict large server-bandwidth bills on the likes of Wikipedia.
The output of AI chatbots has also often led to copyright litigation.
In October, News Corp. sued Perplexity, alleging that its answers represented a “substitute product” for that conglomerate’s own work. In February, Thomson Reuters won a suit against a now-defunct startup called Ross Intelligence that had trained its AI service on the company’s Westlaw legal research platform to offer a competing service. Earlier in June, Disney and Universal sued the generative-AI image platform Midjourney for offering near-lookalike depictions of those studios’ copyrighted characters.
PCMag’s parent company Ziff Davis is also among the publishers pursuing litigation against AI platforms, having filed a lawsuit against OpenAI in April 2025 alleging it infringed Ziff Davis copyrights in training and operating its AI systems.
Rob Pegoraro, Contributor