Judge rules Anthropic did not violate authors’ copyrights with AI book training
Dario Amodei, Anthropic CEO, speaking on CNBC’s Squawk Box outside the World Economic Forum in Davos, Switzerland on Jan. 21st, 2025.

Gerry Miller | CNBC

Anthropic’s use of books to train its artificial intelligence model Claude was “fair use” and “transformative,” a federal judge ruled late on Monday.

Amazon-backed Anthropic’s AI training did not violate the authors’ copyrights since the large language models “have not reproduced to the public a given work’s creative elements, nor even one author’s identifiable expressive style,” wrote U.S. District Judge William Alsup.

“The purpose and character of using copyrighted works to train LLMs to generate new text was quintessentially transformative,” Alsup wrote. “Like any reader aspiring to be a writer.”

The decision is a significant win for AI companies as legal battles play out over the use and application of copyrighted works in developing and training LLMs. Alsup’s ruling begins to establish the legal limits and opportunities for the industry going forward.

CNBC has reached out to Anthropic and the plaintiffs for comment.

The lawsuit, filed in the U.S. District Court for the Northern District of California, was brought by authors Andrea Bartz, Charles Graeber and Kirk Wallace Johnson in August. The suit alleged that Anthropic built a “multibillion-dollar business by stealing hundreds of thousands of copyrighted books.”

Alsup did, however, order a trial on the pirated material that Anthropic put into its central library of content, even though the company did not use it for AI training.

“That Anthropic later bought a copy of a book it earlier stole off the internet will not absolve it of liability for the theft, but it may affect the extent of statutory damages,” the judge wrote.
