A U.S. judge has ruled that Anthropic’s use of books to train its Claude AI model was legal under copyright law. As reported by Reuters, the court said this training qualifies as fair use, handing the company a major win in its ongoing copyright lawsuit.
This ruling is the first in the U.S. to directly address whether using books to train generative AI models qualifies as fair use. It comes at a time when many authors are suing AI companies such as OpenAI, Meta, and Microsoft for using their copyrighted works without permission.
In a big boost to Anthropic, U.S. District Judge William Alsup said the company’s training of its Claude AI model was “exceedingly transformative.” He compared it to how a reader learns to become a writer.
“Like any reader aspiring to be a writer, Anthropic’s LLMs trained upon works not to race ahead and replicate or supplant them, but to turn a hard corner and create something different,” he said.
The judge agreed with Anthropic’s argument that its use of books helped build new and innovative technology. The company explained that its system didn’t just copy books; it studied them to extract information that cannot be copyrighted and applied that knowledge creatively.
Anthropic told the court that its training process supports creativity and is in line with copyright purposes.
An Anthropic spokesperson said the company was “pleased that the court recognized its AI training was ‘transformative’ and ‘consistent with copyright’s purpose in enabling creativity and fostering scientific progress.’”
The Anthropic copyright case is one of many lawsuits targeting AI companies, and it is the first to test the fair-use argument for training AI on books in court. The outcome suggests courts may accept that generative AI systems can legally learn from copyrighted works when the use is transformative rather than a direct copy.
While the ruling supported Anthropic’s training method, it criticized the company for saving pirated copies of over 7 million books in a “central library.” Judge Alsup said this part did violate copyright law, and it was not covered under fair use.
The court has ordered a trial in December to decide how much Anthropic must pay for this infringement. Under U.S. law, willful copyright infringement can lead to damages of up to $150,000 per work. This could result in a massive financial penalty for the company.
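To put those figures in perspective, here is a rough back-of-the-envelope calculation using only the numbers reported above (over 7 million books, up to $150,000 per work). This is purely illustrative of the theoretical statutory maximum; the actual award, if any, will be decided at the December trial and will almost certainly be far lower.

```python
# Illustrative calculation based on figures reported in this article.
works = 7_000_000        # pirated books the ruling says were stored
max_per_work = 150_000   # statutory cap for willful infringement, in USD

max_exposure = works * max_per_work
print(f"Theoretical maximum exposure: ${max_exposure:,}")
# Theoretical maximum exposure: $1,050,000,000,000
```

At the statutory cap, the exposure would exceed a trillion dollars, which is why even a small per-work award could still be a massive penalty at this scale.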
Judge Alsup rejected Anthropic’s claim that it didn’t matter whether it got the books from pirate websites. He wrote, “This order doubts that any accused infringer could ever meet its burden of explaining why downloading source copies from pirate sites that it could have purchased … was itself reasonably necessary to any subsequent fair use.”
This part of the ruling shows that even if AI training is allowed, companies must still source data legally. Pirated material cannot be justified under the fair use argument.
The latest development in the Claude AI lawsuit is thus a split decision: it endorses AI training as fair use while rejecting illegal methods of gathering the training data.