The $1.5B Anthropic Settlement: A Costly Victory That Changes Nothing for Writers

Following the historic settlement, what is the downside for writers and creatives?

Sep 8, 2025

The authors of approximately 500,000 books will receive payments of at least $3,000 per work following a landmark $1.5 billion settlement in a class action lawsuit brought against Anthropic by a group of authors.

While this historic settlement represents the largest payout in U.S. copyright law history, it falls short of being a meaningful victory for writers — instead, it's another example of how tech companies can afford to pay their way out of legal trouble.

Technology companies are aggressively collecting vast amounts of written content to train their large language models, which power AI systems like ChatGPT and Claude — the same systems that pose significant challenges to creative industries. These AI models improve with more data, but after harvesting most available online content, companies are facing a data shortage.

This led Anthropic, Claude's creator, to acquire millions of books from unauthorized "shadow libraries" for AI training. The resulting lawsuit, Bartz v. Anthropic, represents one of many legal challenges filed against companies including Meta, Google, OpenAI, and Midjourney over using copyrighted works for AI training without permission.

However, writers aren't receiving this settlement because their creative work was used to train AI systems — they're being compensated because Anthropic chose to illegally download books rather than purchase them legally. For a company that recently secured $13 billion in funding, this represents an expensive but manageable business cost.

The Legal Precedent That Favors Tech

In June, federal judge William Alsup delivered a significant ruling in favor of Anthropic, determining that training AI on copyrighted material is legally permissible. The judge argued that this application qualifies as "transformative" use under fair use doctrine — copyright law provisions that haven't been substantially updated since 1976.

"Like any reader aspiring to be a writer, Anthropic's LLMs trained upon works not to race ahead and replicate or supplant them — but to turn a hard corner and create something different," Judge Alsup explained.

The court's focus remained on the method of acquisition — the unauthorized downloading — rather than the AI training itself. By settling, Anthropic has eliminated the need for a trial on those remaining claims.

"Today's settlement, if approved, will resolve the plaintiffs' remaining legacy claims," stated Aparna Sridhar, Anthropic's deputy general counsel. "We remain committed to developing safe AI systems that help people and organizations extend their capabilities, advance scientific discovery, and solve complex problems."

Setting a Troubling Precedent

As numerous similar cases involving AI and copyright proceed through the courts, judges now have Bartz v. Anthropic as a reference point. The precedent established here could influence how future cases are decided, though other courts may reach different conclusions.

The settlement essentially allows Anthropic to resolve its legal issues while continuing practices that many writers view as harmful to their profession. The company can now point to this case as evidence that AI training on copyrighted works has judicial approval, even as the broader creative community continues to grapple with the implications of AI systems trained on their work.