News
September 18, 2025

California Court Halts $1.5 Billion Settlement Between Anthropic and Authors

An atypical dispute, rooted in extraordinary facts and in the specific context of U.S. copyright litigation

On September 5, 2025, artificial intelligence company Anthropic announced a proposed $1.5 billion class action settlement to resolve claims that it had unlawfully used copyrighted books to train its large language models. The settlement, filed in Bartz et al. v. Anthropic before the United States District Court for the Northern District of California, was intended to compensate authors and publishers whose works had allegedly been copied and ingested into Anthropic’s training datasets. The named plaintiffs – Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson – brought the action on behalf of a putative class of authors and rightsholders.

However, on September 8, 2025, at the preliminary approval hearing, Judge William Alsup declined to grant approval of the deal at this stage. The court expressed concerns regarding the adequacy of the settlement and required further clarification on several key issues: the completeness and accuracy of the “Works List” identifying eligible titles, the notice procedures for potential class members, the treatment of co-rightsholders, and the allocation methodology between authors and publishers. Judge Alsup set a further hearing for September 25, 2025, ordering the parties to cure these deficiencies in the interim (see Judge Alsup’s Order of September 8, 2025).

The scale of the alleged infringement is striking. According to the complaint, Anthropic downloaded more than 7 million pirated ebook files from sites such as LibGen and PiLiMi, creating a permanent internal library later used for model training. Not all of these works qualify for compensation under the proposed settlement. After removing duplicates and applying strict eligibility criteria – including the requirement that each work have an ISBN or ASIN and be timely registered with the U.S. Copyright Office – approximately 465,000 to 500,000 titles remained potentially eligible. The proposed distribution would amount to roughly $3,000 per work, subject to attorneys’ fees, administrative costs, and contractual allocations between authors and publishers.
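The per-work figure follows directly from dividing the proposed fund by the estimated number of eligible titles. A back-of-the-envelope sketch, using only the figures reported above (the $1.5 billion fund and the 465,000–500,000 eligible-title range), confirms the rough $3,000 gross recovery per work:

```python
# Sanity check of the per-work recovery figure, using the article's numbers.
# This is a gross figure: attorneys' fees, administrative costs, and
# author/publisher allocations would reduce the amount actually paid out.

SETTLEMENT_FUND = 1_500_000_000  # proposed gross settlement fund, USD

def gross_per_work(eligible_titles: int) -> float:
    """Gross recovery per eligible work, before fees and costs."""
    return SETTLEMENT_FUND / eligible_titles

for n in (465_000, 500_000):
    print(f"{n:,} titles -> ${gross_per_work(n):,.0f} per work (gross)")
```

At the upper end of the eligibility estimate (500,000 titles) the gross figure is exactly $3,000; at the lower end (465,000 titles) it rises to roughly $3,226.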

The litigation is also unusual because of its procedural posture under U.S. copyright law. The doctrine of fair use, codified in Section 107 of the Copyright Act, plays a central role. Judge Alsup has noted that the use of lawfully acquired books for training purposes might arguably fall within fair use if sufficiently transformative. By contrast, the wholesale ingestion and retention of pirated copies cannot be justified under this doctrine. The case proceeds as a federal class action, a mechanism allowing hundreds of thousands of rightsholders to be represented collectively, subject to court certification and judicial oversight of any settlement under Rule 23(e) of the Federal Rules of Civil Procedure. Finally, the U.S. copyright system provides for statutory damages of up to $150,000 per work for willful infringement. Applied across hundreds of thousands of titles, this exposure created a substantial litigation risk for Anthropic and explains the magnitude of the proposed settlement.
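The magnitude argument can be made concrete with a simple upper-bound calculation. The sketch below multiplies the statutory ceiling for willful infringement by the upper end of the eligible-title estimate; it is an illustrative ceiling drawn from the figures in this article, not a damages model, since actual statutory awards are set within a range at the court's discretion:

```python
# Illustrative upper bound on statutory-damages exposure, using the
# article's figures: up to $150,000 per work for willful infringement,
# across roughly 500,000 potentially eligible titles.

MAX_STATUTORY_PER_WORK = 150_000   # USD, statutory ceiling for willful infringement
ELIGIBLE_TITLES = 500_000          # upper end of the eligible-works estimate
SETTLEMENT_FUND = 1_500_000_000    # proposed settlement amount, USD

theoretical_exposure = MAX_STATUTORY_PER_WORK * ELIGIBLE_TITLES
print(f"Theoretical maximum exposure: ${theoretical_exposure:,}")
print(f"Settlement as a share of that ceiling: {SETTLEMENT_FUND / theoretical_exposure:.0%}")
```

Even though no court would award the ceiling across the board, a theoretical exposure of $75 billion against a $1.5 billion settlement (2% of the ceiling) helps explain why settling was attractive despite the record-setting amount.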

From a European or French perspective, this case appears highly atypical. There is no equivalent to the open-ended fair use doctrine in EU copyright law, where exceptions and limitations are exhaustively listed and narrowly construed. European courts focus on actual damages suffered rather than punitive or statutory damages, and collective redress mechanisms remain far more limited in scope and financial impact than U.S. class actions.

The Anthropic settlement thus reflects a dispute that is both factually extraordinary – involving millions of pirated book files – and procedurally specific to the American legal system, with its unique combination of class action litigation, fair use jurisprudence, and statutory damages.

While the proposed settlement has been halted for now, the case highlights a broader trend: the growing wave of litigation against AI companies over the use of copyrighted works in training data. Even if the U.S. context is unique, the underlying issue – reconciling large-scale AI development with the rights of authors and publishers – is of global significance and will inevitably reach European courts under frameworks such as the EU’s Copyright Directive and the forthcoming AI Act.

For businesses deploying or relying on generative AI, the lesson is clear: contractual safeguards and compliance with copyright law are not optional but strategic imperatives.

Disclaimer: This article has been prepared by a French attorney. For any in-depth legal analysis of this U.S. class action and its implications, professional advice should be sought from a qualified U.S. lawyer.

Vincent FAUCHOUX