News
10/7/25

The European Union’s Code of Practice for General-Purpose AI Models: A Detailed Legal Analysis (July 2025)

Introduction: A Landmark Framework for Responsible AI Governance

On 8 July 2025, the European Commission, in conjunction with the AI Office established under Regulation (EU) 2024/1689 (the “AI Act”), published the Code of Practice for General-Purpose AI Models (the “Code”). This document, a key deliverable under Articles 53 and 55 AI Act, was developed collaboratively by GPAI (General-Purpose AI) model providers, academic experts, regulators, and civil society stakeholders.

Although non-binding, the Code serves as a soft law instrument designed to operationalize the high-level obligations imposed on GPAI providers by the AI Act. It sets out recommended compliance practices, particularly in the areas of transparency, copyright, and systemic risk management.

By fostering a harmonized approach to regulatory requirements, the Code aims to promote legal certainty while safeguarding innovation and fundamental rights within the Union. This article provides a comprehensive legal analysis of its provisions, focusing on their relationship with existing EU copyright law and the broader regulatory landscape.

1. Transparency Obligations: Codifying Article 53 AI Act Requirements

1.1 Legal Basis: Article 53(1)(a) and (b) AI Act

The first chapter of the Code translates the transparency requirements of Article 53(1), points (a) and (b), and Annexes XI and XII of the AI Act into practical measures for GPAI providers. These obligations aim to ensure that key technical and operational information about GPAI models is accessible to:

  • downstream providers using GPAI models as inputs for AI systems or components,
  • the AI Office and national competent authorities for oversight purposes.

1.2 Model Documentation: Scope and Duration

Measure 1.1 obliges Signatories to prepare and maintain comprehensive Model Documentation for at least ten years after a GPAI model is placed on the market. This documentation must detail:

  • Model architecture and specifications;
  • Training methodologies and dataset provenance;
  • Energy usage during training and inference phases.

Such disclosures are crucial for enabling downstream compliance and facilitating regulatory audits.
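By way of illustration, the documentation and retention duty described above can be pictured as a structured record. The following sketch is purely illustrative: the field names are hypothetical and do not reproduce the official Annex XI/XII templates.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical sketch of a Model Documentation record; field names are
# illustrative, not the official Annex XI/XII template.
@dataclass
class ModelDocumentation:
    model_name: str
    architecture: str
    training_data_sources: list[str]
    training_energy_kwh: float
    placed_on_market: date

    def retention_deadline(self) -> date:
        # Measure 1.1: keep documentation for at least ten years
        # after the model is placed on the market.
        return self.placed_on_market.replace(year=self.placed_on_market.year + 10)

doc = ModelDocumentation(
    model_name="example-gpai-7b",
    architecture="decoder-only transformer",
    training_data_sources=["licensed corpus", "public web crawl"],
    training_energy_kwh=1.2e6,
    placed_on_market=date(2025, 8, 1),
)
print(doc.retention_deadline())  # 2035-08-01
```

A machine-readable record of this kind also simplifies responding to proportionate information requests from the AI Office or downstream providers.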

1.3 Confidentiality and Trade Secret Protection

While Article 53(2) AI Act requires proportionality in information requests, Article 78 AI Act underscores that authorities receiving Model Documentation must respect trade secrets and intellectual property rights. This balance reflects the EU’s intention to avoid disincentivizing innovation while enforcing compliance.

2. Copyright Compliance: Aligning AI Practices with EU IP Law

2.1 Legal Basis: Article 53(1)(c) AI Act and Directive (EU) 2019/790

The Copyright Chapter of the Code directly addresses one of the most contentious legal questions in AI governance: the use of copyrighted material in training GPAI models and the risk of infringing outputs. Article 53(1)(c) AI Act requires GPAI providers to put in place a policy to comply with Union copyright law and, in particular, to identify and comply with rights reservations expressed by rightsholders.

This obligation complements the framework of Directive (EU) 2019/790 on copyright and related rights in the Digital Single Market (DSM Directive). Notably, Article 4(3) DSM Directive allows rightsholders to exclude their works from text and data mining (TDM) operations via machine-readable opt-outs.

2.2 Key Measures and Their Legal Significance

Measure 1.1: Adoption of a Copyright Policy

GPAI providers must establish and enforce a written copyright policy within their organizations. This policy should define governance structures, assign responsibilities, and ensure ongoing compliance.

Measure 1.2: Ensuring Lawful Data Use

This measure mandates that Signatories use only lawfully accessible data for training purposes. Providers must not circumvent effective technological protection measures within the meaning of Article 6(3) of Directive 2001/29/EC (the InfoSoc Directive) and must refrain from scraping websites that persistently infringe copyright.

This provision aligns with CJEU case law emphasizing the protection of rightsholders’ exclusive economic rights, beginning with Infopaq International A/S v. Danske Dagblades Forening (C‑5/08) and confirmed in subsequent decisions.

Measure 1.3: Respecting Rights Reservations

GPAI providers are required to detect and comply with machine-readable rights reservations. Examples include restrictions implemented through the Robot Exclusion Protocol (robots.txt) or embedded metadata.

The Code anticipates the development of EU-wide standards for expressing reservations, a critical step for ensuring interoperability and predictability across jurisdictions.
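In practice, honoring a robots.txt opt-out before crawling content for text and data mining can be done with Python's standard-library parser. The crawler name "AcmeAIBot" below is a hypothetical user-agent, and the robots.txt content is a made-up example.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt: the site opts out of crawling by "AcmeAIBot"
# while permitting other crawlers.
robots_txt = """\
User-agent: AcmeAIBot
Disallow: /

User-agent: *
Disallow:
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# A TDM crawler should check the rules before fetching any page.
print(parser.can_fetch("AcmeAIBot", "https://example.com/article"))  # False
print(parser.can_fetch("OtherBot", "https://example.com/article"))   # True
```

Of course, robots.txt is only one channel for expressing reservations; embedded metadata and emerging EU-wide standards would require additional detection logic.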

Measure 1.4: Preventing Infringing Outputs

To mitigate the risk of infringing outputs, providers must:

  • Embed safeguards during model development;
  • Include prohibitions on infringing uses in their acceptable use policies;
  • Implement technical measures to detect and block infringing generations where feasible.

This approach echoes the proactive obligations under Article 17 of the DSM Directive imposed on online content-sharing service providers.
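As a toy illustration of an output-side safeguard, a provider might screen generations against known protected content before returning them. Real systems rely on content classifiers or fingerprint matching; the substring denylist below is only a minimal stand-in, and the denylist entry is hypothetical.

```python
# Toy sketch of an output-side safeguard (Measure 1.4). Production systems
# use classifiers or fingerprinting, not a literal substring denylist.
DENYLIST = ["verbatim excerpt of a protected lyric"]  # hypothetical entry

def filter_output(generation: str) -> str:
    for fragment in DENYLIST:
        if fragment.lower() in generation.lower():
            # Withhold the generation rather than emit protected content.
            return "[output withheld: potential copyright issue]"
    return generation

print(filter_output("Here is an original summary."))
```

Such a filter would sit alongside the contractual prohibitions in the provider's acceptable use policy rather than replace them.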

Measure 1.5: Rightsholder Engagement and Remedies

Providers must maintain a point of contact for copyright inquiries and complaints, ensuring effective redress mechanisms for alleged infringements.

2.3 Legal and Practical Implications for GPAI Providers

These copyright measures reflect a growing expectation that AI developers proactively align their practices with EU intellectual property law. Failure to do so risks enforcement action under both the AI Act and existing copyright directives.

Moreover, by requiring providers to consider not only the legality of training datasets but also the downstream uses of their models, the Code creates a chain of responsibility reminiscent of the doctrine of contributory infringement under EU law.

3. Systemic Risk Management: Operationalizing Articles 55–56 AI Act

3.1 Identifying and Mitigating Systemic Risks

The Safety and Security Chapter obliges Signatories to establish a robust framework for identifying, assessing, and mitigating systemic risks associated with GPAI models. This includes:

  • Continuous monitoring and adversarial testing;
  • Privacy-preserving logging of interactions;
  • Incident reporting obligations under Articles 75 and 91 AI Act.
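The privacy-preserving logging point above can be sketched concretely: interaction logs can support incident analysis without retaining raw user identifiers, for instance by salting and hashing them and recording only a coarse prompt category. Field names and the salt are illustrative assumptions, not prescribed by the Code.

```python
import hashlib
import json
from datetime import datetime, timezone

# Hypothetical deployment secret; in practice stored and rotated securely.
SALT = b"rotate-me-regularly"

def log_interaction(user_id: str, prompt_category: str) -> str:
    # Salted hash so the log cannot be trivially reversed to an identifier.
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": hashlib.sha256(SALT + user_id.encode()).hexdigest()[:16],
        "category": prompt_category,  # coarse category, not the raw prompt
    }
    return json.dumps(record)

entry = log_interaction("alice@example.com", "code-generation")
print(entry)
```

The salted hash still allows correlating repeated interactions from the same account during an incident investigation, while keeping raw identifiers out of the log.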

3.2 Governance and Oversight

The establishment of a dedicated “AI & Copyright” unit within the European AI Office is anticipated to oversee adherence to these risk management obligations.

Conclusion: A Step Toward Responsible AI Development

The Code of Practice for General-Purpose AI Models represents a significant step in defining the operational parameters of GPAI providers under the AI Act. Although not legally binding, it serves as an interpretative aid and de facto standard for assessing compliance.

For AI developers and copyright holders alike, the Code underscores a paradigm shift: from reactive enforcement to proactive governance, balancing innovation with the protection of fundamental rights and intellectual property.

Vincent FAUCHOUX
Image by Canva