News
11/9/25

Article 50 of the AI Act: the European consultation on transparency requirements (September 2025)

On 2 September 2025, the European Commission, through the Artificial Intelligence Office of DG CONNECT, published a targeted consultation on the implementation of the transparency obligations set out in Article 50 of Regulation (EU) 2024/1689, the “AI Act”, which entered into force on 1 August 2024 and will apply as of 2 August 2026. As the document expressly states, “this text is prepared for the purpose of consultation and does not prejudge the final decision”, but it will serve as the basis for future guidelines and a Code of Practice on the detection and labelling of AI-generated or manipulated content.

The context of Article 50 is crucial. The Regulation aims to “promote innovation and the uptake of AI, while ensuring a high level of protection of health, safety and fundamental rights, including democracy and the rule of law”. Within this framework, transparency obligations are decisive: they ensure that natural persons can recognise when they are interacting with an AI system or when they are exposed to artificial content. As the Commission underlines, these obligations are designed to “reduce the risks of impersonation, deception or anthropomorphisation and foster trust and integrity in the information ecosystem”.

Article 50 sets out several obligations:

  • Paragraph 1: providers of interactive AI systems must inform users that they are interacting with a machine, unless this is “obvious from the point of view of a natural person who is reasonably well-informed, observant and circumspect, taking into account the circumstances and the context of use”.
  • Paragraph 2: providers of generative AI systems must “mark such content in a machine-readable manner and enable related detection mechanisms”. The recitals cited in the consultation specify that techniques may include “watermarks, metadata identifications, cryptographic methods for proving provenance and authenticity of content, logging methods, fingerprints, or a combination of such techniques”.
  • Paragraph 3: deployers of emotion recognition or biometric categorisation systems must inform the natural persons exposed to them about their operation, subject to exceptions for systems permitted by law for the detection and prevention of criminal offences.
  • Paragraph 4: systems generating deep fakes or text intended to “inform the public on matters of public interest” must clearly disclose their artificial origin, except in limited cases (artistic, satirical or fictional works; human-reviewed editorial content).
  • Paragraph 5: information must be provided “in a clear and distinguishable manner at the latest at the time of the first interaction or exposure”, while complying with accessibility requirements.
  • Paragraph 6: these obligations complement the provisions applicable to high-risk systems and must be read in conjunction with other transparency requirements under Union or national law.
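The machine-readable marking and detection mechanisms required by paragraph 2 can be illustrated with a minimal sketch. The AI Act does not mandate any specific format, and real deployments rely on techniques such as watermarking or C2PA-style provenance manifests; the toy example below (all field names are hypothetical, not drawn from any standard) simply pairs generated content with a JSON marker keyed to a content hash, together with a matching detection check.

```python
import hashlib
import json

# Illustrative sketch only: Article 50(2) requires machine-readable marking
# of AI-generated content, but prescribes no format. Field names below are
# hypothetical; real systems would use a recognised provenance standard.

def mark_ai_content(content: bytes, generator: str) -> dict:
    """Produce a machine-readable marker for AI-generated content."""
    return {
        "ai_generated": True,
        "generator": generator,  # hypothetical identifier of the AI system
        "sha256": hashlib.sha256(content).hexdigest(),
    }

def detect_marker(content: bytes, marker: dict) -> bool:
    """Check that a marker matches the content it claims to describe."""
    return (
        marker.get("ai_generated") is True
        and marker.get("sha256") == hashlib.sha256(content).hexdigest()
    )

if __name__ == "__main__":
    text = "Synthetic news summary ...".encode()
    marker = mark_ai_content(text, "example-model")
    print(json.dumps(marker, indent=2))
    print(detect_marker(text, marker))      # matches: True
    print(detect_marker(b"tampered", marker))  # altered content: False
```

A hash-keyed sidecar like this is fragile (any re-encoding of the content breaks the link), which is precisely why the consultation asks which techniques, from watermarks to cryptographic provenance, are the most robust.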

The consultation therefore raises practical and still uncertain questions: in which cases can interaction with a system be considered “obvious”? Which marking and detection techniques are the most robust and accessible? How should individuals be effectively informed when exposed to emotion recognition? What criteria distinguish a misleading deep fake from a manifestly creative or satirical work?

The Commission also recalls that under Article 96(1)(d) it must issue guidelines on the practical implementation of transparency obligations, and that Article 50(7) empowers it to “encourage and facilitate the drawing up of codes of practice” to ensure effective compliance. The stated aim is clear: to translate legal requirements into technical standards and harmonised operational practices.

For operators, the summer 2026 deadline requires immediate anticipation. Companies must already plan technical solutions for marking, detection mechanisms and user-facing disclosure procedures. Otherwise, they face a dual risk: regulatory sanctions under the AI Act and parallel litigation under consumer law, data protection law or media law.

The full consultation document, published in September 2025: Stakeholder consultation on transparency requirements (PDF).

Vincent FAUCHOUX