On 2 September 2025, the European Commission, through the Artificial Intelligence Office of DG CONNECT, published a targeted consultation on the implementation of the transparency obligations set out in Article 50 of Regulation (EU) 2024/1689, the “AI Act”, which entered into force on 1 August 2024 and will apply as of 2 August 2026. As the document expressly states, “this text is prepared for the purpose of consultation and does not prejudge the final decision”, but it will serve as the basis for future guidelines and a Code of Practice on the detection and labelling of AI-generated or manipulated content.
The context of Article 50 is crucial. The Regulation aims to “promote innovation and the uptake of AI, while ensuring a high level of protection of health, safety and fundamental rights, including democracy and the rule of law”. Within this framework, transparency obligations are decisive: they ensure that natural persons can recognise when they are interacting with an AI system or when they are exposed to artificial content. As the Commission underlines, these obligations are designed to “reduce the risks of impersonation, deception or anthropomorphisation and foster trust and integrity in the information ecosystem”.
Article 50 sets out several obligations:
- providers of AI systems intended to interact directly with natural persons must inform those persons that they are interacting with an AI system, unless this is obvious from the circumstances;
- providers of systems generating synthetic audio, image, video or text content must ensure that outputs are marked in a machine-readable format as artificially generated or manipulated;
- deployers of emotion recognition or biometric categorisation systems must inform the natural persons exposed to them;
- deployers of systems generating or manipulating deep fakes must disclose that the content has been artificially generated or manipulated, with adapted arrangements for evidently artistic, creative or satirical works;
- deployers publishing AI-generated text to inform the public on matters of public interest must disclose its artificial origin.
The consultation therefore raises practical and still uncertain questions: in which cases can interaction with a system be considered “obvious”? Which marking and detection techniques are the most robust and accessible? How should individuals be effectively informed when exposed to emotion recognition? What criteria distinguish a misleading deep fake from a manifestly creative or satirical work?
The Commission also recalls that under Article 96(1)(d) it must issue guidelines on the practical implementation of transparency obligations, and that Article 50(7) empowers it to “encourage and facilitate the drawing up of codes of practice” to ensure effective compliance. The stated aim is clear: to translate legal requirements into technical standards and harmonised operational practices.
For operators, the summer 2026 deadline leaves little time to prepare. Companies should already be planning technical marking solutions, detection mechanisms and user-facing disclosure procedures. Otherwise, they face a dual risk: regulatory sanctions under the AI Act and parallel litigation under consumer law, data protection law or media law.
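To make the machine-readable marking obligation concrete, the sketch below shows one possible approach: attaching a small JSON provenance manifest to an AI-generated asset, binding the disclosure to the content via a hash. This is purely illustrative; the field names (`ai_generated`, `generator`, `sha256`) are assumptions and do not reflect any schema mandated by the AI Act or the forthcoming Code of Practice, which may instead rely on standards such as C2PA.

```python
import hashlib
import json
from datetime import datetime, timezone

def make_provenance_manifest(content_bytes: bytes, generator: str) -> str:
    """Build a machine-readable disclosure record for an AI-generated asset.

    Illustrative only: field names are assumptions, not an official schema
    required by Article 50 or by any published Code of Practice.
    """
    manifest = {
        # Article 50(2)-style disclosure flag
        "ai_generated": True,
        # Identifier of the system that produced the content (hypothetical)
        "generator": generator,
        # Hash binds the disclosure to this exact content
        "sha256": hashlib.sha256(content_bytes).hexdigest(),
        # Timestamp of marking, in UTC
        "created": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(manifest, indent=2)

# Example: mark a (placeholder) synthetic asset
manifest = make_provenance_manifest(b"example synthetic image bytes",
                                    "example-model-v1")
print(manifest)
```

A detection mechanism on the receiving side could then recompute the hash of the content it holds and compare it against the manifest, flagging any mismatch as a broken or missing marking.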
The full consultation document, published in September 2025: Stakeholder consultation on transparency requirements (PDF).