News
27/3/25

Compliance of AI Systems with the GDPR: Issues, Penalties and Prospects

Lately, artificial intelligence (AI) publishers have been in the spotlight over their handling of personal data. AI systems based on machine learning often depend on massive volumes of data, much of it personal data. The General Data Protection Regulation (GDPR) imposes strict obligations of transparency, lawfulness and respect for individuals' rights, obligations that apply fully to generative AI technologies.

GDPR versus generative AI: the inevitable confrontation?

The GDPR is based on fundamental principles that apply to AI systems processing personal data. Their implementation in this context presents specific challenges linked to the nature of machine learning technologies.

1. Transparency

Article 5(1)(a) of the GDPR requires data processing to be carried out in a transparent manner. For AI systems, this means clearly informing individuals about the collection, use and purposes of their data. Achieving this transparency is particularly complex due to the inherent opacity of some algorithms (such as deep neural networks).

The onus is on the developers of generative AI to explain clearly how data shapes the decisions made by their models. Yet providing this transparency is difficult, both because the algorithms are so sophisticated and because of the need to preserve AI publishers' trade secrets and intellectual property.

2. Data Minimisation

According to Article 5(1)(c), only data necessary for the purpose of the processing operation should be collected. In AI, where models thrive on vast data sets, this principle may seem contradictory. Publishers must justify each piece of data they use and consider alternatives such as federated learning or synthetic data to reduce their dependence on personal data.
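The minimisation principle can be illustrated with a short sketch: before training, direct identifiers are dropped or pseudonymised and only the fields actually needed are kept. The record fields below are hypothetical, and pseudonymisation (unlike anonymisation) still leaves the result personal data under the GDPR.

```python
import hashlib

# Hypothetical raw records collected from users (illustrative field names).
raw_records = [
    {"name": "Alice Martin", "email": "alice@example.com", "age": 34, "listening_minutes": 520},
    {"name": "Bob Durand", "email": "bob@example.com", "age": 29, "listening_minutes": 310},
]

def minimise(record):
    """Keep only the fields needed for training and pseudonymise the identifier."""
    return {
        # A one-way hash replaces the direct identifier; this is pseudonymisation,
        # not anonymisation, so the GDPR still applies to the output.
        "user_id": hashlib.sha256(record["email"].encode()).hexdigest()[:12],
        "age": record["age"],
        "listening_minutes": record["listening_minutes"],
    }

training_set = [minimise(r) for r in raw_records]
```

In this sketch, the justification required of publishers maps to the explicit field list inside `minimise`: any field not defensible for the stated purpose simply never reaches the training set.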


3. Lawfulness of Processing

Article 6 of the GDPR requires a lawful basis for any processing, such as consent, performance of a contract or legitimate interest. For AI systems, obtaining explicit consent can be difficult, particularly when collecting data on a massive scale or re-using data. Legitimate interest, often invoked, requires a delicate balance with the rights of individuals, which can be challenged by data protection authorities.


4. User rights

Articles 15 to 22 of the GDPR guarantee rights such as access, rectification, erasure ("right to be forgotten") and objection. In AI, these rights pose technical challenges: removing data from a trained model, for example, may require costly retraining. Publishers need to design systems capable of responding to these requests, thereby respecting privacy by design. Implementing privacy by design in this way is, however, technically difficult and would entail a significant loss of revenue for AI publishers.
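As a sketch of what handling an Article 17 erasure request can involve in practice, the hypothetical store below deletes a user's raw records and flags every model trained on them for retraining (or targeted "machine unlearning"). All names and structures are illustrative, not a prescribed design.

```python
from dataclasses import dataclass, field

@dataclass
class TrainingStore:
    """Illustrative store linking raw training data to the models trained on it."""
    records: dict                                         # user_id -> list of raw records
    trained_models: dict = field(default_factory=dict)    # model_id -> set of user_ids used
    models_needing_retraining: set = field(default_factory=set)

    def erase_user(self, user_id: str) -> None:
        # Article 17: delete the raw data...
        self.records.pop(user_id, None)
        # ...and flag every model whose training set included this user,
        # since the data may persist implicitly in the model's parameters.
        for model_id, users in self.trained_models.items():
            if user_id in users:
                users.discard(user_id)
                self.models_needing_retraining.add(model_id)

store = TrainingStore(records={"u1": [{"plays": 12}], "u2": [{"plays": 7}]})
store.trained_models["reco-v1"] = {"u1", "u2"}
store.erase_user("u1")
```

Flagging rather than immediately retraining reflects the cost trade-off described above: retraining can be batched, but the obligation to honour the request is recorded at once.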

CNIL unveils its compliance recommendations 

In its recommendations on AI systems of 8 April 2024, the CNIL issued guidance to support AI publishers in their GDPR compliance, offering practical solutions to the challenges identified.

  • Clear purpose: The CNIL insists on defining an explicit purpose at the design stage, in accordance with Article 5(1)(b). This helps to limit abuses and inform users.
  • Terms of use: As good practice, the CNIL suggests identifying the foreseeable capabilities of the AI system that carry the greatest risk, and defining the terms of use accordingly, whether the system is distributed as open source, SaaS or via an API.
  • Impact assessment (DPIA): For high-risk systems, a data protection impact assessment (Article 35) is recommended to assess and mitigate the risks to individuals' rights.
  • Data minimisation: According to the CNIL's recommendations, "the principle of minimisation does not prevent an algorithm from being trained with very large volumes of data, but implies:
    • thinking ahead so that only personal data useful for the development of the system is used; and
    • subsequently implementing the technical means to collect only this data".
  • Data quality: Data must be accurate and relevant (Article 5(1)(d)). The CNIL recommends verification mechanisms, particularly for data from third-party sources.
  • Robust consent: Where consent is required (Article 7), it must be free, specific and revocable, which may require innovative user interfaces.
  • Accountability: The CNIL promotes the principle of "privacy by design" (Article 25) and recommends governance measures, including the appointment of a Data Protection Officer (DPO).
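The "robust consent" point above can be sketched as an append-only consent log in which withdrawal is as easy as granting (Article 7(3)) and the latest entry wins; the schema is purely illustrative.

```python
from datetime import datetime, timezone

class ConsentRegister:
    """Minimal sketch of a revocable, purpose-specific consent log (illustrative schema)."""

    def __init__(self):
        self._log = []  # append-only audit trail of consent events

    def grant(self, user_id, purpose):
        self._log.append({"user": user_id, "purpose": purpose,
                          "granted": True, "at": datetime.now(timezone.utc)})

    def revoke(self, user_id, purpose):
        # Article 7(3): withdrawing consent must be as easy as giving it.
        self._log.append({"user": user_id, "purpose": purpose,
                          "granted": False, "at": datetime.now(timezone.utc)})

    def has_consent(self, user_id, purpose):
        # The most recent event for this user/purpose determines the answer;
        # consent is specific to a purpose, never global.
        for entry in reversed(self._log):
            if entry["user"] == user_id and entry["purpose"] == purpose:
                return entry["granted"]
        return False
```

Keeping the full log, rather than overwriting a single flag, also serves accountability: the controller can demonstrate when consent was given and withdrawn.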

These recommendations underline the need for a proactive approach, which is essential if sanctions are to be avoided.

AI publishers in the crosshairs of European data protection authorities

Recent sanctions reveal the legal risks facing AI publishers and highlight the demands of regulatory authorities.

1. OpenAI: €15 million fine imposed by the Italian authorities

On 20 December 2024, the Italian data protection authority (Garante) fined OpenAI €15 million for breaches of the GDPR. The grievances included processing data without a legal basis (Articles 5(2) and 6 GDPR), lack of transparency (Articles 12 and 13 GDPR), production of inaccurate data (AI hallucinations) and failure to notify a data breach that occurred in March 2023 (Article 33 GDPR). In addition, OpenAI had not put in place adequate systems to protect minors from inappropriate content generated by its AI, i.e. a failure to verify the age of users (Articles 24 and 25(1) GDPR). OpenAI was required to launch an information campaign in Italy to make users aware of ChatGPT's data collection practices. In France, Mistral AI, the well-known French generative AI unicorn, is also the subject of a complaint to the CNIL. The complainants accuse the company of failing to obtain user consent for the training of its models (Article 7 GDPR) and of failing to provide clear information on the data collected (Article 12 GDPR).

These investigations show how difficult it is for generative AI, which must absorb massive amounts of data to function, to comply with the GDPR.

2. Clearview AI: Multiple sanctions in Europe

Clearview AI, a US facial recognition company, has been fined several times in Europe for collecting and processing biometric data without a legal basis. In France, the CNIL imposed a fine of €20 million in 2022, followed by a further penalty of €5.2 million in 2023 for failure to comply with its injunctions. The Dutch Data Protection Authority (Autoriteit Persoonsgegevens) fined Clearview AI €30.5 million on the same grounds, and fines were also imposed in Italy, Greece and the United Kingdom. These sanctions illustrate the severity with which European regulators treat breaches involving sensitive data. Clearview AI, based in Manhattan, refuses to comply with the European sanctions or even to pay the fines, and has not changed its behaviour since, engaging in a tug-of-war with European regulators. As a result, the Dutch regulator is considering imposing sanctions on, and holding personally liable, the company's directors, much as the French courts did in the Telegram case.

3. Spotify: Sanction in Sweden

In June 2023, the Swedish Data Protection Authority (IMY) fined Spotify €5 million for failing to provide sufficiently detailed information in response to user access requests. Spotify uses AI for its music recommendations.

Although Spotify did provide some information, it was unclear how the data was used by its recommendation algorithms.


What are the solutions? 

AI publishers therefore face increasing risks:

  • Tougher penalties: Fines could be increased, with more frequent checks.
  • New regulations: The AI Act supplements the GDPR with AI-specific rules. In its law implementing the AI Act, Spain has introduced fines of up to €35 million or 7% of the AI publisher's turnover.

To ensure compliance, AI players must:

  • Carry out data protection impact assessments for high-risk processing operations.
  • Incorporate the principle of privacy by design right from the development phase.
  • Provide clear and accessible information to users on data processing.
  • Put in place robust mechanisms to guarantee the exercise of users' rights.

By anticipating legal risks and taking a proactive approach, AI publishers can not only avoid costly penalties but also strengthen user confidence and their market position.

Striking a balance between essential compliance with the guarantees imposed by the GDPR and the need for European artificial intelligence systems to perform is a major challenge today. This challenge arises in a particularly tense geopolitical context, where each normative constraint on performance can significantly penalise the nations involved in the AI race.

When it was adopted in April 2016, the GDPR did not anticipate the rapid and massive emergence of generative AI, which is now accessible to the general public. These new technologies, which process huge volumes of personal data on a daily basis, now raise a key question: is the GDPR already on the way to becoming obsolete in the face of this major technological innovation?

If you are faced with these issues, the expert lawyers at Deprez Guignot et Associés, who specialise in GDPR and artificial intelligence law, will be able to provide you with expert support. 

Vincent FAUCHOUX / Benjamin KAHN