News
15/10/24

The AI Act: a successful alchemy combining legal rules and ethical standards?

Ethics (Late Latin ethica, morality, from Greek êthikon): noun; 1. The branch of philosophy that considers the foundations of morality; 2. The set of moral principles that form the basis of a person's conduct.

Artificial intelligence: noun; the set of theories and techniques used to create machines that function in a way similar to the human brain.

 

The European Regulation on Artificial Intelligence (the "AI Act"), in force since August 2024, establishes a legal framework for the ethical development and use of artificial intelligence (AI), with obligations phased in from 2025 onwards. But how does this regulation address ethical issues as part of a more responsible development and use of AI? In reality, the AI Act is not only rooted in the fundamental values of the European Union (EU); it treats ethics as an integral part of the legal framework, going far beyond simply inviting operators to self-regulate.

1. Discrimination and bias: the fight against AI practices deemed unacceptable

In 2018, a report1 revealed that Amazon's recruitment algorithm discriminated against female applicants. Trained on 10 years of internal data, the algorithm had learned from historical trends where men dominated, especially in technical positions. As a result, it systematically penalized resumes containing terms associated with women, such as references to women's schools or clubs, thus reproducing sexist biases favoring men for these positions.

How does the AI Act combat discrimination and bias?

  • Articles 5, 9 and 10 of the AI Act prohibit certain AI practices, notably those that exploit the vulnerabilities of specific groups or could lead to discrimination.
  • The Regulation encourages developers and users to assess potential risks, including the risk of discrimination. For high-risk AI systems, the AI Act imposes a risk management system requiring continuous assessment throughout their lifecycle. This system must identify, analyze and manage potential risks to health, safety or fundamental rights, ensuring that residual risks are acceptable and implementing measures to minimize them.
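To make the discrimination-risk assessment above concrete, here is a minimal sketch of one common fairness audit, the "four-fifths rule" selection-rate ratio. The AI Act does not mandate any particular metric; the function names, data and 0.8 threshold are illustrative assumptions, not part of the Regulation.

```python
# Illustrative bias audit: compare selection rates between two groups
# and compute their ratio. A ratio below 0.8 (the "four-fifths rule")
# is often treated as a red flag for discriminatory impact.
# All names and thresholds here are illustrative, not legal requirements.

def selection_rate(decisions):
    """Fraction of positive decisions (1 = shortlisted, 0 = rejected)."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(decisions_group_a, decisions_group_b):
    """Ratio of the lower group's selection rate to the higher one."""
    rate_a = selection_rate(decisions_group_a)
    rate_b = selection_rate(decisions_group_b)
    low, high = sorted([rate_a, rate_b])
    return low / high

# Hypothetical screening outcomes for two applicant groups.
men = [1, 1, 1, 0, 0, 1, 1, 0, 1, 0]    # 6/10 shortlisted
women = [1, 0, 0, 1, 0, 0, 1, 0, 0, 0]  # 3/10 shortlisted

ratio = disparate_impact_ratio(men, women)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.50, below the 0.8 threshold
```

In an Amazon-style scenario, a ratio like this surfacing during lifecycle monitoring is exactly the kind of signal a risk management system is meant to catch before deployment.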

2. Manipulating opinion and consent: stepping up the fight against disinformation

During the municipal elections in Delhi in early 2024, manipulated videos attributing false statements to political candidates circulated on social networks, necessitating the intervention of the Election Commission of India.

What does the AI Act recommend?

  • Article 50 imposes transparency obligations: companies must inform users when they interact with an AI system or when an image has been generated or manipulated by one. This information must be given to the user clearly and distinctly, stating that the content has been artificially created or manipulated, labeling the AI output accordingly and disclosing its artificial origin.

3. Invasion of privacy, image and personal data, risks of abuse

A US example of abuse is the accusation made in May 2024 by American actress Scarlett Johansson against OpenAI and its CEO, Sam Altman, of deliberately copying her voice without her consent after she had declined to lend it to the ChatGPT 4.0 system. Shortly after her public criticism, the company withdrew the voice. Episodes like this show the pressure already building toward a more controlled and safer use of AI.

What does the AI Act provide for?

  • Article 18 requires documentation retention. Suppliers must keep full documentation on high-risk AI systems, including information on data collection and use, to ensure transparency and compliance.
  • Here again, Article 50 of the AI Act on transparency acts as a shield: suppliers and deployers of all AI systems must guarantee transparency of processes and data use, by making this data accessible.

N.B. Personal data issues are mainly covered by the GDPR, with the exception of specific AI-related issues, which implies a cumulative approach to both regulations.

4. The absence of human supervision, an issue with significant consequences

Let's return to the example of Amazon's recruitment algorithm. Left to itself, the algorithm taught itself to systematically penalize women's CVs. Without human supervision, AI systems deployed in sensitive sectors such as healthcare, transport, recruitment or banking can cause serious security breaches. For example, an AI used to interact with customers could accidentally disclose personal or financial information to unauthorized third parties.

What are the requirements of the AI Act regarding human supervision of AI systems?

  • Whether an AI system is high-risk or not, responsibility extends to the entire AI value chain (Article 25). Certain high-risk AI systems (including biometric AI) must meet specific standards, and suppliers must undergo assessment (Article 43).
  • Human oversight is guaranteed in the AI Act by: codes of practice (Article 56); an AI Office overseeing the application of the Regulation (Article 64); and expert panels (Articles 67 and 68) advising on ethical, technical and societal issues.
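In practice, human oversight often takes the form of a human-in-the-loop gate: automated decisions are applied only above a confidence threshold, and everything else is escalated to a person. The threshold and labels below are illustrative assumptions, not requirements drawn from the AI Act.

```python
# Sketch of a human-in-the-loop gate: low-confidence automated
# decisions are routed to a human reviewer instead of being applied.
# The 0.9 threshold and the labels are illustrative assumptions.

def decide(score: float, threshold: float = 0.9):
    """Apply a decision automatically only when confidence is high."""
    if score >= threshold:
        return ("auto_approve", score)
    return ("human_review", score)  # escalate to a person

print(decide(0.95))  # ('auto_approve', 0.95)
print(decide(0.60))  # ('human_review', 0.6)
```

A gate like this is the smallest possible answer to the Amazon scenario: the system can still rank CVs, but no rejection becomes final without a human having the chance to intervene.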

 

Beyond the question of major theoretical ethical principles, the responsible use of AI raises eminently concrete, technical and complex issues. These will often take the form of technical standards imposed on operators, or voluntarily chosen by them as part of self-regulation.

These ethical principles, transcribed into technical standards (e.g. anonymization or pseudonymization of data), will have sufficient flexibility to adapt to the characteristics of each economic sector: one can readily accept that the technical standards guaranteeing the major ethical principles mentioned above will be very different for the military drone industry, the cosmetics industry, or the press. Fortunately, lawyers will always have a role to play in this process of containing AI (cf. Mustafa Suleyman's theory of containment) and of "customizing" ethical principles through evolving, sector-specific technical standards: that of drafting tailor-made ethical charters for the responsible use of AI, within a visionary European legal framework.
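To illustrate one such technical standard, here is a minimal pseudonymization sketch using keyed hashing (HMAC-SHA-256): with the key held separately, records can still be linked without exposing the raw identifier. The key, truncation length and function name are illustrative assumptions; a GDPR-grade pipeline would involve far more (key management, re-identification risk analysis, etc.).

```python
# Illustrative pseudonymization of a personal identifier via keyed
# hashing (HMAC-SHA-256). The key and the 16-character truncation
# are assumptions for readability, not a compliance recommendation.
import hmac
import hashlib

SECRET_KEY = b"replace-with-a-managed-secret"  # illustrative key

def pseudonymize(identifier: str) -> str:
    """Return a stable pseudonym for the given identifier."""
    digest = hmac.new(SECRET_KEY, identifier.encode("utf-8"),
                      hashlib.sha256)
    return digest.hexdigest()[:16]

# The same input always maps to the same pseudonym, so datasets can
# still be joined on the pseudonym without storing the identifier.
print(pseudonymize("jane.doe@example.com"))
print(pseudonymize("jane.doe@example.com"))  # identical output
```

The choice of a keyed hash over a plain hash matters: without the secret key, an attacker cannot simply re-hash a list of known emails to reverse the pseudonyms.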

Special thanks to Noa Arfi, DDG intern, for her invaluable help in preparing this article.

Vincent FAUCHOUX / Noa ARFI

1 Jeffrey Dastin, "Insight - Amazon scraps secret AI recruiting tool that showed bias against women", Reuters, October 11, 2018, https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G/

Image by Canva