News
10/6/24

Why is the June 4, 2024 appeal on the risks associated with Advanced AI addressed primarily to us, legal professionals?

Let’s first recall the purpose of this public warning letter (“A right to warn about Advanced Artificial Intelligence”), written by eminent AI specialists as well as anonymous former employees of OpenAI and Google DeepMind.

The signatories of this letter believe in the potential of AI technology to deliver unprecedented benefits to humanity. But they also highlight the serious risks posed by these technologies, from the further entrenchment of existing inequalities, to manipulation and misinformation, to the loss of control of autonomous AI systems, potentially resulting in human extinction. The signatories underline that AI companies themselves have acknowledged these risks, as have governments across the world and other AI experts.

The problem addressed by the signatories is the following: AI companies possess substantial non-public information about the capabilities and limitations of their systems, the adequacy of their protective measures, and the risk levels of different kinds of harm. However, they currently have only weak obligations to share some of this information with governments, and none with civil society. The signatories do not think the companies can all be relied upon to share it voluntarily.

The signatories of this warning letter therefore call upon advanced AI companies to commit to these four principles:

  1. That the company will not enter into or enforce any agreement that prohibits “disparagement” or criticism of the company for risk-related concerns, nor retaliate for risk-related criticism by hindering any vested economic benefit;
  2. That the company will facilitate a verifiably anonymous process for current and former employees to raise risk-related concerns to the company’s board, to regulators, and to an appropriate independent organization with relevant expertise;
  3. That the company will support a culture of open criticism and allow its current and former employees to raise risk-related concerns about its technologies to the public, to the company’s board, to regulators, or to an appropriate independent organization with relevant expertise, so long as trade secrets and other intellectual property interests are appropriately protected;
  4. That the company will not retaliate against current and former employees who publicly share risk-related confidential information after other processes have failed. Any effort to report risk-related concerns should avoid releasing confidential information unnecessarily.

Firstly, the authors of this warning letter are addressing lawmakers, emphasizing that they, alongside the general public and the scientific community, are in a position to mitigate three identified risks: the exacerbation of inequalities, the manipulation of information, and the loss of control over autonomous AI systems. It is therefore the law that can help limit the major risks generated by the exponential development of advanced artificial intelligence.

Secondly, this appeal is also addressed to legal professionals because it clearly recognizes that whistleblowing by employees of major AI system operators, which the letter advocates for, is likely to clash with intellectual property rights and trade secrets. Thus, a balance must be struck between the imperative to report significant risks to our society and the pragmatic necessity of protecting the intellectual property of innovative companies.

Thirdly, this letter serves as an additional call to legal professionals by highlighting the complexities of the task at hand. Indeed, such whistleblowing by employees of AI system operators comes with the challenge of understanding the contractual obligations they have committed to, particularly confidentiality obligations, non-disparagement clauses, and many other commitments.

In conclusion, reading this global warning document suggests that new laws and new agreements, enabling a new form of whistleblowing, will allow us to achieve the ambitious and probably essential goals set by its signatories.

Vincent FAUCHOUX