


On 22 October 2025, the Future of Life Institute (FLI) released a text barely thirty words long — yet its implications are profound, both politically and ethically:
“We call for a prohibition on the development of superintelligence, not lifted before there is broad scientific consensus that it will be done safely and controllably, and strong public buy-in.”
With this statement, the FLI and several thousand signatories (including Geoffrey Hinton, Yoshua Bengio, Steve Wozniak, Richard Branson, Mary Robinson, Meghan Markle, and Steve Bannon) call for a prohibition on the development of superintelligence until two cumulative conditions are met: (1) broad scientific consensus that such systems can be developed safely and controllably, and (2) strong public buy-in.
Unlike the March 2023 open letter that called for a six-month pause in the training of advanced AI systems, the Statement on Superintelligence advocates a conditional ban — not a temporary moratorium, but a global freeze on the pursuit of superintelligence until both scientific safety and public legitimacy are established.
The FLI defines superintelligence as systems capable of surpassing human intelligence across most cognitive domains. The risks identified extend far beyond issues of bias or misinformation: loss of human control, large-scale social manipulation, economic dislocation, and, in the most extreme view, an existential threat to humanity itself.
The statement does not specify who would be competent to assess these conditions, nor how scientific consensus or public approval would be determined.
From a legal standpoint, the use of the term “prohibition” implies a binding legal ban rather than a mere ethical recommendation. It points toward the creation of a normative instrument, at the national or international level, grounded in the principles of precaution and collective security.
Implementing such an approach would require, at a minimum, an authority competent to assess these conditions and an agreed method for establishing scientific consensus and public buy-in; as noted above, the statement provides neither.
The Statement on Superintelligence goes far beyond the scope of the EU Artificial Intelligence Act (AI Act), which relies on a risk-based framework and does not envisage any general prohibition on the development of “strong” AI.
While the AI Act imposes transparency, traceability, and risk-management obligations on general-purpose AI models, the FLI calls for a pre-emptive ban: a suspension of development until both scientific and societal validation have been achieved.
This concise but powerful text introduces a novel legal and philosophical concept: that of a democratic suspension of technological progress, pending societal assurance of safety and control.
Such an approach echoes existing international moratoria and prohibitions on sensitive technologies (nuclear testing, certain biotechnologies, chemical weapons), where scientific inquiry remains legitimate but its application is subject to collective oversight.
The Statement on Superintelligence thus opens a new chapter in the global debate: the emergence of a legal and ethical framework capable of pausing, in the name of prudence, one of humanity’s most transformative trajectories.

