EVENT

Statement on Superintelligence

The October 2025 Future of Life Institute statement calling for a conditional prohibition on superintelligence development—not to be lifted until scientific consensus on safety and strong public buy-in are established.

The Statement on Superintelligence was issued in October 2025 by the Future of Life Institute, marking the furthest escalation in Max Tegmark's policy trajectory. It called for a prohibition on the development of superintelligence—AI substantially exceeding human cognitive performance across essentially every domain—that would remain in place until two conditions were met: broad scientific consensus that such development could proceed safely and controllably, and strong public buy-in from the populations that would be affected. The Statement represented a qualitative departure from the 2023 Pause Letter's six-month moratorium framing. Where the Pause Letter sought time for safety research to catch up, the Statement asserted that time alone would not suffice—that superintelligence development must be foreclosed until specific, demanding conditions are satisfied.

In the AI Story


The escalation was deliberate. Each Tegmark-led policy intervention (the 2017 Asilomar Principles, the 2023 Pause Letter, the 2025 Statement) was calibrated to the capability landscape at the time of its articulation, and each was overtaken by events before implementation. The Statement reflected Tegmark's assessment that guidelines had failed, that voluntary restraint had failed, and that only a conditional prohibition could create the institutional space for wisdom to catch up in the wisdom race.

The two conditions carry enormous weight. Scientific consensus on safety requires the alignment problem to be solved, or at least solved well enough that residual risk falls below an acceptable threshold. No such consensus currently exists; the field remains divided on whether alignment is solvable at all, and if so, by what methods. Strong public buy-in requires that democratic populations understand the technology well enough to give informed consent—a condition also distant from current reality, given the state of public AI literacy and the pace of capability advancement.

The Statement's most radical implication is democratic. It introduces the principle that AI development should not proceed without the informed consent of affected populations—a principle that departs from the prevailing model in which developers determine pace and the public is consulted, if at all, after consequences become visible. Tegmark's position is that AI development consequences are too significant to be determined by developers alone, and that democratic legitimacy requires meaningful public participation.

The predictable objections arrived immediately: competitive dynamics (prohibition by some but not others incentivizes defection), innovation costs (delay prevents beneficial applications), inevitability (development will happen regardless). Tegmark's responses draw explicit parallels to the nuclear, chemical, and biological weapons regimes: imperfect, constantly tested, yet far more successful at constraining development than inevitability arguments predicted they could be.

Origin

The Statement was developed at FLI throughout 2025 in response to accumulating evidence that voluntary measures had failed and that capability was advancing faster than safety research. It was released in October 2025 with signatures from researchers, former AI executives, and public intellectuals. The Statement's specific framing—conditional prohibition rather than permanent halt—was designed to preserve the possibility of superintelligence development if and when the required conditions could be satisfied.

Key Ideas

Conditional prohibition. Development foreclosed until specific safety and democratic conditions are met, not permanently halted.

Scientific consensus requirement. The alignment problem must be solved to the satisfaction of the broader technical community.

Public buy-in requirement. Democratic populations must understand the technology and give informed consent to its development.

Escalation from 2023. Represents qualitative shift from time-bounded pause to condition-bounded prohibition.

Democratic innovation. Introduces affected-population consent as a requirement for transformative technology deployment.

Debates & Critiques

The Statement has been criticized as impractical (the conditions may be impossible to satisfy, effectively making prohibition permanent), strategic (designed by slower competitors to constrain faster ones), or insufficient (doesn't address current AI harms). Tegmark's defenders argue the conditions are demanding because the risks are severe, and that any adequate governance framework must be proportionate to the magnitude of potential consequences.


Further reading

  1. Future of Life Institute, "Statement on Superintelligence" (October 2025)
  2. Max Tegmark, interviews and public talks (2024–2025)
  3. Nick Bostrom, Superintelligence (2014): theoretical foundation
  4. Stuart Russell, Human Compatible (2019): alignment framework