Superintelligence Isn't Enough — Orange Pill Wiki

Superintelligence Isn't Enough

Fukuyama's October 2025 Persuasion essay — the most concise statement of his argument that the binding constraint on economic and political outcomes is not intelligence but implementation capacity.

Published in Persuasion in October 2025, "Superintelligence Isn't Enough" is Fukuyama's direct challenge to Silicon Valley's growth projections. The argument is compressed into a single claim: "The binding constraint on economic growth today is simply not insufficient intelligence or cognitive ability." Economic growth depends on the ability to build real objects in the real world, navigate institutional complexity, and engage in the iterative back-and-forth between policymakers and citizens that implementation requires. Intelligence scales easily in software. It does not scale easily in the material and social world, where the constraints are not cognitive but relational. The essay introduced the "three circles of policy" framework that Fukuyama developed more fully in his March 2026 follow-up, "What AI Hypists Miss."

In the AI Story


The essay's target was a specific claim that had become commonplace in AI-industry discourse: that sufficiently capable AI would produce extraordinary economic growth by automating cognitive work that had previously required human intelligence. Fukuyama challenged the claim on its own terms. Intelligence, he argued, is not the binding constraint on most economic activity. The binding constraints are material (the ability to produce and distribute physical goods), institutional (the capacity of regulatory and legal systems to function), and relational (the trust infrastructure that enables cooperation among strangers).

Each constraint is orthogonal to intelligence. A more intelligent AI does not produce more houses if zoning regulations and construction supply chains constrain housing production. A more intelligent AI does not produce better medical outcomes if healthcare delivery requires human judgment, patient relationships, and institutional coordination that no algorithm can substitute for. A more intelligent AI does not produce better governance if the governance process requires practical wisdom developed through embodied experience in specific institutional contexts.

The essay also registered a specific correction of Fukuyama's own earlier position. In 2023, he had dismissed AI existential risk concerns as "absurd." The 2025 essay was more guarded: "As I've learned more about what the future of AI might look like, I've come to better appreciate the real dangers that this technology poses." The shift was characteristic of Fukuyama's intellectual honesty — a willingness to revise publicly stated positions in response to new evidence — and it calibrated the argument. The essay was not optimistic about AI's benefits; it was skeptical about specific Silicon Valley growth projections. The skepticism left room for serious concern about the technology's disruptive effects, including the governance and trust challenges his 1995 framework had identified as the primary determinants of social outcomes.

The essay's reception was mixed. Silicon Valley commentators dismissed it as a misunderstanding of the pace at which AI capability was extending into material and social domains. Institutional economists and political scientists cited it as a long-overdue correction to technology-industry discourse that had treated intelligence as the master variable. The accuracy of Fukuyama's specific predictions will be tested empirically over the next decade. The framework he articulated — the distinction between cognitive and implementation constraints — has already shaped subsequent AI governance debates, including the development of the middleware proposal and broader institutional-innovation discussions.

Origin

The essay was published in Persuasion, the online publication founded by Yascha Mounk, in October 2025. It built on Fukuyama's decades of work on institutional capacity and political development, particularly the arguments in Political Order and Political Decay (2014) about the difficulty of building effective state capacity. The specific framing — AI intelligence versus implementation — responded to the acceleration of AI discourse following the November 2022 release of ChatGPT and the subsequent boom in Silicon Valley projections about AI-driven economic transformation.

Key Ideas

Binding constraint argument. Intelligence is not the primary constraint on economic and political outcomes.

Orthogonality of constraints. Material, institutional, and relational constraints operate independently of cognitive capacity.

Public correction. The essay registered Fukuyama's shift from dismissive skepticism of AI existential risk to guarded acknowledgment of real dangers.

Framework preview. The essay introduced the three-circles framework developed more fully in subsequent work.

Appears in the Orange Pill Cycle

Further reading

  1. Francis Fukuyama, "Superintelligence Isn't Enough" (Persuasion, October 2025)
  2. Francis Fukuyama, "What AI Hypists Miss" (Persuasion, March 2026)
  3. Daron Acemoglu and Simon Johnson, Power and Progress (PublicAffairs, 2023)
  4. James Scott, Seeing Like a State (Yale, 1998)
Part of The Orange Pill Wiki · A reference companion to the Orange Pill Cycle.