The Planning Fallacy at Machine Speed — Orange Pill Wiki
CONCEPT

The Planning Fallacy at Machine Speed

Flyvbjerg's diagnosis of AI's governance consequence: optimism bias and strategic misrepresentation operating too fast for the corrective feedback loops that contained them in prior decades to form.

The planning fallacy at machine speed names the specific governance crisis AI introduces: the ancient cognitive and political distortions Flyvbjerg has documented for decades continue to operate, but the timeline on which they unfold has compressed from years to days. In traditional megaproject management, the months or years between the plan and the completed project created an involuntary feedback loop — costs that were underestimated became visible when invoices arrived, benefits that were overestimated became apparent when usage fell short. The feedback was painful but corrective. AI compresses this loop to near-zero for certain categories of work. The prototype works. The demo is impressive. The plan appears vindicated. But the phronetic assumptions embedded in the plan — about user needs, market conditions, organizational capacity, distribution of costs and benefits — remain untested, and the speed of the technical validation creates cognitive momentum that the phronetic testing cannot match.

In the AI Story

The compression is asymmetric: technical implementation compresses; phronetic assessment does not. Phronesis resists compression because it is constitutively slow — the kind of context-dependent, value-laden, embodied judgment that requires sustained engagement to develop and sustained deliberation to exercise. When a team extrapolates from the techne phase (which AI accelerates dramatically) to the phronesis phase (which AI cannot accelerate at all), the extrapolation produces systematic underestimation of the remaining work.

Segal's recognition on the CES floor that the thirty days of building Napster Station had been the easy part illustrates the point. The phronetic work — the judgment about who the product serves, whether it serves them well, what it costs the people it does not serve, whether the trade-offs are defensible — had not even begun when the prototype was running. The more common response to a fast prototype is to mistake the prototype for the product, committing resources on the basis of a timeline extrapolated from techne to phronesis without recognizing that the two phases operate under fundamentally different temporal logics.

The governance implications are direct. Traditional megaproject timelines allow — in principle, if not always in practice — for oversight structures to form: review boards, stakeholder consultation, regulatory assessment. These structures are imperfect, and Flyvbjerg's record documents their frequent failure. But they represent the possibility of institutional correction — a second pair of eyes that might detect what the planner's optimism bias has concealed. When AI compresses execution from months to days, these governance structures cannot form in time. The product is deployed before the review has convened, before stakeholders have been consulted, before the consequences have been assessed by anyone other than the people who built it.

Flyvbjerg's prescription is not that AI-enabled projects be slowed down — a prescription that would be impractical and undesirable. The prescription is that the phronetic assessment must be deliberately decoupled from the technical timeline. The technical execution can proceed at machine speed. The judgment about whether the thing being executed deserves to exist, serves the right users, and distributes its costs and benefits justly must proceed at human speed — the speed of deliberation, consultation, and the slow, friction-rich process through which practical wisdom is exercised. The two processes must run in parallel, with phronetic assessment retaining the authority to redirect or halt technical execution when judgment warrants it.

Origin

Flyvbjerg articulated the framework in his 2025–2026 AI writings, extending his decades of planning fallacy research to the specific conditions of AI-accelerated development.

Key Ideas

Asymmetric compression. Technical implementation compresses dramatically under AI augmentation; phronetic assessment does not, because phronesis requires slow, situated engagement.

Feedback loop collapse. The traditional corrective mechanism — reality intruding on optimism over months of execution — cannot operate when execution takes days.

Governance structures cannot form. Review boards, stakeholder consultation, and regulatory assessment require time the AI timeline does not provide.

Dam placement. The prescriptive response is deliberate decoupling: technical execution at machine speed, phronetic assessment at human speed, with the latter retaining authority over the former.

Same bias, new velocity. The cognitive and political distortions are ancient; the speed is new; the combination is the most dangerous governance challenge of the AI transition.

Further reading

  1. Flyvbjerg, Bent. "AI as Artificial Ignorance." Project Leadership and Society, 2025.
  2. Flyvbjerg, Bent and Dan Gardner. How Big Things Get Done. Currency, 2023.
  3. Perrow, Charles. Normal Accidents. Princeton University Press, 1984.
Part of The Orange Pill Wiki · A reference companion to the Orange Pill Cycle.