The Technology of Foolishness — Orange Pill Wiki
CONCEPT

The Technology of Foolishness

March's 1971 concept of the organizational and individual practices that enable action without prior justification — the necessary complement to the technology of reason, and the capacity AI most dramatically threatens.

March argued in a deliberately provocative 1971 essay that one of the most important capabilities an organization could possess was the capacity to act without reasons — to do things that could not be justified by rational calculation, to pursue goals that had not yet been defined, to play. The argument proceeded from an observation about the limits of rational choice: the model assumes decision-makers have preferences, that preferences are consistent, and that decisions are made by selecting actions that best satisfy them. But in the most consequential decisions — about what to do with a life, what organization to build, what values to pursue — preferences are not given in advance. They are discovered through action. You do not first know what you want and then act to get it; you act, observe what happens, and discover what you wanted in retrospect. This discovery-through-action requires foolishness: the willingness to act before preferences are clear, to experiment without knowing what the experiment is testing, to play without knowing what the play is for.

In the AI Story

[Hedcut illustration: The Technology of Foolishness]

March positioned the technology of foolishness as the necessary complement to the technology of reason. The technology of reason is how organizations exploit: they identify goals, evaluate alternatives, select the best option, implement it. The technology of reason is indispensable and insufficient. An organization operating exclusively through the technology of reason will never discover goals it did not already have. It will optimize within its current framework forever, improving its performance on the current game while never discovering that the game has changed.

The technology of foolishness is how organizations explore. Not through structured innovation programs, which are the technology of reason dressed in exploration's clothing. Not through R&D budgets, which are rational investments in uncertain returns. Through genuine play — undirected, unjustifiable, often wasteful activity from which genuinely new ideas emerge. The foolish leader funds a project with no clear business case. The foolish engineer spends a week on an idea with no connection to the product roadmap. The foolish organization tolerates these behaviors, creates spaces for them, protects them from the relentless rationality of the exploitation machine.

AI is, with architectural precision, a technology of reason. It generates outputs optimized against specified criteria. It produces the most probable next token, the most statistically likely code completion, the most pattern-consistent response to a prompt. When AI produces something surprising, the surprise is a statistical artifact — a low-probability output generated because the prompt placed it in a region of the distribution where low-probability outputs are locally optimal. AI does not play. It does not pursue ideas for their intrinsic satisfaction. It does not act before its preferences are clear, because it does not have preferences in any meaningful sense — it has optimization criteria, which are fundamentally different. Preferences are discovered through action; optimization criteria are specified in advance.
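The claim that AI's surprises are statistical artifacts can be made concrete with a minimal sketch of temperature sampling, the standard mechanism by which language models produce "surprising" outputs. The function below is illustrative, not any particular model's implementation: raising the temperature makes low-probability tokens more likely, but every draw remains fully determined by a criterion specified in advance.

```python
import math
import random

def sample_next_token(logits, temperature=1.0, rng=random):
    """Sample one token index from a categorical distribution over logits.

    Higher temperature flattens the distribution, so low-probability
    tokens appear more often -- but the "surprise" is still governed
    entirely by a sampling rule fixed before the draw.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max before exp for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Inverse-CDF sampling: walk the cumulative distribution
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

# Near-zero temperature collapses to the single most probable token;
# high temperature spreads mass toward "foolish"-looking choices,
# yet both regimes are the same optimization, differently tuned.
logits = [4.0, 1.0, 0.5]
greedy = sample_next_token(logits, temperature=0.01)
```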

The organizational consequence is specific and previously unencountered. AI makes the technology of reason overwhelmingly productive, with exploitation returns so large that the opportunity cost of foolishness has increased by an order of magnitude. The leader who protects exploratory time in an AI-augmented environment must defend, at every budget review, the decision to leave productivity on the table. The defense is structurally weak: it relies on the possibility of future returns from activities with no track record. The prosecution has the most compelling evidence imaginable — the evidence of twenty-fold productivity applied to known problems with measurable outcomes.

Origin

March published The Technology of Foolishness in Civiløkonomen in 1971, and the essay was later reprinted in his 1976 collection with Johan Olsen, Ambiguity and Choice in Organizations. The essay's rationalist-offending claim — that foolishness deserves the same analytical seriousness as reason — was characteristic of March's method: take an apparently marginal phenomenon, demonstrate that it performs an irreplaceable function, and argue that its protection requires deliberate institutional design.

March's engagement with Cervantes' Don Quixote — which he taught at Stanford for years as a text on organizational leadership — extended the argument. Quixote was not a figure of comic delusion but of principled commitment to a vision that rational calculation could not justify. The knight charges the windmills not because he knows they are giants but because charging is the only form of integrity available to a creature that must act before it knows. March's entire late career can be read as an extended defense of the Quixotic stance against the rationalist orthodoxy of management science.

Key Ideas

Preferences discovered through action. The most consequential decisions involve preferences that do not exist until the action reveals them — a condition rational choice cannot accommodate.

Complement to reason. Foolishness and reason are not opposed; they are necessary complements, each inadequate without the other.

Genuine play. The technology of foolishness requires undirected activity protected from the demand for rational justification.

AI as pure reason. AI systems are architecturally incapable of foolishness; they can only optimize against specified criteria.

Increased opportunity cost. AI's productivity gains raise the apparent cost of foolishness by an order of magnitude, making the capacity structurally harder to sustain.

Debates & Critiques

Whether AI could be made foolish in the required sense is contested. Some argue that sufficiently diverse training, explicit encouragement of low-probability outputs, and interfaces designed to preserve ambiguity could produce systems that function foolishly even if their internal operation is optimizing. Others argue that the distinction March drew is fundamental: optimization criteria specified in advance are categorically different from preferences discovered through action, and no architectural trick closes the gap. The debate matters less than the organizational question it implies: whether the capacity for foolishness must be preserved in human practitioners and institutional structures precisely because it cannot be delegated to the tool.

Appears in the Orange Pill Cycle

Further reading

  1. James G. March, 'The Technology of Foolishness,' Civiløkonomen 18 (1971).
  2. James G. March and Johan P. Olsen, Ambiguity and Choice in Organizations (1976).
  3. James G. March, 'Don Quixote, Leadership, and the Humanities' (Stanford lecture, recurring).
  4. Karl Weick, The Social Psychology of Organizing (1979).
Part of The Orange Pill Wiki · A reference companion to the Orange Pill Cycle.