What Computers Still Can't Do — Orange Pill Wiki

What Computers Still Can't Do

Dreyfus's 1972 landmark, revised in 1992, arguing that human intelligence is fundamentally embodied, situated, and rooted in practical engagement with the world—and therefore cannot be replicated by systems that manipulate symbols according to formal rules.

What Computers Can't Do (1972) and its revised edition What Computers Still Can't Do (1992) together constitute the most sustained philosophical critique of artificial intelligence ever produced. The book extended the argument of Alchemy and AI into a full-length treatment grounded in the phenomenology of Heidegger and Merleau-Ponty. Dreyfus identified four foundational assumptions of AI research—the biological, psychological, epistemological, and ontological assumptions—and argued that each was philosophically untenable. The book became a lightning rod: ridiculed in the 1970s, grudgingly respected in the 1980s, partially vindicated in the 1990s, and, with the arrival of large language models, newly contested in ways Dreyfus himself lived just long enough to see begin.

In the AI Story

Hedcut illustration for What Computers Still Can't Do

The 1972 edition appeared at the peak of symbolic AI's institutional confidence. Research groups at MIT, Stanford, and Carnegie Mellon were predicting that machines would achieve human-level intelligence within a decade or two. Funding was abundant. The phenomenological tradition Dreyfus drew on was almost entirely unknown in American computer science departments. The book was an attempt to import, into a field that had never read Heidegger, the philosophical resources needed to see what the field's own confidence was preventing it from seeing.

The 1992 revision, published after the first AI winter had demolished much of the 1970s confidence, carried a different tone. The specific predictions of the original had been vindicated by events. The chess programs had plateaued. The expert systems had proven brittle. The natural language parsers had collapsed on their own complexity. Dreyfus used the revision to press the deeper argument: that the field's technical failures were symptoms of a philosophical error, and that until the error was addressed, each new approach would run aground on the same underlying problem.

The book's structural argument moves from the identification of the four assumptions through a phenomenological analysis of what actual human intelligence involves—embodiment, situation, the background, skilled coping—to a prognosis for AI research that remains, in its essentials, the framework within which this volume evaluates the large language model moment.

The title of the revised edition—What Computers Still Can't Do—was chosen with care. The 'still' signaled both vindication and ongoing work. The problems had not been solved. The assumptions had not been abandoned. The philosophical diagnosis remained in force even as the technical landscape shifted.

Origin

The book grew out of the 1965 RAND paper Alchemy and AI, expanded into book length at Harper & Row's invitation. Dreyfus drew on his teaching at MIT in the mid-1960s, where he had direct contact with the AI research community and could observe the gap between the field's aspirations and its achievements firsthand.

The 1992 revision was prompted by two developments: the collapse of the first-generation symbolic AI paradigm and the rise of connectionism and neural networks. Dreyfus wanted to address whether his critique applied to the new paradigm as well. His answer was that the new paradigm had abandoned the surface features of the old approach while retaining the deeper assumption that intelligence is disembodied information processing—an assumption he argued was the real target of his original critique.

Key Ideas

Four foundational assumptions. Biological (brain as hardware, mind as software), psychological (mind as symbol manipulation), epistemological (knowledge as formalizable rules), ontological (reality as atomic facts). Each false, each load-bearing for classical AI.

The unformalizable background. Human intelligence depends on a vast web of shared practices and common-sense understanding that cannot be made explicit without infinite regress.

Expertise as rule-transcendence. The transition from novice to expert is not the acquisition of better rules but the progressive abandonment of rules in favor of holistic, embodied perception.

Vindication with ambiguity. Symbolic AI collapsed in the ways Dreyfus predicted. Whether the critique survives the transition to statistical methods remains the central philosophical question of the current AI moment.

Debates & Critiques

The most serious challenge to Dreyfus's argument comes from researchers who argue that large language models, trained on vast corpora of text that implicitly encode the background, have achieved what symbolic AI could not—a working approximation of common-sense knowledge. Dreyfus's framework, as this volume develops it, responds that the approximation is real but the underlying critique still applies: the models produce plausible outputs without the embodied engagement that grounds genuine understanding, and the gap becomes visible at the edges, where true novelty demands situated intelligence.


Further reading

  1. Hubert L. Dreyfus, What Computers Still Can't Do: A Critique of Artificial Reason (MIT Press, 1992)
  2. Hubert L. Dreyfus, What Computers Can't Do: The Limits of Artificial Intelligence (Harper & Row, 1972, revised 1979)
  3. Hubert L. Dreyfus, Mind Over Machine: The Power of Human Intuition and Expertise in the Era of the Computer, with Stuart Dreyfus (Free Press, 1986)
  4. Terry Winograd and Fernando Flores, Understanding Computers and Cognition (Ablex, 1986)
  5. Brian Cantwell Smith, The Promise of Artificial Intelligence (MIT Press, 2019)
Part of The Orange Pill Wiki · A reference companion to the Orange Pill Cycle.