Hubert Dreyfus was an American philosopher whose lifelong project connected Continental phenomenology — particularly the work of Heidegger and Merleau-Ponty — to the research program of artificial intelligence. He translated Merleau-Ponty's Sense and Non-Sense into English (with Patricia Allen Dreyfus; published 1964), absorbing the phenomenological framework that would structure his critical engagement with AI. His 1972 book What Computers Can't Do (revised in 1992 as What Computers Still Can't Do) applied Merleau-Pontian analysis to argue that human intelligence depends on 'informal and unconscious processes' — embodied skills, contextual understanding, background assumptions — that symbolic AI could not replicate. The failures of expert systems in the 1980s vindicated his critique, though Dreyfus remained ambivalent about the subsequent rise of neural networks, which exhibited some Merleau-Pontian features while remaining, in his view, categorically limited.
Dreyfus's critique of AI was never mere dismissal. He took AI seriously as a research program and engaged extensively with its practitioners — sometimes productively, sometimes contentiously. His 1965 RAND Corporation paper 'Alchemy and Artificial Intelligence', written while he was teaching at MIT, caused significant controversy; Edward Feigenbaum's response — 'What does he offer us? Phenomenology! That ball of fluff. That cotton candy!' — captures the gulf between Continental philosophy and computational research in the mid-twentieth century.
The central Merleau-Pontian insight Dreyfus wielded was the primacy of embodied skill over representational knowledge. Human experts do not possess a vast stock of propositional knowledge that they apply through reasoning; they possess embodied understanding sedimented through years of practice, which operates through skilled coping with specific situations rather than through rule-following. Symbolic AI tried to replicate expertise by encoding rules — and failed, because the rules never captured enough of what experts actually know.
Dreyfus's later engagement with neural networks was more nuanced. In his 2007 paper 'Why Heideggerian AI Failed and How Fixing It Would Require Making It More Heideggerian,' he observed that neural networks exhibit 'crucial structural features' of Merleau-Ponty's intentional arc — the pre-reflective orientation of the body-subject toward the world. Neural systems, unlike symbolic ones, learn through exposure rather than explicit rules. The structural resemblance is genuine. But Dreyfus also insisted it was incomplete: neural networks learn from data while embodied organisms learn through engagement with a world in which they have stakes.
The five-stage model of skill acquisition (novice, advanced beginner, competent, proficient, expert), which Dreyfus developed with his brother Stuart, has become foundational in fields ranging from nursing education to military training. The model operationalizes Merleau-Pontian insights about the development of embodied expertise, showing how explicit rules govern beginner performance while expert performance operates through embodied pattern recognition and situated responsiveness.
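Purely as an illustration, the stage progression the model describes can be sketched as a simple ordered mapping. The stage names come from the text above; the one-line glosses are paraphrases of the model's standard descriptions, not quotations from Dreyfus:

```python
# Illustrative sketch only: the Dreyfus five-stage skill model as an
# ordered enumeration. Glosses are paraphrases, not Dreyfus's wording.
from enum import IntEnum

class SkillStage(IntEnum):
    NOVICE = 1
    ADVANCED_BEGINNER = 2
    COMPETENT = 3
    PROFICIENT = 4
    EXPERT = 5

# Rough characterization of performance at each stage: explicit,
# context-free rules early on; embodied, situated responsiveness late.
GLOSS = {
    SkillStage.NOVICE: "follows context-free rules",
    SkillStage.ADVANCED_BEGINNER: "notices situational aspects alongside rules",
    SkillStage.COMPETENT: "plans and chooses among salient features",
    SkillStage.PROFICIENT: "sees situations holistically but still deliberates",
    SkillStage.EXPERT: "responds intuitively through embodied pattern recognition",
}

for stage in SkillStage:
    label = stage.name.title().replace("_", " ")
    print(f"{stage.value}. {label}: {GLOSS[stage]}")
```

The ordering matters to the model's point: rule-following is not refined into expertise but largely left behind as embodied coping takes over.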
Dreyfus studied at Harvard, completing his PhD in 1964 under Aron Gurwitsch. He taught at MIT from 1960 to 1968 — the environment in which his critique of AI took shape — before moving to UC Berkeley, where he spent the remainder of his career. His proximity to the MIT AI Lab placed him in direct intellectual contact with the field he was analyzing.
His translation of Sense and Non-Sense, done with Patricia Allen Dreyfus, was completed in the late 1950s and published in 1964. The translation work gave him deep familiarity with Merleau-Ponty's philosophical vocabulary and conceptual structure — an intimacy that shaped his subsequent critical work on AI.
Phenomenology against symbolic AI. Dreyfus applied Merleau-Ponty and Heidegger to demonstrate the limits of symbolic AI's foundational assumptions.
Embodied skill over rules. Human expertise is primarily embodied skill, not propositional knowledge — a direct application of Merleau-Ponty's motor intentionality.
Vindicated by expert system failures. The 1980s collapse of symbolic AI confirmed his diagnosis that encoded rules could not capture embodied understanding.
Ambivalent about neural networks. He recognized their Merleau-Pontian features while insisting on their categorical limits.
Five-stage skill model. His operationalization of Merleau-Pontian skill development has shaped professional training across multiple fields.