The computational theory of mind (CTM) holds that mental states are computational states and that thinking is a species of information processing analogous to what digital computers do. First articulated by Hilary Putnam, Jerry Fodor, and others in the 1960s and 1970s, CTM became the dominant framework in cognitive science and philosophy of mind, underwriting the research program of classical AI and, in revised form, the neural-network approach that produced large language models. Alva Noë's enactivism constitutes one of the most rigorous philosophical challenges to CTM, arguing that consciousness is not computation but an embodied activity that no amount of processing can replicate in a disembodied system.
The computational theory of mind emerged in the wake of Alan Turing's work on computability and the Church-Turing thesis, which established that any effectively calculable function could be computed by a Turing machine. If mental processes are effectively calculable, the argument went, then they must be Turing-computable, and the mind must be a species of computer. Hilary Putnam's 1967 paper 'Psychological Predicates' (later republished as 'The Nature of Mental States') introduced functionalism — the view that mental states are defined by their causal-functional roles rather than by their physical substrate. Jerry Fodor's The Language of Thought (1975) extended this into a full theory of cognition as symbol manipulation.
CTM provided the philosophical foundation for classical symbolic AI — the attempt to produce intelligence by writing down explicit rules, frames, and ontologies. When symbolic AI faltered in the 1980s and 1990s (the period of the so-called AI winter), the field shifted to neural networks and statistical learning, but CTM's core commitment — that cognition is a species of information processing — survived the transition. Contemporary deep learning is a different computational architecture, not a different philosophy of mind.
Noë's enactivist challenge to CTM operates at the foundations. If consciousness is an activity of the whole embodied organism in its ongoing engagement with a world — rather than a program running on the hardware of the brain — then CTM is not merely incomplete but categorically mistaken. Its picture of the mind as an internal system that processes sensory inputs and produces behavioral outputs misconceives what perception, thought, and consciousness actually are. The implications for AI are severe: if the computational picture is wrong about human minds, the attempt to produce minds by implementing the computation is chasing a ghost.
The debate between CTM and its embodied-enactive critics is one of the central disputes in contemporary philosophy of mind. CTM remains dominant in much of cognitive science and AI research; enactivism, extended mind theory, and other '4E' (embodied, embedded, enacted, extended) approaches constitute the leading alternative. The AI revolution has made the dispute practically urgent: how we think about what AI is doing, and what it could do, depends on which framework we accept.
Hilary Putnam, 'Psychological Predicates' (1967), later published as 'The Nature of Mental States'; Jerry Fodor, The Language of Thought (1975); David Marr, Vision (1982). The broader research program was founded at the 1956 Dartmouth Workshop and developed through the work of Herbert Simon, Allen Newell, Marvin Minsky, and others.
Mind as computer. Mental states are computational states; thinking is information processing.
Multiple realizability. The same computation could run on silicon or carbon; substrate doesn't matter.
Representation-manipulation. Cognition consists in the manipulation of internal representations of external states.
The functionalist license. If mental states are defined by their functional role, machines with the right functional organization could have genuine mental states.
The enactivist challenge. If cognition is embodied activity rather than internal computation, CTM is foundationally wrong.
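The multiple-realizability and functional-role claims above are sometimes glossed with a software analogy. As a loose, purely illustrative sketch — the Adder classes here are invented for this example, not drawn from the literature — the same functional role can be occupied by realizers with entirely different internal mechanisms:

```python
from abc import ABC, abstractmethod

class Adder(ABC):
    """A functional role: anything that maps (a, b) to their sum."""
    @abstractmethod
    def add(self, a: int, b: int) -> int: ...

class IterativeAdder(Adder):
    """One realizer: repeated increment (assumes b is non-negative)."""
    def add(self, a: int, b: int) -> int:
        for _ in range(b):
            a += 1
        return a

class ArithmeticAdder(Adder):
    """A different realizer: native machine arithmetic."""
    def add(self, a: int, b: int) -> int:
        return a + b

# Both occupy the same functional role despite different internals.
print(IterativeAdder().add(2, 3))   # 5
print(ArithmeticAdder().add(2, 3))  # 5
```

At the level of inputs, outputs, and causal profile, the two realizers are indistinguishable; only their internal workings differ. This is the analogy functionalists draw for mental states, and precisely the level of description enactivists argue leaves out the embodied organism.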
The dispute between computationalists and enactivists is ongoing and unresolved. Computationalists argue that enactivism either reduces to a special form of computation or fails to explain the cognitive phenomena it claims to illuminate. Enactivists argue that computational accounts miss what is distinctive about living, embodied cognition — the genuine engagement of an organism with a world that matters to it.