In January 2025, Thompson, along with Federico Benitez, Cecilia Heyes, and Gualtiero Piccinini, published a letter in Nature titled 'Why AI will never be able to acquire human-level intelligence.' The word 'never' carried the weight of the argument. Not 'not yet', not 'unlikely to', but 'never', because the argument was not about engineering timelines but about the conceptual coherence of attributing human-level intelligence to a computational system. The letter identified three capacities that large language models appear to lack (generalization, representation, and selection) and argued that each is grounded in the organism's embodied existence in ways no computational architecture can replicate.
The letter's intervention cut beneath the familiar positions of the AI discourse. Accelerationists predicted general intelligence within the decade. Cautious optimists hedged their timelines. Ethicists worried about alignment without questioning the premise that alignment would eventually be necessary. Thompson and his co-authors refused the shared assumption on which all three positions rested: that the question of machine intelligence is a question of when, not whether.
On the enactive account, the three capacities the letter identified are structurally grounded in embodied existence. Generalization requires a body that has encountered the world in multiple modalities and can draw on that multimodal experience to recognize abstract patterns. Representation, in the sense the letter intends, is the construction of a world model that enables decisions by anticipating their consequences: not a statistical model of likely next tokens but a genuine understanding of causal structure that allows action in a comprehended world. Selection is the capacity to choose relevant information from the flood of available data through the organism's own sense of what matters, grounded in its needs, its history, and its embodied engagement.
The letter was widely discussed in the months that followed. Critics argued that its claims were unfalsifiable or question-begging. Defenders countered that the letter articulated with precision what the enactive framework had been saying for three decades, and that the clarity of the argument was commensurate with the urgency of the moment: a moment in which the functional indistinguishability of AI outputs from the outputs of genuine cognition threatens to make the categorical difference between them invisible.
Published in Nature in January 2025, co-authored by Federico Benitez, Cecilia Heyes, Gualtiero Piccinini, and Evan Thompson.
'Never', not 'not yet'. The argument is conceptual, not about engineering timelines.
Three missing capacities. Generalization, representation, and selection — each grounded in embodied existence.
The framework is enactive. The letter applies three decades of enactive work to the specific question of LLM cognition.
The claim is modest in scope, radical in implication. AI will not achieve human-level intelligence; it can still be extraordinarily useful.