
LLMs Don't Know Anything

The 2024 Trends in Cognitive Sciences paper by Mariel Goddu, Alva Noë, and Evan Thompson — the technical enactivist critique of knowledge attribution to large language models, and the argument that prediction is not understanding.

'LLMs Don't Know Anything' is the 2024 paper by Mariel Goddu, Alva Noë, and Evan Thompson in Trends in Cognitive Sciences that brings the enactive framework into direct engagement with contemporary large language models. The paper's deliberately provocative title encodes a precise philosophical claim: LLMs produce outputs consistent with the distributional patterns of their training data without possessing anything that should be called knowledge in the sense that requires an embodied organism engaged with a world. The paper identifies two specific errors in claims that LLMs know things: treating models as agents rather than as tools, and inferring causal understanding from predictive capacity.

In the AI Story


The paper emerged from the increasingly urgent need for philosophical clarity about what large language models are doing when they produce fluent, sophisticated text. Popular discourse and even some technical literature had begun attributing knowledge, understanding, beliefs, and even consciousness to these systems. The authors argue that these attributions reflect a confusion between the sophistication of the tool and the cognitive achievements of the organisms who use it.

The first error the paper identifies is treating LLMs as agents. An agent is an entity with goals, interests, and stakes in the world — a locus of cognitive activity that can be said to know things, want things, and decide things. A tool is something used by agents to accomplish their purposes. LLMs, the paper argues, are tools, not agents. They have no stakes in their outputs, no interests in the world, no embodied engagement that could ground agency. Treating them as agents projects onto them a cognitive status they do not possess.

The second error is inferring causal understanding from predictive capacity. LLMs predict next tokens with remarkable accuracy. This accuracy is often taken to indicate that the model understands the causal structure of the domain it models. The paper argues this inference is invalid. A weather model predicts rain accurately; it does not understand rain. Prediction requires only the tracking of statistical regularities. Understanding requires embodied engagement with the causal structure of the world — the kind of engagement that only living organisms can have.
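To make the contrast concrete, here is a minimal sketch, not drawn from the paper, of the kind of purely statistical next-token prediction at issue. The toy corpus and the function names (train_bigram, predict_next) are illustrative inventions; the point is only that the predictor reproduces word co-occurrence frequencies without any model of rain, clouds, or wetness.

    from collections import Counter, defaultdict

    def train_bigram(text):
        # Count how often each word follows each other word in the training text.
        counts = defaultdict(Counter)
        words = text.split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
        return counts

    def predict_next(counts, word):
        # Return the continuation seen most often after `word`, or None if unseen.
        if word not in counts:
            return None
        return counts[word].most_common(1)[0][0]

    # Toy training text: the model only ever sees word co-occurrences.
    corpus = "dark clouds bring rain . dark clouds bring rain . rain makes the ground wet"
    model = train_bigram(corpus)

    print(predict_next(model, "clouds"))  # -> 'bring', the most frequent continuation
    print(predict_next(model, "rain"))    # -> '.', tracked frequency, not any grasp of what rain is

On the article's framing, scaling this kind of frequency tracking up from bigram counts to transformer weights improves the predictions without changing what the system is doing: it is still reproducing regularities in text rather than engaging with the causal structure of the world.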

The paper has become a key reference in the ongoing debate about AI consciousness and cognition. Proponents of computational accounts of mind argue the enactivist critique sets the bar for knowledge so high that it becomes unachievable by anything short of a biological organism, and that this is question-begging. Enactivists respond that the bar is where it is because that is what knowledge is — an achievement of embodied, engaged, caring organisms that cannot be captured by statistical pattern-matching, however sophisticated.

Origin

Mariel Goddu, Alva Noë, and Evan Thompson, 'LLMs Don't Know Anything', Trends in Cognitive Sciences (2024). Noë and Thompson are among the leading figures in the enactivist philosophy of mind.

Key Ideas

Tool versus agent. LLMs are sophisticated tools, not cognitive agents with knowledge.

Prediction is not understanding. Statistical accuracy in next-token prediction does not entail causal understanding of the domain.

Knowledge requires embodiment. The enactive condition for knowledge attribution cannot be met by disembodied systems.

Against distributional semantics. The meaning of words is not exhausted by their distributional patterns in text.

Urgency for AI discourse. The conceptual confusions the paper targets are not academic; they shape policy, investment, and cultural expectations.

Debates & Critiques

The paper has been contested by researchers who argue that cognitive attribution should be based on functional capacity rather than substrate, and that the paper's argument proves too much, ruling out in principle cognitive achievements that LLMs might eventually demonstrate. The authors' response is that functional-equivalence arguments beg the question: they assume what needs to be shown, namely that what LLMs do is functionally equivalent to knowing.

Further reading

  1. Mariel Goddu, Alva Noë, and Evan Thompson, 'LLMs Don't Know Anything', Trends in Cognitive Sciences (2024)
  2. Alva Noë, 'Can Computers Think? No. They Can't Actually Do Anything', Aeon (2024)
  3. Emily Bender et al., 'On the Dangers of Stochastic Parrots', FAccT (2021)
  4. Evan Thompson, Mind in Life (Harvard University Press, 2007)