The extended-mind thesis, proposed in Andy Clark and David Chalmers's 1998 paper "The Extended Mind," argues that the boundary of the mind is not the skin. When a person uses a notebook to remember, Clark and Chalmers argue, the notebook is part of the cognitive system — not an external aid to an internal mind. The thesis has become the philosophical foundation for thinking about how humans should relate to AI tools.
There is a parallel reading that begins not with cognitive function but with substrate. Clark and Chalmers's notebook is inert until retrieved; it holds memory but does not actively process. A language model, by contrast, performs inference at scale—billions of parameters trained on human language, responding in milliseconds with outputs that shape the direction of thought before the user has finished articulating the question. The parity principle asks: would this count as cognitive if performed in the head? But the question conceals an asymmetry. Internal cognition is constrained by biological limits—attention, working memory, fatigue. External AI cognition is constrained by data centers, parameter counts, and training corpora. The notebook extends memory; the model extends judgment, and judgment at this scale is never neutral.
The extended-mind thesis was developed in an era when external scaffolding was passive or slow. A calendar does not propose meetings; a GPS does not suggest destinations based on your browsing history. But a language model co-author does propose next sentences, and those proposals are shaped by training data the user cannot audit. The frame that dissolves the boundary between tool and mind also dissolves the boundary between enhancement and capture. If cognition extends into the model, then the model's biases, its corpus, its optimization targets—all become part of the cognitive system. The question is not whether AI is inside or outside the skull. The question is: whose predictions are shaping the scaffolding?
For AI, the extended-mind thesis reframes the entire human-AI relationship. If the notebook is part of cognition, then language-model assistants are part of cognition when used consistently. The question of whether AI is "an external tool" or "part of the user" dissolves: cognition was never confined to the skull. This is the framing Clark himself developed in Natural-Born Cyborgs (2003) and Supersizing the Mind (2008).
The extended-mind thesis has grown quietly more practical as AI tools have become ubiquitous. When a user composes prose with an AI assistant, the resulting text is a joint product in a specific sense: the human and the model share a cognitive workflow, and neither alone could have produced the same output. Andy Clark's framework treats this as unsurprising — cognition was never confined to the skull, and the assistant is just a new kind of scaffolding. Critics worry the ease with which AI tools can be integrated into thought blurs the line between enhancement and replacement; Clark's position is that the line never existed in the first place.
Clark, A. & Chalmers, D. "The Extended Mind." Analysis, 58.1 (1998). Initially considered provocative; by the 2010s it had become mainstream in philosophy of mind and cognitive science.
Parity principle. If an external process would count as cognitive when performed in the head, it is cognitive when performed outside.
Cognitive scaffolding. Language, tools, institutions, and practices all extend and reshape what a mind can do.
Predictive processing. Clark's later framework: brains are prediction engines, and predictions rely on whatever scaffolding is available.
Implications for AI. The thesis invites a non-adversarial framing of human-AI relations: AI as scaffold rather than competitor.
Cognitive offloading as a general phenomenon. Calendar systems, GPS navigation, search engines, and now LLMs all represent forms of cognitive offloading. Research on how offloading reshapes the skills of the offloader is now substantial, and the pattern is consistent: the offloaded skill weakens while the meta-skill of using the external scaffolding strengthens.
On the basic functional claim, Clark's framework deserves full credence (call it 100%): cognition has always been distributed across tools, environments, and social practices. The notebook, the abacus, the search engine all extend what a mind can do, and the boundary of cognition is not the skin. The extended-mind thesis accurately describes the structure of human thought as it has existed for millennia. Where the contrarian reading gains weight (call it 60%) is in the governance of scaffolding. A notebook is inert; a language model is active, and it is shaped by optimization targets the user does not control. The asymmetry is real, and it matters for questions of autonomy and capture.
But the synthesis worth drawing is this: scaffolding has always had governors. Language is scaffolding, and language is shaped by culture, power, and history. The extended mind was never a neutral surface; it was always traversed by the forces that shaped the available tools. What changes with AI is the speed and opacity of those forces, not their existence. The right frame is not "tool versus replacement" but "which scaffolding, under what governance." Clark's thesis correctly identifies the structure; the contrarian reading correctly identifies the stakes. The question for AI is not whether cognition extends into models (it does) but whether the user can inspect, resist, or reshape the predictions the model offers. Cognitive extension is inevitable. Cognitive sovereignty is not.