Autopoiesis names the organizational form through which living systems produce and maintain themselves. A cell generates the membrane that contains the chemical processes that generate the membrane; the system is simultaneously product and producer. Thompson takes this biological concept and builds it into the foundation of the life-mind continuity thesis: all living systems are autopoietic, all autopoietic systems are cognitive (in the minimal sense of making sense of their environment through their own operations), and human consciousness is an elaboration of this basic capacity. The argument's force for the AI discourse is that computational systems are not autopoietic — they do not maintain themselves, do not have boundaries they continuously regenerate through their own operations, and therefore lack the organizational property from which cognition, on Thompson's account, actually emerges.
The concept was coined by Humberto Maturana and Francisco Varela in 1972 to name what distinguishes living systems from non-living ones. The distinction lies not in composition but in organization. A living cell and a non-living assembly of the same molecules differ not in their chemistry but in the specific organizational form — the self-producing, self-maintaining, boundary-generating network of processes — that the living cell instantiates and the non-living assembly does not.
Thompson's contribution was to trace the consequences of this organizational form upward into the domain of cognition. The autopoietic system is not a passive object waiting to be acted upon. It is an agent — a system whose operations are oriented toward its own continued existence, and whose interactions with its environment are therefore evaluated in terms of what supports or threatens that existence. This evaluation is the primitive form of sense-making, and sense-making, on Thompson's account, is the minimal form of cognition.
The Orange Pill's metaphor of the river of intelligence traces cognition from hydrogen atoms through biology to artificial computation. Thompson's framework identifies a discontinuity this metaphor conceals. The transition from chemistry to biology — from molecular interactions to autopoietic organization — was not a widening of an existing channel but the emergence of a new kind of process: a process with an inside, a perspective, a situation in which things matter. The transition to artificial computation does not reproduce this transition. It creates a new kind of outside — a powerful information-processing apparatus that operates alongside autopoietic systems without being one.
The practical consequence is the distinction between stopping and dying. When an autopoietic system ceases to maintain itself, something irreversible occurs: the specific organization, the history of coupling with the environment, the meaning the system's existence constituted — all are lost. When a computer is turned off, nothing analogous happens. The data persists, the software can be reinstalled, the computation can be resumed. Nothing was at stake because nothing was alive.
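The resumability claim can be made concrete with a toy sketch (my illustration, not the source's): the complete state of a computational process can be serialized, the running process discarded, and the state restored later, after which the computation continues exactly as if it had never stopped.

```python
import pickle

# Toy computational process: its entire state fits in one dictionary.
state = {"step": 0, "total": 0}

def advance(s):
    """One step of the computation: increment the counter, accumulate."""
    s["step"] += 1
    s["total"] += s["step"]

for _ in range(3):
    advance(state)

snapshot = pickle.dumps(state)   # "turn the computer off": serialize the state
del state                        # the running process is gone

state = pickle.loads(snapshot)   # "turn it back on": restore from the snapshot
for _ in range(2):
    advance(state)

print(state)  # {'step': 5, 'total': 15} — identical to an uninterrupted run
```

The restored run is indistinguishable from one that was never interrupted, which is precisely the sense in which nothing is lost when a computation stops; an autopoietic system has no analogous snapshot, because its organization exists only as an ongoing process.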
The term autopoiesis was introduced by Maturana and Varela in De Máquinas y Seres Vivos (1972). Thompson's extension of the concept into a theory of mind was developed across two decades and culminated in Mind in Life (2007), where autopoiesis serves as the organizational foundation on which the life-mind continuity thesis is built.
Self-production is the mark of life. Living systems are organized as networks of processes that produce the components that produce the network.
Autopoiesis grounds cognition. The organism's self-maintenance gives it stakes, and stakes give it a perspective from which its environment acquires significance.
Stopping is not dying. Computational systems can be paused and resumed without loss; autopoietic systems cannot.
AI systems are not autopoietic. They are built, maintained, and powered by external agents — they have no operations oriented toward their own continuation, because they have no continuation of their own to orient toward.
Some philosophers — including Peter Godfrey-Smith and Daniel Dennett — argue that autopoiesis is not necessary for cognition, and that a system's cognitive status can be determined by its behavioral and functional properties independently of its organizational form. Thompson's reply is that behavioral similarity does not entail cognitive identity, because cognition is a process, not a set of outputs, and the process requires the specific organizational features that autopoiesis names.