Sense-making is the most load-bearing concept in Thompson's framework. It names what living organisms do and what computational systems do not: the active creation of significance through the organism's embodied engagement with its environment. The bacterium navigating a chemical gradient is not processing information about the gradient. It is making sense of the gradient — evaluating it in terms of its own needs, its own survival, its own stakes in continued existence. The significance is not in the sugar; it is in the relationship between the organism and the sugar, a relationship constituted by the organism's autopoietic need for nutrients. This relational structure is the foundation of all cognition, from the simplest adaptive behavior to the most sophisticated conceptual thought, and it is the specific capacity that AI systems, lacking both stakes and embodiment, cannot possess.
There is a parallel reading that begins from the energy and infrastructure required to maintain the distinction Thompson describes. Sense-making may indeed be the exclusive property of living systems, but living systems exist within material economies that shape what can be sensed and what significance can be enacted. The bacterium's chemotactic navigation requires a specific chemical environment; the human's capacity to recognize insight requires years of nutritional support, educational infrastructure, and social reproduction. The substrate of significance is not just the organism's autopoietic boundary but the entire material ecology that makes that boundary possible.
This matters for AI collaboration not because it challenges Thompson's distinction but because it reveals what's at stake in the shift. When Edo works with Claude, the sense-making may be entirely his, but the capacity to do that work late at night—to have the energy, the quiet space, the economic security to pursue a book project—depends on structures that are themselves being reorganized by the AI transition. The question is not whether humans will continue to be the locus of sense-making but whether the material conditions that support human sense-making will be preserved or allowed to erode. If significance requires stakes, and stakes require viable lives, then the politics of who gets to maintain a life capable of sustained sense-making becomes the actual question. Thompson's framework names what AI cannot do; it does not name what will happen to the humans who can.
The concept distinguishes Thompson's enactive framework from functionalist and representationalist theories of cognition. Functionalism holds that cognition is whatever performs the appropriate causal role, regardless of substrate. Representationalism holds that cognition consists in the manipulation of internal symbols that stand for features of the world. Sense-making refuses both. Cognition is not a functional role that can be filled by any system with the right causal structure; it is the specific activity of a living organism whose engagement with its environment is oriented by its own needs. Cognition does not manipulate pre-given representations; it enacts the significance of environmental features through activity.
The concept has immediate application to the collaborations described in The Orange Pill. When Edo Segal describes working with Claude late at night, the enactive analysis shows that the meaning of the exchange is enacted entirely by Segal. Claude generates sequences of tokens that are statistically probable given the input. Segal enacts a world in which those sequences mean something — in which the book matters, in which getting the argument right matters, in which the collaboration serves a project that is embedded in his life, his concerns, his embodied history. The meaning is not in the tokens; it is in the living mind that receives them and finds in them a connection to what it cares about.
Sense-making is graded, not binary. The bacterium's sense-making is minimal, little more than an evaluation of sugar versus not-sugar, but it is genuine. Human sense-making is extraordinarily rich, shaped by language, culture, emotional history, and intersubjective engagement with other minds. Each level of sense-making is continuous with the levels beneath it; the human's capacity to recognize a friend's face is continuous with the bacterium's capacity to recognize a nutrient, and both are continuous with the autopoietic self-recognition through which the organism maintains its boundary against the environment.
The practical consequence for AI-augmented work is the diagnostic that Thompson's framework provides for fluent fabrication. When an AI system produces a passage that sounds insightful but breaks under examination, the failure is not a bug to be fixed. It is a structural consequence of a system that generates outputs without sense-making. The system has no way to distinguish between a connection that illuminates and a connection that merely sounds as though it does, because distinguishing requires a being that has stakes in the quality of understanding.
Sense-making was introduced as a technical term in Thompson's Mind in Life (2007), drawing on Varela's earlier work on autopoiesis and Weber's extension of the concept into a theory of biological value (2002). The concept has since been developed by a school of enactive cognitive scientists including Ezequiel Di Paolo, Hanne De Jaegher, and Shaun Gallagher.
Significance is relational. It lives in the organism-environment relationship, not in the organism alone or the environment alone.
Stakes generate significance. A system has to be something that can win or lose before its environment can carry meaning.
Computation processes; organisms enact. Processing operates on representations; enacting creates the significance that representations presuppose.
Humans supply the sense-making in AI collaborations. The tool generates; the living mind evaluates; the evaluation is where the meaning lives.
The concept has been challenged by philosophers who argue that it illicitly uses the phenomenological language of human experience to describe processes — like bacterial chemotaxis — that do not plausibly involve experience at all. Thompson's response is that sense-making names a formal organizational property, not a phenomenological one: the bacterium's sense-making does not presuppose that the bacterium has experiences, only that its operations are oriented by its own continuation.
Thompson's framework establishes a necessary condition: sense-making requires stakes, and stakes require the autopoietic organization of living systems. On this formal point, the enactive analysis is correct. Computational systems cannot have stakes in their own continuation because they lack the organizational closure that makes continuation a live question. The distinction holds regardless of architectural sophistication: no amount of parameter scaling gives a system something to lose.
The contrarian reading identifies a different question: not whether living systems are necessary for sense-making, but whether the material conditions that support living systems capable of rich sense-making are being preserved. Here the weight shifts substantially toward the contrarian view, at least over the medium term. The formal capacity for sense-making does not guarantee the social conditions that allow that capacity to be exercised. A human working three gig-economy jobs has the formal capacity for sense-making that Claude lacks, but may have far fewer resources to exercise it than Edo has in his collaboration.
The synthesis the topic calls for distinguishes levels of analysis. Thompson's framework operates at the level of cognitive architecture and establishes what is categorically possible. The contrarian reading operates at the level of political economy and asks what is likely to be preserved. Both are needed: the first tells us that AI cannot replace human sense-making; the second tells us that this fact does not automatically protect the conditions under which humans can do the work. The diagnostic question for any AI collaboration becomes: does this configuration strengthen or weaken the material foundation that makes sustained sense-making possible?