The enactive approach is Varela's most direct alternative to the computational theory of mind. Where computationalism treats cognition as information processing over representations of an external reality, enactivism treats cognition as the activity by which an autopoietic organism brings forth a domain of significance through its structural coupling with an environment. The world the organism knows is not the pregiven physical world — it is the enacted world, the specific landscape of affordances, distinctions, and meanings that emerges from this particular body's engagement with this particular environment over this particular history. The tick's world of butyric acid, warmth, and blood chemistry is not a filtered subset of the forest — it is the only world the tick has, brought forth through the tick's sensorimotor coupling with its surroundings.
Enactivism developed as a direct response to the representational paradigm that dominated cognitive science from the 1950s through the 1980s. That paradigm held that the world is pregiven, that the brain processes information from it to build an internal model, and that cognition is the manipulation of that model. This is also, not coincidentally, the framework that makes AI look most like genuine thinking — if cognition is information processing over representations, then sufficiently sophisticated information processors are cognitive regardless of substrate.
Varela's challenge in The Embodied Mind (1991) was not to deny that organisms process information but to show that the representational framework misunderstands what cognition is. The world of significance is not pregiven. It is co-constituted by the organism and the environment through their ongoing interaction. The bacterium enacts a world of nutrients and toxins through its chemotactic movement; the tick enacts a world of three signals through its sensorimotor apparatus; the human enacts a world of meaning through embodied, historical, cultural engagement.
This is not idealism — the chemical gradients exist independently of the bacterium, and the physical world is real. But the significance of the physical world — which molecules count as nutrients, which configurations count as faces, which patterns count as threats — is not an intrinsic property of the physical world. It is enacted through structural coupling. This middle position between realism and idealism is where Varela's Buddhist training most visibly shaped his cognitive science.
For AI, enactivism draws a specific line. Large language models process representations — literally, sequences of tokens created by humans to encode human meaning. The models operate on representations of a world they do not inhabit, producing outputs for users they will never meet, in service of purposes they did not choose. The processing may be brilliant; the enaction is absent. And enaction, in this framework, is what cognition is.
The enactive framework does not prohibit AI from doing remarkable things. It prohibits one specific interpretation: that because the output looks cognitive, the process that produced it must be cognitive. Outputs can be similar because they share statistical structure with human linguistic production; the processes producing them can be categorically different.
The approach synthesized three intellectual streams: Varela and Maturana's biological work on autopoiesis, Maurice Merleau-Ponty's phenomenology of embodiment, and Buddhist psychological analysis of experience. The Embodied Mind, co-authored with Evan Thompson and Eleanor Rosch, presented the resulting framework in a form accessible to cognitive scientists and philosophers. Varela's 1995 essay "The Re-Enchantment of the Concrete," contributed to a volume co-edited by Luc Steels and Rodney Brooks, applied the framework specifically to the problem of building embodied AI agents.
World-bringing-forth, not world-representing. The organism does not model an independent world — it enacts a world of significance through its embodied, historical activity. Significance is co-constituted, not detected.
Umwelt as technical concept. Drawing on Jakob von Uexküll's 1934 framework, the enacted world of an organism is defined by the specific signals it can detect and the specific responses it can perform. Different organisms inhabit different Umwelten, not different quantities of the same world.
Middle way between realism and idealism. The physical world is real and independent of any organism; the enacted world — the world as meaningful — arises only in and through the coupling between organism and environment.
Cognition as sensorimotor activity. Perceiving is not passive reception followed by interpretation — it is the organism's active exploration of its environment, through movement and manipulation, structured by what the body can do.
AI participates but does not enact. A language model's outputs enter and perturb human enacted worlds, but the machine itself does not bring forth a world through its own autopoietic coupling with an environment.
Enactivism has internal variants that disagree on key questions. "Autopoietic enactivism" (Varela, Thompson, Di Paolo) requires living systems; "sensorimotor enactivism" (Noë, O'Regan) emphasizes embodied action but does not require autopoiesis; "radical enactivism" (Hutto, Myin) denies representations entirely. The differences matter for whether embodied AI (like Brooks's robots) might count as genuinely cognitive. Varela's position, the strictest, required both embodiment and autopoiesis.