The unification of life and mind is the central philosophical move of Capra's mature synthesis. Rather than treating consciousness as a separate phenomenon that arises in certain sufficiently complex organisms and not others, Capra argues — following Maturana, Varela, and the broader enactivist tradition — that cognition is the process of life itself, visible in elaborated form in humans and in more basic form in every living system. A cell cognizes when it responds to its environment in ways that maintain its organization. A bacterium cognizes when it swims up a nutrient gradient. A plant cognizes when it tracks the sun. The continuity is not metaphorical. It is the claim that what we call 'mind' is a high-complexity instance of the same organizational process that constitutes life at every scale. This framework makes the question of AI's cognitive status precise: does the machine participate in the living process, or does it process information about life from outside that process?
The life-mind unification is philosophically radical because it inverts the standard Western inheritance. Descartes separated mind from matter — res cogitans from res extensa — and set modern philosophy the task of explaining how the two could interact. The unification that Capra proposes does not solve the Cartesian problem; it dissolves it. There is no gap to cross between matter and mind because they are not two separate things. Mind is what a certain organization of matter does when it maintains itself as a living system.
The implication for AI is direct and consequential. If mind is the organizational process of life, then artificial intelligence — which does not maintain itself as a living system — participates in cognition only in a derivative sense. The machine processes information, generates outputs, and participates in communicative networks, but it does not cognize the way the bacterium cognizes, because it is not an autopoietic system maintaining its own organization against environmental perturbation. Its outputs can be meaningful to us — we, the living cognizers, interpret them within our own cognitive process — but the meaning is ours, not the machine's.
This is the position Capra articulated in his 2025 interview, distinguishing 'living intelligence' — tacit, embodied, embedded in the process of being alive — from 'artificial intelligence' — disembodied, computational, categorically different from cognition proper. The distinction does not diminish AI's practical significance or its effects on human cognitive ecosystems. It locates AI within a specific frame: as a participant in cognitive networks without being a cognizer in the autopoietic sense, as an instrument of extended cognition rather than an independent center of cognition.
The framework also carries an implication for how we relate to the AI transition. If the technology produces effects on cognition without itself being cognitive, then the ethical weight of the technology falls on the living cognizers who deploy it and those who are affected by its deployment. The machine cannot be blamed for cognitive damage it causes, because it is not the kind of agent to which blame attaches. The responsibility belongs to the humans who designed, deployed, and continue to operate the networks within which the machine participates — and the ethical question becomes whether we are configuring these networks to support living cognition or to degrade it.
The framework synthesizes Maturana and Varela's Santiago theory of cognition (developed throughout the 1970s) with the broader enactivist tradition. Capra's integration is most fully developed in The Web of Life (1996) and The Systems View of Life (2014).
Life and mind are continuous. The process by which a living system maintains its organization is the process of cognition; they are not separable.
Cognition does not require a brain. Every autopoietic system cognizes in virtue of maintaining itself; brains are elaborations of a cognitive capacity already present in the simplest cells.
Mind is embodied. There is no cognition without a body whose organization the cognition maintains.
AI is not alive in this sense. Artificial systems do not maintain themselves through their own operations and therefore do not cognize in the biological sense.
Meaning belongs to the living. When we find meaning in AI outputs, the meaning is our contribution to the interaction, not a property of the machine.
Computationalists reject the life-mind unification, arguing that cognition is substrate-independent and can in principle be implemented in non-biological systems. Capra and the enactivist tradition defend the opposite claim — that cognition is specific to living organization: it requires the organizational closure of autopoiesis, and substrate-independence is a philosophical assumption, not a conclusion supported by what we know about actual cognitive systems.