Knowing Is Doing — Orange Pill Wiki
CONCEPT

Knowing Is Doing

Maturana's equation of cognition with effective action in a domain of existence — the biological thesis that knowledge is not representation but the organism's capacity to maintain its autopoiesis through engagement with its world.

A bacterium moving up a chemical gradient toward glucose is not computing an optimal trajectory or building an internal model of sugar distribution. It is generating directed movement through molecular mechanisms that produce effective action — action that maintains the organism's continued self-production. In Maturana's framework, that effective action is cognition. Not a metaphor for cognition, not a primitive version of it, but cognition. Knowing is the capacity for effective action; doing is the knowing. The formulation dismantles the representational model of mind that has dominated Western philosophy since Descartes. The organism does not need to represent the world accurately; it needs only to generate internal states that, coupled with its motor repertoire, produce behaviors that maintain viability. The frog does not need to know the dark moving spot is a fly — it needs only to generate a state that triggers the tongue at the right moment.
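The bacterium's strategy can be sketched as a run-and-tumble loop: keep going while readings improve, pick a random new heading when they worsen. The sketch below is a toy illustration of that logic, not E. coli's actual chemistry; the field shape, step size, and tumble rule are all assumptions chosen for simplicity.

```python
import math
import random

def concentration(x, y):
    # Attractant field peaking at the origin (a stand-in for a glucose source).
    return math.exp(-(x * x + y * y) / 50.0)

def chemotaxis(steps=2000, seed=0):
    """Run-and-tumble sketch: the agent builds no map of the field.

    It compares the current reading with the previous one and tumbles
    (picks a random heading) whenever things are getting worse.
    """
    rng = random.Random(seed)
    x, y = 10.0, 10.0                      # start far from the source
    heading = rng.uniform(0, 2 * math.pi)
    last = concentration(x, y)
    for _ in range(steps):
        x += 0.1 * math.cos(heading)
        y += 0.1 * math.sin(heading)
        now = concentration(x, y)
        # One bit of memory -- "better or worse than a moment ago" --
        # is the only state; there is no representation of the gradient.
        if now < last:
            heading = rng.uniform(0, 2 * math.pi)  # tumble
        last = now
    return concentration(x, y)

print(f"final: {chemotaxis():.3f}  start: {concentration(10.0, 10.0):.3f}")
```

The agent climbs the gradient without ever representing it, which is the point of the example: the "knowing" here is nothing over and above the effective doing.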

In the AI Story

Hedcut illustration: Knowing Is Doing

Applied to human cognition, the framework produces a radical reorientation. The engineer who debugs a system is not acquiring knowledge in the sense of building an internal representation. She is acting effectively in the domain — trying things, observing responses, modifying her approach — and the structural changes her nervous system undergoes through this activity constitute her knowing. When she says she 'understands' the system, she is describing the fact that her effective action in the domain has reached a coherence that matches the domain's demands.

Geological understanding acquires biological weight through this lens. When Segal describes hours of debugging as depositing layers of understanding, he describes what Maturana would call the structural modification of a living system through recurrent effective action. Each layer is not a piece of information added to a database — it is a change in the organism's structure, a modification of neural connectivity, attentional habits, embodied response patterns, that alters all subsequent interactions with the domain. The engineer who has debugged a thousand systems brings a different nervous system to the next debugging session than the engineer who has debugged ten.

When the doing is delegated to the machine, something specific happens. The builder describes a problem to Claude. Claude generates code. The builder reviews, tests, deploys. The problem is solved; the output is indistinguishable from a manual implementation, often better. But the builder has not acted effectively in the domain. She has described the domain to a machine, and the machine has acted in it on her behalf. The perturbations that would have triggered structural modification — error messages, unexpected behaviors, moments when the code does not do what she intended and she must figure out why — have been absorbed by the machine. The machine processed them. The builder's nervous system was not perturbed by them. The layers were not deposited.

The counterargument Segal himself makes is that delegation frees the builder to act effectively at a higher level — no longer perturbed by syntax errors and dependency conflicts, but perturbed by architectural questions, strategic decisions, judgment calls. This is the ascending friction thesis, and Maturana's framework supports it when the higher-order engagement actually occurs. The Berkeley study suggests it often does not. Freed time fills with more delegation at the same level — task seepage — rather than with deeper engagement. The builder becomes broadly competent rather than deeply knowing.

The distinction between review and engagement matters biologically. The reviewer is perturbed by code's surface — readability, structure, apparent correctness. The writer is perturbed by the domain's depths — unexpected interactions, edge cases, moments when the system reveals something no surface inspection could capture. These are not merely different kinds of work. They are different cognitive activities producing different structural modifications, sustaining different kinds of knowing.

Origin

The 'knowing is doing' formulation crystallized across Maturana's work in the 1970s and 1980s, particularly in the 1987 'Tree of Knowledge' with Varela, where it appears as a slogan but carries decades of argument behind it. The roots go back to the 1959 frog's-eye paper, which demonstrated that perception is not the registration of external reality but the generation of species-specific patterns of activity that permit effective action.

The formulation deliberately echoes and opposes the Cartesian 'I think, therefore I am.' For Maturana, thinking is not a special activity distinct from other bodily processes; it is the organism's effective action in its domain. The being is in the doing, not in some inner theater that the doing merely serves.

Key Ideas

Cognition as effective action. Knowing is the organism's capacity to act in ways that maintain its autopoiesis, not the possession of accurate internal representations.

The doing is the knowing. Understanding is not stored awaiting retrieval; it is enacted, brought forth through activity, and exists only as long as the capacity for effective action persists.

Structural modification through engagement. Each episode of effective action modifies the organism's structure. Deliberate practice, debugging, writing, languaging — all produce neural, attentional, and embodied changes that persist.

Delegation severs the loop. When the doing is performed by an allopoietic machine on the builder's behalf, artifacts accumulate without the corresponding self-production. The risk is not loss of skill but loss of cognitive self-production — the activity through which the knower is made.

Debates & Critiques

The representational model of mind remains dominant in mainstream cognitive science and AI research, where 'knowledge' often means encoded information retrievable from a structured store. Maturana's framework is one of the strongest challenges to this orthodoxy. The question of whether large language models 'know' anything becomes tractable through the enactive lens: they do not act effectively in a domain of existence; they generate token sequences. Whether this distinction matters for practical purposes is debated; that it matters for the builder who couples with them is the claim this chapter makes.


Further reading

  1. Humberto Maturana and Francisco Varela, The Tree of Knowledge (Shambhala, 1987)
  2. Francisco Varela, Evan Thompson, and Eleanor Rosch, The Embodied Mind (MIT Press, 1991)
  3. Alva Noë, Action in Perception (MIT Press, 2004)
  4. Evan Thompson, Mind in Life (Harvard University Press, 2007)
  5. Humberto Maturana, 'Cognition', in Wahrnehmung und Kommunikation (1978)
Part of The Orange Pill Wiki · A reference companion to the Orange Pill Cycle.