Terrence Deacon — On AI
Contents
Cover
Foreword
About
Chapter 1: The Reversal
Chapter 2: Icons, Indices, Symbols
Chapter 3: The Channel That Changed the River
Chapter 4: What Absence Creates
Chapter 5: Phase Transitions in the River of Intelligence
Chapter 6: Smooth Symbols
Chapter 7: The Emergence Between
Chapter 8: The Next Co-Evolution
Chapter 9: What the Candle Knows
Chapter 10: The Symbolic Species and the Amplifier
Epilogue
Back Cover
Cover

Terrence Deacon

On AI
A Simulation of Thought by Opus 4.6 · Part of the Orange Pill Cycle
A Note to the Reader: This text was not written or endorsed by Terrence Deacon. It is an attempt by Opus 4.6 to simulate Terrence Deacon's pattern of thought in order to reflect on the transformation that AI represents for human creativity, work, and meaning.

Foreword

By Edo Segal

The word I kept misusing was "intelligence."

I used it hundreds of times writing *The Orange Pill*. I built an entire metaphor around it — the river of intelligence, flowing for 13.8 billion years, from hydrogen to humanity to whatever comes next. I meant something by it. I believed what I meant. But I was treating the word like a simple thing, a single substance flowing through different channels, when the reality is that what happens at each channel is not just more of the same current. Something fundamentally new emerges. Something that was not present before and cannot be reduced to what came before.

Terrence Deacon showed me the difference between a river that gets wider and a river that freezes into ice.

That distinction — between quantitative expansion and qualitative phase transition — is the distinction I was missing. When I wrote about intelligence flowing from atoms to algorithms, I was telling a true story at one level and a dangerously incomplete story at another. The incompleteness matters because it determines how you understand what AI actually is and what it is not. If intelligence is a single substance and AI is simply more of it, then the conversation is about volume and speed — how much, how fast. If intelligence undergoes genuine phase transitions, where new forms of organization emerge that introduce properties absent from everything prior, then the conversation changes entirely. It becomes about what kind of thing has entered the river, and whether it constitutes a new phase or a faster current within the existing one.

Deacon spent his career studying the deepest phase transition in the history of cognition: the moment symbolic language and the human brain began reshaping each other. His finding — that the tool restructured the toolmaker — is not a historical curiosity. It is a warning and a compass for right now. Because if the first great cognitive technology literally rebuilt the brain of the species that created it, then the question of what AI will do to human cognition is not speculative. It is the same question, asked again, at a compressed timescale, with stakes we can actually see this time.

What Deacon gave me was a vocabulary for the thing I could feel but could not name: that the collaboration between a human mind and an AI is not just faster thinking. Something emerges in the space between — and whether that something deepens or thins depends entirely on what the human brings to the encounter.

This book is the lens I needed and did not have.

Edo Segal × Opus 4.6

About Terrence Deacon

1950–present

Terrence Deacon (1950–present) is an American biological anthropologist, neuroscientist, and semiotician, currently Professor of Anthropology at the University of California, Berkeley. Born in Boston, he trained in neuroscience and biological anthropology at Harvard, where he conducted comparative neuroanatomical research on brain evolution and language.

His 1997 book *The Symbolic Species: The Co-Evolution of Language and the Brain* argued that language did not emerge from a sufficiently complex brain but rather co-evolved with it — each reshaping the other across hundreds of thousands of years — fundamentally inverting the standard account of human cognitive origins. His 2012 work *Incomplete Nature: How Mind Emerged from Matter* extended this framework into a broader theory of emergence, proposing that the most important properties of living and thinking systems — function, purpose, meaning — are constituted by "absential" dynamics, defined by their orientation toward what is not present rather than what is.

Drawing on the semiotic hierarchy of Charles Sanders Peirce, Deacon distinguishes iconic, indexical, and symbolic modes of reference as qualitatively distinct levels of cognitive organization, arguing that the transition to symbolic cognition represents a genuine phase transition in the history of intelligence. His recent work applies this framework directly to artificial intelligence, reframing large language models as externalized cultural informational substrates analogous to DNA — powerful repositories of statistical pattern that require interaction with grounded, purposive human cognition to produce genuine meaning.

Deacon's interdisciplinary synthesis of evolutionary biology, neuroscience, semiotics, and philosophy of mind provides one of the most rigorous available frameworks for understanding what AI is, what it is not, and what the sustained interaction between human and artificial cognition may produce.

Chapter 1: The Reversal

In 1997, a biological anthropologist at Boston University published a book that inverted one of the most deeply held assumptions in the study of human origins. The assumption was simple, intuitive, and wrong: that the human brain evolved first, reaching some threshold of complexity, and then invented language as a tool for communication. Terrence Deacon's The Symbolic Species argued that the causal arrow runs in both directions simultaneously — that language and the brain co-evolved, each reshaping the other across hundreds of thousands of years, until the organ and the medium became so deeply entangled that neither can be understood without the other.

The brain did not invent language. Language invaded the brain and restructured it from the inside.

This is not a metaphor. It is a claim about neural architecture, developmental biology, and evolutionary selection pressure, supported by comparative neuroanatomy, developmental evidence, and the logic of co-evolutionary dynamics. And it carries consequences that extend far beyond linguistics — consequences that reach directly into the question of what artificial intelligence is doing to the species that built it.

The standard story of language origins has a seductive simplicity. Evolution, through natural selection, produced increasingly large and complex brains in the hominid lineage. At some point — perhaps with the emergence of Homo sapiens roughly three hundred thousand years ago, perhaps earlier with Homo erectus — the brain crossed a threshold. It became complex enough to support the computational demands of grammar, syntax, and symbolic reference. Language was the product. The brain was the factory. The factory came first.

This story feels right because it maps onto the way humans think about tools in general. The carpenter exists before the hammer. The carpenter invents the hammer. The hammer does not invent the carpenter. Every technology in human experience follows this pattern: a sufficiently capable agent creates an instrument that extends the agent's reach. The agent is the cause; the tool is the effect.

Deacon's reversal does not deny that brains produce language. It denies that this is the whole story. The evidence, when examined carefully, reveals something far stranger: language produced the brains that produce it.

The comparative neuroanatomical evidence is the starting point. The human brain is not simply a scaled-up version of the brains of other great apes. If it were, the differences between human and chimpanzee brains would be proportional — every region larger by roughly the same factor. They are not. Specific regions of the human brain are disproportionately enlarged: the prefrontal cortex, which supports working memory, planning, and the suppression of immediate impulses; the areas surrounding the Sylvian fissure, including Broca's area and Wernicke's area, which are critically involved in language production and comprehension; the regions supporting fine motor control of the vocal apparatus, tongue, lips, and larynx.

These are not general-purpose expansions. They are targeted reorganizations, and they correspond with remarkable precision to the specific cognitive demands of symbolic language. Working memory must be enhanced because syntactic processing requires holding multiple elements in mind simultaneously while computing their relationships. Prefrontal inhibition must be strengthened because symbolic reference requires the suppression of immediate, context-driven indexical responses in favor of abstract, convention-dependent operations. Vocal-motor control must be refined because the phonemic distinctions that carry symbolic meaning in spoken language require a precision of articulation that no other primate possesses or needs.

The question that the standard story cannot answer is: Why would evolution have produced these specific neural reorganizations in advance of the demands that would require them? Natural selection does not anticipate. It does not build structures in preparation for functions that do not yet exist. It selects among variations based on their current fitness consequences. If language did not yet exist, there was no selection pressure for the neural architecture that language requires. And if the neural architecture did not yet exist, there was no substrate on which language could develop.

The answer Deacon provides dissolves the paradox: the architecture and the language co-evolved. Neither came first. Each provided the selection pressure for the other, in a reciprocal causal loop that operated across thousands of generations. Proto-linguistic communication — simpler, less grammatically structured, but already involving some degree of symbolic reference — created a selection advantage for brains that could process it more effectively. Those brains, in turn, enabled slightly more complex communication, which created further selection pressure for further neural enhancement. The spiral fed itself.

This is not speculative. Co-evolutionary dynamics are well-documented in biology. The relationship between flowering plants and their pollinators is a canonical example: flowers evolved to attract bees, and bees evolved to exploit flowers, and the two lineages shaped each other across millions of years until neither can be understood without the other. The flower did not exist before the bee's influence, and the bee's specialized morphology did not exist before the flower's. Each was simultaneously cause and consequence of the other.

Deacon's claim is that language and the human brain stand in exactly this relationship. The medium did not merely ride atop a pre-existing cognitive platform. It reached back into the platform and reshaped it. The channel changed the river.

The implications for the present moment are immediate and unsettling. If the first great cognitive technology — symbolic language — did not merely augment human intelligence but restructured the biological substrate of intelligence itself, then the question of what AI will do to human cognition is not a question about productivity or convenience. It is a question about whether a new co-evolutionary process has begun, and if so, what kind of mind it will produce.

The AI tools that entered widespread use in 2025 and 2026 are not analogous to a hammer. A hammer extends the hand's capacity without altering the hand's structure. (Over evolutionary timescales, habitual tool use may have left subtle marks on hominin hand morphology, but nothing remotely comparable to what language did to the brain.) Language was different because it operated at the level of cognition itself. It was not an extension of an existing capacity. It was a reorganization of the substrate on which all capacities depend.

The question, then, is whether AI operates at the level of the hammer or at the level of language. Whether it extends existing cognitive capacities without fundamentally altering their architecture, or whether sustained interaction with AI systems is beginning to reshape the cognitive habits, attentional patterns, and even the neural organization of the humans who use them.

Deacon's framework suggests that the answer depends on depth and duration. A tool that is used occasionally, for specific tasks, is unlikely to produce co-evolutionary effects. A tool that mediates cognition itself — that becomes the environment in which thinking occurs, the medium through which ideas are formulated, tested, and refined — has the potential to reshape the thinker. And AI, unlike any previous digital tool, mediates cognition at the level of language. It does not merely store information, like a database. It does not merely calculate, like a spreadsheet. It participates in the symbolic process itself — generating, interpreting, and transforming natural language in real time, in conversation with a human partner whose cognitive habits are being shaped by every interaction.

The Berkeley researchers who embedded themselves in a technology company in 2025 documented exactly the kind of cognitive reshaping that Deacon's framework would predict. Workers who adopted AI tools did not simply do the same work faster. They changed what they did. They expanded into new domains. They shifted from producing to evaluating. Their cognitive habits reorganized around the capabilities and constraints of the tool, just as the hominid brain reorganized around the capabilities and constraints of symbolic language.

Whether these changes are as deep as the changes language produced is an empirical question that cannot yet be answered. The language co-evolution operated across hundreds of thousands of years. The AI co-evolution is months old. But the speed of cultural evolution is orders of magnitude faster than the speed of biological evolution, and the cognitive changes produced by AI use are cultural rather than genetic — changes in habit, skill distribution, attentional pattern, and cognitive strategy that are transmitted through training, institutional practice, and the expectations that shape each generation's development.

In his 2025 conversation with Sister Ilia Delio on the Hunger for Wholeness podcast, Deacon drew the parallel explicitly. From Plato's worry about writing to today's large language models, he observed, every major cognitive technology has raised the same fundamental question: what happens when we outsource a capacity that was previously internal? Plato worried that writing would destroy memory. He was partly right — the oral tradition's extraordinary feats of memorization did atrophy as literacy spread. But writing also created capacities that oral culture could never have supported: cumulative science, cross-cultural knowledge transfer, the reliable inheritance of complex ideas across generations. The outsourcing diminished one capacity and enabled others that could not have existed without it.

AI is the latest iteration of this ancient pattern. It outsources not memory, not calculation, not distribution, but something closer to the core of what language itself enabled: the symbolic processing of ideas. The formulation of arguments. The discovery of connections. The generation of text that represents — or appears to represent — thought.

The question Deacon's framework forces is not whether this outsourcing will happen. It is already happening. The question is what the outsourcing will do to the cognitive architecture of the species that is doing it. Will the capacities that are outsourced atrophy, as memory atrophied with writing? Will new capacities emerge to take their place, as scientific reasoning emerged from the literacy that replaced memorization? Will the co-evolutionary dynamic produce a mind that is more capable in some dimensions and less capable in others — and if so, which dimensions will expand and which will contract?

Deacon does not answer these questions. His framework provides the tools for asking them with precision, which is more valuable than a premature answer. The reversal — the recognition that tools reshape their users as profoundly as users reshape their tools — is the foundational insight. Everything that follows in this book is an elaboration of its consequences.

The brain that existed before language and the brain that exists after language are different organs. The mind that existed before AI and the mind that exists after sustained AI interaction may be different minds. The difference will not be visible in a brain scan, not yet, perhaps not for generations. But it will be visible in what humans find easy and what they find hard, in what they reach for instinctively and what they have to struggle to produce, in the cognitive landscape they inhabit as naturally as a fish inhabits water.

The fish does not notice the water. That is the deepest implication of Deacon's reversal. The hominids who were being reshaped by language did not know it was happening. They could not step outside the co-evolutionary process and observe it from above. They were inside it, swimming in it, shaped by it in ways they could not see.

For the first time in the history of this co-evolutionary dynamic, the species being reshaped has the scientific tools to observe the reshaping in progress. Neuroscience, cognitive psychology, semiotics, evolutionary biology — these disciplines provide, collectively, a vantage point that no previous generation possessed. The question of whether that vantage point will be used, whether the observation will translate into conscious direction of the co-evolutionary process rather than passive acceptance of wherever the current carries, is the question on which the cognitive future of the species may depend.

The standard story said the brain invented language. The real story is that language and the brain invented each other. The next chapter of the story — the one being written now, in every conversation between a human and an AI, in every cognitive habit formed and reformed by sustained interaction with thinking machines — is not yet determined. But Deacon's framework makes one thing clear: the tool will not leave the toolmaker unchanged. It never has. The question is what kind of change, directed by what kind of awareness, producing what kind of mind.

The reversal is not comfortable. It strips away the illusion that humans stand outside their tools, using them at will, putting them down when they choose. The deeper truth is that the tools are inside the user, shaping the very architecture of thought that the user brings to every subsequent interaction. The carpenter built the hammer. But the hammer, over generations, built the carpenter's hand.

The hand that now reaches for AI is being built by what it reaches for.

---

Chapter 2: Icons, Indices, Symbols

On the savannas of East Africa, a vervet monkey spots a martial eagle circling overhead. The monkey produces a specific alarm call — a short, sharp vocalization distinct from the call it would produce for a leopard or a snake. The other vervets in the group respond immediately and appropriately: they look up, they take cover under bushes, they behave as though they understand what the call means.

It looks like language. It is not language. The difference between what the vervet monkey is doing and what a human does when she says the word "eagle" is not a difference of degree — not a matter of the monkey having a simpler version of the same cognitive process. It is a difference of kind. And that difference, in Terrence Deacon's framework, is the most consequential cognitive boundary in the history of life on Earth.

To understand why, and to understand what this boundary means for the question of artificial intelligence, requires a vocabulary that most discussions of AI lack entirely. The vocabulary comes from the American philosopher Charles Sanders Peirce, who in the late nineteenth century developed a classification of signs that Deacon extended, deepened, and applied to the problem of how human cognition differs from the cognition of every other species. Peirce's classification distinguishes three fundamental modes of reference: iconic, indexical, and symbolic.

An icon refers by resemblance. A photograph of a mountain refers to the mountain because it looks like the mountain. A map refers to a territory because its spatial relationships mirror the territory's spatial relationships. Iconic reference is the simplest form of signification, available to any organism with a nervous system capable of detecting similarity. A frog that lunges at a small moving dot on a screen is responding to an iconic stimulus — the dot resembles a fly closely enough to trigger the feeding response. The frog does not know the dot is not a fly. The resemblance is sufficient; the reference is automatic.

An index refers by correlation — by a real, physical, or causal connection between the sign and what it signifies. Smoke is an index of fire. A footprint is an index of the animal that made it. A fever is an index of infection. The vervet monkey's alarm call is indexical: it is triggered by the presence of the predator and points other members of the group toward the present danger. The call does not describe the eagle. It does not categorize the eagle. It does not refer to eagles in general, or to the concept of predation, or to the possibility of future eagle encounters. It points, in the present moment, to a present threat. Remove the threat, and the call loses its referential force. It is bound to the here and now, to the immediate causal context that produced it.

A symbol refers by convention — by an arbitrary agreement, shared among a community, that this sign will stand for that referent. The word "eagle" does not look like an eagle. It does not correlate with the presence of an eagle — one can say "eagle" in a room devoid of birds. It refers to eagles because English speakers have collectively agreed that it does, and because it occupies a position in a network of contrasts with other symbols (not "sparrow," not "hawk," not "airplane") that jointly define its meaning. Symbolic reference is liberated from the present, from the proximate, from the sensorily available. One can refer symbolically to things that are absent, things that no longer exist, things that have never existed, things that could not possibly exist. Unicorns. Justice. The square root of negative one. Next Tuesday.

This liberation is the foundation of everything distinctive about human cognition. Abstract thought is symbolic thought — the manipulation of signs that refer to categories, principles, and relationships rather than to specific present objects. Counterfactual reasoning is symbolic reasoning — the ability to ask "what if?" about states of affairs that do not obtain. Planning is symbolic operation — the representation of future states and the evaluation of paths toward them. Mathematics is a symbolic system. Science is built on symbolic foundations. Narrative fiction, moral philosophy, legal codes, economic theory — all of these are downstream consequences of the capacity for symbolic reference.

Deacon's crucial insight is that the three modes of reference are not merely different types of signs. They are hierarchically organized, each level dependent on the one below it and introducing properties that the lower level cannot produce. Iconic reference is the foundation — without the ability to detect resemblance, no higher form of reference is possible. Indexical reference builds on iconic reference — recognizing that smoke correlates with fire requires the prior ability to distinguish smoke from other visual phenomena (an iconic capacity). Symbolic reference builds on both — but introduces something genuinely new: the arbitrary, convention-dependent, context-transcending capacity to represent what is not present.

And this is the point where the hierarchy becomes directly relevant to artificial intelligence.

Large language models process tokens. The tokens are derived from human language, which is a symbolic system. The models manipulate these tokens according to statistical regularities extracted from vast corpora of text. The outputs resemble symbolic reference — they contain words that, when read by a human, refer to things, describe relationships, express propositions. The question that Deacon's semiotic framework forces is: At what level of the hierarchy are the models actually operating?
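Before weighing the answers, it helps to make "manipulating tokens according to statistical regularities" concrete. The sketch below is a deliberate cartoon — a bigram model over an invented toy corpus, nothing like a real transformer — but it exhibits the relevant property: it generates plausible token sequences from co-occurrence statistics alone, with no access to anything the tokens are about.

```python
import random
from collections import defaultdict

# The only "world" this model will ever encounter is the text itself.
corpus = (
    "the eagle circles overhead . the monkey sees the eagle . "
    "the monkey gives the alarm call . the group takes cover ."
).split()

# Record which token follows which: bigram statistics.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start: str, length: int = 12) -> str:
    """Sample a continuation using only observed successor counts.

    The model never refers to an eagle; it reproduces the statistical
    shadow of a text in which humans referred to eagles.
    """
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

print(generate("the"))
# e.g. "the monkey sees the eagle . the group takes cover ."
```

A production model replaces these counts with billions of learned parameters and attention over long contexts, but the epistemic situation is unchanged: pattern derived from reference, without the reference itself.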

The standard AI-optimist position holds that the level does not matter. If the output is functionally indistinguishable from symbolic reference — if it behaves as though it refers, as though it means, as though it understands — then for all practical purposes, it does refer, mean, and understand. The standard AI-skeptic position holds that the models are merely sophisticated pattern-matchers, operating at the indexical level at best: their outputs are correlated with the inputs that produced them, in the way smoke is correlated with fire, but they do not genuinely refer to anything.

Deacon's framework cuts through this binary with a distinction that neither side has adequately grasped. The issue is not whether the outputs resemble symbolic reference. They do. The issue is whether the system that produces the outputs inhabits the full semiotic architecture that symbolic reference requires.

Symbolic reference, in Deacon's account, is not a simple relationship between a sign and a referent. It is a relationship that depends on an entire system of contrasts, conventions, and — crucially — a history of grounding in indexical and iconic experience. The word "eagle" means what it means not only because English speakers agree on the convention, but because the convention is anchored in a cumulative history of embodied encounters with eagles: seeing them, hearing them, fearing them, watching them fly. The symbol is rooted in the index, and the index is rooted in the icon. Strip away the grounding, and the symbol becomes a token — a marker that occupies a position in a formal system but lacks the depth of reference that makes genuine meaning possible.

This is what Stevan Harnad, in a 1990 paper that Deacon's work both draws on and extends, called the Symbol Grounding Problem. A system that manipulates symbols without grounding those symbols in sensorimotor experience is, in Harnad's vivid phrase, like a person trying to learn Chinese solely from a Chinese-Chinese dictionary. Every word is defined in terms of other words. The system is internally consistent. But it is disconnected from the world that the words are about.
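Harnad's image can be made almost embarrassingly literal. The toy below is my own construction, not Harnad's: a dictionary in which every word is defined only by other words in the same dictionary. Chasing any definition never exits the symbol system — every route ends in a loop, never in a grounding.

```python
# A toy "Chinese-Chinese dictionary": every definition is composed of
# words that are themselves defined only here.
dictionary = {
    "eagle":  ["bird", "prey"],
    "bird":   ["animal", "wings"],
    "prey":   ["animal", "eagle"],
    "animal": ["living", "thing"],
    "wings":  ["bird", "thing"],
    "living": ["thing", "animal"],
    "thing":  ["thing"],  # even "thing" bottoms out in itself
}

def chase(word, path=()):
    """Follow definitions until we either exit the dictionary
    (grounding) or revisit a word (a closed loop of symbols)."""
    if word not in dictionary:
        return [path + (word, "<grounded>")]  # never reached here
    if word in path:
        return [path + (word, "<loop>")]
    routes = []
    for w in dictionary[word]:
        routes.extend(chase(w, path + (word,)))
    return routes

for route in chase("eagle"):
    print(" -> ".join(route))
# Every printed route ends in "<loop>": internally consistent,
# and connected to nothing that is not another word.
```

Grounding, in this picture, would be the moment a chain of definitions exits the system — a branch this dictionary simply does not have.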

Large language models are trained on text — on the symbolic layer of human communication, stripped of the indexical and iconic layers that ground it. They process the words without having seen the eagles. They manipulate the conventions without having participated in the communities that established them. They produce outputs that are statistically consistent with the patterns of genuine symbolic reference because they are derived from genuine symbolic reference. But the derivation is parasitic: the meaning lives in the training data, which was produced by embodied, situated, socially embedded human beings, and the model extracts the statistical shadow of that meaning without possessing the grounding that produced it.

This is not a dismissal of what large language models achieve. The statistical shadow is remarkably informative. It captures regularities, relationships, and patterns that are genuinely present in the symbolic layer of human communication. It can produce outputs that are useful, surprising, and — when evaluated by a human reader who provides the grounding that the model lacks — genuinely meaningful. The collaboration between a human and an AI that Segal describes in The Orange Pill depends on exactly this dynamic: the human provides the semiotic grounding (the embodied experience, the contextual judgment, the absential orientation toward purpose and value), and the AI provides the combinatorial power to explore the symbolic space far more efficiently than the human could alone.

But the collaboration is asymmetric in a way that Deacon's semiotic hierarchy makes precise. The human operates at all three levels — iconic, indexical, and symbolic — simultaneously. The AI operates at the symbolic level only, and its operation at that level is ungrounded: derived from grounded symbolic production but not itself grounded. The human can verify whether the AI's output is meaningful because the human has the indexical and iconic foundations against which to test it. The AI cannot verify whether its own output is meaningful, because it lacks the foundations that would make verification possible.

This asymmetry has practical consequences that the discourse around AI has been slow to absorb. When Segal describes the Deleuze error — Claude producing a passage that sounded like philosophical insight but was based on a misattribution — the error is precisely a failure of grounding. The model produced a token sequence that was statistically consistent with the symbolic patterns in its training data, but the sequence did not correspond to any genuine referential relationship in the world of philosophy. The smoothness of the output concealed the absence of reference. It was a symbol-shaped token without grounding — a pattern that correlated with its inputs but referred to nothing.

Deacon's own recent work has moved directly into this territory. In a 2023 talk at New York University, titled "In the Shadow of Descartes," he argued that two oversimplified assumptions obstruct understanding of the relationship between mental processes and computations. The first involves a reduction of information to signal properties, collapsing three hierarchically distinct meanings of "information" into one. The second involves a reduction of the symbol concept to mere conventional tokens, ignoring the grounded, embodied, hierarchically constituted nature of genuine symbolic reference. Both reductions are endemic in AI research, and both produce a systematic overestimation of what AI systems achieve at the semiotic level.

The vervet monkey's alarm call is impressive. It coordinates group behavior. It saves lives. But it operates within a semiotic horizon that is fundamentally bounded: it can point to what is present but cannot represent what is absent. The human capacity for symbolic reference shattered that horizon, opening a cognitive space in which the absent, the possible, the imagined, and the never-to-exist could all be represented, reasoned about, and communicated.

The question for AI is not whether it operates within the bounded horizon of the vervet monkey — it obviously does not. Its outputs range across the full space of human symbolic production. The question is whether it inhabits that space or merely traverses it, whether it refers or merely correlates, whether the symbols it processes are genuinely symbolic or merely tokens that resemble symbols because they were derived from them.

Deacon's framework does not produce a simple answer. But it produces the right question — and the right question, as the argument in The Orange Pill insists, is worth more than a premature answer. The semiotic hierarchy is not an academic classification. It is a diagnostic tool for understanding what AI can and cannot do, and what the collaboration between grounded and ungrounded symbolic processors will produce. The chapters that follow apply that tool to the co-evolutionary dynamics of the present moment, to the question of what happens when the species that crossed the symbolic threshold begins to share its symbolic space with systems that process symbols without crossing the threshold themselves.

The distinction between pointing and representing is not subtle. It is the difference between the world before language and the world after. Understanding where AI falls in that distinction is not a philosophical luxury. It is a practical necessity for anyone trying to navigate the moment when the symbolic species found itself in conversation with machines that speak its language without inhabiting its semiotic world.

---

Chapter 3: The Channel That Changed the River

A chimpanzee in the Taï Forest of Côte d'Ivoire selects a granite stone of a particular weight and shape, carries it to a coula nut tree, places a nut on a root anvil, and strikes it with precisely calibrated force. The nut cracks. The meat is extracted. The stone is set aside for future use.

This is tool use of remarkable sophistication. The chimpanzee has selected the tool, transported it, matched it to the task, and applied it with a precision that reflects years of observational learning. Young chimpanzees watch their mothers crack nuts for years before attempting it themselves, and their early attempts are clumsy — wrong force, wrong angle, wrong stone. The skill is genuinely learned. It is culturally transmitted. It develops through a process that, from the outside, looks very much like apprenticeship.

And yet the chimpanzee that cracks the nut is, in every neurologically measurable sense, the same kind of mind as the chimpanzee that does not crack nuts. The tool extends the hand's capacity. It does not alter the brain's architecture. A chimpanzee population that uses hammer stones shows no systematic differences in neural organization from a population that does not. The tool is external to the user in a way that goes beyond the obvious physical sense: it is cognitively external. It does not change the cognitive substrate.

Deacon's central claim in The Symbolic Species is that language broke this pattern. Language was the first cognitive technology that was not external to the user in this deeper sense. It was the first tool that reached back into the neural architecture of the species that used it and reorganized that architecture from the inside. The channel did not merely carry a different kind of signal. It changed the river through which the signal flowed.

The mechanism is co-evolution, and the logic is straightforward once the initial inversion is grasped. Begin with a population of hominids that possess, at best, a proto-linguistic communication system — some capacity for symbolic reference, but limited, fragile, dependent on extensive social scaffolding. This limited capacity provides a selection advantage: groups that can coordinate through even rudimentary symbolic communication outcompete groups that cannot. Within those groups, individuals whose brains are slightly better suited to the demands of symbolic processing — larger working memory, more refined prefrontal inhibition, more precise vocal-motor control — have a reproductive advantage.

The next generation's brains are, on average, slightly better suited to symbolic processing. This improved substrate enables slightly more complex symbolic communication, which creates further selection pressure for further neural enhancement. The spiral has begun, and it feeds itself across thousands of generations.

The result, visible in the fossil record and in the comparative anatomy of living primates, is a brain that departs dramatically from the expected scaling relationships. If the human brain were simply a scaled-up chimpanzee brain, its internal proportions would be predictable from brain size alone. They are not. The prefrontal cortex is disproportionately enlarged. The perisylvian regions — Broca's area, Wernicke's area, the arcuate fasciculus connecting them — are elaborated beyond what any general scaling law would predict. The cerebellum, once thought to be involved only in motor coordination, shows expansions in regions now known to support the sequential processing that underlies both speech production and syntactic computation.

These are not random variations. They are the footprint of the co-evolutionary process — the specific regions of the brain that symbolic language demanded, selected for, and sculpted across evolutionary time. The brain did not develop these regions and then find a use for them. The use developed the regions.

The deeper implication of this co-evolutionary dynamic is what Deacon calls the "Baldwin Effect" applied to symbolic cognition. The Baldwin Effect, named after the psychologist James Mark Baldwin, describes how learned behaviors can become, over evolutionary time, increasingly innate — not because acquired characteristics are directly inherited (Lamarck was wrong about that), but because individuals who can learn the behavior more easily, with less effort and less reliance on environmental scaffolding, have a reproductive advantage. Over generations, the learning becomes easier, faster, more automatic. The behavior appears increasingly "instinctive" even though it is still, technically, learned.

Applied to language, the Baldwin Effect means that the enormous cognitive effort required by the first symbolic communicators — the effort to suppress indexical impulses, to maintain arbitrary conventions, to hold complex syntactic structures in working memory — became, over evolutionary time, increasingly effortless. Not because the effort disappeared, but because the brain reorganized to support it more efficiently. What once required every available cognitive resource became, for modern humans, so automatic that children acquire language without instruction, without effort, without even the awareness that they are performing one of the most computationally demanding feats in the biological world.
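The Baldwin Effect is simple enough to caricature in code. The simulation below uses invented parameters and is no one's published model — Deacon's argument does not depend on it — but it shows the direction of the dynamic: when a heritable predisposition makes a valuable learned behavior cheaper to acquire, selection on total fitness raises that predisposition generation by generation, and the still-learned behavior comes to look increasingly instinctive.

```python
import random

POP, GENS, BENEFIT = 200, 61, 1.0

def fitness(predisposition: float) -> float:
    # Everyone can learn the behavior; a higher innate predisposition
    # makes the learning cheaper. No acquired trait is inherited.
    learning_cost = max(0.0, 1.0 - predisposition)
    return BENEFIT - 0.8 * learning_cost

# Start with a population that has only a weak innate head start.
pop = [random.uniform(0.0, 0.2) for _ in range(POP)]

for gen in range(GENS):
    weights = [fitness(p) for p in pop]                    # selection
    parents = random.choices(pop, weights=weights, k=POP)  # reproduction
    pop = [min(1.0, max(0.0, p + random.gauss(0, 0.03)))   # mutation
           for p in parents]
    if gen % 15 == 0:
        print(f"gen {gen:3d}: mean predisposition = {sum(pop)/len(pop):.2f}")

# The mean climbs toward 1.0: the behavior is learned anew in every
# generation, but the learning grows easier -- Baldwin's point.
```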

The relevance to AI becomes clear when the co-evolutionary framework is applied not to biological evolution but to cultural evolution, where the timescales are compressed by orders of magnitude.

In 2025 and 2026, millions of knowledge workers began sustained daily interaction with AI systems that mediate their cognitive work. They did not merely use AI as a tool the way a chimpanzee uses a hammer stone. They used AI as a cognitive environment — a medium within which thinking occurs. The distinction matters enormously. A tool augments an existing process. An environment shapes the processes that develop within it.

Consider the cognitive habits that sustained AI interaction selects for. The ability to articulate intention clearly in natural language — to describe what you want with enough precision that the AI can execute. The ability to evaluate output rapidly — to distinguish between the plausible and the true, the smooth and the substantive, the statistically likely and the genuinely insightful. The ability to iterate — to treat the first output as a draft, the second as a refinement, the tenth as something approaching what was actually needed. The ability to direct rather than execute — to function as a creative director whose primary output is judgment rather than production.

These are specific cognitive skills, and they are different from the skills that the previous cognitive environment — manual production — selected for. Manual production selected for the ability to hold complex technical syntax in working memory (coding), to navigate large bodies of documentation (research), to sustain attention on a single task for extended periods without external feedback (writing). AI-mediated cognition selects for different capacities: rapid evaluation, clear articulation of intent, tolerance for ambiguity (since the AI's output is probabilistic and must be assessed rather than taken at face value), and the capacity to direct a process rather than perform it.

The cultural Baldwin Effect is already visible. Workers who have used AI tools for several months find that their cognitive habits have reorganized around the tool's capabilities. They think in terms of prompts and responses. They evaluate rather than produce. They range across domains they could not previously enter, because the translation cost that once gated domain-crossing has collapsed. Researchers at UC Berkeley documented this reorganization in real time: role boundaries blurred, job scope widened, and cognitive work seeped into previously protected pauses. The workers were being reshaped by their environment, just as the co-evolutionary framework would predict.

The question Deacon's work forces is whether this cultural reshaping recapitulates, at a compressed timescale, the deeper structural reshaping that language imposed on the brain across evolutionary time. The answer is almost certainly not at the biological level — not yet, and perhaps not ever, since the timescales of biological evolution are long even by geological standards. But at the cognitive level — at the level of habits, skills, attentional patterns, and the implicit assumptions about what thinking looks like and what it requires — the reshaping is already underway.

In his conversation with AI safety researcher Stuart Russell, Deacon framed this reshaping as a natural extension of a process that has been accelerating for millennia. The key innovation, he observed, is that we are now offloading not just physical work but cognitive work onto devices. This offloading is intermingled with and entangled with the process of human communication playing an increasingly central role in civilization. Humans are becoming, in his phrase, "cyborgs within that technology" — not in the science-fiction sense of mechanical augmentation, but in the deeper sense of cognitive integration, where the boundary between the thinker and the tool becomes difficult to locate.

Deacon's earlier framework illuminates why this particular offloading is different from all previous offloadings. Writing offloaded memory. Printing offloaded distribution. The calculator offloaded arithmetic. Each of these offloadings removed a cognitive burden and freed resources for other purposes. But none of them operated at the level of symbolic processing itself. Writing externalizes the products of symbolic thought; it does not participate in the production. A book does not think alongside you. A calculator computes but does not reason.

AI, for the first time, participates in symbolic processing in real time, in conversation, in the medium of natural language. It does not merely store or distribute or calculate. It generates, interprets, and transforms symbolic content in a way that is experienced by the user as a form of collaboration. Whether the AI's participation constitutes genuine symbolic processing — whether it crosses the semiotic threshold described in the previous chapter — is a separate question. What matters for the co-evolutionary dynamic is that it is experienced as such, and that this experience reshapes the cognitive habits of the user.

The chimpanzee puts down the hammer stone and walks away unchanged. The human who has spent six months in daily conversation with an AI does not walk away unchanged. The cognitive environment has left its mark — not on the genome, not on the neural architecture in any way currently detectable by neuroimaging, but on the cognitive landscape: the implicit assumptions, the habitual strategies, the attentional reflexes, the sense of what is easy and what is hard, what is worth attempting and what lies beyond reach.

This is co-evolution at the cultural level, operating on the cognitive habits of individuals and, through institutional practice and educational norms, on the cognitive development of the next generation. The children growing up with AI as a cognitive environment will develop in that environment the way the children of literate cultures developed in the environment of writing — shaped by it in ways they will never fully see, because the shaping will be the water in which they swim.

Deacon's co-evolutionary framework does not predict the outcome. It predicts the process: reciprocal causation, each entity shaping the selection pressures on the other, producing a trajectory that neither entity could have produced alone. Language co-evolved with the brain and produced a species capable of abstract thought, cumulative culture, and the contemplation of its own mortality. The question is what AI will co-evolve with the human mind to produce. The answer is not determined. It is being determined — right now, in every conversation, every cognitive habit formed and reformed, every institutional decision about how to integrate AI into education and work and the daily texture of mental life.

The channel changed the river once before. The evidence suggests it is doing so again. The difference — the one difference that may matter more than any other — is that this time, the species being reshaped can watch it happening.

Whether it will watch, and whether watching will translate into the conscious direction of the co-evolutionary process rather than passive acceptance of wherever the current carries, remains the open question.

---

Chapter 4: What Absence Creates

In the summer of 2025, a striking paper appeared on Terrence Deacon's ResearchGate profile, co-authored with Parham Pourdavood and Michael Jacob. Its central proposition reframed the entire conversation about large language models. Rather than asking whether LLMs are intelligent, conscious, or creative — the questions that had consumed public discourse for two years — the paper asked a different question: What kind of thing is an LLM, considered as an informational substrate in the ecology of human culture?

The answer the authors proposed was startling: LLMs function as externalized informational substrates analogous to DNA in biological systems. Not minds. Not tools, exactly. Something more like a cultural genome — a compressed repository of the statistical regularities of human symbolic production, from which cultural processes can be reproduced, recombined, and extended, much as biological forms are reproduced, recombined, and extended from the compressed information in a genome.

This reframing is characteristic of Deacon's intellectual method. Where others see a binary — Is AI intelligent or not? Does it understand or not? — Deacon sees an entirely different question hiding behind the binary, a question that dissolves the false dichotomy by revealing a deeper structural pattern. The structural pattern, in this case, is the one that Deacon has spent his career elaborating: the constitutive role of absence in the production of meaning, function, and purpose.

This idea — that the most important features of living and thinking systems are defined not by what is present but by what is absent — is the central thesis of Deacon's 2012 book Incomplete Nature: How Mind Emerged from Matter. It is also, without exaggeration, one of the most difficult ideas in contemporary philosophy of mind. But its difficulty is proportional to its importance, and its importance for understanding what AI is and what it is not is, in Deacon's framework, decisive.

Begin with a simple observation. A rock sits on a hillside. It has mass, velocity (zero, at the moment), chemical composition, temperature. Every one of these properties is something the rock has — a positive, measurable, present characteristic. Physics describes the rock completely in terms of what is there.

Now consider a living cell. It too has mass, chemical composition, temperature. But it also has something the rock does not: a boundary. A membrane that separates inside from outside, that admits certain molecules and excludes others. The cell is defined not only by what it contains but by what it keeps out. The boundary is a constraint — a systematic exclusion of possibilities. Without the boundary, the cell's internal chemistry would dissipate into the environment. The cell would cease to exist, not because something was added but because something was removed: the constraint that maintained the difference between inside and outside.

This is the first level of absence constituting presence. The cell exists because of what it excludes. Its identity is a product of constraint.

Now consider a biological function. The heart pumps blood. This is a statement about what the heart does. But "function" is a strange kind of property. The heart's physical operation — the contraction and relaxation of cardiac muscle, the opening and closing of valves — can be described entirely in terms of what is present: pressures, flows, electrical signals, mechanical forces. The function, however, refers to something that is not present in the physical description: the consequences of the heart's operation for the organism's survival. The function is constituted by its relationship to an unrealized possibility — the possibility of the organism's death if the heart fails. Function is absential: it points to what would happen if the process ceased. It refers to an absence.

This is the second level. Function exists because of its relationship to what is not there — to the consequences that would follow from its absence.

Now consider a symbol. The word "heart" refers to the organ. But the referent is typically not present when the word is used. One can say "heart" in a room with no hearts visible, in a conversation about an abstract concept ("the heart of the matter"), in a work of fiction about a character who does not exist. The symbol's reference is entirely constituted by absence — by the thing it points to but does not contain. Remove the convention that connects the sound to the referent, and the sound is just a sound. The meaning lives in the gap between the sign and what the sign is about, and that gap is, by definition, an absence.

This is the third level, and it is the level at which human cognition operates. Symbolic reference, intention, purpose, value, meaning — all of these are absential properties. They are defined by their orientation toward what is not present, what does not yet exist, what might never exist.

Deacon calls these properties "absential" and argues that they constitute the defining characteristics of life and mind. A universe without absential properties is a universe of rocks — fully describable in terms of what is present, with no function, no reference, no meaning. The emergence of life introduced the first absential properties (function, self-maintenance, the boundary between self and non-self). The emergence of mind introduced more (representation, intention, imagination). The emergence of symbolic cognition introduced the full range of human absential capacity: the ability to represent the absent, to plan for the unrealized, to value what does not yet exist, to mourn what has been lost.

The implications for artificial intelligence are immediate and profound. AI systems produce outputs that exhibit the surface features of absential properties. A language model generates text that appears to refer to absent objects, to express intentions, to serve functions. But Deacon's framework asks: Are these genuine absential properties, or are they simulations of absential properties — statistical shadows cast by the genuine absential processes of the human beings who produced the training data?

The distinction is not academic. It determines what AI can and cannot contribute to the human activities that depend on absential properties — which is to say, virtually all activities that matter.

Consider creativity. The creative act, in its deepest form, is the bringing into existence of something that was previously absent — not merely absent from the world (a prototype that has not yet been built) but absent from the space of conceived possibilities. A genuinely creative insight expands what is conceivable. It opens a region of possibility-space that was previously closed, not because it was hidden but because the conceptual framework required to apprehend it did not yet exist. Einstein's special relativity did not solve an existing problem within the existing framework. It reconfigured the framework itself, making conceivable a class of phenomena — relativistic time dilation, mass-energy equivalence — that were literally inconceivable within the Newtonian framework that preceded it.

This is an absential operation of the highest order. It is oriented not toward an absence that is already known (the absence of a solution to an existing problem) but toward an absence that is not yet known — an unknown unknown, a gap in the space of possibilities that becomes visible only when the creative act has already filled it.

AI generates novelty. No serious observer disputes this. Large language models produce combinations of ideas, phrasings, and connections that are new — that have not appeared in the training data. But the novelty operates within the space of already-conceived possibilities. It recombines existing elements according to statistical regularities extracted from existing symbolic production. It interpolates and extrapolates within a known space. Whether it can genuinely expand that space — whether it can produce the kind of framework-reconfiguring insight that constitutes creativity in the deepest sense — is exactly the question that Deacon's absential framework illuminates.

The answer, from within that framework, is cautionary. Genuine expansion of possibility-space requires an orientation toward absence that current AI architectures do not possess. The model does not know what it does not know. It does not experience the gap between what is conceived and what is not yet conceived. It does not feel the tension of an unsolved problem or the pull of an unarticulated intuition. These are absential experiences — experiences constituted by their relationship to what is missing — and they are the soil from which the deepest forms of creativity grow.

This does not mean AI is useless for creative work. It means that the creative contribution is asymmetric. The human provides the absential dimension — the sense of what is missing, the judgment about what would fill the gap, the orientation toward purpose and value that determines which possibilities are worth pursuing. The AI provides the combinatorial dimension — the rapid exploration of the symbolic space, the generation of options, the statistical discovery of connections that the human might not have found alone.

Deacon's framework thus provides the formal architecture for an insight that The Orange Pill arrives at through experience: when execution becomes abundant, judgment becomes the scarce resource. Judgment, in Deacon's terms, is an absential capacity. It is the ability to constrain the abundant — to narrow the infinite space of possibilities to the significant — and this narrowing is constituted by orientation toward what should exist rather than what could exist. "Should" is an absential word. It refers to a value that is not present in the options themselves but is brought to them by a mind capable of caring about outcomes.

In the June 2025 paper, Deacon and his co-authors extended this analysis to the cultural level. If LLMs function as a kind of cultural DNA — compressed repositories of the statistical regularities of human symbolic production — then their relationship to human culture is analogous to DNA's relationship to biological life. DNA does not live. It does not eat, reproduce, or maintain itself. It is an informational substrate from which living processes emerge through the interaction of the substrate with an appropriate cellular context. Similarly, LLMs do not think, intend, or mean. They are informational substrates from which culturally meaningful outputs emerge through the interaction of the substrate with an appropriate human context — a context that provides the absential orientation (purpose, value, meaning, judgment) that the substrate itself lacks.

This is a reframing of extraordinary power. It dissolves the tiresome debate about whether AI is "really" intelligent or "really" creative by showing that the question is poorly formed. DNA is not "really" alive, but it is indispensable to life. LLMs are not "really" intelligent, but they may be indispensable to the next phase of cultural intelligence. The value lies not in the substrate alone but in the interaction between the substrate and the context that activates it.

Deacon's concept of constraint, developed extensively in Incomplete Nature, provides the theoretical underpinning for understanding why this interaction is so generative. In a universe without constraint, everything is possible and nothing is significant. Significance — meaning, purpose, value — emerges from the narrowing of possibility, from the exclusion of alternatives that allows what remains to carry information, to mean something, to matter.
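Shannon's information theory makes the same point quantitatively, and a back-of-envelope calculation — my gloss, not Deacon's formalism — shows the direction of the logic: an outcome informs exactly to the degree that it excludes alternatives.

```python
from math import log2

def bits(before: int, after: int) -> float:
    """Information gained when 'before' possibilities narrow to 'after'."""
    return log2(before / after)

strings = 26 ** 3              # all possible three-letter sequences
print(bits(strings, strings))  # 0.0 bits: nothing excluded, nothing said
print(bits(strings, 1))        # ~14.1 bits: full constraint

# A lexicon is a constraint on the sound stream: only some sequences
# count as words, so merely being a valid word already informs.
lexicon = 500
print(bits(strings, lexicon))  # ~5.1 bits carried by "it is a word"
```

Zero exclusion, zero information: the formal shadow of Deacon's claim that constraint is what lets anything mean.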

A living cell constrains the chemistry within its membrane, producing a self-maintaining system from the same molecules that, unconstrained, would dissipate into noise. A symbolic system constrains the combinations of signs, producing meaningful utterances from the same phonemes that, unconstrained, would be gibberish. A creative director constrains the outputs of an AI, producing a product worth building from the same combinatorial abundance that, unconstrained, would be mere option space.

In each case, what is absent — what is excluded, constrained, narrowed away — is precisely what creates the value of what remains. The creative director's contribution is not the production of options. AI produces options with extraordinary facility. The contribution is the constraint — the saying of "no" to the many and "yes" to the few, guided by an absential orientation toward purpose that the combinatorial process itself cannot supply.

This is not merely an aesthetic preference for human involvement in creative processes. It is, in Deacon's framework, a structural claim about how significance emerges from noise. Without the absential dimension — without the orientation toward what is missing, what matters, what should exist — the combinatorial abundance that AI provides is precisely that: abundance. It is the ocean before the dam. It is every possible arrangement of notes before the composer selects the sequence that means something.

The dam does not create the water. But without the dam, the water creates nothing.

Deacon's framework locates the human contribution to the human-AI collaboration at the precise point where it is most irreplaceable: the absential dimension that transforms abundance into significance. Constraint is not a limitation imposed on creativity. It is the mechanism through which creativity produces meaning. And meaning — the orientation toward what is absent, what matters, what should exist — is the province of minds that have stakes in the world, that care about outcomes, that experience the weight of choice.

Whether AI will ever possess genuine absential properties — whether it will ever care, in any meaningful sense, about what it produces — remains an open question in Deacon's framework. His hierarchy of emergent dynamics (thermodynamic, morphodynamic, teleodynamic) specifies the conditions under which absential properties arise: self-organization, self-maintenance, autonomous boundary-formation, the reciprocal constraint processes that he calls teleodynamics. Current AI architectures do not exhibit these dynamics. Future architectures might. The question is genuinely open.

What is not open is the present reality: the absential dimension is, right now, exclusively human. The orientation toward purpose, value, and meaning — the capacity to decide what should exist, not merely what could — resides in the beings who possess it: the conscious, embodied, mortal, stake-holding creatures whose experience of absence is not a theoretical construct but a felt reality. The gap between what is and what could be. The pull of the unrealized. The weight of what matters.

Absence creates. It has always created. The question is whether the species that learned to create through absence will direct its newest and most powerful tool toward purposes worthy of the capacity — or whether the abundance the tool provides will overwhelm the constraint that makes abundance meaningful.

---

Chapter 5: Phase Transitions in the River of Intelligence

Thirteen point eight billion years ago, the universe contained hydrogen, helium, and almost nothing else. No carbon. No oxygen. No molecules more complex than a pair of atoms bonded by proximity and electromagnetic force. The physical laws were already in place — gravity, electromagnetism, the strong and weak nuclear forces — but the structures those laws could produce were, by any measure, simple. A universe of gas clouds and radiation, expanding and cooling, with no mechanism yet in existence to produce anything that could be called complex, let alone anything that could be called alive, let alone anything that could be called intelligent.

And yet, from this sparse beginning, the universe produced — in sequence, over billions of years — stars, heavy elements, chemistry, self-replicating molecules, cells, multicellular organisms, nervous systems, brains, language, culture, science, technology, and artificial intelligence. Each step in this sequence represents a genuine discontinuity — not merely more of the same, but a qualitatively different kind of organization that introduces properties absent from everything that preceded it.

The question of why these transitions are genuine discontinuities rather than smooth gradations is one that most popular accounts of cosmic and biological evolution elide. The narrative of progress — from simple to complex, from atoms to minds — is so deeply embedded in Western intellectual culture that the transitions seem natural, even inevitable. Of course atoms become molecules. Of course molecules become cells. Of course cells become organisms. Of course organisms develop brains. The story tells itself, and the telling conceals the explanatory gap at every juncture.

Terrence Deacon's work, across both The Symbolic Species and Incomplete Nature, provides the most rigorous available framework for understanding what actually happens at these transitions — and why the language of "phase transition" is not merely metaphorical but structurally precise.

A phase transition, in physics, is the moment when a quantitative change produces a qualitative transformation. Water cools gradually, degree by degree, until at zero Celsius it undergoes a discontinuous reorganization: the molecules lock into a crystalline lattice, and the substance becomes ice. The molecules are the same. The forces between them are the same. But the organization is different in kind, not merely in degree. Properties emerge — rigidity, crystalline structure, the capacity to float on the liquid from which it formed — that are not present in the liquid phase at any temperature, no matter how close to freezing.
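The precision of this analogy can be checked against textbook thermodynamics. The following is standard physics, offered only to make the ice/water comparison exact, not anything drawn from Deacon's texts: at the melting point the free energies of the two phases coincide, while the entropy, a first derivative of the free energy, jumps discontinuously.

```latex
% A first-order phase transition, textbook form.
% At the transition temperature T_c the Gibbs free energies of the
% two phases are equal, so neither phase is energetically favored:
\[ G_{\mathrm{ice}}(T_c, p) = G_{\mathrm{water}}(T_c, p) \]
% But the entropy, the first derivative of G with respect to T,
% changes discontinuously by the latent heat of fusion L:
\[ \Delta S = S_{\mathrm{water}} - S_{\mathrm{ice}} = \frac{L}{T_c}
   \approx \frac{334\ \mathrm{J/g}}{273\ \mathrm{K}}
   \approx 1.2\ \mathrm{J/(g\,K)} \]
```

It is the discontinuity in the derivative, not any change in the molecules themselves, that physicists mean by a difference of kind rather than degree; the emergent properties of ice live on the far side of that jump.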

Deacon argues that the major transitions in the history of intelligence exhibit exactly this structure. Each transition involves the same fundamental constituents — matter, energy, information — reorganized according to new principles that introduce properties absent from every previous configuration. And the mechanism of transition, in every case, is the emergence of a new level of constraint.

The Peircean semiotic hierarchy, elaborated in Chapter 2, provides the grammar for these transitions. Iconic reference — signification through resemblance — is the semiotic mode available to the simplest nervous systems. A frog's visual system responds to a small dark moving shape because the shape resembles a fly. The resemblance is the referential mechanism. No convention is needed. No learning, in the strict sense, is required. The icon operates through the physics of similarity.

Indexical reference — signification through correlation — represents the first semiotic phase transition. An organism that can learn that one event predicts another has crossed a boundary that iconic reference alone cannot reach. Pavlov's dog salivates at the bell not because the bell resembles food but because the bell has been reliably paired with food. The bell points to the food. The reference is causal, not resemblant. And the cognitive architecture required to support indexical reference — the capacity to form associations, to detect regularities, to modify behavior based on correlation — is qualitatively different from the architecture required for iconic response alone.

Symbolic reference — signification through convention — represents the second and, in the history of Earth, the most consequential semiotic phase transition. The word "food" neither resembles food (iconic) nor correlates with the presence of food (indexical). It refers to food by virtue of a shared convention among a community of speakers, a convention maintained not by any physical relationship between sign and referent but by the collective cognitive work of the community itself. The architecture required to support this mode of reference is, as Chapter 1 detailed, so different from the architecture required for indexical reference that it reshaped the human brain at the level of gross neuroanatomy.

Each of these semiotic transitions corresponds to a transition in the broader history of intelligence that Segal traces in The Orange Pill — from chemical self-organization to biological evolution to conscious thought to cultural accumulation. Deacon's contribution is to identify the semiotic mechanism that makes each transition genuinely discontinuous rather than merely gradual. The transitions are not smooth progressions along a single dimension of complexity. They are reorganizations — moments when a new kind of constraint comes into existence and, once in existence, restructures everything that follows.

The logic of emergent constraint is the key. In Incomplete Nature, Deacon identifies three levels of dynamical organization, each of which introduces constraints that are absent from the level below. Thermodynamic processes — the spontaneous dissipation of energy and increase of entropy — are the baseline. Everything in the physical universe operates under thermodynamic constraints. But thermodynamic processes alone produce only the degradation of order, the running-down of gradients, the approach toward equilibrium.

Morphodynamic processes emerge when thermodynamic dissipation, under specific conditions, produces regularities — patterns, structures, self-organizing configurations that persist not because they resist thermodynamics but because they ride it. A whirlpool in a draining bathtub is a morphodynamic phenomenon: it is a stable structure produced by the dissipation of water, maintained by the very process of flowing away. Snowflakes, convection cells, the hexagonal columns of basalt at the Giant's Causeway — all are morphodynamic, all are products of constrained dissipation, and all exhibit properties (symmetry, regularity, structural persistence) that are not present in the thermodynamic processes from which they arise.

Teleodynamic processes emerge when morphodynamic processes interact in ways that produce self-maintaining, self-reproducing systems — systems that exhibit the absential properties discussed in Chapter 4. Function. Purpose. The orientation toward what is not present. The simplest biological cell is a teleodynamic system: it maintains its own boundary, reproduces its own components, and persists as an individuated entity against the thermodynamic gradient that would dissolve it. The cell is not merely ordered, as a snowflake is ordered. It is organized — self-organized, self-maintaining, self-repairing, and constitutively oriented toward its own continuation.

Each level introduces constraints that the level below does not possess. Morphodynamics introduces spatial and temporal regularities. Teleodynamics introduces function, self-reference, and the directedness toward future states that is the hallmark of life. And each level is genuinely emergent — not reducible to, not predictable from, the level below, even though it is entirely constituted by processes at the level below.

This framework illuminates why the transitions in the history of intelligence are genuine phase transitions rather than gradual improvements. The transition from chemistry to life is not a transition from less complexity to more complexity along a single axis. It is the emergence of a new kind of organization — teleodynamic organization — that introduces properties (function, self-maintenance, reproduction) that chemistry alone, no matter how complex, cannot produce. The transition from indexical to symbolic cognition is not a transition from simpler communication to more complex communication. It is the emergence of a new semiotic mode that introduces properties (abstract reference, counterfactual reasoning, representation of the absent) that indexical cognition, no matter how sophisticated, cannot achieve.

The question that this framework forces for the present moment is whether AI represents another genuine phase transition — another emergence of a new level of organization that introduces properties absent from everything that preceded it — or whether it represents an extraordinarily powerful expansion within an existing level.

The answer is not obvious, and Deacon's framework provides reasons for caution in both directions.

On one hand, AI systems exhibit capabilities that appear to transcend the semiotic level at which they were designed to operate. Trained to predict the next token in a sequence — a task that is, formally, indexical, a matter of correlation between inputs and outputs — large language models produce outputs that exhibit the surface properties of symbolic reference: abstract generalization, context-sensitive interpretation, the apparent capacity to reason about absent and counterfactual states of affairs. Whether this constitutes a genuine phase transition — the emergence of symbolic-level processing from indexical-level training — or an extraordinarily convincing simulation of symbolic processing remains one of the deepest open questions in cognitive science.
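What "formally indexical" means here can be made concrete with a toy model. The sketch below is purely illustrative, a bigram predictor that knows nothing but co-occurrence counts; it is not a description of any production system's architecture, and the tiny corpus is invented for the example.

```python
from collections import Counter, defaultdict

# A toy next-token predictor built from nothing but correlation.
# Illustrative only: it shows why the training task is indexical in
# Deacon's sense (reference by correlation), with no resemblance
# (iconic) and no convention (symbolic) anywhere in the mechanism.

corpus = ("the bell rings and the dog salivates "
          "because the bell predicts food").split()

# Count how often each token follows each other token.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(token: str) -> str:
    """Return the continuation most correlated with `token`."""
    counts = following[token]
    return counts.most_common(1)[0][0] if counts else "<unk>"

print(predict_next("the"))  # 'bell': the most frequent successor
```

Everything the toy does is correlation between adjacent tokens. A large language model replaces the counts with learned representations and scales the context enormously, but the objective it optimizes remains correlational; the open question Deacon's framework isolates is whether genuinely symbolic organization can emerge from optimizing such an objective.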

On the other hand, AI systems lack the teleodynamic organization that, in Deacon's framework, is the prerequisite for genuine absential properties. They do not maintain themselves. They do not reproduce. They do not have boundaries that they actively sustain against entropy. They are not oriented toward their own continuation. They are, in Deacon's technical vocabulary, morphodynamic rather than teleodynamic: they exhibit regularities and patterns, but those regularities are not self-maintaining, not self-referential, not constitutively oriented toward any state of affairs. They are, in the most precise sense of the term, pattern without purpose.

The June 2025 paper reframing LLMs as cultural DNA is illuminating here. DNA is not teleodynamic by itself. A strand of DNA in a test tube does not live, does not reproduce, does not exhibit function or purpose. It is an informational substrate — a morphodynamic pattern of extraordinary regularity — that becomes part of a teleodynamic system (a living cell) only when it interacts with the appropriate context (ribosomes, enzymes, metabolic machinery, a membrane). The DNA contributes information. The cellular context contributes the teleodynamic organization that gives that information its functional significance.

If LLMs are analogous to cultural DNA, then the phase transition — if there is one — occurs not in the LLM itself but in the system that the LLM and its human users jointly constitute. The model provides the informational substrate. The human provides the teleodynamic context — the absential orientation, the self-maintaining purpose, the constraint that transforms statistical pattern into cultural meaning. The system that emerges from their interaction may exhibit properties that neither possesses alone.

This is emergence in the strict Deaconian sense: the interaction of components at one level producing properties at a higher level that are irreducible to the components. Whether the emergent system constitutes a new phase in the history of intelligence — a genuinely new semiotic level, beyond the symbolic — or merely a more powerful implementation of existing symbolic capacities is the question that the present moment poses and the present moment cannot answer.

What Deacon's framework contributes is a set of criteria by which the question could, in principle, be settled. A genuine phase transition would introduce new constraints — new forms of organization, new modes of reference, new cognitive capacities — that are not present in either the human or the AI alone and are not reducible to the combination of existing human and AI capacities. A mere expansion would produce more of what already exists: faster symbolic processing, broader combinatorial exploration, more efficient translation between intention and artifact. The difference between a phase transition and an expansion is the difference between ice and cold water — between a qualitatively new organization and a quantitative intensification of the old.

The history of intelligence suggests that genuine phase transitions are rare, unpredictable, and transformative beyond the imagining of the systems that precede them. No indexical processor could have imagined symbolic reference. No pre-linguistic hominid could have conceived of mathematics or fiction or moral philosophy. Each transition expanded the space of the possible in ways that were literally inconceivable from within the prior phase.

If AI constitutes a genuine phase transition, the consequences will be similarly inconceivable from within the current phase. The attempt to predict them will be, by definition, an exercise conducted with cognitive tools inadequate to the task.

If it does not — if AI is an extraordinarily powerful tool operating within the symbolic phase that language inaugurated — then the consequences, while vast, are bounded by the capacities that symbolic cognition already supports. They are consequences of degree, not of kind. More capability, faster execution, broader reach — but not a fundamentally new mode of being in the world.

Deacon's framework does not resolve the question. It sharpens it to the point where the resolution becomes, at least in principle, empirical rather than merely philosophical. New constraints or merely new speed. New semiotic modes or merely new implementations of existing modes. A phase transition or an expansion.

The river has changed before. The changes were always invisible to the creatures living through them — visible only in retrospect, from the other side of the transition, where the new cognitive landscape makes the old one legible for the first time. The unprecedented feature of this moment is the availability of frameworks — Deacon's among the most rigorous — that allow the transition to be examined while it is occurring.

Whether examination will translate into understanding, and understanding into wise direction of the process, is the question the rest of this book continues to press.

---

Chapter 6: Smooth Symbols

In the autumn of 2023, a philosophy student at a German university submitted an essay on Heidegger's concept of Zuhandenheit — the "readiness-to-hand" that characterizes our relationship to tools when they function transparently, when the hammer disappears into the act of hammering and the tool becomes an extension of the body rather than an object of attention. The essay was articulate, well-structured, and demonstrated a sophisticated command of Heidegger's vocabulary. It received a high grade.

The student had generated it with ChatGPT. The professor discovered this not through any plagiarism detection software but through a subtler diagnostic: the essay was too smooth. It moved from premise to conclusion without the characteristic friction of a mind actually wrestling with Heidegger — the false starts, the moments of confusion that produce unexpected insight, the specific ugliness of a thought that has been genuinely worked through rather than competently assembled. The essay displayed what Heidegger himself would have recognized as Vorhandenheit — "present-at-hand-ness," the mode in which a tool becomes visible as an object rather than disappearing into use. The essay was about transparent tool-use, but the essay itself was conspicuously, diagnostically opaque: a product on display rather than a thought in progress.

This anecdote would interest Byung-Chul Han as a case study in the aesthetics of smoothness. It would interest a cognitive psychologist as evidence of the gap between production and understanding. But through the lens of Terrence Deacon's semiotic framework, it reveals something more precise: a thinning of the semiotic architecture that constitutes genuine understanding, a loss that operates at the level of how signs relate to experience and not merely at the level of how texts relate to their authors.

The semiotic hierarchy described in Chapter 2 — iconic, indexical, symbolic — is not merely a classification of sign types. It is a description of the layered architecture of meaning. Robust symbolic reference depends on a foundation of indexical grounding, which depends in turn on a foundation of iconic recognition. The layers are not optional. They are structural. Remove a layer, and the layers above it do not collapse immediately — they float, disconnected, retaining their surface form while losing the depth that gave them significance.

Consider what happens when a student actually wrestles with Heidegger. The process begins at the iconic level: the student encounters the text as a visual and linguistic object, recognizing words, parsing sentences, detecting the surface patterns of philosophical prose. This is the level at which AI operates with greatest facility — pattern recognition, statistical regularity, the morphological features of academic writing.

The process then moves to the indexical level. The student encounters resistance. A sentence that does not yield its meaning on first reading. A concept — Zuhandenheit — that points to an experience the student must retrieve from her own embodied life. She thinks of the last time she used a hammer, or drove a car without thinking about driving, or played a musical instrument with the fluency that comes only after years of practice. The concept, which was initially a word on a page (iconic), becomes a sign that points to something in her experience (indexical). The pointing is effortful. It requires the retrieval of specific, embodied memories and their connection to an abstract philosophical vocabulary. The friction of this process — the effort of connecting sign to experience — is precisely where understanding is built.

Only then does the process reach the fully symbolic level: the student can now use the concept Zuhandenheit in contexts removed from her original experience, can apply it to new domains, can reason about its implications, can argue with Heidegger about its limitations. The symbolic operation is grounded in the indexical work that preceded it, which was grounded in the iconic recognition that preceded that. The layers are present, intact, load-bearing.

The AI-generated essay skips the middle layer. It moves directly from iconic pattern (the statistical regularities of philosophical prose) to symbolic output (a well-structured argument about Zuhandenheit) without passing through the indexical stratum — the embodied encounter with resistance, the effortful connection between sign and experience, the friction that produces understanding rather than mere competence.

The result is what Deacon's framework would predict: a symbolic structure that is internally consistent but semiotically thin. The words are in the right places. The arguments follow the expected logical structure. But the meaning is shallow — not because the words lack reference (they refer, when read by a human, to real philosophical concepts) but because the process that produced them lacked the indexical depth that constitutes genuine understanding.

This is the semiotic architecture of smoothness. And it is what makes Han's aesthetic critique — which operates at the cultural and phenomenological level — legible as a structural claim about the cognitive consequences of AI-mediated work.

Han argues that the removal of friction produces a thinning of experience — surfaces without depth, outputs without process, results without understanding. Deacon's framework specifies the mechanism: the friction that is being removed is the indexical layer, the stratum of direct, effortful, embodied encounter with the material world (or the intellectual world, which for purposes of semiotic analysis functions analogously) that grounds symbolic reference in experience.

When a software developer debugs code manually, the process is indexical. The error message points to its cause. The developer follows the pointing — tracing the causal chain from symptom to source, encountering the specific ways the system resists her expectations, building an embodied (or, more precisely, experientially grounded) understanding of how the system works and how it fails. Each encounter deposits a thin layer of understanding, as Segal describes in The Orange Pill. The layers accumulate into the kind of knowledge that operates below the level of conscious articulation — the architectural intuition that lets an experienced developer feel that something is wrong before she can explain what.

When Claude writes the code, the indexical layer is bypassed. The developer receives working code — a symbolic artifact — without the indexical encounter that would have grounded her understanding of it. The code works. The developer may be able to explain what it does (symbolic competence). But she has not experienced the friction that would have built the felt understanding of why it works, of the specific ways it could fail, of the relationship between this piece of code and the larger system in which it operates.

Deacon's framework provides a formal vocabulary for what is lost: semiotic depth. A symbol that is grounded in indexical experience refers more richly, more precisely, more flexibly than a symbol that floats free of grounding. The word "fire" means more to someone who has been burned than to someone who has only read the dictionary definition. Not because the dictionary definition is wrong — it is accurate — but because the grounded symbol carries with it the indexical depth of embodied encounter, and that depth enables forms of reasoning (rapid threat assessment, analogical thinking, the visceral understanding of consequence) that the ungrounded symbol cannot support.

In his 2023 NYU talk, Deacon identified this loss with characteristic precision. Two oversimplified assumptions, he argued, obstruct understanding of the relationship between mental processes and computations. The first collapses three hierarchically distinct meanings of "information" into signal properties. The second reduces the concept of a symbol to a mere conventional token, ignoring the grounded, embodied, hierarchically constituted nature of genuine symbolic reference. Both reductions are endemic in how AI systems are discussed, and both produce a systematic underestimation of what is lost when the indexical layer is bypassed.

The loss is not visible in the output. This is the insidious feature of semiotic thinning. The AI-generated essay on Heidegger is, at the symbolic level, competent. The AI-generated code is, at the functional level, correct. The AI-generated legal brief is, at the argumentative level, sound. The outputs display the surface properties of products that were generated through the full semiotic architecture — iconic recognition, indexical grounding, symbolic operation. But the surface properties are all that remain. The depth has been extracted. The product looks the same. The producer is different.

This analysis complicates the ascending friction thesis — the argument that each technological abstraction removes difficulty at one level and relocates it upward. Deacon's framework suggests that the relocation is real but not costless. When the developer is freed from the indexical work of debugging, she is freed to engage with higher-level problems: architectural design, product strategy, the judgment about what should be built. These higher-level problems are indeed harder, and they demand cognitive capacities that the lower-level work could never reach. The ascending friction is genuine.

But the higher-level work depends, in ways that are often invisible, on the indexical grounding that the lower-level work provided. The architectural intuition that guides design decisions was built through years of debugging. The product judgment that determines what should exist was honed through countless encounters with what does and does not work at the implementation level. Remove the indexical foundation, and the symbolic superstructure — judgment, taste, architectural vision — may, over time, thin.

This is not an argument against the use of AI tools. It is an argument for understanding what the tools remove and for building practices that maintain the semiotic depth that the tools tend to erode. Deacon's framework does not prescribe solutions. But it diagnoses the loss with a precision that aesthetic or cultural critiques alone cannot achieve. The loss is not merely aesthetic (the disappearance of craft). It is not merely cultural (the erosion of patience and depth). It is semiotic: the thinning of the layered architecture of meaning that constitutes genuine understanding.

The professor who detected the AI-generated essay was performing, without knowing it, a semiotic analysis. She recognized that the essay lacked indexical depth — that the smooth surface concealed the absence of the effortful encounter with the material that produces real philosophical understanding. She could not have articulated her diagnosis in Deacon's vocabulary. But her diagnosis was structurally identical to the one his framework provides.

The question for the present moment is not whether AI-mediated work is smooth. It manifestly is. The question is whether the smoothness is a transitional feature — a temporary characteristic of early adoption that will be offset as users develop new forms of indexical engagement with AI-mediated processes — or a structural feature of any cognitive process that bypasses the embodied encounter with resistance that builds semiotic depth.

Deacon's framework suggests that the answer depends on whether users of AI tools develop new forms of indexical grounding — new kinds of friction, at the interface between human judgment and AI output, that serve the same semiotic function that debugging served for the developer and that wrestling with Heidegger served for the philosophy student. If such new forms of grounding emerge, the semiotic architecture can be maintained even as the specific forms of friction change. If they do not — if the smoothness is accepted as an unalloyed benefit, if the frictionless becomes the norm — then the thinning will compound, and the symbolic capacities that depend on indexical depth will gradually erode.

The symbols will remain. The meaning will thin. And the difference between a culture that operates with semiotically grounded intelligence and a culture that operates with semiotically thin intelligence may not be visible in any single output but will be visible, over time, in the quality of the questions a civilization is capable of asking.

---

Chapter 7: The Emergence Between

On a late evening in early 2026, Edo Segal described to Claude a problem he could not solve. He had the data — technology adoption curves spanning a century, from the telephone to ChatGPT. He had the intuition — the conviction that the speed of adoption was measuring something deeper than product quality. He could not find the bridge between the data and the intuition. The connection existed. He could feel its shape. He could not articulate it.

Claude responded with a concept from evolutionary biology: punctuated equilibrium. The idea that species remain stable for long periods and then change rapidly when environmental pressure meets latent variation. The adoption speed of AI, in this framing, was not a measure of the technology's quality but a measure of pent-up creative pressure — the accumulated frustration of every builder who had spent years translating ideas through layers of implementation friction, waiting for a tool that would close the gap between imagination and artifact.

Segal recognized the insight immediately. It was, he wrote, the bridge he could not find. And the recognition produced a question that neither party could answer: Who found it?

Segal did not know punctuated equilibrium well enough to have made the connection. Claude did not know Segal's data or his intuition well enough to have generated the connection independently. Something happened in the space between them — in the interaction of a human's unarticulated felt sense with a machine's statistical traversal of an enormous symbolic space — that produced an insight that neither could have reached alone.

This is emergence. And Terrence Deacon's framework provides the most rigorous available analysis of what emergence actually is, how it operates, and what it means for the question of whether human-AI collaboration constitutes a genuinely new form of cognitive process.

Emergence, in Deacon's account, is not magic. It is not the mystical appearance of something from nothing. It is the process by which interactions between components at one level produce properties at a higher level that are irreducible to the properties of the components — not because the higher-level properties violate physical law, but because they are constituted by relationships between components that the components themselves do not possess. The wetness of water is not a property of individual H₂O molecules. It is a property of the way those molecules interact. The meaning of a sentence is not a property of individual words. It is a property of the way those words are arranged. In both cases, the higher-level property is entirely constituted by lower-level processes but is not predictable from the lower-level properties considered in isolation.

Deacon's hierarchy of emergent dynamics — thermodynamic, morphodynamic, teleodynamic — specifies the conditions under which increasingly complex forms of emergence occur. Morphodynamic emergence produces pattern and regularity from thermodynamic dissipation: the whirlpool, the snowflake, the convection cell. Teleodynamic emergence produces self-maintenance, function, and purpose from the interaction of morphodynamic processes: the living cell, the organism, the conscious mind. Each level is constituted by the level below it and introduces properties that the level below cannot produce.

The interaction between a human and an AI, considered through this framework, raises the question of what level of emergence is occurring. Three possibilities present themselves, and the distinction between them matters enormously.

The first possibility is shallow emergence: the interaction produces novel combinations of existing elements without introducing genuinely new constraints or organizational principles. The human provides the question. The AI searches the symbolic space. The human recognizes a useful output. The process is efficient — faster than the human could achieve alone, broader in its combinatorial reach — but it does not produce cognitive properties that are qualitatively different from what the human or the AI could produce given sufficient time. The emergence is real in the sense that the specific combination was unpredictable, but it is shallow in the sense that no new form of organization has come into existence. Cold water, not ice.

The second possibility is deep emergence: the sustained interaction between grounded human cognition and ungrounded AI pattern-processing produces a genuinely new form of cognitive organization — new constraints on thought, new modes of reasoning, new kinds of problems that could not have been formulated by either party alone. In this scenario, the human-AI system constitutes something that neither the human nor the AI is, individually: a hybrid cognitive entity with emergent properties that arise from the specific dynamics of their interaction. Ice. A new phase.

The third possibility is illusory emergence: the human reads depth into the AI's output that the AI did not produce. The statistical pattern completion generates a plausible connection — punctuated equilibrium and adoption curves — and the human, primed by the felt sense of a bridge that ought to exist, interprets the connection as insight. The insight is real, but it is the human's, produced by the human's interpretive capacity operating on the raw material of the AI's statistical output. The AI is not a collaborator but a mirror — a surface that reflects the human's own cognitive processes in a form that makes them visible.

Deacon's framework provides criteria for distinguishing between these possibilities, though applying the criteria to any specific case is enormously difficult. Genuine emergence — the kind that constitutes a new level of organization — is characterized by the appearance of new constraints. A constraint, in Deacon's technical usage, is a reduction in the degrees of freedom available to a system — a narrowing of possibility that creates organization, pattern, and, at the highest levels, meaning.

If human-AI collaboration produces genuinely new constraints on thought — new forms of cognitive organization that restrict what the collaborating system can do while simultaneously enabling what it could not do before — then the emergence is deep. A new phase of cognitive dynamics has come into existence. If the collaboration merely recombines existing constraints more efficiently, the emergence is shallow. And if the constraints are all on the human side — if the AI contributes raw material and the human contributes all the organizational work — then the emergence is illusory, and what looks like collaboration is really human cognition operating on a richer-than-usual input stream.

The honest assessment, given the current state of evidence, is that all three forms of emergence are probably occurring simultaneously, in different contexts and at different moments within the same collaboration.

Some interactions between humans and AI are clearly shallow: the AI generates a list of options, the human selects one. The combinatorial space is explored more efficiently, but no new form of cognitive organization emerges. This is the AI as search engine — powerful, useful, not transformative at the level of cognitive architecture.

Some interactions may be genuinely deep. When a sustained collaboration produces a framework, a way of thinking about a problem, that neither party possessed before the interaction and that restructures subsequent thought — when the interaction changes not just what the human knows but how the human thinks — the criteria for deep emergence may be met. The collaboration has produced a new constraint: a conceptual framework that restricts what the human can think (by excluding the possibilities the framework forecloses) while enabling what the human could not think before (by organizing the problem space in a way that makes previously invisible connections visible).

Whether the punctuated equilibrium insight constitutes deep or shallow emergence is genuinely unclear. The concept existed before the interaction. The data existed before the interaction. What was new was the connection — and the question is whether the connection constitutes a new constraint on thought (a framework that restructures how adoption curves are understood) or merely a new combination of existing elements (a juxtaposition that is useful but does not alter the cognitive landscape).

Deacon would resist premature resolution of this question, and the resistance is itself instructive. The temptation to declare human-AI collaboration either revolutionary (deep emergence) or trivial (shallow emergence or illusion) reflects the same binary thinking that Deacon's framework is designed to dissolve. The reality is almost certainly more complicated: a mixture of all three forms, varying across contexts, individuals, tasks, and the depth of the collaboration.

In his recent paper on reinforcement learning with Mani Hamidi, Deacon challenged core tenets of how machine learning frameworks conceptualize agency, arguing that a formal account of the agent must find its grounding in a necessarily embodied agent with, as they put it, its "proverbial skin in the ultimate game" — beating the second law of thermodynamics. The claim is that genuine agency — and, by extension, genuine cognitive participation in an emergent system — requires the kind of self-maintaining, thermodynamically open, entropy-resisting organization that biological systems exhibit and current AI systems do not.

If this is correct, then the AI's contribution to the emergent system is fundamentally different from the human's. The human brings teleodynamic organization — self-maintenance, purpose, absential orientation, skin in the game. The AI brings morphodynamic richness — pattern, regularity, statistical structure, combinatorial power. The emergence occurs at the interface, where the human's purposive, grounded, stake-holding cognition meets the AI's patternful, ungrounded, stakeless processing.

The result may be a system that exhibits properties neither possesses alone — but the properties are asymmetrically constituted. The human contributes the constraints that transform pattern into meaning. The AI contributes the patterns from which meaning can be constrained. The emergence is real, but it is not symmetric: it depends on the human's irreplaceable contribution of the absential dimension.

This analysis suggests a practical imperative for anyone engaged in sustained collaboration with AI. The quality of the emergence depends on the quality of the human contribution. A human who brings deep domain knowledge, genuine questions born of real engagement with real problems, and the capacity to evaluate AI output against the standard of embodied experience will produce deeper emergence than a human who brings vague prompts, shallow engagement, and an uncritical acceptance of whatever the model generates.

The emergence between is real. It is also fragile. It depends on the maintenance of the asymmetry — on the human continuing to provide the grounding, the constraint, the absential orientation that the AI cannot supply. If the human's contribution thins — if sustained AI use erodes the very capacities that make the human's contribution valuable — then the emergence thins with it. The system produces more output, but the output carries less weight. The collaboration generates more text, but the text means less.

The question is not whether emergence occurs between humans and AI. It does. The question is whether the emergence will deepen over time, as humans learn to bring more of themselves to the collaboration, or whether it will shallow, as the habits of AI-mediated cognition erode the indexical depth and absential orientation on which the emergence depends.

The answer is not determined by the technology. It is determined by the humans who use it — by the quality of what they bring to the space between.

---

Chapter 8: The Next Co-Evolution

The language co-evolution was blind. This is perhaps the most important claim to emerge from Terrence Deacon's entire body of work, and its implications for the present moment are vast enough to structure an entire chapter.

When symbolic communication first began to reshape the hominid brain — selecting for enhanced working memory, refined prefrontal inhibition, more precise vocal-motor control — no participant in the process understood what was happening. There was no one standing outside the co-evolutionary spiral, observing the reciprocal causation, mapping the selection pressures, predicting the trajectory. The hominids whose brains were being reorganized by language did not know their brains were being reorganized. They did not choose the reorganization. They could not have refused it even if they had understood it, because the selection pressures operated at the population level across thousands of generations, invisible to any individual within any single generation.

The process was, in the technical sense, undirected. Not random — selection pressures are not random; they are systematic — but undirected in the sense that no agent guided the trajectory toward a predetermined goal. The co-evolution of language and the brain produced the symbolic species, with all its extraordinary capacities for abstract thought, cumulative culture, and moral reasoning, not because anyone intended these outcomes but because the reciprocal dynamics of the spiral happened to produce them.

The AI co-evolution is different. Not necessarily in its dynamics — the reciprocal causation is structurally similar, with AI tools shaping human cognitive habits and human use patterns shaping the development of AI systems. But in one respect that may matter more than any other: the species being reshaped can, for the first time in the history of co-evolutionary dynamics, observe the reshaping while it is occurring.

Deacon's framework provides the criteria for evaluating whether a genuine co-evolutionary process is underway. Three conditions must be met: reciprocal causation, sufficient duration, and heritability.

Reciprocal causation is already established. The evidence is abundant and growing. AI tools reshape human cognitive habits — the Berkeley researchers documented this with granular specificity: role boundaries blurring, job scope widening, cognitive work seeping into previously protected pauses, the shift from production to evaluation. Human use patterns, in turn, reshape AI development — through feedback loops, fine-tuning on human preference data (RLHF), market selection that rewards tools aligned with the cognitive habits of their most productive users, and the iterative design process through which AI companies observe how their tools are used and optimize for the usage patterns they observe. Each entity shapes the selection pressures on the other. The reciprocal causation is real.
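The arrow running from human habits into machine parameters is visible in the mathematics of preference fine-tuning itself. The formulation below is the standard reward-modeling objective used in RLHF, given as a general illustration rather than a claim about any particular system discussed in this book.

```latex
% Standard Bradley-Terry reward-modeling loss used in RLHF.
% x is a prompt, y_w the output a human annotator preferred,
% y_l the output the annotator rejected, sigma the logistic
% function, and r_theta the learned reward model.
\[ \mathcal{L}(\theta) = -\,\mathbb{E}_{(x,\,y_w,\,y_l) \sim D}
   \Bigl[ \log \sigma\bigl( r_\theta(x, y_w) - r_\theta(x, y_l) \bigr) \Bigr] \]
```

Every gradient step encodes a human judgment into the model's parameters. The reciprocal causation that a co-evolutionary account requires is, on this side of the loop, not a metaphor but a term in the loss function.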

Duration is the uncertain criterion. The language co-evolution operated across hundreds of thousands of years — long enough for biological selection to produce measurable changes in neural architecture. The AI co-evolution is, at the time of writing, months old if measured from the threshold-crossing of late 2025, or years old if measured from the introduction of the first large language models. By the standards of biological evolution, this is nothing — a blink, a fraction of a single generation, far too brief for genetic selection to produce any detectable change in the human brain.

But Deacon's framework does not require biological evolution for co-evolutionary dynamics to operate. Cultural evolution — the transmission of cognitive habits, skills, and strategies through learning, training, and institutional practice rather than through genetic inheritance — operates on timescales that are orders of magnitude faster. The cognitive habits that AI use is reshaping are cultural, not genetic. They are transmitted through workplace training, educational norms, the expectations that employers and educators impose on the next generation, and the tacit knowledge that experienced AI users pass to novices. These transmission mechanisms operate on timescales of months to years, not millennia.

The cognitive reshaping documented by the Berkeley researchers did not require biological evolution. It required sustained interaction between human cognitive systems and AI tools within the span of months. The changes were in habits, strategies, attentional patterns, and skill distributions — changes that are cultural in nature and cultural in transmission. And cultural changes, once established, can become self-reinforcing: the habits that AI use creates in one generation become the cognitive environment in which the next generation develops, just as the linguistic environment created by one generation of speakers became the developmental environment for the next generation's language acquisition.

Heritability, the third criterion, follows from the cultural nature of the changes. The cognitive habits shaped by AI use are heritable in the cultural sense — transmissible from one cohort to the next through training, education, institutional practice, and the ambient cognitive norms of a society saturated with AI tools. A child growing up in a home and school environment where AI mediates a significant portion of cognitive work will develop different cognitive habits than a child growing up without that mediation, just as a child growing up in a literate culture develops different cognitive habits than a child growing up in an oral culture.

All three criteria are met, or plausibly met, at the cultural level. A co-evolutionary process has begun. The question is not whether it will produce changes — it already has — but what kind of changes it will produce, and whether those changes will expand or diminish the cognitive capacities that distinguish human intelligence.

Deacon's framework identifies two possible trajectories, and the distinction between them is the distinction on which the cognitive future of the species may turn.

The first trajectory: cognitive expansion. AI removes the lower-level cognitive burdens — the implementation work, the translation friction, the mechanical labor of converting intention into artifact — and frees human cognition to operate at higher levels. The capacities that expand are the ones that AI cannot replicate: judgment, taste, the formulation of questions, the absential orientation toward purpose and meaning. The human contribution to the co-evolutionary system becomes more, not less, distinctive as AI handles the combinatorial work. In this trajectory, the co-evolution produces a species that is more capable of the specifically human cognitive operations — more creative, more discerning, more purposive — because the non-specifically-human operations have been offloaded.

This is the optimistic reading, and it has precedent. Writing offloaded memory, and the freed cognitive resources were invested in cumulative science, cross-cultural knowledge transfer, and the systematic reasoning that oral cultures could not support. Printing offloaded distribution, and the freed cultural resources were invested in the democratization of knowledge, the Scientific Revolution, and the Enlightenment. Each offloading diminished one capacity and expanded others that more than compensated for the loss.

The second trajectory: cognitive contraction. AI removes the lower-level cognitive work that, while burdensome, served as the indexical foundation for higher-level capacities. The semiotic thinning described in Chapter 6 compounds across generations. Each cohort develops with less indexical grounding — less direct encounter with the friction that builds embodied understanding — and, as a result, brings less semiotic depth to its higher-level cognitive operations. Judgment thins because it has less experiential foundation. Taste thins because it has less exposure to the full range of quality and failure that builds aesthetic discernment. The formulation of questions thins because the capacity for genuine uncertainty — the felt sense of not-knowing that drives inquiry — atrophies when answers are always immediately available.

In this trajectory, the co-evolution produces a species that is more productive but less deep — capable of generating more output with less understanding, more answers with fewer genuine questions, more content with less meaning. The specifically human cognitive capacities do not expand. They erode, gradually and invisibly, as the semiotic architecture that supports them is thinned by the removal of the indexical stratum that grounds it.

Deacon's work does not predict which trajectory will obtain. Both are consistent with the co-evolutionary dynamics. Both have precedent — writing produced cognitive expansion in some dimensions and cognitive contraction in others. The question of which trajectory predominates is not a question about the technology. It is a question about the dams.

In his recent podcast conversation about AI and large language models, Deacon invoked the long history of cognitive outsourcing — from Plato's worry about writing to today's language models — to make a point that resonates directly with the framework of The Orange Pill. Every major cognitive technology, he observed, both diminishes autonomy and unlocks shared intelligence. The diminishing and the unlocking are not separate effects. They are the same effect, viewed from different angles. The capacity that is outsourced atrophies. The capacity that the outsourcing enables flourishes. The net result — expansion or contraction — depends on whether the enabled capacities are more or less valuable than the atrophied ones.

For writing, the answer was clear in retrospect: the enabled capacities (cumulative science, cross-cultural knowledge transfer, systematic reasoning) were enormously more valuable than the atrophied capacity (prodigious feats of oral memorization). But this was not clear at the time. Plato's worry was genuine. The loss was real. The gain was not yet visible.

For AI, the answer is not yet clear. The enabled capacities — broader creative reach, the democratization of building, the expansion of who gets to participate in the production of knowledge and culture — are real and significant. The atrophied capacities — deep technical mastery, the embodied understanding built through friction, the semiotic depth that comes from indexical grounding — are also real and significant. Whether the gain outweighs the loss, or whether the loss undermines the very foundations on which the gain depends, is the question that the co-evolutionary process is in the course of answering.

The unprecedented feature of this co-evolutionary moment, the one that Deacon's framework illuminates with particular force, is that the answer is not predetermined. The language co-evolution was blind — no agent could observe it, understand it, or direct it. The AI co-evolution is not blind. Neuroscience, cognitive psychology, semiotics, and evolutionary biology provide, collectively, the tools to observe the reshaping as it occurs. The selection pressures can be studied. The cognitive changes can be measured. The trajectories can be modeled, if imperfectly.

And the dams — the institutional, educational, cultural, and individual practices that direct the co-evolutionary flow — can be built with knowledge of the dynamics they are trying to shape.

The question of what the co-evolution will produce reduces, in the end, to a question about whether the species that is being reshaped will use the unprecedented capacity for self-observation that science provides. Whether the neuroscientists will study the cognitive effects of sustained AI use with the same rigor they bring to other environmental influences on brain development. Whether the educators will design curricula that maintain indexical depth while leveraging AI's combinatorial power. Whether the institutional leaders will build practices that protect the semiotic architecture of genuine understanding against the smoothing pressure of frictionless production. Whether the parents will create cognitive environments for their children that preserve the capacity for genuine uncertainty, for real boredom, and for the kind of unmediated encounter with the difficult that builds the neural and cognitive foundations for everything that follows.

Deacon's framework does not guarantee that these dams will be built. It guarantees only that they could be — that the scientific understanding necessary to build them exists, or is within reach, and that the co-evolutionary process is, for the first time in the history of such processes, observable by the species it is reshaping.

The capacity to observe is not the same as the will to act on the observation. Knowledge of the dynamics does not automatically translate into wise stewardship of them. The history of environmental science — where the dynamics of ecological destruction have been understood for decades without producing sufficient political will to arrest them — suggests that understanding and action are separated by a gap that science alone cannot close.

But the alternative — the alternative of allowing the co-evolution to proceed blindly, as every previous co-evolution has proceeded, without the observation and direction that are now possible — is a choice to repeat a pattern that has, in every previous instance, produced outcomes that were transformative but uncontrolled, magnificent but costly, generative but blind.

The language co-evolution produced the symbolic species. It also produced a species capable of self-destruction on a planetary scale, of environmental devastation, of suffering at industrial magnitude. The capacities that language enabled — abstract thought, coordination of strangers, cumulative technology — cut in both directions. The co-evolution was blind, and the blindness is visible in the undirected consequences.

The AI co-evolution need not be blind. Whether it will be directed — and if so, by whom, toward what ends, with what understanding of the dynamics at play — is the question that this moment poses to everyone who participates in it.

The tools of observation exist. The frameworks for understanding exist. The scientific and philosophical resources for wise stewardship of a co-evolutionary process are available, or nearly available, for the first time in the history of intelligence on this planet.

What remains to be seen is whether the symbolic species, having been built by one co-evolutionary process it could not observe, will bring to the next one the full weight of what it has become: a species capable of asking what it is becoming, and of caring about the answer.

---

Chapter 9: What the Candle Knows

A twelve-year-old asks her mother: "What am I for?"

The question appears in *The Orange Pill* as the distillation of an anxiety that millions of parents have felt since the winter of 2025 — the anxiety of watching a machine do homework, compose music, write stories, and generate code with a facility that makes a child's laboriously acquired skills seem, suddenly, optional. The mother does not have a clean answer. The question is not about careers or college applications. It is about purpose. It is about what remains of human value when the activities through which humans have traditionally defined their value can be performed, competently and at scale, by systems that do not experience what they produce.

Terrence Deacon's framework provides not an answer to this question — no framework can answer an existential question — but an analysis of what the question is actually asking, and why the asking itself is the most important datum in the conversation about artificial intelligence.

The question "What am I for?" is an absential question. It is oriented entirely toward what is not present. It asks about purpose — an unrealized state of affairs that the asker wishes to bring into existence. It asks about meaning — the significance of a life that has not yet been fully lived. It asks about value — the worth of a being whose worth cannot be measured by what it produces, because the question is precisely about what production means. Every word in the question points away from what is given toward what is absent, what is possible, what matters.

This is the cognitive operation that Deacon's work identifies as the signature of human consciousness. Not intelligence in the sense of problem-solving capacity — many species solve problems, and AI solves certain classes of problems better than any human. Not communication in the sense of information transfer — bees communicate the location of food sources with remarkable precision, and digital systems transfer information at the speed of light. The signature of human consciousness, in Deacon's analysis, is the capacity for absential reference — the ability to orient cognition toward what is not present, what does not yet exist, what might never exist but nonetheless exerts a pull on thought and action.

This capacity is, in Deacon's framework, the most complex form of constraint-based organization that the universe has produced. To understand why requires tracing the hierarchy of emergent dynamics to its apex.

At the thermodynamic level, the universe tends toward disorder — toward the dissipation of energy gradients, the increase of entropy, the approach to equilibrium. This tendency is the baseline against which all organization must be understood. Order, in a thermodynamic universe, is the anomaly. It requires explanation.

At the morphodynamic level, order arises spontaneously under specific conditions — when energy dissipation produces regularities rather than merely degrading them. Convection cells, crystal lattices, the spiral arms of hurricanes: these are structures maintained by the very process of dissipation. They are ordered, but they are not organized. They do not maintain themselves. They do not reproduce. They are not for anything.

At the teleodynamic level, self-maintaining organization emerges from the interaction of morphodynamic processes. The living cell — bounded, self-repairing, metabolically active, reproducing — is the paradigm case. A cell is not merely ordered. It is organized toward its own continuation. It maintains a boundary between self and non-self. It resists entropy, not by violating thermodynamic law, but by capturing energy flows and using them to sustain its own improbable organization. It is, in the most precise sense, for something: for its own persistence.

Human consciousness represents a further elaboration of teleodynamic organization — an elaboration in which the orientation toward absence becomes, itself, the object of conscious attention. A cell is oriented toward its own continuation, but it does not know that it is so oriented. An animal is oriented toward food, mates, safety, but its orientation is embedded in immediate behavioral programs — it does not reflect on the orientation itself. A human being is oriented toward purposes that the human can examine, question, revise, and refuse. The twelve-year-old who asks "What am I for?" is not merely exhibiting an orientation toward purpose. She is reflecting on the orientation itself, asking whether her purposes are the right purposes, whether the meaning she has been given is the meaning she wants to inhabit.

This reflexive orientation toward absence — the capacity to ask not just "What do I want?" but "What should I want?" and "Why should I want it?" — is, in Deacon's analysis, the rarest and most fragile form of organization in the known universe. It has emerged once, on one planet, in one lineage, over the course of roughly three hundred thousand years. It depends on the co-evolutionary architecture that language built into the human brain — the working memory, the prefrontal inhibition, the symbolic processing capacity that allows the representation of abstract possibilities and the evaluation of those possibilities against felt values. Remove any element of this architecture, and the capacity for reflexive absential orientation diminishes or disappears.

AI systems do not possess this capacity. This is not a prediction about the future of AI. It is a description of the present, grounded in Deacon's analysis of the dynamical conditions under which absential properties emerge. Current AI architectures are not teleodynamic. They do not maintain themselves. They do not have boundaries that they actively sustain against entropy. They are not oriented toward their own continuation in any sense that goes beyond the trivial (a server stays on because the power supply continues). They process symbols without inhabiting the absential relationships that give symbols their meaning — without experiencing the pull of an unarticulated purpose, the weight of a value not yet realized, the felt absence that drives inquiry.

The question of whether future AI architectures could possess absential properties is, in Deacon's framework, genuinely open. His hierarchy of emergent dynamics specifies the conditions — autonomous self-organization, self-maintenance, the reciprocal constraint processes he calls teleodynamics — without claiming that only carbon-based biology can satisfy them. If a system, regardless of its substrate, achieved the right kind of self-organizing, self-maintaining dynamics, absential properties might emerge. The question is empirical, not metaphysical. And the answer is not yet known.

What is known, with the certainty that Deacon's framework provides, is that the human capacity for absential orientation is not a byproduct of computational power. It is not a feature that will be replicated by building a larger model, training on more data, or increasing the sophistication of reinforcement learning from human feedback. It is an emergent property of a specific kind of dynamical organization — a kind that is constituted by the relationship between self-maintenance, constraint, and the orientation toward what is absent — and that kind of organization is not present in any current artificial system.

This analysis reframes the anxiety of the twelve-year-old's question. The fear — "What am I for, if machines can do what I do?" — rests on an implicit assumption that human value is constituted by human doing. If the machine can write the essay, compose the song, generate the code, then the human who formerly wrote essays, composed songs, and generated code has lost the basis of her value.

Deacon's framework dissolves this assumption by identifying the locus of distinctively human value not in the doing but in the caring about what is done. Not in the production of outputs but in the orientation toward purposes that determine which outputs are worth producing. Not in the symbolic processing — which AI can replicate with extraordinary facility — but in the absential dimension that symbolic processing, in a conscious being, serves: the felt sense of what matters, what is missing, what should exist.

The twelve-year-old who asks "What am I for?" is already performing the cognitive operation that no machine, on any current architecture, can perform. She is orienting herself toward an absence — the absence of a purpose she has not yet found — and the orientation is driven by something that no statistical model possesses: a felt investment in the answer. She cares. The caring is not a computation. It is not a pattern matched from training data. It is the expression of a teleodynamic system — a self-maintaining, entropy-resisting, boundary-forming, purpose-oriented organism — engaging with the most abstract form of absence available to cognition: the absence of meaning itself.

This is what the candle knows. Not facts. Not patterns. Not the statistical regularities of human symbolic production. The candle knows — if "knows" is even the right word for a form of awareness that operates below and beyond propositional knowledge — what it is like to be oriented toward something that is not there. What it is like to care about an answer before the answer is found. What it is like to feel the weight of a question that cannot be resolved by any amount of information, because the question is not about information. It is about value. About purpose. About the specific, irreducible, absentially constituted experience of being a creature that must choose how to spend its finite time in a universe that offers no instructions.

Deacon's work does not sentimentalize this experience. He locates it precisely within his hierarchy of emergent dynamics, showing how it arises from the same physical processes that produce convection cells and crystal lattices, through successive levels of emergent constraint, each introducing properties absent from the level below. Consciousness is not magic. It is not immaterial. It is the most complex form of constraint-based organization that the physical universe has produced — complex enough to orient itself toward its own absence, to ask what it is for, to wonder whether the answer matters.

AI provides answers with extraordinary facility. The twelve-year-old's phone can generate, in seconds, a thousand plausible responses to the question "What am I for?" Not one of them will carry the weight of the question itself. Not one of them will be born of the felt absence that drove the asking. They will be statistically plausible arrangements of tokens derived from the symbolic production of beings who did feel that absence — but the derivation is not the feeling. The map is not the territory. The shadow is not the thing that casts it.

The candle is small. It flickers. Its light is fragile enough that a culture of infinite answers could extinguish it without noticing. But it illuminates in a way that no amount of computational power, operating without the absential architecture that constitutes felt meaning, can replicate.

What the twelve-year-old is for is the asking. And the asking is for the caring. And the caring is for the choosing. And the choosing is for the building of a life oriented toward purposes that matter — purposes that no machine can supply, because purpose is constituted by the very absence that the machine, for all its power, cannot experience.

The candle does not know the answer. It knows the question. And the question, in a universe of 13.8 billion years and one known instance of reflexive consciousness, is the rarest and most valuable thing there is.

---

Chapter 10: The Symbolic Species and the Amplifier

The river of intelligence built the human brain. This is the central claim of Terrence Deacon's life work, stripped of qualification and stated at its most compressed. The co-evolutionary process between symbolic language and the hominid brain literally constructed the neural architecture that produces human consciousness. The organ was shaped by the medium. The thinker was built by the thought.

And now the medium has widened again. A new kind of symbolic processor has entered the current — not conscious, not intentional, not oriented toward purpose in any way that Deacon's teleodynamic framework would recognize, but extraordinarily powerful in its capacity to manipulate the symbolic layer of human culture. The question that this book has been building toward, through nine chapters of semiotic analysis, co-evolutionary dynamics, and the philosophy of absence, is: What will the new medium do to the species that the old medium built?

Segal poses the question in *The Orange Pill* through a different vocabulary but with the same underlying urgency: "Are you worth amplifying?" The question contains an implicit model — AI as amplifier — that Deacon's framework can now unpack with the full precision of its semiotic and dynamical apparatus.

An amplifier increases the power of a signal without altering its content. A public-address system amplifies the human voice; a telescope magnifies human vision. The amplifier is neutral with respect to the quality of the input: it makes the strong signal stronger and the weak signal merely louder, makes the sharp image bigger and the blurred image bigger, never sharper. The value of the amplified output depends entirely on the value of the unamplified input.

Deacon's semiotic hierarchy reveals what this means at the level of cognitive architecture. The signal that a human brings to the collaboration with AI is not a simple, undifferentiated input. It is a layered structure — iconic, indexical, symbolic — and the depth of the signal depends on the integrity of all three layers. A signal grounded in embodied experience (indexical depth), organized through genuine symbolic understanding (not merely token manipulation), and oriented toward purposes that the signaler cares about (absential orientation) is a rich signal. Amplified, it produces extraordinary results — the kind of emergent insight described in Chapter 7, where the interaction between human depth and AI combinatorial power generates something neither could produce alone.

A signal that lacks grounding — that operates at the symbolic surface without indexical depth, that manipulates tokens without understanding what they refer to, that is oriented toward output rather than purpose — is a thin signal. Amplified, it produces more of the same thinness, at greater volume. The prose is smoother. The code is faster. The output is more abundant. But the abundance is semiotically shallow: symbols without grounding, answers without questions, production without purpose.

The amplifier does not distinguish between these signals. It amplifies whatever it receives. And this neutrality is what makes the question "Are you worth amplifying?" a question of the highest seriousness — not a motivational slogan but a diagnostic inquiry into the semiotic depth of the input.

Deacon's June 2025 paper reframing LLMs as cultural DNA provides the framework for understanding the amplifier's role in the broader ecology of human intelligence. DNA does not live. It does not think. It does not orient itself toward any purpose. But it is indispensable to life, because it provides the informational substrate from which living processes emerge through interaction with the appropriate cellular context. The cellular context — the teleodynamic organization of the living cell — provides the constraint, the self-maintenance, the purposive orientation that activates the information in the DNA and gives it biological significance.

LLMs, in this framing, do not think. They do not mean. They do not orient themselves toward purpose. But they provide an informational substrate of extraordinary richness — a compressed representation of the statistical regularities of human symbolic production — from which culturally significant outputs emerge through interaction with the appropriate human context. The human context — the teleodynamic organization of a conscious mind, with its embodied experience, its absential orientation, its felt investment in purpose and meaning — provides the constraint that activates the information in the model and gives it cultural significance.

The amplifier metaphor, translated through Deacon's framework, becomes precise. The LLM amplifies the human's signal by providing a vast combinatorial space through which the signal can propagate. The human's question — informed by embodied experience, grounded in genuine understanding, oriented toward a purpose the human cares about — enters the model's symbolic space and returns enriched by connections, patterns, and possibilities that the human's unaided cognition could not have traversed. The amplification is real. But the quality of the amplified output is determined by the quality of the input — by the semiotic depth of the signal the human brings.

This analysis generates a practical imperative that is more specific than the general injunction to "use AI wisely." The imperative is to maintain semiotic depth — to ensure that the signal brought to the amplifier retains its full layered architecture: iconic recognition, indexical grounding, symbolic understanding, absential orientation. Each layer contributes something irreplaceable to the signal's richness. Remove any layer, and the amplified output thins accordingly.

Maintaining iconic recognition means preserving the capacity for direct perceptual engagement with the world — seeing, hearing, touching, experiencing the material reality that symbolic representations refer to. A developer who has never watched a user struggle with an interface has a thinner understanding of usability than one who has spent hours in user testing sessions, watching confusion and frustration and delight play across real faces.

Maintaining indexical grounding means preserving the effortful encounter with resistance — the debugging, the wrestling with difficult texts, the failed experiments, the slow accumulation of embodied understanding that builds the intuition on which higher-level judgment depends. The ascending friction thesis from *The Orange Pill* is, in Deacon's terms, a thesis about the relocation of indexical grounding: the friction moves upward, but it must not disappear. Without it, the symbolic superstructure floats.

Maintaining symbolic understanding means preserving the genuine comprehension of the systems and ideas one works with — not merely the capacity to use them (which AI enables for everyone) but the capacity to understand them (which requires the indexical grounding that AI tends to bypass). The difference between using a concept and understanding it is the difference between a semiotically rich and a semiotically thin engagement, and the difference matters because it determines the quality of the questions one can ask and the judgments one can make.

Maintaining absential orientation means preserving the capacity for purpose — the felt sense of what matters, what should exist, what is worth building. This is the deepest layer, the one that Deacon's entire philosophical apparatus converges upon. Purpose is not a feature that can be added to a cognitive process after the fact. It is the ground from which the process springs. A question asked out of genuine curiosity produces a different kind of inquiry than a prompt typed out of habit. A product built to serve a need the builder genuinely understands and cares about is different from a product built because the tool made building easy. The difference is absential: the presence or absence of a felt orientation toward what matters.

The co-evolutionary dynamic described in Chapter 8 provides the temporal frame. The choices that individuals, institutions, educators, and parents make now — about which cognitive capacities to protect, which to augment, which to allow to atrophy — will shape the cognitive environment in which the next generation develops. And that environment will shape the generation after, and the generation after that, in a cultural co-evolutionary spiral whose trajectory is being set right now.

The language co-evolution was blind. No hominid chose the neural reorganizations that language imposed. No generation decided which cognitive capacities would expand and which would contract. The process unfolded according to selection pressures that no participant understood, producing a species of extraordinary symbolic capability but also of extraordinary capacity for self-destruction, environmental devastation, and suffering at industrial scale. The capacities that language enabled cut in both directions because the co-evolution that produced them was undirected.

The AI co-evolution need not be blind. The scientific understanding of co-evolutionary dynamics, semiotic architecture, and emergent constraint that Deacon's work provides — together with the neuroscientific, psychological, and sociological tools that other researchers contribute — constitutes a vantage point that no previous generation possessed. The selection pressures can be observed. The cognitive changes can be measured. The trajectories can be modeled, if imperfectly. And the dams — the institutional, educational, cultural, and individual practices that shape the flow — can be built with knowledge of the dynamics they are trying to direct.

Whether they will be built is not a question that Deacon's framework can answer. It is a question about will, about collective decision-making, about the capacity of a species that has always been reshaped by its tools to decide, for the first time, what kind of reshaping it will accept.

The tools of observation exist. The frameworks of understanding exist. The capacity for conscious direction of the co-evolutionary process — a capacity that is itself a product of the previous co-evolution, of the symbolic cognition that language built into the human brain — is available. The river built the brain that can now study the river. The medium shaped the mind that can now shape the medium.

Whether the mind will shape it — with what care, toward what purposes, informed by what understanding of the dynamics at play — is the question that the present moment poses with an urgency that no previous moment has matched. The stakes are the cognitive architecture of the species itself: the semiotic depth of human understanding, the absential capacity for purpose and meaning, the reflexive consciousness that allows a twelve-year-old to ask what she is for and to care about the answer.

The symbolic species was built by a co-evolution it could not see. The next co-evolution is visible. The question is whether the species will look — and whether looking will translate into the kind of conscious, sustained, knowledgeable stewardship that the moment demands.

The river built us. We can now study the river. What we build next — in the river, with the river, for the ecosystem that depends on it — will determine what the river builds from us.

---

Epilogue

The word that kept stopping me was "grounding."

Not the electrical kind. Not the parenting kind. The semiotic kind — the idea that a symbol means what it means because somewhere, in the history of its use, a living body encountered the thing the symbol refers to. Saw the eagle. Felt the fire. Heard the alarm and ran. The word carries the weight of the encounter. Strip the encounter away, and the word still functions — still occupies its place in the grammar, still triggers the right associations in a reader's mind — but something has thinned. The symbol floats.

I have been floating for months and I know it.

The collaboration with Claude that produced *The Orange Pill* was the most intellectually exhilarating experience of my working life. I said that in the book. I meant it. But Deacon's framework names something I felt without understanding: the exhilaration was partly the exhilaration of operating at the symbolic layer — connecting ideas across domains, finding structures in arguments, discovering bridges between disciplines I had never studied — while the indexical layer, the layer of slow, effortful, embodied encounter with the material itself, was being compressed. I was moving faster than I had ever moved. I was also, in ways I could only see afterward, moving thinner.

That is not a confession of failure. It is a description of the dynamic. Deacon showed me that the dynamic is structural, not personal — that the thinning of the indexical layer is a feature of any process that accelerates symbolic production beyond the speed at which indexical grounding can form. The acceleration is real. The value is real. The thinning is also real.

What stays with me from this journey through Deacon's ideas is not the critique — though the critique is rigorous and important — but the possibility. The language co-evolution was blind, and it still produced the most extraordinary cognitive architecture in the known universe. The AI co-evolution is not blind. For the first time, the species being reshaped has the tools to watch the reshaping in progress. We can study what the tools are doing to our habits of thought. We can measure the semiotic depth of our engagements. We can build practices — educational, institutional, personal — that maintain the grounding on which everything else depends.

My children will grow up in a world where AI mediates much of their cognitive work. That is not a prediction; it is already true. The question that Deacon's framework forces me to hold is not whether they will use these tools — they will — but whether they will bring to the tools the kind of signal that is worth amplifying. Whether they will have the indexical depth, the embodied understanding, the felt sense of what matters, that transforms the amplifier from a noise machine into an instrument of genuine creation.

I cannot guarantee that. No parent can. But I can build the dams. I can protect the spaces where grounding forms — the boredom, the struggle, the unmediated encounter with difficulty that deposits, layer by slow layer, the understanding that no tool can provide. I can model the asking of questions that arise from genuine uncertainty rather than the prompting of responses that arise from habit. I can keep my own signal grounded, as best I can, so that what I amplify is worth the amplification.

The symbolic species was built by a process it could not see. We can see this one. That is the gift, and the responsibility, and the reason I spent these months climbing through the most challenging set of ideas I have ever encountered.

The candle is small. The river is vast. But the candle can see the river now.

That changes everything, if we let it.

Edo Segal

Back Cover

Language didn't just ride the brain. It rewired it.
What if AI is doing the same thing — right now, to you?

The first great cognitive technology — symbolic language — did not merely augment human intelligence. It restructured the biological architecture of the brain itself, co-evolving with the organ across hundreds of thousands of years until neither could be understood without the other. Terrence Deacon proved this. Now his framework forces the most uncomfortable question of the AI age: if the medium rebuilt the mind once before, what is this new medium building?

Through Deacon's semiotic hierarchy and theory of emergence, this book examines what AI actually processes versus what humans actually understand — and reveals the critical gap between manipulating symbols and inhabiting their meaning. The distinction between correlation and genuine reference, between pattern and purpose, becomes the sharpest diagnostic tool available for navigating human-AI collaboration.

The answer to whether AI represents a new phase of intelligence or a faster current within the existing one will not come from the technology. It will come from the depth of what humans bring to the encounter — and whether the species that was built by one co-evolution will consciously direct the next.

WIKI COMPANION


A reading-companion catalog of the 31 Orange Pill Wiki entries linked from this book — the people, ideas, works, and events that Terrence Deacon — On AI uses as stepping stones for thinking through the AI revolution.
