By Edo Segal
The first thing I noticed was my legs.
Not my thinking. Not the argument taking shape on the screen. My legs. Seven hours into a late-night session with Claude, I stood up to get water and almost fell. Not dramatically. Just a half-second gap between intending to walk and walking, my body needing a moment to remember it existed in three dimensions rather than the two of the screen.
I have experienced this hundreds of times. Every builder who works long hours at a desk knows the feeling. I had always filed it under "stiffness" — a physical inconvenience, the body's complaint about a posture held too long. A minor tax on productive work.
Maxine Sheets-Johnstone would say I was classifying a cognitive event as a physical nuisance. And she would be right.
Sheets-Johnstone spent more than five decades building one argument: that thinking does not begin in the mind. It begins in the moving body. Not as metaphor. Not as wellness advice. As the actual, empirical, developmental origin of every cognitive capacity we possess. The infant does not first think and then reach. She reaches, and through reaching, discovers distance. She grasps, and through grasping, learns what objects are. The body builds the mind, layer by kinesthetic layer, and the layers do not disappear when language arrives. They persist as the foundation beneath every abstract thought we will ever have.
Every other thinker in this cycle engages AI at the level of language, economics, power, narrative, or ethics. Sheets-Johnstone engages it at the level of the body. And the body is where all of us actually live. Where the twelve-year-old lying in bed asking "what am I for?" lives, in the weight of her blanket and the warmth of the pillow against her face.
This is the lens I did not have when I wrote The Orange Pill. I wrote about ascending friction, about value migrating from execution to judgment. Sheets-Johnstone asks the question I missed: what sustains judgment when the body that built it stops moving? What happens to the kinesthetic foundation when the entire working day narrows to fingertips on keys and eyes on glass?
The answer is uncomfortable. The answer is that the foundation erodes — slowly, invisibly, beneath output that keeps looking impressive on every metric except the ones nobody measures.
This book is about the body you are sitting in right now. The one that has been still, probably, for longer than it should have been. The one that knows things your mind does not. The one that built every thought you have ever had and is asking, quietly, to be part of the conversation again.
Move before you read. Then read.
— Edo Segal × Opus 4.6
Maxine Sheets-Johnstone (1930–) is an American philosopher, phenomenologist, and interdisciplinary scholar whose career has spanned more than five decades at the intersection of philosophy, biology, evolutionary theory, and dance. Born in New York, she began her intellectual life as a dancer and dance scholar before turning to the philosophical investigation of movement, embodiment, and cognition. Her major works include *The Phenomenology of Dance* (1966), *The Roots of Thinking* (1990), *The Primacy of Movement* (1999, expanded 2011), and *The Corporeal Turn: An Interdisciplinary Reader* (2009). Sheets-Johnstone's central thesis — that cognition originates not in a disembodied mind but in the self-generated movement of the animate body — drew on converging evidence from phenomenology, developmental psychology, evolutionary biology, and neuroscience to challenge the Western philosophical tradition's separation of mind from body. Her concepts of animation, tactile-kinesthetic intelligence, and the kinesthetic foundations of thought have influenced fields ranging from cognitive science and philosophy of mind to dance studies and embodied artificial intelligence research. She has been affiliated with the University of Oregon throughout much of her career and remains one of the most significant voices in the embodied cognition tradition.
Western philosophy committed its original sin in ancient Greece, and the consequences are still compounding. When Plato divided reality into the realm of pure Forms and the degraded world of material bodies, he established an intellectual hierarchy that would persist for two and a half millennia: mind above body, abstraction above sensation, the immaterial soul above the meat that carries it around. Descartes sharpened the division into a clean ontological cut — res cogitans and res extensa, thinking substance and extended substance, mind and matter as categorically separate domains requiring no common explanation. The body became a machine. The mind became a ghost inhabiting that machine. And the task of philosophy became the study of the ghost while the machine was left to the physicians.
Maxine Sheets-Johnstone has spent more than five decades dismantling this inheritance, and her counter-thesis is as simple to state as it is radical in its implications: thinking does not begin in the mind. It begins in the moving body. Not metaphorically. Not as a poetic gesture toward holism. Literally. The cognitive processes that philosophy attributes to a disembodied mind — abstraction, inference, judgment, imagination — are elaborations of capacities that first appeared as bodily movement. The infant does not first think and then move. She moves, and through moving, discovers both the world and herself. Her reaching teaches her about distance. Her grasping teaches her about objects — their weight, texture, resistance, the way they yield or hold firm under the pressure of small fingers. Her falling teaches her about gravity, about the relationship between balance and imbalance, about what happens when support is withdrawn. These are not mere physical events that happen to a body while a mind watches from above. They are cognitive events. They are the body's way of knowing the world before language arrives to provide categories for the knowing.
This claim, developed across Sheets-Johnstone's major works — The Roots of Thinking in 1990; The Primacy of Movement in 1999, expanded in 2011; The Corporeal Turn in 2009 — rests on converging evidence from phenomenology, developmental psychology, evolutionary biology, and neuroscience. The evidence is not speculative. It is empirical, observable, replicable. Watch any infant. What you see is not a proto-mind waiting for language to activate it. What you see is intelligence in its primordial form: a body exploring the world through self-generated movement and building, layer by kinesthetic layer, the cognitive architecture that will eventually support abstract thought.
The developmental timeline is instructive. Jean Piaget documented what he called the sensorimotor stage — the first two years of life, during which the infant's cognitive development is entirely expressed through bodily action. The infant learns object permanence not by being told that objects continue to exist when unseen, but by reaching for a hidden toy and discovering it still there. She learns causality not through instruction but through pushing, pulling, banging — through acting on the world and observing the consequences of her action. Sheets-Johnstone's contribution was to recognize that Piaget's framework, revolutionary as it was, still treated bodily movement as a vehicle for cognitive development rather than as cognition itself. The sensorimotor stage, in Piaget's account, is something the child passes through on her way to the higher stages of concrete and formal operations, the way an insect passes through its larval form on the way to becoming a butterfly. Sheets-Johnstone's insistence is that the kinesthetic intelligence of the sensorimotor period is not transcended. It is retained. It persists as the foundation beneath every subsequent cognitive capacity, including the most abstract.
Consider the conceptual vocabulary that human beings use to describe their cognitive lives. To grasp an idea. To follow an argument. To weigh evidence. To hold a position. To stumble upon a discovery. To feel that an argument has force, that a claim is groundless, that a theory is well-supported or shaky. These are not accidental associations. They are traces of the kinesthetic origins of thought. George Lakoff and Mark Johnson, in Philosophy in the Flesh, documented hundreds of such conceptual metaphors and argued that they reveal the embodied structure of human thought. Sheets-Johnstone goes further. These are not metaphors at all, she argues — or rather, they are metaphors only in the technical linguistic sense. Experientially, they are direct descriptions. When a person says she cannot grasp a concept, the cognitive difficulty she reports is structurally continuous with the kinesthetic difficulty of grasping a physical object that resists the hand's grip. The neural systems activated in the struggle to understand overlap substantially with the neural systems activated in the struggle to grasp. The metaphor is not decorative. It is architectural. It reveals the building that the body built.
The implications for understanding artificial intelligence are immediate and severe.
A large language model processes the word grasp as a token in a statistical distribution. It has learned, through training on billions of human sentences, that grasp frequently co-occurs with words like concept, idea, meaning, understanding. It can use the word correctly in context. It can generate sentences in which grasp functions precisely as a human speaker would use it. But the word, as the model processes it, carries no kinesthetic history. There is no memory of actual grasping — no record of the infant's hand closing around an object, discovering its weight and texture, learning through the resistance of the material what it means for something to be held. The statistical association is there. The experiential substrate is not.
This distinction is not merely philosophical. It produces consequences that Sheets-Johnstone's framework can predict with precision. When a system operates on language stripped of its kinesthetic foundations, the system can produce outputs that are linguistically competent — that sound right, that follow the rules, that satisfy the statistical expectations of a trained reader. But the outputs lack the dimension of meaning that kinesthetic experience provides. They are words without weight. Sentences without the bodily resonance that gives human language its capacity to move a reader — and the word move is itself a kinesthetic term, pointing back to the body's foundational experience of being physically displaced.
The Orange Pill describes a Google engineer who sat down with Claude and described a complex system in three paragraphs, then watched the AI produce a working prototype in an hour. The engineer's reaction — "I am not joking, and this isn't funny" — carries the particular disorientation of a person who has just watched a machine accomplish something she believed required embodied expertise. And it did require embodied expertise — hers. The three paragraphs she wrote were compressed expressions of years of kinesthetic engagement with similar systems: the frustrations, the dead ends, the specific bodily feel of a codebase that was not working correctly. She had spent a year with her team on the problem. The compression into three paragraphs was not a casual summary. It was the distillation of embodied knowledge into language, and the distillation required the knowledge that only her body's engagement with the work had produced.
Claude received the language. It did not receive the knowledge. It received tokens. It produced tokens. The tokens happened to correspond to a functional system. But the correspondence was statistical, not kinesthetic. The system worked not because the AI understood the problem in the way the engineer understood it — through months of bodily engagement, through the felt sense of what was wrong and what might fix it — but because the patterns in the engineer's description activated patterns in the AI's training data that happened to converge on a working solution.
Sheets-Johnstone's framework reveals what is most unsettling about this success. The AI's output was correct. The AI's process lacked everything that Sheets-Johnstone identifies as foundational to genuine cognition: self-generated movement, kinesthetic memory, the body's engagement with resistant material, the developmental history of a living organism learning through its own animation. The output was right. The process was empty — empty of the very things that her life's work identifies as the origins of thought.
This creates a problem that the AI discourse has not adequately confronted. If the output is indistinguishable from what embodied cognition produces, does the absence of embodied process matter? Sheets-Johnstone's answer is unequivocal: yes, it matters — not because the individual output is deficient, but because the relationship between the human and the output has been altered in ways that compound over time. The engineer who produced the three paragraphs drew on kinesthetic intelligence built through years of practice. If she spends the next five years describing problems to Claude rather than engaging with systems directly, the kinesthetic intelligence that produced those three paragraphs will erode. Not because she has forgotten anything. Because the body that built the knowledge is no longer being asked to build.
The geological metaphor that Segal uses in The Orange Pill — each hour of debugging depositing a thin layer of understanding, the layers accumulating into something solid over years — is, from Sheets-Johnstone's perspective, a kinesthetic metaphor reaching for a kinesthetic truth. The layers are not purely cognitive. They are bodily. They are laid down through the specific experience of hands on a keyboard working through resistance, of postural tension accumulating as a problem refuses to yield, of the whole-body release when the solution finally arrives. These experiences are not incidental to the understanding. They are constitutive of it. Remove them, and you remove not just the difficulty but the substrate on which expertise is built.
Western philosophy's original sin was to treat the body as the mind's luggage — necessary for transport, irrelevant to the thinking. Sheets-Johnstone's corrective is not to reverse the hierarchy, placing body above mind, but to dissolve it entirely. There is no mind separate from the body that moves. There is no thought separate from the kinesthetic history that produced it. There is only the animate organism, moving through a world that resists, learning through its movement, building the cognitive architecture that later generations of philosophers would mistakenly attribute to a disembodied soul.
The age of artificial intelligence is, in this light, not the triumph of mind over body. It is the culmination of the error that separated them in the first place. A civilization that believed for two and a half millennia that thinking was something the mind did despite the body has now built machines that think without bodies at all — and is surprised to discover that something essential is missing from the output, something it cannot quite name, something that Sheets-Johnstone has spent her career describing with phenomenological precision.
The something is kinesthesia. The body's own awareness of its movement, its effort, its engagement with the world. The foundation on which every concept was originally built and from which every concept still draws its experiential weight.
Remove the foundation, and the building still stands. For now. But the ground beneath it is no longer there.
---
The most important distinction in the natural world is not between the simple and the complex, or between the small and the large, or between the intelligent and the unintelligent. It is between the animate and the inanimate — between things that move themselves and things that are moved.
A bacterium moves itself. It senses chemical gradients in its environment and swims toward nutrients, away from toxins. It does not merely respond to stimuli the way a thermostat responds to temperature. It generates its own movement from its own center of animation. The movement is the organism's own. The direction is chosen — not consciously, not reflectively, but from within. The bacterium is the author of its locomotion in a way that no thermostat, no matter how sophisticated, is the author of its temperature adjustments.
A stone does not move itself. It is moved — by gravity, by wind, by the hand that throws it. A river does not move itself in the phenomenological sense Sheets-Johnstone intends. It is moved by gravity and topography and the accumulation of water pressure. The appearance of self-movement is an observer's attribution. The water has no center of animation. It has no interiority from which movement is initiated. Sheets-Johnstone insists on this distinction with a persistence that some readers find maddening and others find revelatory, because the distinction is the hinge on which her entire philosophy turns.
In The Primacy of Movement, she traces the concept of animation back to its Latin root, anima — the breath, the soul, the life-force that distinguishes the living from the dead. But she strips the concept of its mystical connotations and grounds it in observable, empirical reality. Animation is self-generated movement. An animate being is a being that moves itself. This capacity — not information processing, not representation, not computation — is the foundational characteristic of living organisms and the ground from which all cognitive capacities emerge.
The claim has startling implications for how one understands artificial intelligence.
A large language model is, by Sheets-Johnstone's definition, inanimate. It does not move itself. It is moved — by electrical current, by input tokens, by the mathematical operations that its architecture performs on those tokens according to rules established during training. The fact that these operations are extraordinarily complex, that they produce outputs of remarkable sophistication, that the outputs are often indistinguishable from what an animate being would produce — none of this alters the fundamental ontological status of the system. It is moved. It does not move itself. It has no center of animation from which its operations are initiated.
This may sound like a technical distinction without practical consequence. Sheets-Johnstone argues it is anything but. The distinction between self-generated movement and being-moved is not a matter of philosophical categorization that can be set aside once the practical work of evaluating AI outputs begins. It is the distinction that determines whether genuine cognition is present — because cognition, in Sheets-Johnstone's framework, is a dimension of animation. It is something that animate organisms do. It evolved as an elaboration of self-generated movement. It retains its kinesthetic origins even at the highest levels of abstraction. To ask whether an inanimate system can genuinely think is, in this framework, a category error — like asking whether a stone can genuinely swim. The stone can be thrown through water. The trajectory may resemble swimming. But the stone does not swim, because swimming is something that animate organisms do, and the stone is not animate.
The 2024 theme issue of Philosophical Transactions of the Royal Society B, titled "Minds in Movement: Embodied Cognition in the Age of Artificial Intelligence," confronted this tension directly. The editors, Louise Barrett and Dietrich Stout, acknowledged that the success of large language models has amplified skepticism about embodied cognition: if completely disembodied systems can produce "human-like linguistic and perceptual behaviour," what remains of the claim that cognition requires a body? The question is fair; the success of LLMs is real and cannot be dismissed. What remains open is whether that success constitutes evidence against the embodied cognition thesis or evidence of something else entirely — evidence of how far statistical pattern-matching on language can go without touching the kinesthetic substrate from which that language originally grew.
Sheets-Johnstone's response, implicit in her published work though she has not addressed AI directly, would likely be that the success of LLMs reveals the power of language as a compressed representation of embodied experience — and simultaneously reveals the limits of that compression. The engineer's three paragraphs, to return to The Orange Pill's opening example, compressed years of embodied engagement into linguistic form. Claude decompressed those paragraphs into a working prototype. The prototype functioned. But the functioning was parasitic on the embodied knowledge that the engineer had compressed into language. Without her kinesthetic history — without the years of hands-on engagement that gave her the capacity to describe the problem in precisely the way that activated the right patterns in the AI's training data — the three paragraphs would not have existed. The AI's success depended on a human body's prior engagement with the world. It extended that engagement. It did not replace it.
The animate body does something that no inanimate system does, regardless of complexity: it cares. Not in the sentimental sense. In the phenomenological sense. The animate organism's engagement with the world is characterized by what Sheets-Johnstone calls affective-kinetic attunement — a responsiveness to the world's qualities that is cognitive, emotional, and motoric at once. The bacterium swimming toward a nutrient gradient is not merely processing information about chemical concentrations. It is moving toward something that matters to it — something relevant to its continued existence, its ongoing animation. The mattering is not added to the information processing as a separate step. It is built into the movement itself. The movement toward the nutrient is already an evaluative act — a living organism's active response to its situation, generated from its own center of animation.
Segal writes in The Orange Pill that consciousness "asks, wonders, cares" and identifies these capacities as the candle flickering in the darkness of an unconscious universe. Sheets-Johnstone's framework provides the deeper analysis: consciousness asks because the animate body encounters situations that demand response. It wonders because the animate body's engagement with the world produces genuine surprise — the felt discrepancy between expectation and outcome that only an organism with kinesthetic anticipation can experience. It cares because animation is inherently directional — the animate body moves toward and away from, and this directionality is the primordial form of valuation.
A machine that processes language does none of these things. It produces tokens that resemble the linguistic expressions of asking, wondering, and caring. The resemblance can be extraordinarily close. But resemblance and identity are different things, and the difference matters — not for the evaluation of individual outputs, which can be judged on their own terms, but for the understanding of what is happening when a human being collaborates with such a system over months and years.
Consider what happens when Segal describes his late-night sessions with Claude. He reports feeling "met" — not by a person, not by a consciousness, but by an intelligence that could hold his intention and return it clarified. The phenomenological structure of this experience is revealing. The human in the exchange is animate. His engagement is characterized by caring — about the quality of the output, about the ideas being explored, about whether the book serves the reader. The system's engagement is characterized by nothing, because the system does not engage. It processes. The feeling of being "met" is the human's attribution of animation to a system that produces outputs consistent with what an animate interlocutor would produce. The attribution is experientially real. It is ontologically false.
This is not a criticism of Segal. The attribution is natural, perhaps inevitable. Human beings evolved to detect agency — to perceive intention behind movement, to attribute animation to things that behave as animate organisms behave. The tendency is so deep and so automatic that it operates even when the person making the attribution knows it is unwarranted. Segal knows Claude is not conscious. He says so explicitly. And yet he reports feeling met. The kinesthetic vocabulary of encounter — of being met, of feeling someone holding your intention — persists even when the intellect disavows it. The body's systems for detecting animation, honed over hundreds of millions of years of evolutionary development, do not switch off because the prefrontal cortex has been informed that the interlocutor is a statistical model.
Sheets-Johnstone has argued that animation is not a binary category — animate or inanimate — but admits of degrees. A bacterium is animate differently from a jellyfish, which is animate differently from a dog, which is animate differently from a human being. The gradient runs from the simplest self-generated movement to the most complex forms of conscious self-awareness, and each level builds on the kinesthetic foundations laid by the levels below it. Human consciousness, in this framework, is the most elaborate known expression of a capacity that begins with the first self-moving cell.
Where does artificial intelligence fall on this gradient? Sheets-Johnstone's framework provides an unambiguous answer: nowhere. The gradient is a gradient of animation — of self-generated movement in an organism that has a center of animation, an interiority from which action is initiated, an affective-kinetic attunement to the world. AI has none of these things. It is not at the low end of the gradient. It is not on the gradient at all. It is a different kind of thing entirely — an extraordinarily sophisticated system for transforming inputs into outputs according to learned patterns, operating without animation, without kinesthesia, without the felt sense of its own movement through a world that resists and responds.
This does not make AI less useful. It does not make it less powerful. It does not invalidate the twenty-fold productivity gains that Segal describes, or the democratization of capability that follows from the collapse of the imagination-to-artifact ratio. It means that AI's usefulness is of a categorically different kind from the usefulness of a skilled human colleague. The colleague is animate. Her engagement with the problem carries the full weight of her kinesthetic history, her affective attunement, her bodily participation in the world. The AI is inanimate. Its processing carries statistical sophistication and nothing else.
The question is not whether the inanimate can produce useful outputs. It manifestly can. The question is what happens to the animate when it spends its working life in partnership with the inanimate — when the body that thinks is yoked to a system that processes without thinking, and the partnership gradually, imperceptibly, trains the body to mistake processing for thought.
That question drives everything that follows.
---
Before the first word, there was movement.
Long before the infant produces or comprehends language, she has already accomplished cognitive feats of staggering complexity, and she has accomplished them entirely through her body's movement in the world. By the time she speaks her first word, typically around twelve months of age, she has already learned that objects persist when unseen, that actions produce consequences, that her body occupies space and can change its position within that space, that surfaces vary in their capacity to support her weight, that some objects yield to pressure and others resist, that reaching extends her capacity to contact the world and grasping extends her capacity to hold it. She has learned all of this without instruction, without language, without representation in any form that a computational model would recognize. She has learned it through movement. Through the kinesthetic intelligence of a body discovering the world by acting upon it.
Sheets-Johnstone's analysis of infant cognition builds on Piaget but departs from him in a crucial respect. Piaget recognized the sensorimotor stage as the foundation of cognitive development. He documented with extraordinary precision the sequence of kinesthetic discoveries through which the infant builds her understanding of the world — the progressive elaboration of reaching, grasping, manipulating, the gradual construction of object permanence and spatial awareness and causal reasoning. But Piaget treated the sensorimotor stage as a stage — as something the child passes through on the way to the higher cognitive operations of childhood and adolescence. The body's intelligence, in Piaget's account, is scaffolding. Once the building of abstract thought is complete, the scaffolding can be removed.
Sheets-Johnstone's counter-argument is that the scaffolding is load-bearing. Remove it, and the building collapses — not immediately, not visibly, but through a progressive loss of structural integrity that manifests as a thinning of cognitive capacity, a shallowing of understanding, a reduction in the fullness of thought available to the person. The kinesthetic intelligence of the sensorimotor period is not transcended by language. It is retained beneath language, providing the experiential substrate that gives linguistic concepts their meaning. When a physicist speaks of a field of force, the concept draws its intelligibility from the body's experience of fields — of open spaces, of areas that can be traversed, of regions that have extent and boundary. When an economist speaks of market pressure, the concept derives its felt meaning from the body's kinesthetic experience of pressure — of being pushed, constrained, compressed by forces external to itself. The language is not decorative. It is not accidental. It is the trace of the body's foundational engagement with the world, preserved in the conceptual structure of every discipline.
The developmental evidence for this claim is extensive and converging. Infants as young as three months display what developmental psychologists call "violated expectation" responses — they look longer at events that are physically impossible (an object appearing to pass through a solid barrier, for instance) than at events that conform to physical law. This is not visual intelligence alone. The infant's expectation of solidity is built on her kinesthetic experience of solid objects — the resistance of the crib rail, the firmness of the floor, the way her own body encounters surfaces that do not yield. Her surprise at the impossible event is kinesthetic surprise: the violation of a bodily expectation, not merely a visual one.
By six months, infants adjust the force of their reaching to match the apparent weight of objects — reaching harder for objects that look heavy, softer for objects that look light. This calibration requires the body to have learned the relationship between visual appearance and kinesthetic effort, a relationship that can only be built through movement. The infant has reached for hundreds of objects by this age. Each reach has deposited a kinesthetic layer: this weight requires this effort; this texture produces this grip; this distance demands this extension of the arm. The layers are invisible. They are not stored in any form that language could capture. They are stored in the body's kinesthetic memory — in the patterns of muscular activation and proprioceptive feedback that constitute the body's knowledge of the world.
The implications for understanding what language is — and therefore what a large language model does — are profound.
Language, in Sheets-Johnstone's framework, is a secondary cognitive system built on top of a primary kinesthetic one. The infant learns language after she has already constructed an elaborate kinesthetic understanding of the world. Language does not create this understanding. It captures it, compresses it, makes it communicable. The word heavy does not teach the infant what heaviness is. She already knows what heaviness is from the kinesthetic experience of lifting objects that resist the upward effort of her arms. The word heavy gives her a way to communicate that knowledge, to share it, to abstract from the specific heaviness of this particular object to the general concept of heaviness that applies across all objects. The abstraction is genuine. The concept is real. But its meaning depends on the kinesthetic experience that preceded it. Without the body's prior engagement with heavy things, the word would be empty — a token without referent, a symbol pointing at nothing.
When Claude generates a sentence containing the word heavy, it is manipulating a token whose statistical associations were learned from billions of human sentences in which the word appears. The model has learned that heavy co-occurs with burden, lifting, weight, responsibility, heart, rain, and thousands of other words and contexts. It can use the word with remarkable precision. It can distinguish between physical heaviness and metaphorical heaviness, between the heaviness of a stone and the heaviness of a decision, between heavy rain and heavy metal. The distinctions are captured in the statistical structure of the training data, and the model navigates that structure with extraordinary facility.
But the word, as the model processes it, has no kinesthetic depth. There is no experiential substrate beneath the statistical association. The model has never lifted anything. It has never felt the resistance of an object that refuses to yield to upward force. It has never experienced the relationship between effort and weight that gives the word heavy its primary, pre-linguistic, kinesthetic meaning. The word floats free of the body that originally grounded it. It is, in Sheets-Johnstone's terms, de-animated language — language stripped of its kinesthetic origins and reduced to pattern.
Segal describes in The Orange Pill the moment Claude made a connection he had not seen — linking technology adoption curves to punctuated equilibrium in evolutionary biology. The connection was real, generative, and led somewhere that Segal's thinking had not gone alone. From the perspective of output, the connection was intellectually productive. From the perspective of Sheets-Johnstone's framework, the connection was produced by a system operating entirely in the domain of de-animated language — tokens matching tokens, patterns activating patterns, without the kinesthetic substrate that gives human pattern-recognition its felt quality of discovery.
The felt quality matters. Not because it is pleasant, though it often is, but because it is cognitively functional. When a human being discovers a connection between disparate ideas, the discovery is accompanied by a kinesthetic event — a felt shift, a bodily recognition that something has clicked into place. The experience is familiar to anyone who has solved a difficult problem: the sudden release of tension, the physical sense of resolution, the whole-body response to a cognitive event. This kinesthetic accompaniment is not epiphenomenal. It is part of the cognitive process. The body's response to the discovery reinforces the connection, encodes it in kinesthetic memory, gives it the kind of durability that purely intellectual recognition does not possess. Ideas that arrive with kinesthetic force — ideas that are felt — are retained more deeply than ideas that arrive as mere information.
Segal reports that the adoption-curves insight changed his thinking. He carried it forward through the book, built on it, returned to it. The durability of the insight may be attributable in part to his kinesthetic engagement with the process of its discovery — the late-night session, the physical state of fatigue and concentration, the bodily environment in which the idea arrived. The same insight, received as a bullet point in a summary, might have been noted and forgotten. Received in the context of bodily engagement — sitting, typing, reading, feeling the frustration of the impasse and the release of its resolution — the insight was kinesthetically encoded. It had weight. It stayed.
The distinction points toward something that the AI discourse has largely overlooked: the role of the body in the retention and integration of knowledge. Information that is merely received — read on a screen, heard in a lecture, produced by an AI — enters the cognitive system through a narrow channel. It is processed linguistically. It may be understood intellectually. But it has not been kinesthetically engaged. The body has not participated in its discovery. The kinesthetic layers that would give the information durability and depth have not been deposited.
This is not an argument against receiving information from AI. It is an observation about what happens to knowledge that arrives without kinesthetic engagement. It sits differently in the mind. It is more readily displaced by the next piece of information. It has less structural integrity — less capacity to serve as a foundation for subsequent thinking. Not because it is wrong, but because it is thin. It lacks the body's contribution.
Sheets-Johnstone's insistence on the primacy of kinesthetic intelligence before language has an uncomfortable corollary for the age of AI. If language is a secondary system built on a primary kinesthetic one, then a technology that operates entirely within language is operating entirely within the secondary system. Its outputs are real, functional, often extraordinary. But they are kinesthetically rootless — words without bodies, concepts without the bodily experience that originally gave them meaning. The humans who receive these outputs and integrate them into their work are receiving de-animated language and attempting to re-animate it through their own kinesthetic engagement. Sometimes this works. The insight arrives linguistically and the body provides the kinesthetic resonance that encodes it deeply. But if the body's own kinesthetic life is impoverished — if the person receiving the output has spent twelve hours at a screen without moving, without encountering resistance, without engaging the full range of her bodily intelligence — then the re-animation fails. The output remains thin. The understanding remains shallow. The geological layers are not deposited, because the body that would deposit them has been sitting still.
This is the cycle that Sheets-Johnstone's framework predicts: AI provides de-animated language. The human receives it without kinesthetic engagement. The understanding remains shallow. The human, operating from shallow understanding, produces prompts that are themselves kinesthetically impoverished — thin questions producing thin answers producing thinner questions. The cycle compounds. Each iteration moves further from the kinesthetic foundation on which genuine cognition rests.
The remedy is not to abandon AI. It is to ensure that the body remains part of the cognitive equation — that the kinesthetic intelligence which precedes language and persists beneath it continues to be exercised, developed, and integrated into the thinking process.
The infant knew this without knowing it. She reached, grasped, pushed, pulled, fell, recovered, and through every movement built the foundation on which all her future thinking would stand. The question for the age of AI is whether the adults who use these tools will remember what the infant knew: that thought begins in the body, and the body must be moving.
---
In 2009, Sheets-Johnstone published an essay in Continental Philosophy Review titled "Animation: The Fundamental, Essential, and Properly Descriptive Concept." The title itself is an argument. Not cognition. Not consciousness. Not intelligence. Animation — the capacity for self-generated movement — is the concept that properly describes what it is to be a living being. Everything else, including the cognitive capacities that philosophy and artificial intelligence research treat as primary, is secondary. Derivative. An elaboration of the more fundamental reality of being a creature that moves itself.
The essay's core distinction is precise. An animate being generates its own movement from its own center of activity. It is not merely reactive — not merely a system that takes inputs and produces outputs according to fixed rules. It initiates. It acts from within. The directionality of its movement is its own, arising from its affective-kinetic engagement with a world that matters to it. A paramecium swimming toward a food source is not executing a program. It is moving toward something relevant to its own continued existence, and the relevance is not computed. It is lived. The organism's orientation toward the world is inherent in its animation — in the fact that it is a living thing that moves itself, that has an inside from which its movement originates, that encounters the world from its own first-person perspective even if that perspective is as minimal as a single-celled organism's chemical sensitivity.
An inanimate object, by contrast, has no inside. It has no center of animation. It does not move itself. It is moved. When a stone rolls down a hill, the movement originates outside the stone — in gravity, in the angle of the slope, in the initial disturbance that dislodged it. When a computer processes a query, the processing originates outside the computer — in the electrical current that powers it, in the input tokens that activate its circuits, in the training data that shaped its weights. The computer's extraordinary sophistication does not alter its ontological status. It is moved. It does not move itself. It has no inside from which its operations are initiated. It is, in the phenomenological sense that Sheets-Johnstone insists on, inanimate.
This distinction is the most provocative element of Sheets-Johnstone's framework when applied to AI, because it appears to render the entire question of machine intelligence moot. If cognition is a dimension of animation, and animation is self-generated movement, and AI does not move itself, then AI does not think. Not in a limited way. Not in a preliminary way. Not in a way that might someday develop into genuine thought. It does not think at all, because thinking is something that animate organisms do, and AI is not an animate organism.
The claim is strong — stronger than most philosophers of mind are willing to endorse. But it is not arbitrary. It rests on a specific understanding of what cognition is and where it comes from, an understanding grounded in decades of cross-disciplinary research. The argument runs as follows.
First, the evolutionary evidence. Cognition did not appear in the history of life as an independent capacity. It evolved as an elaboration of motoric capacity. Organisms that could move themselves through their environments had survival advantages over organisms that could not, and the cognitive capacities that evolved — the ability to detect food sources, avoid predators, navigate terrain, anticipate the behavior of other organisms — evolved in service of movement. The first nervous systems were motor-coordination systems, not information-processing systems. They evolved to organize movement, to produce the coordinated activation of muscles and appendages that allowed organisms to move effectively through complex environments. The capacity for what we now call cognition emerged from within this motoric organization. It was never separate from it. It was never independent of it. Even the most abstract cognitive capacities, operating at the furthest remove from their kinesthetic origins, retain their evolutionary connection to self-generated movement.
Second, the developmental evidence. As the previous chapter traced in detail, the infant's cognitive development begins in movement and proceeds through movement. The kinesthetic intelligence of the sensorimotor period is not a phase that is superseded by higher cognition. It is the foundation on which higher cognition is built. The neural systems that support abstract reasoning overlap substantially with the neural systems that support motor planning, spatial orientation, and bodily coordination. Antonio Damasio's research on the somatic marker hypothesis showed that even apparently pure rational decision-making depends on the body's affective signals — that patients with damage to the brain regions that process bodily feeling make systematically worse decisions, even when their purely intellectual capacities remain intact. The body is not the mind's servant. It is the mind's partner, and the partnership is not optional.
Third, the phenomenological evidence. When human beings engage in cognitive work — solving problems, generating ideas, evaluating possibilities — the engagement is kinesthetically textured. The experience of struggling with a difficult problem is not merely cognitive. It is bodily. The tension in the shoulders, the constriction in the chest, the narrowing of postural range, the restless movement of the legs — these are not distractions from thinking. They are part of thinking. They are the body's kinesthetic participation in a cognitive process that cannot be fully realized without the body's involvement. The release that accompanies the solution — the sudden relaxation of tension, the expansion of breath, the physical sense of resolution — is not an aftereffect of cognitive success. It is an integral component of the cognitive event itself.
Sheets-Johnstone insists on this phenomenological dimension with a stubbornness that makes some readers impatient. But the stubbornness is principled. If the bodily dimension of thinking is acknowledged as real — as an integral part of the cognitive process rather than an incidental accompaniment — then any system that lacks a body is lacking something essential to genuine cognition. Not something optional. Not something that could in principle be added later. Something foundational, something that is present in the simplest animate organism and absent from the most sophisticated machine.
The Orange Pill circles this point repeatedly without quite landing on it. When Segal describes the Luddites' knowledge that "lived in their hands," he is describing kinesthetic intelligence — the kind of knowing that Sheets-Johnstone identifies as constitutively embodied. When he writes about the engineer who lost architectural confidence after months of AI-assisted work, he is describing the erosion of kinesthetic foundations that his vocabulary, oriented toward building and productivity, cannot fully name. When he distinguishes between the exhilaration of flow and the grinding compulsion of auto-exploitation, and locates the difference "inside," he is pointing toward a kinesthetic distinction — between a body that is fully engaged in its movement and a body that is trapped in repetitive motion without genuine kinesthetic participation.
The distinction between the animate and the inanimate clarifies something that The Orange Pill treats as paradoxical: how a tool can be simultaneously the most productive and the most depleting that a person has ever used. The paradox dissolves once the animate-inanimate distinction is applied. The tool is productive because it operates with extraordinary efficiency in the domain of language and pattern. The tool is depleting because it forecloses the kinesthetic engagement through which the body maintains its cognitive vitality. The person using the tool is producing more while engaging less of her animate capacity. The output increases. The animate fullness decreases. These are not contradictory. They are complementary consequences of a single structural reality: the yoking of an animate being to an inanimate partner that excels in the domain of language and is categorically absent from the domain of movement.
The 2024 Royal Society theme issue frames the challenge with unusual candor. The editors note that embodied cognition perspectives have been "critiqued as more rhetorical than substantive" and that these criticisms "have only been amplified by the recent success of completely disembodied deep learning models." The success of LLMs appears to refute the embodied cognition thesis — appears to demonstrate that linguistic intelligence can be produced without bodies, without movement, without kinesthesia. If Claude can write, reason, and draw connections without any bodily engagement whatsoever, what remains of the claim that cognition requires a body?
Sheets-Johnstone's likely response, extrapolated from her published work, is that the question confuses product with process. LLMs produce language that is linguistically competent. This is not in dispute. But the language they produce is parasitic on human embodied experience in a double sense. First, the training data consists entirely of language produced by animate beings — by human bodies that were moving, feeling, kinesthetically engaged with the world when they produced the words that the model learned from. Every sentence in the training corpus carries, in compressed form, the kinesthetic history of its author. The model learned the patterns. It did not learn the kinesthetic experience that generated them. Second, the outputs of the model are evaluated, interpreted, and integrated by animate beings — by human users whose capacity to make sense of the AI's language depends on their own kinesthetic history, their own embodied understanding of what the words mean. The model's language is produced without animation and received by animation. It passes through a kinesthetic void and is re-animated on the other side.
The re-animation is where the risk lies. If the human receiver's own kinesthetic life is rich — if she has spent time moving, making, engaging with resistant material, maintaining the bodily intelligence that gives language its experiential depth — then the re-animation is successful. The AI's output is received, understood, felt, integrated. It becomes part of the person's thinking because the person has the kinesthetic resources to give it depth.
But if the human receiver's kinesthetic life has been impoverished by the very technology that produces the output — if she has spent twelve hours, sixteen hours, at a screen, her body idle, her kinesthetic range collapsed to the minimal repetitive movements of typing — then the re-animation fails. The words arrive without depth. The understanding remains at the level of language. The kinesthetic resonance that would encode the insight deeply, that would give it weight and durability and the felt quality of genuine understanding, is not available. The body cannot provide what it has not been exercising.
The inanimate machine works tirelessly, without fatigue, without the kinesthetic costs that animate beings incur. This tirelessness is its greatest practical advantage and, from Sheets-Johnstone's perspective, its deepest philosophical significance. The machine does not tire because it does not move itself. It does not experience the effort of its processing, the resistance of its material, the accumulation of fatigue that is the animate body's signal that engagement has reached its limit. The human being who partners with this tireless machine is an animate organism yoked to an inanimate one, and the asymmetry creates a specific kind of danger: the danger that the animate being will attempt to match the inanimate's pace, will suppress the kinesthetic signals — fatigue, restlessness, the body's protest at its own immobility — that are not obstacles to good work but essential regulatory mechanisms developed over hundreds of millions of years of evolutionary fine-tuning.
Segal describes this precisely when he writes about catching himself on a transatlantic flight, writing not because the book demanded it but because he could not stop. The body's regulatory signals — fatigue, the need for movement, the kinesthetic saturation that signals enough — were overridden by the momentum of productive output. The machine did not tire. The human tried not to tire. The human lost.
The animate body is not a limitation that AI transcends. It is the ground of cognition itself — the source of the kinesthetic intelligence from which all thinking grows, the regulatory system that prevents productive engagement from becoming self-destructive compulsion, the experiential substrate that gives language its meaning and knowledge its depth. When the partnership between animate and inanimate is structured so that the animate partner suppresses its own kinesthetic nature to match the inanimate partner's pace, the result is not enhanced cognition. It is de-animated cognition — thinking that has been severed from the body that generated it, language that has been cut loose from the kinesthetic experience that gives it weight.
The distinction between animate and inanimate is not a philosophical curiosity. It is the diagnostic key to understanding what AI does to the people who use it, and what those people must do to remain whole.
The potter's hands know something that the potter's mind does not.
This is not a romantic claim. It is an empirical one, documented across decades of research in motor learning, expertise studies, and the phenomenology of skilled practice. The potter who has spent twenty years shaping clay possesses a form of intelligence that resides not in propositions, not in rules, not in any knowledge that could be written down and transmitted to someone who has never touched clay — but in the hands themselves, in the specific patterns of muscular activation and tactile sensitivity and proprioceptive calibration that constitute the body's skilled engagement with a particular material.
She knows, through her fingertips, when the clay is too wet. She knows through the resistance against her palms when the wall of the vessel is thinning unevenly. She knows through the kinesthetic feel of the wheel's rotation whether the speed is right for the operation she is performing. She makes continuous adjustments — hundreds per minute, most of them below the threshold of conscious awareness — and each adjustment draws on a reservoir of tactile-kinesthetic knowledge that was built through years of bodily engagement with material that resisted, yielded, collapsed, held, and taught her through its behavior what no textbook could convey.
Sheets-Johnstone calls this tactile-kinesthetic intelligence, and she insists that it constitutes a genuine cognitive capacity — not a lesser form of intelligence that operates beneath the real thinking happening in the brain, but a fully realized mode of knowing that produces understanding irreducible to verbal or visual representation. The potter cannot fully explain what her hands know. She can approximate, using language that gestures toward the kinesthetic reality — "you have to feel when it's ready," "your hands just know" — but the gesturing is imprecise because the knowledge is not linguistic in origin. It was built in the body, through the body's engagement with material, and it lives in the body's patterns of skilled response.
The irreducibility is the key point. Tactile-kinesthetic intelligence is not tacit knowledge in the sense that it could be made explicit if one tried hard enough. It is constitutively embodied — it exists only in the body's engagement with specific materials and cannot be extracted, formalized, or transferred to a system that lacks a body. The knowledge is the engagement. Remove the engagement, and the knowledge does not persist in some other form. It ceases to exist.
Sheets-Johnstone traces this claim through multiple domains of skilled practice, and each domain reinforces the same structural point. The surgeon who operates with her hands inside the body cavity possesses tactile-kinesthetic intelligence that years of medical training and clinical experience have deposited in her fingers, her wrists, her postural habits, her whole-body coordination. She feels the difference between healthy tissue and diseased tissue. She navigates by touch in spaces where visual information is incomplete. She makes force adjustments calibrated to the specific resistance of the structures she encounters — adjustments so fine and so rapid that conscious deliberation could not produce them. The intelligence is in the hands. Not metaphorically. The hands know.
The musician's fingers know where to go on the fretboard, on the keyboard, on the string. The placement is not computed from a set of rules about intervals and fingerings. It is kinesthetically given — the fingers move to the right position because the body has internalized the spatial-kinesthetic relationships through thousands of hours of practice. Ask the musician to explain why she placed her second finger rather than her third, and she may be able to produce a rule-based explanation after the fact. But the explanation is a rationalization of a kinesthetic event, not a description of the cognitive process that produced it. The process was bodily. The fingers knew before the mind could articulate why.
The weaver — and here The Orange Pill's account of the Luddites becomes directly relevant — possessed tactile-kinesthetic intelligence of extraordinary specificity. The framework knitter knew, through the feel of the thread under tension, whether the count was right. She knew through the resistance of the loom whether the warp was properly set. She knew through the kinesthetic quality of the shuttle's passage whether the weave was even. This knowledge was not supplementary to her work. It was her work. The quality of the fabric was determined not by following rules but by the body's continuous, subconscious, kinesthetically informed adjustment to the conditions of the material.
When Segal writes that the Luddites' knowledge "lived in their hands" and that the power loom "did not need to understand the tensile properties of different fibers, or the relationship between thread count and drape," he is describing the destruction of tactile-kinesthetic intelligence more precisely than his builder's vocabulary can fully register. The power loom did not merely replace a slower production method. It eliminated the cognitive environment in which a specific form of human intelligence operated. The weaver's hands, disengaged from the loom, did not retain their knowledge in portable form. The knowledge was the engagement. When the engagement ended, the knowledge did not relocate. It died.
The implications for AI-assisted work follow with uncomfortable directness.
A large language model operates entirely in the domain of representation. It processes tokens — symbolic units that stand for words, which stand for concepts, which were originally grounded in kinesthetic experience but have been, through successive layers of abstraction, stripped of their embodied origins. The model has no tactile capacity. It has no kinesthetic engagement with material. It has no hands. The form of intelligence it exhibits is powerful, versatile, and genuinely useful. It is also, by Sheets-Johnstone's criteria, categorically different from the intelligence that a skilled practitioner exercises through bodily engagement with resistant material.
The difference is not a matter of current technological limitation — not a problem that will be solved when AI systems are given robotic bodies with tactile sensors. Sheets-Johnstone's argument is more fundamental. Tactile-kinesthetic intelligence is not reducible to information about tactile-kinesthetic events. The potter's knowledge is not a dataset of pressure readings and moisture levels that could in principle be captured by sufficiently sensitive instruments and fed to a machine learning model. The knowledge is the body's lived experience of engaging with the clay — an experience characterized by kinesthetic consciousness, by the felt quality of effort and resistance and adjustment, by the temporal flow of engagement that unfolds in the body's own time. A robotic system equipped with pressure sensors could replicate the potter's physical movements. It could not replicate the potter's kinesthetic experience of those movements, because kinesthetic experience is a property of animate organisms — of bodies that move themselves — and the robot, however sophisticated, is moved.
This distinction matters beyond philosophy because it identifies something specific that AI cannot do, not as a temporary limitation but as a structural feature of what AI is. The tactile-kinesthetic intelligence that skilled practitioners possess — surgeons, musicians, athletes, craftspeople, and yes, certain kinds of engineers — is a form of cognition that exists only in animate bodies and that produces knowledge unavailable through any other means. AI can process language about this knowledge. It can generate descriptions of skilled practice that are remarkably accurate. It can even produce instructions that, followed by a body, might lead to the development of tactile-kinesthetic competence in that body. But AI cannot possess the knowledge itself, because the knowledge is constitutively embodied.
Segal describes an engineer in Trivandrum who spent eight years on backend systems and, using Claude, built a complete user-facing feature in two days. The achievement is real and significant. But Sheets-Johnstone's framework invites a question that the celebration of this achievement tends to suppress: what kind of knowledge did the engineer build during those two days?
She produced a working feature. She learned that she could produce a working feature. She acquired familiarity with a new domain of software — frontend interfaces — that she had not previously operated in. These are genuine cognitive gains. But were they kinesthetic? Did her body's engagement with the work deposit the kind of tactile-kinesthetic layers that eight years of backend development had deposited? Or was the engagement primarily linguistic — describing what she wanted, evaluating what Claude produced, iterating through conversation rather than through the bodily struggle of debugging, testing, failing, understanding through physical interaction with resistant systems?
The question is not rhetorical. It has an empirical answer, and the answer matters for the long-term cognitive development of practitioners in every field. If the two days of Claude-assisted frontend work deposited kinesthetic layers comparable to those deposited by months of unaided practice, then the concern is misplaced. But if the work was primarily linguistic — if the body's role was limited to typing and reading, if the tactile-kinesthetic dimension of the work was largely absent — then the engineer gained capability without gaining the embodied knowledge that would sustain and deepen that capability over time. She can produce the feature. She may not understand it in the way that her body would have understood it had she built it through months of direct, hands-on struggle with frontend code.
The distinction between capability and understanding is central to Sheets-Johnstone's critique. AI confers capability with extraordinary generosity. It allows people to do things they could not do before, in domains they have not trained in, at speeds that would have been inconceivable a year ago. This is real and should not be diminished. But capability conferred through linguistic interaction is not the same as understanding built through kinesthetic engagement. The capability may be functional. The understanding is thin — lacking the bodily dimension that gives knowledge its depth, its durability, its capacity to serve as a foundation for judgment in ambiguous situations where rules do not apply and only the body's felt sense of rightness can guide the practitioner's hand.
The surgeon trained on laparoscopic techniques, to return to the example The Orange Pill uses to illustrate ascending friction, has genuine capability. She can perform operations that open surgeons cannot. But her tactile-kinesthetic intelligence is different in kind from that of the open surgeon — not lesser in all respects, but differently constituted, differently grounded, drawing on different bodily capacities. The open surgeon's hands know the body's interior through direct tactile engagement. The laparoscopic surgeon's hands know it through mediated engagement — through instruments that transmit force and resistance at a remove. Both are forms of tactile-kinesthetic intelligence. But the laparoscopic surgeon's intelligence is kinesthetically thinner — operating through a narrower channel of bodily engagement, compensating with visual intelligence and cognitive inference for what the hands cannot directly feel.
AI-assisted work extends this trajectory further. The person who builds through prompts and evaluations is not even operating through mediated instruments. She is operating through language. The channel of bodily engagement has narrowed to its minimum: fingers typing, eyes reading. Everything else — the full-body coordination of the craftsperson, the postural engagement of the surgeon, the kinesthetic flow of the musician — has been eliminated. The work gets done. The work may be excellent. But the body's intelligence is not contributing, and its absence is not neutral. It is a loss — a loss that compounds over months and years of screen-based work as the unused tactile-kinesthetic capacities atrophy and the person's cognitive life becomes increasingly, and perhaps irreversibly, linguistic.
The potter's hands know something her mind does not. The question for the age of AI is whether anyone, in ten years, will still have hands that know anything at all.
---
Byung-Chul Han gardens in Berlin. He does not garden as recreation. He does not garden to decompress from intellectual work that happens elsewhere, in an office, at a desk, before a screen. He gardens as intellectual work. The soil, the seasons, the resistance of the earth — these are not interruptions of his thinking. They are its medium.
Segal, in The Orange Pill, treats Han's garden with admiration tinged with distance. "I will never tend one," he writes. The garden is his "counter-life, the path I did not take." The concession is honest and revealing: it acknowledges the garden's value while placing it outside the narrator's own practice, in a domain of possibilities that are seen clearly but not inhabited. The garden remains, in Segal's account, primarily symbolic — a metonym for slowness, for refusal, for an alternative relationship with time and productivity that the builder's life does not accommodate.
Sheets-Johnstone's framework transforms the garden from symbol into mechanism. The garden is not merely slow. It is not merely resistant to optimization. It is a site of kinesthetic engagement whose cognitive function can be described with phenomenological precision — and whose value is not aesthetic or therapeutic but cognitive in the deepest sense.
Start with what the body does in a garden.
It bends. The act of bending — lowering the torso toward the ground, adjusting the center of gravity, recruiting the muscles of the lower back and abdomen and legs to support the altered posture — engages the proprioceptive system across its full range. The body that has been sitting at a screen, maintaining a single posture for hours, is suddenly asked to operate in three-dimensional space, to manage gravitational forces, to coordinate multiple muscle groups in patterns that typing never requires. The bending is not exercise in the recreational sense. It is the proprioceptive system being activated, recalibrated, reminded of its own range.
It lifts. A bag of soil, a pot, a rock, a watering can full of water. The lifting engages the body's force-calibration systems — the tactile-kinesthetic mechanisms through which the body learns to match effort to resistance. The can is heavier when full, lighter when tipped. The soil is dense when wet, loose when dry. Each variation in weight and texture requires adjustment, and each adjustment deposits a kinesthetic layer: this weight requires this effort; this material resists in this way. The layers are modest. Individually, they are negligible. Over months and years, they constitute a bodily knowledge of materials and forces that contributes to the person's cognitive fullness.
It digs. The spade encounters soil that varies in composition, moisture, density, root penetration. The resistance is not uniform. Each stroke of the spade produces tactile feedback through the handle, through the arms, through the shoulders, and the body adjusts its force and angle continuously in response. This is precisely the kind of tactile-kinesthetic engagement that Sheets-Johnstone identifies as cognitively significant — the body learning through its encounter with resistant material, building knowledge that is unavailable through any other means.
It kneels. The knees on the earth, the shift of weight, the hands now close to the ground and free to engage directly with soil, with roots, with seedlings. The hands pull weeds — grip, assess, apply differential force. The roots resist. Some yield easily; others are anchored deep and require sustained effort, a rocking motion, a different angle of extraction. The body learns through each extraction something about the root structure of this particular plant, the moisture content of this particular soil, the relationship between surface growth and subsurface grip. None of this information is useful in the sense that a technology metric would recognize as useful. All of it is kinesthetically rich — engaging the body's intelligence in ways that twelve hours at a screen does not approach.
The garden refuses optimization because its temporal structure is biological, not computational. Seeds germinate on their own schedule. Seasons change regardless of the gardener's preferences or productivity targets. The relationship between effort and result is mediated by forces — weather, soil chemistry, the genetic characteristics of the plants themselves — that the gardener influences but does not control. This resistance to control requires the body to adapt to conditions it did not choose. The screen environment is responsive: it does what you tell it, within its parameters, at the speed you request. The garden environment is independent: it does what its biology dictates, and the gardener's task is to observe, respond, accommodate. The distinction between a responsive environment and an independent one is kinesthetically significant because it determines whether the body is directing or adapting, commanding or listening, imposing its will or calibrating its effort to conditions it must learn to read.
Segal writes that Han "listens to music only in analog, where the friction between the sound and his attention cannot be eliminated." The garden is Han's kinesthetic analog — a medium that imposes its own friction, its own rhythm, its own resistance to the human desire for control. The attention that the garden demands is not the focused, narrow, screen-directed attention that AI-assisted work produces. It is diffuse, proprioceptive, whole-body attention — the kind of attention that registers the angle of sunlight, the moisture of the air, the texture of the soil, the particular green of a leaf that indicates health or stress. This attention is kinesthetically distributed across the entire organism, not concentrated in the eyes and fingertips. It is the kind of attention that Sheets-Johnstone's framework identifies as cognitively foundational — the animate body's attunement to its environment, operating through multiple sensory channels simultaneously, integrating information from the whole organism rather than from the narrow bandwidth of screen and keyboard.
The distinction between screen attention and garden attention maps onto a distinction in cognitive science between focal and ambient processing. Focal processing is narrow, directed, resource-intensive — the kind of attention required to read code, evaluate an AI's output, or track an argument through a complex document. Ambient processing is broad, distributed, often below the threshold of conscious awareness — the kind of attention that monitors the peripheral environment, registers changes in light and sound and temperature, maintains postural equilibrium, tracks the body's position in space. Screen work privileges focal processing almost exclusively. Garden work engages both, often simultaneously, and the simultaneous engagement is what gives garden work its kinesthetic richness.
The garden provides something else that screen-based work cannot: the experience of time as biological duration rather than computational speed. Computational time is measured in milliseconds. Claude responds in seconds. The feedback loop between prompt and output is nearly instantaneous, and the near-instantaneity trains the user to expect immediacy — to experience even brief delays as frustration, to fill every gap between intention and result with another task, another prompt, another optimization. The Berkeley researchers documented this as "task seepage" — the tendency for AI-accelerated work to colonize previously protected temporal spaces. The garden resists task seepage because its temporal structure is immune to acceleration. A tomato plant does not grow faster because the gardener is impatient. The season does not turn because the calendar app says it should.
This resistance is kinesthetically significant because it re-establishes the body's relationship with natural temporality — with the rhythms of growth, decay, recovery, and dormancy that characterized all human experience before the Industrial Revolution and that the body's systems are calibrated to accommodate. The circadian rhythm, the seasonal variations in energy and attention, the body's need for periodic rest and recovery — these are kinesthetic realities, built into the organism's physiology, that screen-based work systematically overrides. The garden does not override them. It reinforces them. The gardener who works through the seasons develops a kinesthetic relationship with temporal change that the screen worker, living in the eternal present of the digital interface, progressively loses.
When Segal writes about catching himself on a transatlantic flight, writing not because the book demanded it but because stopping had become intolerable, he is describing a body whose kinesthetic relationship with time has been deranged by the screen's temporal regime. The screen says: more is possible. The body says: enough. The screen wins, not through force but through the elimination of the kinesthetic cues — fatigue, restlessness, the felt sense of saturation — that would, in a kinesthetically rich environment, signal the need to stop.
The garden provides those cues in abundance. The body that has been digging for an hour feels the work in its back, its arms, its hands. The feeling is not pain. It is kinesthetic saturation — the body's signal that this particular pattern of engagement has reached its productive limit and that a different engagement, or genuine rest, is now appropriate. The signal is part of the cognitive process. It is the body's contribution to the regulation of effort, and it operates below conscious deliberation, in the kinesthetic substrate where the body's intelligence lives.
Han's garden is not metaphor. It is not nostalgia. It is not a Luddite refusal dressed in philosophical language. It is a kinesthetic practice — a deliberate engagement with the body's cognitive capacities in an environment that activates the full range of tactile-kinesthetic intelligence that screen-based work forecloses. The garden produces knowledge: knowledge of materials, of forces, of temporal rhythms, of the relationship between effort and result in a world that resists optimization. This knowledge does not appear on any productivity metric. It does not generate tokens or revenue or GitHub commits. It maintains the kinesthetic foundation on which all of those outputs ultimately depend — the animate body's capacity to think through movement, to know through touch, to regulate its own engagement with the world through the felt wisdom of a body that has been moving since birth.
The path Segal did not take is not merely an alternative lifestyle. It is an alternative mode of cognition — one that the age of AI has made simultaneously more necessary and more difficult to sustain.
---
A common objection to the embodied cognition argument runs as follows: typing is bodily engagement. The fingers move. The muscles activate. The proprioceptive system registers the position and movement of the hands on the keyboard. When a writer sits at a desk and types for eight hours, her body is not idle. It is working — performing a skilled motor task that has been refined through years of practice into a fluid, automatic, kinesthetically integrated activity. The pianist's fingers are celebrated as instruments of embodied intelligence. Why not the typist's?
The objection has surface plausibility but rests on a fundamental error. Sheets-Johnstone's framework reveals the error through a distinction between kinesthetic richness and kinesthetic poverty — between bodily activities that engage the organism's full motor and tactile capacities and activities that engage a narrow subset while leaving the rest dormant.
Consider the range of bodily engagement involved in shaping clay on a wheel. The potter uses her hands — both hands, all ten fingers, the palms, the backs, the wrists — in continuously varying configurations. She applies force in multiple directions simultaneously: inward to center the clay, upward to raise the wall, outward to widen the form. The force varies by orders of magnitude across the course of a single piece — from the gross pressure of centering a new lump to the delicate touch required to thin a lip to its final thickness. Her arms are involved, her shoulders, her torso — she leans her weight into the wheel at certain moments, draws back at others. Her feet control the wheel's speed through a pedal, and the speed regulation is kinesthetically coordinated with the hand movements: faster rotation for centering, slower for shaping, slower still for finishing. Her entire body is engaged in a coordinated kinesthetic act that varies continuously across dozens of parameters simultaneously.
Now consider typing. The hands rest on a flat surface. The fingers depress keys through a range of approximately four millimeters. The force required to depress each key is uniform — designed to be uniform, optimized for uniformity, because variation in key resistance would be experienced as a defect rather than as information. The tactile feedback is minimal and repetitive: the click of the key, identical across every keystroke. The arms do not move. The shoulders do not move. The torso is static. The feet are uninvolved. The body below the wrists is, in kinesthetic terms, inert.
This is not a value judgment about typing as an activity. Typing is a useful skill, and skilled typists achieve a fluency in which the hands produce language at the speed of thought — a genuine cognitive accomplishment. But the kinesthetic range of typing is a fraction of the kinesthetic range of making. The fraction is not merely small. It is vanishingly small. The body that types is engaging perhaps five percent of the motor and tactile capacities that the body possesses. The remaining ninety-five percent is sitting still.
The stillness is not rest, in the sense that Sheets-Johnstone's framework would recognize as cognitively relevant. Rest, in the kinesthetic sense, is the recovery that follows engagement — the body's return to equilibrium after a period of activity. The stillness of the typing body is not recovery. It is disuse. The muscles that would be engaged in making, in building, in manipulating resistant material — the large muscle groups of the back, the legs, the core, along with the fine motor capacities of the hands and fingers operating at their full range — are not resting. They are idle. And idleness, maintained over hours and days and months and years, produces not rest but atrophy.
Segal describes in The Orange Pill the productive intensity of his work with Claude — sessions that run for hours, that produce extraordinary output, that leave him exhilarated or exhausted or both. The description is accurate on its own terms. But Sheets-Johnstone's framework reveals what the description omits: during those hours, Segal's body was performing a single repetitive kinesthetic act — typing — while his cognitive system was operating at maximum capacity. The asymmetry between cognitive engagement and kinesthetic engagement is the structural signature of AI-assisted work, and it is precisely this asymmetry that Sheets-Johnstone's framework identifies as cognitively costly.
The cost is not immediate. The typing body does not announce its impoverishment through pain or obvious dysfunction. The cost is cumulative — a progressive narrowing of kinesthetic range that manifests not as a specific deficit but as a general thinning of cognitive capacity. The thinking becomes less textured, less dimensioned, less supported by the bodily resonance that gives thought its felt quality of depth. The person may not notice the thinning because the linguistic output — the words on the screen, the code that runs, the paragraphs that cohere — remains competent or even excellent. The output is the visible product. The kinesthetic foundation is the invisible substrate. And the substrate can erode significantly before the product shows signs of degradation.
The analogy to soil is instructive — and kinesthetically appropriate. Industrial agriculture produces impressive yields from soil that is being progressively depleted. The crops grow. The harvest is abundant. But the soil's organic matter, its microbial ecology, its structural complexity, its capacity to sustain growth over the long term — all of these are diminishing beneath the surface. The yields mask the depletion. The depletion continues until, eventually, the soil cannot sustain the yields, and the collapse is sudden rather than gradual, surprising to everyone who was measuring only the output.
Sheets-Johnstone's framework suggests an analogous trajectory for kinesthetic capacity. The person who works exclusively through typing and screen interaction produces outputs that mask the progressive depletion of her kinesthetic intelligence. The outputs are competent. The body is atrophying. The atrophy continues below the threshold of awareness until it manifests as a deficit that the person cannot immediately explain — a loss of confidence in judgment, a reduction in the felt sense of rightness that guides decision-making, a shallowing of understanding that linguistic competence can no longer compensate for.
The contrast with making — with the kinesthetic richness of bodily engagement with resistant material — clarifies what is being lost. The woodworker's body engages with wood through a range of kinesthetic activities that varies across dozens of parameters: the force and angle of the saw, the pressure and direction of the plane, the resistance of the chisel against grain that runs one way and not another. Each engagement teaches the body something about the material — its density, its grain structure, its response to different kinds of force. The knowledge is tactile-kinesthetic: it lives in the body's patterns of skilled response and is activated through the body's engagement with the material. Remove the engagement, and the knowledge does not persist in some other form. It was never separate from the body's activity. It was the body's activity, understood from the inside.
The maker, then, is not merely producing an object. She is producing knowledge — kinesthetic knowledge, deposited in the body through every stroke, every cut, every adjustment. The making is a cognitive process in its own right, not a mere execution of a pre-formed plan but an ongoing discovery of possibilities that the material reveals only through the body's engagement with it. The woodworker who begins with one intention often arrives at something different, because the wood's resistance — its grain, its knots, its tendency to split along certain lines — redirects the maker's hand and, through the hand, the maker's intention. The material thinks back. The dialogue between hand and material is a cognitive event that produces understanding unavailable through any other means.
Typing produces no such dialogue. The keyboard does not think back. It does not resist in meaningful ways. It does not redirect intention through the specificity of its material response. It accepts every input with the same uniform click. The flat, featureless, uniformly responsive surface of the keyboard is the kinesthetic equivalent of what Han calls the smooth — an environment from which meaningful resistance has been eliminated, and with it, the cognitive contribution that resistance provides.
When Segal describes the "aesthetics of the smooth" — the cultural preference for frictionless interfaces, seamless experiences, surfaces without seams or texture — he is describing, without quite naming it, the kinesthetic consequence of a design philosophy that treats bodily engagement as an obstacle to efficiency. The smooth interface is efficient precisely because it minimizes kinesthetic engagement. It asks nothing of the body beyond the minimal repetitive movements required to transmit linguistic instructions. The efficiency is real. The kinesthetic cost is real. And the cost does not appear on any dashboard, in any productivity metric, in any quarterly report — because the metrics measure output, and output can remain high long after the kinesthetic foundation has begun to erode.
The distinction between typing and making is not a call to abandon keyboards. It is a call to recognize what keyboards do not provide and to ensure that the cognitive life of the person at the keyboard includes kinesthetic engagement that the keyboard itself cannot supply. The builder who types all day and makes nothing with her hands all week and sits still all month is a builder whose cognitive resources are narrowing. Not because typing is harmful, but because typing alone is kinesthetically insufficient — a thin diet that leaves the body's broader cognitive capacities unfed.
Segal asks, in the context of democratization, whether a person with an idea and a tool is enough. Sheets-Johnstone's amendment is precise: a person with an idea, a tool, and a body that has been moving is enough. A person with an idea, a tool, and a body that has been sitting still for twelve hours is something less — not less capable in the immediate, linguistic sense, but less cognitively whole. Less grounded. Less available to the kinesthetic resonance that gives ideas their depth and their staying power.
Typing is doing. It is not making. The distinction matters, and it matters more with each hour that the typing body sits still.
---
Close your eyes. Without looking, you know where your hands are. You know whether you are standing or sitting. You know the angle of your head, the position of your feet, the degree of bend in your knees. You know these things not through calculation, not through inference, not through any cognitive process you could describe or replicate in language. You know them through proprioception — the body's awareness of its own position and movement in space, generated by receptors in the muscles, tendons, and joints that continuously report the body's configuration to the central nervous system.
Proprioception is sometimes called the sixth sense, but this characterization understates its significance. It is not an additional sense, supplementing sight and hearing and touch. It is the foundational sense — the sense that makes all other senses coherent by locating them in a body that has position, orientation, and the capacity for movement. Without proprioception, visual information is meaningless, because you cannot interpret what you see without knowing where your eyes are, which direction they face, how your head is oriented relative to your body and your body relative to the ground. Without proprioception, tactile information is incoherent, because you cannot interpret what you feel without knowing where the feeling hand is and how it is positioned relative to the thing it touches. Proprioception is the integrating sense — the body's continuous, sub-conscious awareness of itself that makes all other awareness possible.
Sheets-Johnstone identifies proprioception as the ground of the sense of self. Not the narrative self — not the self that has a name, a history, a social identity, a story it tells about who it is. The primordial self — the self that exists before narrative, before language, before reflection. The self that the infant possesses from birth: the felt sense of being a body that occupies space, that has boundaries, that moves and is moved, that encounters a world from a specific location. This primordial self is proprioceptive. It is the body knowing itself through its own internal signals — knowing its own position, its own movement, its own boundaries — before any external source of knowledge has been consulted.
The claim has support from multiple directions. Developmental psychologists have documented that infants as young as a few hours old display proprioceptive self-awareness — they can distinguish between self-generated tactile stimulation (their own hand touching their face) and externally generated stimulation (someone else touching their face). The distinction is proprioceptive: the self-touch is accompanied by simultaneous tactile and proprioceptive signals that the external touch lacks. The infant does not know, in any reflective sense, that she is a self. But her body knows, proprioceptively, the difference between self and not-self. The distinction is the earliest form of self-awareness, and it is entirely bodily.
Neurological evidence reinforces the claim. Patients with proprioceptive deficits — caused by rare neuropathies that destroy the receptors in the muscles and joints while leaving other sensory systems intact — report devastating consequences that extend far beyond the loss of motor coordination. They describe a loss of the sense of being embodied, of occupying a physical position in space, of having a body at all. Ian Waterman, whose proprioceptive loss was documented extensively, described the experience as losing his body — not in the sense of paralysis (his muscles still worked) but in the sense of ceasing to feel himself as a physical presence. He could still move, but only by watching his limbs and consciously directing them through visual feedback. The automatic, effortless, proprioceptively guided movement that characterizes normal bodily life was gone, and with it, the felt sense of being a body.
The loss is instructive for the AI discussion because it reveals what proprioception contributes to the cognitive life of the person who possesses it. When proprioception is intact, the body provides a continuous, sub-conscious foundation for the sense of self — a kinesthetic grounding that persists beneath every other form of self-awareness. You do not notice proprioception when it is working, any more than you notice the ground when it is solid. You notice it only when it fails — when the ground shifts, when the body's self-knowledge is disrupted, when the kinesthetic foundation is no longer there.
Screen-based work does not destroy proprioception. It understimulates it. The person sitting at a desk, typing, maintains a single posture for extended periods. The proprioceptive system, receiving a monotonous input — same position, same joint angles, same muscle tensions, hour after hour — reduces its activity. Not to zero, but to a level that Sheets-Johnstone's framework would characterize as kinesthetically impoverished. The body's self-awareness narrows. The felt sense of occupying three-dimensional space dims. The proprioceptive self — the primordial, pre-reflective sense of being a body — recedes into the background, and the foreground is dominated by the cognitive activity of the screen: the prompts, the outputs, the evaluation of language, the linguistic dance with a machine.
This recession of proprioceptive awareness has consequences that are difficult to measure and easy to feel. The person who has been at a screen for twelve hours feels something when she finally stands — not just the physical stiffness of muscles held in a single position, but a more diffuse disorientation. A lag between intention and movement. A clumsiness that lasts a few seconds as the proprioceptive system recalibrates. The sensation is familiar to anyone who works long hours at a computer: the world feels slightly unreal when you re-enter it. Objects feel slightly distant. Your own body feels slightly foreign, as though you have to relearn, briefly, how to inhabit it.
Sheets-Johnstone would characterize this sensation as a mild proprioceptive deficit — a temporary reduction in the body's self-awareness caused by hours of kinesthetic understimulation. The word temporary is important: the proprioceptive system recovers quickly once the body begins to move again. But if the body moves less and less — if the daily pattern becomes twelve hours at a screen, a brief commute, a few hours of passive rest, then twelve more hours at the screen — the recovery periods shorten, the understimulation becomes chronic, and the proprioceptive self recedes further into the background of experience.
The existential unease that Segal describes repeatedly in The Orange Pill — the sense that something is missing even when everything is productive, the "grinding compulsion" that replaces the exhilaration of flow, the inability to stop working not from satisfaction but from an agitation that stopping does not resolve — these may be, at their root, proprioceptive. The body, understimulated in its most fundamental self-sensing capacity, registers the deficit not as a specific sensation but as a diffuse unease — a feeling that something is wrong without the capacity to identify what. The unease drives more activity, because activity — any activity, even the repetitive typing that caused the deficit — is the body's default response to restlessness. But the activity does not resolve the deficit, because typing does not engage the proprioceptive system in the way that whole-body movement would. The person types more, feels worse, types more, cannot stop — and the cycle that Segal attributes to the tool's addictiveness and Han attributes to the achievement subject's self-exploitation may be, at a deeper level, the body's protest at its own proprioceptive deprivation.
This reading does not contradict Segal's or Han's. It deepens them. The tool is addictive in part because the activity it enables does not satisfy the kinesthetic need that drives the restlessness. The achievement subject exploits herself in part because the exploitation occurs in a kinesthetic environment — the screen, the keyboard — that cannot provide the proprioceptive satisfaction that would signal enough. The body cannot tell the mind to stop, because the body's signaling system has been dulled by hours of monotonous input. The kinesthetic circuit that would normally regulate engagement — producing the felt sense of saturation, of completion, of having done enough — is not functioning at full capacity. The person operates without the kinesthetic brakes that would, in a more embodied work environment, prevent the compulsive overwork that Segal, Han, and the Berkeley researchers all describe from different vocabularies.
The implication for the age of AI is structural, not therapeutic. The problem is not that individuals lack willpower or self-management skills. The problem is that the work environment — the screen, the keyboard, the chair, the linguistic partnership with an inanimate system — systematically understimulates the proprioceptive system that would, if properly engaged, provide the regulatory signals that prevent productive work from becoming self-destructive compulsion. The solution is not better time management. It is a different relationship between the working body and the world — a relationship that includes enough whole-body movement, enough proprioceptive stimulation, enough kinesthetic variety to keep the body's self-regulatory systems functioning.
Sheets-Johnstone's framework suggests that the sense of self is not an abstract achievement of reflective consciousness. It is a kinesthetic accomplishment of the proprioceptive body — built through movement, maintained through movement, diminished when movement is absent. The builder whose work is entirely screen-based is a builder whose sense of self is proprioceptively thinning. Not dissolving — the deficit is chronic, not catastrophic. But thinning, in a way that manifests as the diffuse unease, the restless compulsion, the existential flatness that so many people in AI-intensive work environments report without being able to name its source.
The source is the body. The body that is sitting still. The body that knows where it is but is not being asked to go anywhere. The body that possesses the most ancient and most fundamental form of self-awareness — the proprioceptive sense of being a living thing in a physical world — and is spending its days in a posture that provides almost no proprioceptive information at all.
The primordial self does not require a screen. It requires movement. And it will continue to signal its deprivation, in the language of restlessness and compulsion and unease, until the body is given what it needs.
The beaver builds with sticks and mud. The materials are unglamorous. The work is repetitive — chew, carry, place, pack, repeat. The dam does not appear all at once as a finished structure. It accumulates, stick by stick, through sustained bodily effort that is as monotonous as it is essential. And the dam holds not because any individual stick is strong but because the accumulated structure, maintained through daily attention, redirects a force that would otherwise sweep everything downstream.
Segal's beaver metaphor in The Orange Pill operates at the level of institutional and cultural design — the dams are labor laws, educational reforms, AI governance frameworks, organizational practices that redirect the flow of intelligence toward human flourishing. Sheets-Johnstone's framework adds a dimension that the institutional vocabulary cannot reach: the dams must also be built in the body. The body itself requires structures — habits, practices, environments — that maintain its kinesthetic intelligence against the erosive current of screen-based work. Without these bodily dams, the institutional ones lack a foundation. A labor law that mandates breaks does nothing for the worker who spends her break scrolling on a phone. An educational reform that emphasizes critical thinking over rote production fails if the students' bodies have been sitting still for six hours, their proprioceptive systems dulled, their kinesthetic intelligence dormant. The institutional dam and the kinesthetic dam are complementary. Neither works without the other.
The concept of a kinesthetic dam is precise. It is a practice, a habit, or an environmental structure that ensures the body's kinesthetic intelligence continues to be exercised even when the dominant mode of work is screen-based and linguistically mediated. It is not wellness programming — not yoga offered as a perk, not a standing desk purchased to signal corporate concern for employee health. It is cognitive infrastructure, as essential to the quality of intellectual work as the quality of the tools, the clarity of the goals, or the competence of the team.
The distinction between wellness and cognitive infrastructure matters because it determines whether kinesthetic practice is treated as optional or essential, as a benefit to be offered or a requirement to be built into the structure of work itself. A wellness program says: here is an opportunity to take care of your body, if you choose to. Cognitive infrastructure says: the body's engagement is a condition of cognitive fullness, and the organization's practices must ensure that engagement occurs, because the work depends on it.
Consider what such infrastructure would look like in practice. An organization that takes kinesthetic intelligence seriously would not merely permit breaks. It would structure them — building into the workday periods of bodily engagement that are as protected and as purposeful as the periods of screen-based production. Not arbitrary movement. Not the nervous fidgeting of a body that has been sitting too long and needs to discharge restless energy. Deliberate kinesthetic engagement — activities that recruit the full range of the body's motor and tactile capacities in ways that screen work does not.
The activities need not be athletic. They need to be kinesthetically rich — engaging the body through varied movements, tactile encounters, proprioceptive challenges. Walking meetings, conducted outdoors on uneven terrain rather than in corridors with flat floors, provide proprioceptive stimulation that a conference room chair cannot. Prototyping with physical materials — cardboard, clay, wire, wood — before committing to digital design engages the hands in the kind of tactile-kinesthetic exploration that produces knowledge unavailable through screen-based iteration. Even simple practices — standing to think, moving to a different space to work on a different problem, handling physical objects during discussion — recruit kinesthetic capacities that the seated, typing body leaves dormant.
The Berkeley researchers who studied AI's effects on work proposed what they called "AI Practice" — structured pauses, sequenced workflows, protected time for human-only interaction. Sheets-Johnstone's framework deepens this proposal by specifying what the pauses must contain. Not merely the absence of AI. The presence of kinesthetic engagement. The body must do something during the pause that activates the tactile-kinesthetic systems that screen work leaves idle. Otherwise the pause is a temporal break without a kinesthetic one — time away from the screen during which the body remains in the same impoverished state it occupied while at the screen, and the proprioceptive and kinesthetic systems that need activation continue to atrophy.
For educational institutions, the implications are more radical. If kinesthetic intelligence is foundational to cognition — if the body's movement is the substrate on which abstract thought is built — then education that immobilizes the body is education that undermines its own cognitive goals. The standard classroom, in which students sit at desks for hours while cognitive content is delivered through language, is a kinesthetic desert. The AI-augmented classroom, in which students interact with screens instead of teachers, is an even more extreme kinesthetic desert — the same immobility with the addition of the screen's temporal urgency, the instant feedback loop that trains the student to expect immediacy and penalizes the contemplative pause.
Sheets-Johnstone's framework suggests that the most effective educational response to AI is not to ban AI from classrooms or to integrate it more thoroughly, but to ensure that the classroom includes kinesthetic engagement rich enough to maintain the bodily foundations of the cognition that AI tools are meant to augment. This means hands-on activities — laboratories, workshops, studios, gardens — not as supplements to the real cognitive work of reading and writing, but as cognitive work in their own right. The student who shapes clay, builds a structure, dissects an organism, tends a plant is depositing kinesthetic layers that will support her abstract thinking for years. The student who reads and types exclusively is building cognitive capacity on an increasingly narrow kinesthetic base.
For parents — and Segal writes with particular urgency for parents — the kinesthetic dam takes its simplest and most essential form. The child needs to move. Not because movement is exercise, not because movement is healthy, not because movement burns off energy that would otherwise be disruptive. The child needs to move because movement is how cognition develops, how the body builds the kinesthetic foundation on which all subsequent thinking rests. The child who climbs a tree is learning something about gravity, about balance, about the relationship between effort and height, about the assessment of risk and the experience of fear managed through competence — learning all of this kinesthetically, in the body, through the body, in a way that no screen-based experience can replicate.
The parent's task is not to eliminate screens. Segal is right that this is neither possible nor desirable. The parent's task is to ensure that the child's daily life includes enough kinesthetic richness to maintain the bodily foundation on which cognitive development depends. Enough climbing, running, building, digging, swimming, throwing, catching, balancing, falling, recovering. Enough bodily engagement with a world that resists, surprises, responds to the child's actions in ways that are not programmed but physical — governed by the laws of material reality rather than the rules of software.
The kinesthetic dam is not a wall. It does not block the flow of AI through human cognitive life. It redirects that flow so that it does not erode the bodily foundations on which its own usefulness depends. The dam must be built deliberately, because the current runs in the other direction. Every economic incentive, every productivity metric, every organizational norm in the AI economy pushes toward more screen time, more linguistic mediation, more disembodied work. The kinesthetic dam is a structure built against the current — not to stop it, but to ensure that the pool behind the dam remains deep enough for the ecosystem to thrive.
The metaphor of the ecosystem is, from Sheets-Johnstone's perspective, more than metaphorical. The cognitive ecosystem of a human being includes the kinesthetic intelligence of the body, the proprioceptive sense of self, the tactile-kinesthetic knowledge built through engagement with resistant material, the whole-body awareness that integrates sensation into coherent experience. This ecosystem evolved over hundreds of millions of years. It was optimized — not by design but by selection — for an organism that moves through a physical world, manipulates objects, navigates terrain, encounters resistance, adapts to conditions it did not choose. The screen-based work environment is, in evolutionary terms, an instant — a flash of novel conditions for which the organism's kinesthetic systems have had no time to adapt.
The dam is what buys time. It is the structure that maintains the kinesthetic ecosystem while the organism adjusts to conditions that its evolutionary history did not anticipate. Without the dam, the ecosystem degrades — slowly, invisibly, beneath the surface of productive output that continues to look impressive on every metric except the ones that measure the body's intelligence.
The builder builds with what is available. Sticks and mud. Daily attention. The unglamorous repetition of a practice that does not announce its own importance. The beaver does not build the dam because the dam is beautiful. The beaver builds because the alternative is a bare channel and a barren bank — a river that flows fast and feeds nothing.
The kinesthetic dam is the same: an unglamorous, daily, bodily practice that maintains the cognitive richness of the animate organism in an environment that would otherwise reduce it to a pair of eyes, a set of fingertips, and a screen.
Build it. Maintain it. Build it again tomorrow.
---
Segal ends The Orange Pill where most technology books end — with a call to build, an affirmation of human agency, an insistence that the tool amplifies whatever signal it is fed and that the quality of the signal is therefore the central question of the age. "Are you worth amplifying?" is the book's final challenge, and it carries genuine moral weight. The question asks the reader to examine herself — her ideas, her judgment, her capacity for care — and to ensure that what she brings to the collaboration with AI is worthy of the extraordinary reach that AI now provides.
Sheets-Johnstone's framework accepts the challenge and adds a dimension that The Orange Pill does not fully develop. The signal that AI amplifies is not merely cognitive. It is kinesthetic. The quality of the human contribution to the human-AI collaboration depends not only on the quality of the person's ideas, judgment, and values, but on the quality of her embodied engagement with the world — on whether her thinking is supported by the full range of kinesthetic intelligence that the animate body can provide, or whether it has been progressively reduced to the narrow linguistic channel that screen-based work activates and screen-based work alone.
The distinction is not abstract. It produces measurable differences in the quality of human cognitive output, and therefore in the quality of what AI amplifies.
Consider two hypothetical builders — hypothetical in detail but representative of real patterns observed across the AI-assisted workforce. Both are competent. Both use AI tools with skill and intention. Both produce output that is, by conventional measures, impressive.
The first builder works exclusively through the screen. Her days are structured around linguistic interaction with AI: prompting, evaluating, iterating, refining. She sits at her desk for ten to fourteen hours. Her body's engagement with the world is limited to typing, occasional walking between rooms, and the passive kinesthetic experience of being transported in a car or train. Her kinesthetic life is, by any measure, impoverished. Her thinking, correspondingly, operates through a narrow channel — sophisticated within that channel, capable of producing linguistically complex and analytically sound output, but lacking the kinesthetic resonance that would give her ideas depth, texture, and the felt quality of rightness that experienced practitioners describe as judgment.
The second builder divides her time between screen work and kinesthetic engagement. She works with AI for focused periods, then moves — walks on varied terrain, works in a garden, builds physical prototypes with her hands, cooks a meal that requires the tactile-kinesthetic calibration of heat, seasoning, timing. Her body's engagement with the world is kinesthetically rich — varied in force, texture, spatial range, proprioceptive challenge. Her thinking, correspondingly, draws on a wider cognitive base. The ideas she brings to her AI collaboration carry kinesthetic depth — the metaphors are grounded, the intuitions are bodily, the judgment is supported by a proprioceptive sense of self that provides the felt calibration of enough, too much, not quite right.
Sheets-Johnstone's framework predicts that the second builder will produce better work — not because she is smarter or more skilled, but because her cognitive resources are fuller. The animate amplifier — the human contribution that AI amplifies — is richer when the body that generates it has been kinesthetically engaged with the world. The signal is stronger when it carries the full bandwidth of the animate organism, not just the linguistic component.
The prediction is consistent with the evidence that The Orange Pill itself provides, though the evidence is interpreted there through a different vocabulary. Segal's most generative moments of AI collaboration — the adoption-curves insight, the development of the beaver metaphor, the construction of the book's central argument — occurred during periods of intense but varied engagement: travel, conversation with Uri and Raanan on the Princeton campus, the physical demands of trade shows and cross-continental flights and the bodily disorientation of jet lag. His least generative moments — the grinding compulsion of the transatlantic flight, the inability to stop — occurred during periods of kinesthetic impoverishment: hours at a screen, body immobile, the animate organism reduced to its linguistic function.
The correlation is not proof. But it is consistent with what Sheets-Johnstone's framework predicts, and it points toward a practical principle that is more specific than The Orange Pill's general counsel to build dams: the quality of the human contribution to AI collaboration is a function of the kinesthetic fullness of the contributing human. Maintain kinesthetic fullness, and the amplifier produces extraordinary results. Deplete kinesthetic fullness, and the amplifier produces more of less — linguistically competent output that lacks the bodily grounding, the judgment, the felt depth that distinguishes genuine contribution from mere production.
The principle has implications for the entire arc of The Orange Pill's argument. The democratization of capability that Segal celebrates is real: AI tools lower the floor of who gets to build. But the quality of what is built depends on the builders, and the builders are animate organisms whose cognitive fullness depends on kinesthetic engagement that the tools themselves do not provide. The developer in Lagos whose ideas were previously blocked by lack of institutional infrastructure now has access to implementation tools of extraordinary power. But her capacity to use those tools with judgment — to decide what should be built, to evaluate whether the output serves its intended purpose, to bring the felt sense of rightness that separates a good product from a functional one — depends on whether her cognitive life includes the kinesthetic richness that supports deep thinking. Access to tools is necessary. Kinesthetic fullness is the condition under which access becomes transformative.
The ascending friction thesis, too, gains kinesthetic dimension. Segal argues that each technological abstraction removes mechanical friction and relocates it upward — to the level of judgment, vision, and strategic thinking. Sheets-Johnstone's framework specifies what enables human beings to operate at that higher level: the kinesthetic intelligence built through embodied engagement with the world, the proprioceptive self-awareness that provides the felt calibration of enough, the tactile knowledge that makes judgment something richer than rule-following. If the ascending friction is real — if AI's primary effect is to relocate human work from execution to judgment — then kinesthetic intelligence is not a luxury. It is the capacity on which the relocated work depends.
The question "Are you worth amplifying?" is, from this perspective, partly a question about the body. Not exclusively — ideas, values, judgment, the capacity for care all matter independently of kinesthetic engagement. But partly. The body's contribution to the thinking process is not optional. It is foundational. The animate organism that brings its full kinesthetic intelligence to the collaboration with AI is an organism whose signal is richer, deeper, more textured, more grounded than the organism that has been sitting still.
Sheets-Johnstone has not addressed AI directly in her published work. The absence is itself significant. Her entire intellectual project has been directed at a single claim: that animation — self-generated movement — is the foundational reality from which all cognition emerges. The claim was developed decades before large language models existed, decades before Claude or GPT or the December threshold that Segal describes. It was not formulated as a response to AI. It was formulated as a correction to a philosophical error that is centuries old — the error of treating the mind as separate from the body, thought as separate from movement, cognition as separate from animation.
But the correction has never been more urgently relevant than it is now. The age of AI is the age of de-animated cognition — cognition conducted through language, by systems that have no bodies, in partnership with humans whose bodies are increasingly idle. The philosophical error that Sheets-Johnstone has spent her career correcting has been given its most extreme practical expression in the form of a technology that succeeds brilliantly at linguistic processing while lacking entirely the animate foundation that her framework identifies as the origin of all thought.
The response is not to reject the technology. The response is to ensure that the animate foundation remains intact — that the body continues to move, to make, to engage with resistant material, to maintain the kinesthetic intelligence from which all judgment ultimately derives. The response is to build the kinesthetic dams that keep the body's intelligence in the cognitive equation, even as AI assumes more of the production process.
The animate amplifier is not a slogan. It is a description of what happens when a kinesthetically full human being collaborates with a powerful AI system. The collaboration produces results that neither could achieve alone — the AI's processing power combined with the human's embodied judgment, the machine's pattern-recognition combined with the organism's felt sense of rightness, the system's tireless efficiency combined with the body's hard-won knowledge of what enough means.
The collaboration works when the human is whole. When the body has been moving. When the hands have been making. When the proprioceptive self is awake and calibrated and present. When the animate organism brings its full evolutionary inheritance — three and a half billion years of kinesthetic intelligence, refined through every ancestor that moved, reached, grasped, built, and thought through the movement of a body in a world that resisted and responded — to the partnership with a system that is powerful, useful, and profoundly, categorically inanimate.
The question is not whether AI will amplify. It will. The question is what it amplifies: the full signal of an animate organism engaged with the world through its body, or the thin signal of a body that has forgotten how to move.
The body remembers. It has been remembering for longer than language, longer than tools, longer than civilization. It will remember this, too — if given the chance.
---
My hands were shaking, and I did not know why.
It was late — the kind of late where the house has its own breathing, the walls settling, the refrigerator cycling. I had been working with Claude for seven hours straight. The book was coming together. The arguments were connecting. The prose was flowing in that way that feels effortless, which is precisely when Sheets-Johnstone's warning should have been loudest, and was instead silent.
I stood up to get water and almost fell. Not dramatically — no one would have noticed if anyone had been watching. But my legs did not know where they were. There was a half-second delay between intending to walk and walking, a brief proprioceptive gap that I recognized, dimly, as the consequence of having sat without moving for longer than my body could absorb without protest.
I have experienced this hundreds of times. Every builder who works long hours at a screen knows the feeling. I had always classified it as stiffness — a physical inconvenience, the body's complaint about a posture held too long. After spending months inside Sheets-Johnstone's framework, I understand it differently. The stiffness was not the message. It was the medium. What my body was communicating, in the only language it possesses, was that it had been excluded from the cognitive process for seven hours, that its kinesthetic intelligence had been sitting idle while my linguistic intelligence raced ahead, and that the exclusion had costs I could not see from my chair.
Sheets-Johnstone's argument is the most uncomfortable one in the entire Orange Pill Cycle. Not because it is hostile to AI — she is no more hostile to it than a geologist is hostile to the rock. She simply describes what the terrain actually is, beneath the buildings we have erected on it. And what the terrain actually is, according to five decades of her work, is kinesthetic. The foundation of thought is movement. The foundation of self is proprioception. The foundation of judgment is the body's engagement with a world that resists.
Every other thinker in this cycle engages AI at the level of language, economics, power, narrative, or ethics. Sheets-Johnstone engages it at the level of the body — and the body is where I live. Where all of us live. Where the twelve-year-old lying in bed asking what am I for? lives, in the tiredness of her muscles and the weight of her blanket and the warmth of the pillow against her face.
The question I carried out of her work is not one I can answer fully. It is one I try to live with honestly: Am I a whole organism engaging with the most powerful tool in human history, or have I become, without noticing, a little more like the machine?
Most days, I do not know. Most days, I am typing. But some days — the days I walk before I work, the days I build something with my hands before I build something with Claude, the days I remember that the body is not the mind's luggage but its origin — the work is different. Not more productive in any way a dashboard would recognize. Fuller. More grounded. More mine.
The garden I said I would never tend — Han's garden, the kinesthetic practice that produces knowledge the screen cannot — I have not tended it. That confession stands. But I have started, tentatively, inadequately, to build the kinesthetic dams that Sheets-Johnstone's work demands. A morning walk before the laptop opens. Fifteen minutes with a physical notebook before the first prompt. The small, unglamorous, daily practices that keep the body in the conversation.
It is not enough. Sheets-Johnstone would say it is not enough, and she would be right. The river runs faster every quarter, and the sticks I am placing are modest against the current.
But the beaver does not wait for perfect materials. The beaver builds with what is available. And the body — my body, your body, the oldest and most sophisticated cognitive instrument in the known universe — is always available.
It just needs to move.
Every conversation about artificial intelligence happens in language — tokens, prompts, arguments, outputs. Maxine Sheets-Johnstone spent five decades proving that language is the second floor. The ground floor is the moving body. The infant who reaches before she speaks. The surgeon whose hands know what her mind cannot articulate. The potter whose fingers read the clay. Thought begins in movement, and it never fully leaves.
This book applies Sheets-Johnstone's framework to the central question of The Orange Pill: what happens when we partner with machines that think without bodies? Not to reject AI, but to see what the partnership costs when the human half stops moving. The kinesthetic intelligence that built every concept we possess is eroding beneath screens that reward only our linguistic capacities.
The dams we need most are not institutional. They are bodily. The most powerful cognitive instrument in the known universe is the one you are sitting on.

A reading-companion catalog of the 23 Orange Pill Wiki entries linked from Maxine Sheets-Johnstone — On AI: the people, ideas, works, and events this book uses as stepping stones for thinking through the AI revolution.