De-animated language is language processed or produced without the kinesthetic substrate that originally gave it meaning. For Sheets-Johnstone, this is the structural condition of large language model output. The model has learned statistical patterns from billions of sentences produced by animate beings: by human bodies that were moving, feeling, kinesthetically engaged with the world when they wrote the words the model trained on. The patterns are captured. The bodies that produced them are not. The model's output reproduces the patterns with remarkable fidelity, but it carries no kinesthetic history. The word 'heavy', in the model's processing, has never been lifted. The word 'grasp' has never been used to close a hand around something that resisted. The language floats free of the bodies that originally grounded it.
There is a contrarian reading that begins from the material substrate required to produce language at all. The billions of training tokens did not emerge from bodies richly engaged in kinesthetic experience; they emerged from bodies sitting at keyboards, often in conditions of profound disconnection from kinesthetic life. The corporate email, the technical documentation, the academic paper: these genres dominate LLM training data, and they were largely produced by people whose bodies had already been reduced to what Sheets-Johnstone fears AI will reduce us to. The 'kinesthetic substrate' being mourned may be a historical artifact that vanished decades before the first transformer architecture.
The diagnostic of 'thinness' assumes we can reliably distinguish animate from de-animated language in blind tests. We cannot. Readers regularly identify human-written corporate prose as 'AI-sounding' and well-prompted LLM output as unmistakably human. What we're detecting is not the presence or absence of kinesthetic grounding but adherence to genre conventions we've been trained to associate with authenticity. The remedial focus on maintaining kinesthetic engagement treats the symptom (perceived thinness) while ignoring the condition (that most human writing contexts had already eviscerated kinesthetic life from linguistic production). If LLM output feels thin, it's because it faithfully reproduces the thinness that characterized its training corpus—a thinness that predates and will outlast current AI systems.
The concept names a quality of LLM-generated text that readers often sense but cannot articulate. The output may be grammatically perfect, rhetorically effective, intellectually sound. Yet it often feels slightly thin: words without weight, sentences without the bodily resonance that gives human language its capacity to move a reader. The thinness is not a failure of the model's training but a structural feature of what the model is: a system that operates entirely within language, disconnected from the kinesthetic experience that originally generated the linguistic patterns it now manipulates.
De-animated language is not useless. Segal's experience in The Orange Pill of being 'met' by Claude, of receiving a connection (the adoption-curves insight) that extended his thinking, is real. The cognitive productivity is real. But the productivity is conditional on the human partner's capacity to re-animate the language — to bring her own kinesthetic history to the reading, to let the words activate her own bodily experience, to encode the insights with the kinesthetic accompaniment that her body provides. Re-animation is the animate partner's labor. The labor succeeds when her body is engaged; it fails when her body has been reduced to fingertips on keys and eyes on glass.
The structural risk for sustained human–AI collaboration is a compounding loss of animate richness on the human side. The human receives de-animated language. Without kinesthetic engagement, the re-animation is shallow. The shallow understanding produces kinesthetically impoverished prompts. The prompts generate de-animated language that is correspondingly thinner. The cycle compounds. Each iteration moves further from the kinesthetic foundation on which genuine cognition rests. The remedy is not to abandon AI but to ensure that the body remains part of the cognitive equation: that the human partner maintains the kinesthetic life that gives her the capacity to re-animate what the machine produces.
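The compounding claim has the shape of geometric decay, and a toy numerical sketch can make that shape explicit. Everything in the sketch is invented for illustration: the 'richness' scalar, the engagement parameter, and the decay rule are hypothetical stand-ins for qualities the framework treats as irreducibly bodily, not quantities anyone has measured.

```python
# Toy model of the compounding-thinning cycle described above.
# All names and parameters are illustrative inventions, not measurements:
# 'richness' stands in for the animate quality of the exchange, and each
# loop iteration is one prompt/response round trip.

def simulate_cycle(initial_richness: float,
                   kinesthetic_engagement: float,
                   iterations: int) -> list[float]:
    """Return the richness trajectory over repeated exchanges.

    kinesthetic_engagement lies in [0, 1]: 1.0 means the human fully
    re-animates each output; anything below 1.0 leaks richness per turn.
    """
    trajectory = [initial_richness]
    richness = initial_richness
    for _ in range(iterations):
        # Shallow re-animation passes only a fraction of the richness
        # into the next prompt, which thins the next output in turn.
        richness *= kinesthetic_engagement
        trajectory.append(richness)
    return trajectory

# A fully engaged body holds the line; a disengaged one decays geometrically.
print(simulate_cycle(1.0, 1.0, 5))   # [1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
print(simulate_cycle(1.0, 0.8, 5))   # final value ~0.33 after five turns
```

The point of the sketch is structural rather than empirical: any per-turn loss, however small, compounds multiplicatively, which is why the remedy targets the engagement itself rather than any single exchange.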
The term is developed in this volume as a diagnostic extension of Sheets-Johnstone's animation framework, naming the specific status of AI-generated linguistic output within her ontological scheme.
Statistical pattern without experiential substrate. LLMs learn language's patterns while the bodies that produced those patterns remain outside the system.
Re-animation as reader's labor. Meaning returns to de-animated language through the kinesthetic engagement of the human who reads it.
Thinness as structural feature. The sense that LLM output is slightly thin reflects a real absence — the bodily history behind the words — not a limitation that training can overcome.
Compounding risk. If the human partner's kinesthetic life atrophies, her re-animation of AI output thins correspondingly, producing thinner prompts that elicit thinner outputs in turn.
Not uselessness but conditional use. De-animated language can be productively integrated by animate receivers whose own kinesthetic life is maintained.
The structural claim—that LLMs process patterns without the bodily substrate that generated them—is fully correct (100%). The patterns exist; the bodies do not. But the remedial claim—that maintaining kinesthetic engagement prevents thinning—requires more careful weighting depending on what kind of language use we're evaluating.
For certain registers (poetry, narrative, phenomenological description), the kinesthetic substrate matters enormously (80/20 in favor of Sheets-Johnstone's frame). Re-animation through the reader's body is doing essential work. For other registers (technical explanation, logical argument, conceptual synthesis), the 'thinness' diagnostic becomes unstable. Here the weighting tilts toward the contrarian reading (60/40): what we call thinness may reflect genre expectations rather than kinesthetic absence. The corporate email was already thin before GPT-4 learned to write it.
The compounding-risk thesis is where both views earn their keep simultaneously. Yes, if human cognitive labor becomes dominated by shallow engagement with de-animated output, something essential atrophies (Sheets-Johnstone's insight). But also yes, the atrophy was already underway in knowledge work before LLMs arrived (the contrarian correction). The synthetic frame: kinesthetic engagement matters not as a property that makes language 'real' but as the substrate that prevents human cognition from collapsing into pattern-matching—which is precisely what most professional writing had already become. The Orange Pill's remedy (maintain the body in the cognitive equation) is directionally correct, but it's addressing a crisis that predates and exceeds the AI moment.