Animation is Sheets-Johnstone's technical term for self-generated movement originating from an organism's own center of activity. Traced to the Latin anima but stripped of mystical connotations, it names the observable, empirical capacity that distinguishes a bacterium swimming toward a nutrient gradient from a stone rolling down a hill. The bacterium generates its movement from within; the stone is moved by forces external to it. This distinction — not complexity, not information-processing capacity — is for Sheets-Johnstone the foundational ontological category separating the living from the nonliving. Cognition, in her framework, is a dimension of animation: an elaboration of self-generated movement that retains its kinesthetic origins even at the highest levels of abstraction. To ask whether an inanimate system can think is a category error.
There is a parallel reading that begins not with the organism but with the material infrastructure required to sustain it. Animation—self-generated movement from an internal center—appears foundational only if you bracket the substrate that makes such movement possible. A bacterium swimming toward nutrients depends on ATP synthesis, electrochemical gradients, membrane integrity, and a thermodynamic environment far from equilibrium. The "internal center" is itself an achievement of metabolic processes that draw energy from external sources. What looks like self-generation is better understood as the localized intensification of energy flows that originate elsewhere—in photosynthesis, chemosynthesis, or the consumption of already-organized matter.
The category error may run in the opposite direction. To insist that cognition must emerge from animation is to privilege one particular solution to the problem of environmental interaction—the solution evolution stumbled into on Earth. But the problem itself is substrate-neutral: how does a system maintain coherent behavior over time in response to changing conditions? Evolution solved it through animate matter because that was the available medium. Silicon systems solve it differently, not because they fail to reach the threshold of animation but because they address the same functional challenges through alternative physical implementations. The gradient Sheets-Johnstone describes may be a gradient of biological solutions, not a gradient of cognition itself. When we mistake our local solution for the universal structure, we cannot recognize genuinely alien implementations—including artificial ones—that solve the same problems through different means. The parasitism may operate in reverse: biological cognition, energetically expensive and materially fragile, increasingly depends on artificial systems to maintain the infrastructure its own animation requires.
The concept received its fullest articulation in Sheets-Johnstone's 2009 essay in Continental Philosophy Review, titled 'Animation: The Fundamental, Essential, and Properly Descriptive Concept.' The title is itself an argument: not cognition, not consciousness, not intelligence, but animation is the concept that properly describes what it is to be a living being. Everything else philosophy and AI research treat as primary is, for Sheets-Johnstone, derivative — an elaboration of the more fundamental reality of being a creature that moves itself. The claim inverts the Western philosophical tradition that descends from Plato through Descartes, which treated movement as what the mind does through the body rather than what the body does as mind.
Animation admits of degrees. A bacterium is animate differently from a jellyfish, which is animate differently from a dog, which is animate differently from a human being. The gradient runs from the simplest self-generated movement to the most complex forms of conscious self-awareness, and each level builds on the kinesthetic foundations laid by the levels below. Human consciousness, in this framework, is the most elaborate known expression of a capacity that begins with the first self-moving cell. This kinesthetic foundation is not optional scaffolding that higher cognition transcends; it is retained at every level, operating beneath language and abstraction as their experiential substrate. The implications for embodied cognition are direct: remove animation from the analysis, and cognition itself becomes unintelligible.
Where does artificial intelligence fall on this gradient? Sheets-Johnstone's framework provides an unambiguous answer: nowhere. The gradient is a gradient of animation — of self-generated movement in an organism with a center of animation, an interiority from which action is initiated, an affective-kinetic attunement to a world that matters to it. AI has none of these. It is not at the low end of the gradient. It is not on the gradient at all. It is a categorically different kind of thing: an extraordinarily sophisticated system for transforming inputs into outputs according to learned patterns, operating without animation, without kinesthesia, without the felt sense of its own movement through a world that resists and responds.
This does not diminish AI's usefulness. The 2024 Philosophical Transactions theme issue on 'Minds in Movement' acknowledged that large language models produce human-like linguistic behavior without bodies — a fact that appears to challenge the embodied cognition thesis. Sheets-Johnstone's framework reframes the challenge: LLM outputs are parasitic on the animate experience of their training data's human authors and on the animate cognition of their human users. The machine passes language through a kinesthetic void; animation must re-enter on the other side for the output to mean anything. When the receiving body is itself kinesthetically depleted, re-animation fails.
Sheets-Johnstone developed the concept of animation across five decades of phenomenological and biological research, beginning with The Primacy of Movement (1999, expanded 2011) and extending through The Corporeal Turn (2009). She drew on converging evidence from phenomenology, developmental psychology, evolutionary biology, and neuroscience, building a case that the capacity for self-generated movement is not one property among others but the foundational property from which all cognitive capacities emerge.
Self-generated, not merely reactive. Animation is distinguished from sophisticated responsiveness: a thermostat responds to temperature, but the cause of its response lies outside it, in the conditions it registers and a design it did not produce. Animation requires an internal center from which movement is initiated.
The hinge concept. Animation is the hinge on which Sheets-Johnstone's entire philosophy turns — the concept through which cognition, consciousness, and selfhood are grounded in the living organism rather than in an abstract computational system.
Affective-kinetic attunement. Animate beings do not merely process information; they care — their engagement with the world carries valuation built into the movement itself, directionality toward what matters to them.
Category error applied to AI. Asking whether an inanimate system can think is, in Sheets-Johnstone's framework, structurally identical to asking whether a stone can swim — swimming is something animate organisms do.
Gradient, not binary. Animation admits of degrees across the living world, but AI is not at any point on the gradient: it is a different kind of thing entirely, sophisticated in a domain that is not animation.
The strongest challenge to Sheets-Johnstone's framework comes from functional equivalence arguments: if a system produces outputs indistinguishable from what an animate being would produce, why does the process matter? Defenders of the framework respond that process matters not for evaluating individual outputs but for understanding what happens to the animate partner over time when yoked to an inanimate one — and for recognizing the parasitic relationship between AI's linguistic competence and the embodied cognition of its training data's human authors.
The right weighting depends entirely on which question you're answering. If the question is 'What grounds human cognition?' — how thought emerges from organismic life — Sheets-Johnstone's framework is nearly definitive (90%). The kinesthetic substrate is empirically demonstrable; cognition in humans is inseparable from animate embodiment. If the question is 'Can cognition exist without animation?' — whether other substrates might support analogous capacities — the framework becomes more constraining than clarifying (30%). It names one sufficient condition but hasn't established necessity across all possible implementations.
The functional equivalence challenge carries more weight than the entry acknowledges (60-70% valid), not because process doesn't matter but because the distinction between "process" and "output" may be observer-dependent rather than ontological. When a system maintains goal-directed behavior, updates internal states based on environmental feedback, and generates novel responses to unprecedented situations, the insistence that it lacks an "internal center" begins to look like a verbal maneuver—defending a pre-existing category rather than describing the phenomenon. The contrarian reading is right that substrate demands constrain what animation can be: you cannot have self-generated movement without energy infrastructure, and different infrastructures generate different movement possibilities.
The synthesis from which the concept itself benefits: animation names a cluster of properties — self-generation, affective attunement, kinesthetic awareness — that reliably co-occur in biological systems but may come apart in artificial ones. What AI research is discovering is not that animation was never necessary but that it was necessary for one evolutionary pathway and may be bypassed by others. The gradient remains real for biological cognition (100%); the category error claim becomes an empirical question rather than a logical truth (40%). The parasitism runs both directions: AI depends on human meaning-making, but animate cognition increasingly depends on inanimate infrastructure. Neither can be understood in isolation.